Research Project

Blavigator: a cheap and reliable navigation aid for the blind

Publications

Minimalistic vision-based cognitive SLAM
Publication: Saleiro, Mário; Rodrigues, J. M. F.; du Buf, J. M. H.
The interest in cognitive robotics is still increasing, a major goal being to create a system which can adapt to dynamic environments and learn from its own experiences. We present a new cognitive SLAM architecture, but one which is minimalistic in terms of sensors and memory. It employs only a single camera with pan and tilt control and three memories, without additional sensors or odometry. Short-term memory is an egocentric map which holds close-range information at the robot's current position. Long-term memory is used for mapping the environment and registering encountered objects. Object memory holds features of learned objects, which are used as navigation landmarks and task targets. Saliency maps are used to sequentially focus important areas for object and obstacle detection, but also for selecting directions of movement. Reinforcement learning is used to consolidate or enfeeble environmental information in long-term memory. The system is able to achieve complex tasks by executing sequences of visuomotor actions, with decisions taken by goal-detection and goal-completion tasks. Experimental results show that the system is capable of executing tasks like localizing specific objects while building a map, after which it manages to return to the start position even when new obstacles have appeared.
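The consolidate/enfeeble step on long-term memory can be illustrated with a small sketch. This is a hypothetical simplification: the `reinforce` function, the cell-to-confidence map representation, and the gain/decay/drop constants are illustrative assumptions, not taken from the paper.

```python
def reinforce(ltm, observed, gain=0.2, decay=0.05, drop=0.05):
    """Consolidate re-observed long-term-memory entries, enfeeble the
    rest, and forget entries whose confidence falls below a threshold.
    ltm maps map cells (or object IDs) to a confidence in [0, 1]."""
    updated = {}
    for cell, conf in ltm.items():
        if cell in observed:
            conf = min(1.0, conf + gain * (1.0 - conf))  # consolidate
        else:
            conf = conf - decay                          # enfeeble
        if conf >= drop:
            updated[cell] = conf
    for cell in observed - ltm.keys():
        updated[cell] = gain  # newly encountered object/obstacle
    return updated

ltm = {(0, 0): 0.5, (1, 1): 0.06}
ltm = reinforce(ltm, observed={(0, 0), (2, 2)})
# (0, 0) is strengthened, (2, 2) is added, and (1, 1) decays below the
# drop threshold and is forgotten.
```

Repeated confirmation asymptotically drives a cell's confidence toward 1, while entries that are never re-observed fade away, which is the behavior the abstract describes.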
A biological and real-time framework for hand gestures and head poses
Publication: Saleiro, Mário; Farrajota, Miguel; Terzic, Kasim; Rodrigues, J. M. F.; du Buf, J. M. H.
Human-robot interaction is an interdisciplinary research area that aims at the development of social robots. Since social robots are expected to interact with humans and understand their behavior through gestures and body movements, cognitive psychology and robot technology must be integrated. In this paper we present a biological and real-time framework for detecting and tracking hands and heads. This framework is based on keypoints extracted by means of cortical V1 end-stopped cells. Detected keypoints and the cells’ responses are used to classify the junction type. Through the combination of annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated and tracked over time. By using hand templates with lines and edges at only a few scales, a hand’s gestures can be recognized. Head tracking and pose detection are also implemented, which can be integrated with detection of facial expressions in the future. Through the combinations of head poses and hand gestures a large number of commands can be given to a robot.
Disparity energy model using a trained neuronal population
Publication: Martins, Jaime; Rodrigues, J. M. F.; du Buf, J. M. H.
Depth information can be obtained with the biological disparity energy model by using a population of complex cells. This model explicitly involves cell parameters like spatial frequency, orientation, binocular phase and position difference. However, it is a mathematical model: our brain does not have access to such parameters, it can only exploit cell responses. We therefore use a new model which encodes disparity information implicitly, by employing a trained binocular neuronal population. This model allows disparity information to be decoded in a way similar to how our visual system could have developed this ability during evolution, in order to accurately estimate the disparity of entire scenes.
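The classic explicit model that the paper starts from can be sketched in a few lines. Below is a minimal 1D phase-shift variant in Python/NumPy; the Gabor parameters and the sinusoidal test stimulus are illustrative assumptions, not taken from the paper (whose point is precisely that a trained population replaces such explicit cell parameters).

```python
import numpy as np

def gabor_pair(x, f, sigma, phase):
    """Quadrature (even/odd) 1D Gabor receptive fields."""
    g = np.exp(-x**2 / (2.0 * sigma**2))
    arg = 2.0 * np.pi * f * x + phase
    return g * np.cos(arg), g * np.sin(arg)

def disparity_energy(left, right, x, f, sigma, dphase):
    """Complex-cell energy: left/right simple-cell responses in
    quadrature are summed binocularly and squared (phase-shift model)."""
    le, lo = gabor_pair(x, f, sigma, 0.0)
    re, ro = gabor_pair(x, f, sigma, dphase)
    s_even = left @ le + right @ re
    s_odd = left @ lo + right @ ro
    return s_even**2 + s_odd**2

# Toy stimulus: the right view is the left view shifted by 3 pixels.
f, sigma = 0.1, 6.0
x = np.arange(-20.0, 21.0)
d_true = 3.0
left = np.cos(2.0 * np.pi * f * x)
right = np.cos(2.0 * np.pi * f * (x - d_true))

# A population of cells differing only in interocular phase shift: the
# most active cell's phase shift encodes the stimulus disparity.
dphases = np.linspace(-np.pi, np.pi, 181)
energies = np.array([disparity_energy(left, right, x, f, sigma, dp)
                     for dp in dphases])
# With this sign convention, preferred disparity = -dphase / (2*pi*f).
d_hat = -dphases[np.argmax(energies)] / (2.0 * np.pi * f)  # close to 3.0
```

Reading off disparity requires knowing each cell's frequency and phase parameters, which is exactly the explicit knowledge the trained-population approach avoids.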
Multi-scale cortical keypoints for realtime hand tracking and gesture recognition
Publication: Farrajota, Miguel; Saleiro, Mário; Terzic, Kasim; Rodrigues, J. M. F.; du Buf, J. M. H.
Human-robot interaction is an interdisciplinary research area which aims at integrating human factors, cognitive psychology and robot technology. The ultimate goal is the development of social robots. These robots are expected to work in human environments, and to understand behavior of persons through gestures and body movements. In this paper we present a biological and realtime framework for detecting and tracking hands. This framework is based on keypoints extracted from cortical V1 end-stopped cells. Detected keypoints and the cells’ responses are used to classify the junction type. By combining annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated, their movements can be obtained, and they can be tracked over time. By using hand templates with keypoints at only two scales, a hand’s gestures can be recognized.
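The multi-scale grouping step can be sketched as follows. This is a hypothetical simplification: `link_scales`, the nearest-neighbor proximity rule, and the radius factor are illustrative assumptions; the paper builds richer annotated trees from V1 end-stopped keypoint responses.

```python
import numpy as np

def link_scales(kps_by_scale, radius_factor=2.0):
    """Attach each keypoint to its nearest keypoint at the next coarser
    scale (within a scale-dependent radius), yielding a forest whose
    roots are coarse-scale structures such as whole hands."""
    parents = {}
    scales = sorted(kps_by_scale)  # fine -> coarse
    for s_fine, s_coarse in zip(scales, scales[1:]):
        coarse = kps_by_scale[s_coarse]
        for i, (px, py) in enumerate(kps_by_scale[s_fine]):
            d = [np.hypot(px - qx, py - qy) for qx, qy in coarse]
            j = int(np.argmin(d))
            if d[j] <= radius_factor * s_coarse:
                parents[(s_fine, i)] = (s_coarse, j)
    return parents

# Two well-separated structures: each fine-scale keypoint attaches to
# its own coarse-scale parent, so the two groups stay segregated.
parents = link_scales({1.0: [(0, 0), (10, 10)], 2.0: [(0, 1), (10, 9)]})
```

Tracking a tree's root position over frames then gives the movement of the whole grouped structure, which is how segregation and tracking connect in the abstract.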
Visual navigation for the blind: path and obstacle detection
Publication: José, João; Rodrigues, J. M. F.; du Buf, J. M. H.
We present a real-time vision system to assist blind and visually impaired persons. This system complements the white cane, and it can be used both indoor and outdoor. It detects borders of paths and corridors, obstacles within the borders, and it provides guidance for centering and obstacle avoidance. Typical obstacles are backpacks, trash cans, trees, light poles, holes, branches, stones and other objects at a distance of 2 to 5 meters from the camera position. Walkable paths are detected by edges and an adapted Hough transform. Obstacles are detected by a combination of three algorithms: zero crossings of derivatives, histograms of binary edges, and Laws’ texture masks.
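One of the three obstacle cues, Laws' texture energy, can be sketched as a minimal NumPy illustration using the standard 5-tap kernels; the paper's exact mask selection, window sizes and thresholds are not reproduced here.

```python
import numpy as np

# 1D Laws kernels; 2D texture masks are outer products such as L5E5.
L5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])     # level (local average)
E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])   # edge
S5 = np.array([-1.0, 0.0, 2.0, 0.0, -1.0])   # spot

def filter2d_same(img, mask):
    """Direct 'same'-size 2D correlation with edge padding (small images)."""
    k = mask.shape[0]
    p = np.pad(img, k // 2, mode="edge")
    out = np.empty(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(p[y:y + k, x:x + k] * mask)
    return out

def texture_energy(img, k1, k2):
    """Per-pixel absolute response to the Laws mask k1 k2^T."""
    return np.abs(filter2d_same(img, np.outer(k1, k2)))

# A vertical intensity step (e.g. a path/obstacle boundary): the L5E5
# energy peaks along the step and vanishes on the homogeneous sides.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
e = texture_energy(img, L5, E5)
```

In the system described above, such a texture cue would be combined with the zero-crossing and binary-edge-histogram cues before an obstacle is declared inside the detected path borders.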

Funders

Funding agency

Fundação para a Ciência e a Tecnologia

Funding programme

3599-PPCDT

Funding award number

RIPD/ADA/109690/2009
