Farrajota, Miguel

Search Results

Now showing 1 - 5 of 5
  • The SmartVision navigation prototype for the blind
    Publication . du Buf, J. M. H.; Rodrigues, J. M. F.; Paredes, Hugo; Barroso, João; Farrajota, Miguel; José, João; Teixeira, Victor; Saleiro, Mário
    The goal of the project "SmartVision: active vision for the blind" is to develop a small and portable but intelligent and reliable system for assisting the blind and visually impaired while navigating autonomously, both outdoors and indoors. In this paper we present an overview of the prototype, design issues, and its different modules, which integrate a GIS with GPS, Wi-Fi, RFID tags and computer vision. The prototype addresses global navigation by following known landmarks, local navigation with path tracking and obstacle avoidance, and object recognition. The system does not replace the white cane, but extends it beyond its reach. The user-friendly interface consists of a 4-button hand-held box, a vibration actuator in the handle of the cane, and speech synthesis. A future version may also employ active RFID tags for marking navigation landmarks, and speech recognition may complement speech synthesis.
  • A biological and real-time framework for hand gestures and head poses
    Publication . Saleiro, Mário; Farrajota, Miguel; Terzic, Kasim; Rodrigues, J. M. F.; du Buf, J. M. H.
    Human-robot interaction is an interdisciplinary research area that aims at the development of social robots. Since social robots are expected to interact with humans and understand their behavior through gestures and body movements, cognitive psychology and robot technology must be integrated. In this paper we present a biological and real-time framework for detecting and tracking hands and heads. This framework is based on keypoints extracted by means of cortical V1 end-stopped cells. Detected keypoints and the cells’ responses are used to classify the junction type. Through the combination of annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated and tracked over time. By using hand templates with lines and edges at only a few scales, a hand’s gestures can be recognized. Head tracking and pose detection are also implemented, which can be integrated with detection of facial expressions in the future. Through combinations of head poses and hand gestures, a large number of commands can be given to a robot. (An illustrative sketch of the keypoint-grouping and template-matching idea appears after this list.)
  • Multi-scale cortical keypoints for realtime hand tracking and gesture recognition
    Publication . Farrajota, Miguel; Saleiro, Mário; Terzic, Kasim; Rodrigues, J. M. F.; du Buf, J. M. H.
    Human-robot interaction is an interdisciplinary research area which aims at integrating human factors, cognitive psychology and robot technology. The ultimate goal is the development of social robots. These robots are expected to work in human environments, and to understand the behavior of persons through gestures and body movements. In this paper we present a biological and realtime framework for detecting and tracking hands. This framework is based on keypoints extracted from cortical V1 end-stopped cells. Detected keypoints and the cells’ responses are used to classify the junction type. By combining annotated keypoints in a hierarchical, multi-scale tree structure, moving and deformable hands can be segregated, their movements can be obtained, and they can be tracked over time. By using hand templates with keypoints at only two scales, a hand’s gestures can be recognized.
  • The SmartVision Navigation Prototype for Blind Users
    Publication . du Buf, J. M. H.; Barroso, João; Rodrigues, J. M. F.; Paredes, Hugo; Farrajota, Miguel; Fernandes, Hugo; José, João; Teixeira, Victor; Saleiro, Mário
    The goal of the Portuguese project "SmartVision: active vision for the blind" is to develop a small, portable and cheap yet intelligent and reliable system for assisting the blind and visually impaired while navigating autonomously, both indoors and outdoors. In this article we present an overview of the prototype, design issues, and its different modules, which integrate GPS and Wi-Fi localisation with a GIS, passive RFID tags, and computer vision. The prototype addresses global navigation for reaching a given destination, by following known landmarks stored in the GIS in combination with path optimisation, and local navigation with path and obstacle detection just beyond the reach of the white cane. The system does not replace the white cane but complements it, in order to alert the user to looming hazards. In addition, computer vision is used to identify objects on shelves, for example in a pantry or refrigerator. The user-friendly interface consists of a four-button hand-held box, a vibration actuator in the handle of the white cane, and speech synthesis. In the near future, passive RFID tags will be complemented by active tags for marking navigation landmarks, and speech recognition may complement or substitute the vibration actuator. (A hypothetical sketch of how such modules could be combined appears after this list.)
  • Biological models for active vision: towards a unified architecture
    Publication . Terzic, Kasim; Lobato, D.; Saleiro, Mário; Martins, Jaime; Farrajota, Miguel; Rodrigues, J. M. F.; du Buf, J. M. H.
    Building a general-purpose, real-time active vision system completely based on biological models is a great challenge. We integrate a number of biologically plausible algorithms, which address different aspects of vision such as edge and keypoint detection, feature extraction, optical flow and disparity, shape detection, object recognition and scene modelling, into a complete system. We present some of the experiments from our ongoing work, where our system leverages a combination of algorithms to solve complex tasks.
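
The hand-gesture papers above group multi-scale cortical keypoints into a hierarchical tree and match hand templates at a few scales. The Python sketch below only illustrates that general idea and is not the authors' implementation: the keypoint detector is omitted, and the grouping radius, data layout and scoring rule are all assumptions.

```python
import numpy as np

def group_keypoints(keypoints, radius=20.0):
    """Group keypoints across scales, coarse to fine.

    keypoints: dict mapping scale -> list of (x, y) positions.
    Keypoints at the coarsest (largest) scale seed clusters; finer-scale
    keypoints within `radius` of a cluster root are attached to it, as a
    simple two-level stand-in for the hierarchical multi-scale tree.
    """
    scales = sorted(keypoints, reverse=True)          # coarsest scale first
    clusters = [{"root": np.asarray(kp, float), "children": []}
                for kp in keypoints[scales[0]]]
    for s in scales[1:]:
        for kp in keypoints[s]:
            kp = np.asarray(kp, float)
            dists = [np.linalg.norm(kp - c["root"]) for c in clusters]
            if dists and min(dists) < radius:
                clusters[int(np.argmin(dists))]["children"].append(kp)
    return clusters


def classify_gesture(cluster, templates):
    """Score hand templates against one cluster's keypoint layout.

    templates: dict name -> list of (x, y) offsets relative to the hand
    centre. The score is a crude mean nearest-neighbour distance between
    template points and observed keypoints (lower is better).
    """
    pts = (np.vstack([cluster["root"]] + cluster["children"])
           if cluster["children"] else cluster["root"][None, :])
    offsets = pts - pts.mean(axis=0)
    scores = {}
    for name, tmpl in templates.items():
        d = [np.min(np.linalg.norm(offsets - np.asarray(t, float), axis=1))
             for t in tmpl]
        scores[name] = float(np.mean(d))
    return min(scores, key=scores.get), scores
```

Calling `classify_gesture(group_keypoints(kps)[0], templates)` on hypothetical keypoint and template dictionaries returns the best-scoring gesture name; a real system would additionally use the end-stopped cell responses and junction labels described in the abstracts.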
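The SmartVision abstracts describe a prototype that fuses GPS/Wi-Fi localisation, a GIS with known landmarks, RFID tags and computer vision, and feeds back to the user through a vibration actuator and speech synthesis. The outline below is a hypothetical Python sketch of how such modules might be wired together; every class, method and landmark name is invented for illustration and none of it is the project's code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """One fused sensing step: rough position plus local hazards."""
    position: tuple                # (lat, lon) from GPS / Wi-Fi localisation
    rfid_landmark: Optional[str]   # id of a nearby RFID tag, if any
    obstacle_ahead: bool           # flag from the computer-vision module

class Navigator:
    """Toy global/local navigation loop in the spirit of the prototype.

    The GIS is reduced to an ordered list of landmark ids on the route;
    local navigation is reduced to a single obstacle flag. Each step
    returns the feedback channel (vibration actuator or speech synthesis)
    and a message.
    """
    def __init__(self, route_landmarks):
        self.route = list(route_landmarks)   # landmarks still to visit

    def step(self, obs: Observation):
        # Local navigation: warn about hazards just beyond the cane.
        if obs.obstacle_ahead:
            return ("vibrate", "obstacle ahead")
        # Global navigation: follow known landmarks toward the destination.
        if obs.rfid_landmark and self.route and obs.rfid_landmark == self.route[0]:
            self.route.pop(0)
            if not self.route:
                return ("speak", "destination reached")
            return ("speak", "next landmark: " + self.route[0])
        return ("none", "")

# Example run: an obstacle, then the two route landmarks are detected.
nav = Navigator(["tag-entrance", "tag-corridor"])
for obs in (Observation((37.02, -7.93), None, True),
            Observation((37.02, -7.93), "tag-entrance", False),
            Observation((37.02, -7.93), "tag-corridor", False)):
    print(nav.step(obs))
```

Running the example prints a vibration warning for the obstacle followed by two spoken updates, ending with "destination reached".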