
Things we have done


Captar-Libras Project

[Project images: captar-dataset-images, captar-libras-overview]

Communication between deaf patients and doctors who are not fluent in sign language requires a human interpreter to translate the conversation. Captar-Libras is a research and development project that seeks to facilitate communication between deaf people and health professionals without the need for a human interpreter. The project employs Computer Vision, Artificial Intelligence, and Human-Computer Interaction techniques to translate communication in both directions, with a focus on Brazilian Sign Language (Libras).
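The page does not detail the translation pipeline. As an illustration only, one common ingredient of sign-to-text systems is matching extracted hand-landmark features against reference templates for known signs; the sketch below shows that idea with a nearest-template classifier. All sign labels, feature values, and function names here are hypothetical, not part of the Captar-Libras project.

```python
import math

def classify_sign(landmarks, templates):
    """Return the sign label whose template is closest to the input.

    landmarks: flat list of floats (e.g. normalized joint coordinates).
    templates: dict mapping a sign label to its reference feature vector.
    Uses plain Euclidean distance between feature vectors.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(landmarks, templates[label]))

# Hypothetical toy templates for two Libras signs (not real data).
templates = {
    "OI": [0.1, 0.9, 0.2],
    "OBRIGADO": [0.8, 0.1, 0.7],
}

print(classify_sign([0.15, 0.85, 0.25], templates))  # nearest template: "OI"
```

A real system would extract the feature vectors from video (e.g. with a pose or hand-tracking model) and use a learned sequence classifier rather than static templates, but the matching step has the same shape.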

More

Contact
  • UFMG
  • +55 (31) 3409-5856
  • verlab (at) dcc (dot) ufmg (dot) br