

Three-dimensional Mapping with Augmented Navigation Cost

[Figure: 3D augmented map of the environment]

Unlike most indoor applications, where surfaces are usually human-made, flat, and structured, outdoor environments can be unpredictable in the type and condition of their travel surfaces, such as traction characteristics and inclination. Attaining full autonomy outdoors therefore requires a mobile ground robot to perform the fundamental localization and mapping tasks in unfamiliar environments, with the added challenge of unknown terrain conditions.

In this work we propose a multimodal representation of unknown terrain based on predicting speed-invariant inertial signals from LiDAR data. Our methodology trains a Convolutional Neural Network (CNN) on recorded LiDAR and IMU data, learning to predict a navigation cost for previously unseen terrain patches.
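To make the idea of a LiDAR-to-cost predictor concrete, the sketch below shows one way such a model could be set up in PyTorch: a small CNN maps a rasterized LiDAR terrain patch to a scalar navigation cost regressed against an IMU-derived label. The patch size, channel layout, network architecture, and all names are illustrative assumptions, not the model actually used in this project.

```python
# A minimal sketch, assuming LiDAR data is rasterized into fixed-size terrain
# patches (here, 2-channel 32x32 grids, e.g., elevation and intensity) and that
# the IMU-derived navigation cost is a single scalar per patch.
# All names, shapes, and layer sizes are illustrative, not the authors' model.
import torch
import torch.nn as nn

class TerrainCostCNN(nn.Module):
    """Regresses a scalar navigation cost from a rasterized LiDAR patch."""
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                     # predicted navigation cost
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(patch))

# Training step sketch: in practice the target cost would come from
# speed-invariant statistics of the IMU signal recorded while the robot
# traversed each patch.
model = TerrainCostCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.randn(8, 2, 32, 32)   # placeholder LiDAR patches
costs = torch.rand(8, 1)              # placeholder IMU-derived cost labels

pred = model(patches)
loss = loss_fn(pred, costs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At planning time, the trained network could then be queried on patches the robot has not yet traversed, augmenting the 3D map with a per-cell navigation cost.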

