
Deep Learning Methods on 3D-Data for Autonomous Driving

EasyChair Preprint 3665

10 pages
Date: June 22, 2020

Abstract

Computer vision tasks such as semantic instance segmentation play a key role in recent technological applications such as autonomous driving, robotics, and augmented/virtual reality. With the aid of artificial intelligence, object classification and instance segmentation have become more approachable tasks compared to the former classical methods. Over the past decade, deep learning (DL) architectures such as the R-CNN family and the RPN were introduced to address these tasks on 2D data representations. More recently, following the availability of sensors that can capture 3D information, new DL architectures built on RPN and R-CNN backbones were developed to work on the different 3D data representations and address their challenges. These challenges are mostly set by the nature of the 3D data obtained from different kinds of sensors, such as LiDAR scanners and stereo cameras, which are the sensors most commonly deployed in the autonomous driving field for acquiring 3D information; point clouds and RGB-D images, respectively, are the 3D data representations they produce. This paper surveys state-of-the-art DL approaches that directly process 3D data representations and perform object detection and instance segmentation tasks. The DL architectures discussed in this work are designed to process point cloud data directly, as autonomous driving relies mostly on LiDAR scanners for 3D data acquisition.

Keyphrases: 3D data, artificial intelligence, deep learning, instance segmentation, object detection
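
The surveyed architectures consume raw point clouds directly rather than voxelized or projected views. As a rough illustration of that idea (a minimal sketch, not taken from the paper), the following PyTorch snippet shows a PointNet-style classifier: a shared per-point MLP followed by a symmetric max-pooling step that makes the global feature invariant to point ordering. The class name, layer widths, and the num_classes value are illustrative assumptions.

# Minimal sketch (illustrative, not from the paper) of processing a raw
# point cloud directly: a shared per-point MLP plus symmetric max-pooling.
import torch
import torch.nn as nn

class PointFeatureExtractor(nn.Module):
    def __init__(self, num_classes=4):  # num_classes is an illustrative value
        super().__init__()
        # Shared MLP applied independently to every point (Conv1d, kernel size 1)
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        # Classifier head on the pooled global feature
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):
        # points: (batch, 3, num_points) raw xyz coordinates, e.g. one LiDAR sweep
        per_point = self.mlp(points)               # (batch, 1024, num_points)
        global_feat = per_point.max(dim=2).values  # order-invariant pooling over points
        return self.head(global_feat)              # (batch, num_classes) object scores

# Usage on a dummy cloud of 2048 points
model = PointFeatureExtractor()
scores = model(torch.randn(8, 3, 2048))
print(scores.shape)  # torch.Size([8, 4])

The max-pooling step is what lets the network ingest an unordered set of points without imposing a grid structure, which is the property the point-cloud architectures covered by the survey build on.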

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:3665,
  author       = {Ahmed Elkhateeb},
  title        = {Deep Learning Methods on 3D-Data for Autonomous Driving},
  howpublished = {EasyChair Preprint 3665},
  year         = {EasyChair, 2020}}