Braha Madar, M.Sc. Thesis

Geometric parameter extraction for 3D model reconstruction from a scanned scene using Deep Learning methods

A wide variety of fields and applications, including industrial manufacturing, human-machine interfaces, and augmented reality, depend on the analysis of a real-world object or environment. This information is collected by 3D scanning devices and consists of a point cloud sampled on the surface of the objects in the scene. The point cloud must be processed in order to perform different analyses, such as registration, classification, segmentation, or modelling.

Currently, most Deep Learning (DL) techniques in the field of computer vision are developed for 2D image data. Fortunately, 3D scanning devices are becoming more affordable and reliable, making a large amount of 3D data available. As a result, the interest of the 3D computer vision community in investigating DL architectures for 3D data has grown in recent years.

This research proposes to utilize DL techniques to reconstruct a 3D computerized model from a scanned scene. One of the major advantages of neural networks is their ability to generalize by learning from data examples. Thus, the first step of this research is to build a synthetic dataset used to train our network.
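As a rough illustration of what such a synthetic dataset could look like, the Python sketch below samples noisy point clouds on the surface of a parameterized cylinder and pairs each cloud with its generating parameters. The choice of primitive, the parameter ranges, and all function names are assumptions made for this example, not the actual data generation pipeline of the thesis.

    import numpy as np

    def sample_cylinder(radius, height, n_points=1024, noise_std=0.01):
        # Sample n_points on the lateral surface of a cylinder parameterized
        # by (radius, height); Gaussian noise mimics scanner measurement error.
        theta = np.random.uniform(0.0, 2.0 * np.pi, n_points)
        z = np.random.uniform(0.0, height, n_points)
        points = np.stack([radius * np.cos(theta),
                           radius * np.sin(theta),
                           z], axis=1)
        points += np.random.normal(scale=noise_std, size=points.shape)
        return points.astype(np.float32)

    def build_dataset(n_samples=10000):
        # Create (point cloud, parameter vector) training pairs.
        clouds, params = [], []
        for _ in range(n_samples):
            radius = np.random.uniform(0.1, 1.0)
            height = np.random.uniform(0.5, 2.0)
            clouds.append(sample_cylinder(radius, height))
            params.append([radius, height])
        return np.stack(clouds), np.array(params, dtype=np.float32)

Each pair produced this way provides a ground-truth parameter vector to regress against, which is what makes supervised training on synthetic data possible.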

The second step is to utilize a neural network that directly consumes point clouds to predict the geometric parameters of the 3D model. We propose to compare two different architectures for this task. The first network directly predicts the desired parameters in a supervised manner. The second consists of an autoencoder that learns a feature representation of the 3D model in an unsupervised manner, followed by a Multi-Layer Perceptron (MLP) that predicts the parameters from the learned features.
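As a minimal sketch of the first, supervised architecture, the PyTorch code below implements a PointNet-style regressor: a shared per-point MLP, a symmetric max-pooling operation (so the prediction is invariant to point ordering), and a fully connected head that outputs the geometric parameters. The layer sizes, the number of predicted parameters, and the class name are assumptions chosen for illustration, not the exact networks used in the thesis.

    import torch
    import torch.nn as nn

    class PointParamRegressor(nn.Module):
        # Shared per-point MLP -> max-pooled global feature -> parameter head.
        def __init__(self, n_params=2):
            super().__init__()
            self.point_mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 1024, 1), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Linear(1024, 256), nn.ReLU(),
                nn.Linear(256, n_params),
            )

        def forward(self, points):
            # points: (batch, n_points, 3) -> (batch, 3, n_points) for Conv1d
            x = self.point_mlp(points.transpose(1, 2))
            x = torch.max(x, dim=2).values  # order-invariant global feature
            return self.head(x)

    # Hypothetical training step on random stand-in data.
    model = PointParamRegressor(n_params=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    clouds, targets = torch.randn(8, 1024, 3), torch.rand(8, 2)
    loss = loss_fn(model(clouds), targets)
    loss.backward()
    optimizer.step()

The second, unsupervised variant could be obtained by replacing the parameter head with a decoder that reconstructs the input cloud (trained, for example, with a Chamfer distance) and then training a small MLP on the resulting latent feature vector to predict the parameters; that decoder is not shown here.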