Proposal - Project Model S

Summary: We are parallelizing Variational Shape Approximation (VSA), a mesh simplification algorithm, to efficiently process large input meshes obtained from 3D scanners.

Background

Using 3D scanning to measure existing objects and reproduce drawings or models of them has long been an important topic.

In the world of design, there has been an increasing trend of using 3D scanning to facilitate modelling. One classic example dates back to the 1990s, when Frank Gehry had Rick Smith digitize his physical model of the Walt Disney Concert Hall, which was later used for computational modelling. This application of 3D scanning is intriguing and promising for designers. However, designers are often frustrated by the fact that reconstructed meshes typically consist of too many faces, which makes them unfriendly for remodelling.

Therefore, an efficient algorithm is needed to simplify large input meshes while preserving most of their geometric features. After thorough and careful research, we decided to use Variational Shape Approximation (VSA) as the simplification algorithm.

VSA has the following advantages when simplifying a mesh:

  1. It outputs a polygon mesh, which is friendly to human editors.
  2. It adheres to the geometric structure of the original object.
  3. It enables interactive remeshing: users can point to parts of the original mesh to add a face to the simplified result.

The current problems with VSA are:

  1. There is no working open-source implementation of the complete VSA algorithm (as presented in Cohen-Steiner et al.’s paper, including the meshing stage).
  2. As has been pointed out, VSA is slower than greedy mesh decimation algorithms because it involves multiple iterations, which opens opportunities to parallelize the algorithm using techniques we learn in 15-418.

Details of this research can be found in the More Background section.

Challenges

The major part of this algorithm to be parallelized is the Lloyd iteration, which is quite similar to the K-Means clustering algorithm widely used in machine learning. Although parallelizing K-Means is a well-studied topic, the algorithm we are using has some major differences from K-Means that make our project quite challenging.

  1. While K-Means has the nice data-parallel property that each distance computation can be performed independently, VSA relies on a region-growing procedure driven by a global priority queue. Mapping this priority queue onto CUDA will be very challenging (a sequential sketch of this step follows this list).
  2. In Scotty3D, meshes are stored in a pointer-based halfedge data structure, which is inefficient on CUDA. Implementing an efficient halfedge-like data structure for CUDA is also a challenge (a possible flat layout is sketched below).
  3. We are dealing with huge input sizes (potentially billions of faces in a single mesh). Memory efficiency is therefore a challenge that could directly limit our input size.
  4. (Optional) We will also consider porting our code to other modelling software platforms. Integrating CUDA into those platforms will be a challenge.
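To make challenge 1 concrete, below is a simplified sequential sketch of the VSA partition step: regions are grown face by face through a single global priority queue ordered by the L^2,1 distortion (area-weighted squared normal deviation), which is exactly the serial dependency we need to work around on the GPU. The mesh representation and helper names here are our own simplifications, not Scotty3D or reference code.

```cpp
// Sequential sketch of the VSA partition ("flooding") step, simplified.
#include <functional>
#include <queue>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {                           // flat, index-based mesh description (illustrative)
    std::vector<Vec3>  normal;          // unit normal per face
    std::vector<float> area;            // area per face
    std::vector<std::vector<int>> adj;  // face adjacency (shared-edge neighbors)
};

struct Proxy { Vec3 normal; };          // planar proxy: only the normal matters for L^2,1

// L^2,1 distortion between a face and a proxy: area-weighted squared normal difference.
static float l21_distortion(const Mesh& m, int f, const Proxy& p) {
    float dx = m.normal[f].x - p.normal.x;
    float dy = m.normal[f].y - p.normal.y;
    float dz = m.normal[f].z - p.normal.z;
    return m.area[f] * (dx * dx + dy * dy + dz * dz);
}

struct Entry {
    float cost; int face; int proxy;
    bool operator>(const Entry& o) const { return cost > o.cost; }
};

// Grow regions from seed faces using a single global priority queue, conquering
// faces in order of increasing distortion. This queue serializes the partition
// step and is the main obstacle to a direct CUDA port.
void partition(const Mesh& m, const std::vector<int>& seeds,
               const std::vector<Proxy>& proxies, std::vector<int>& label) {
    label.assign(m.normal.size(), -1);
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

    for (int k = 0; k < (int)seeds.size(); ++k) {       // one seed face per proxy
        label[seeds[k]] = k;
        for (int nb : m.adj[seeds[k]])
            pq.push({l21_distortion(m, nb, proxies[k]), nb, k});
    }
    while (!pq.empty()) {
        Entry e = pq.top(); pq.pop();
        if (label[e.face] != -1) continue;              // already assigned
        label[e.face] = e.proxy;
        for (int nb : m.adj[e.face])
            if (label[nb] == -1)
                pq.push({l21_distortion(m, nb, proxies[e.proxy]), nb, e.proxy});
    }
}
```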
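For challenge 2, one direction we are considering is a flat, index-based halfedge layout stored as plain arrays, which can be copied to the device as contiguous buffers and traversed with integer indices instead of host pointers. This is only a sketch under those assumptions; the field names and upload helper are illustrative, not Scotty3D's API.

```cpp
// Index-based, struct-of-arrays halfedge layout that can be uploaded to the GPU.
#include <cuda_runtime.h>
#include <cstdint>
#include <vector>

struct HalfedgeSoA {
    // For halfedge h: twin[h] and next[h] are halfedge indices (twin == -1 on a
    // boundary), vert[h] and face[h] are its source vertex and incident face.
    std::vector<int32_t> twin, next, vert, face;
    std::vector<int32_t> face_halfedge;   // one representative halfedge per face
};

struct DeviceHalfedge {                   // device-side view: raw index arrays
    int32_t *twin, *next, *vert, *face, *face_halfedge;
    int32_t  num_halfedges, num_faces;
};

// Upload connectivity as contiguous buffers; device traversal then follows
// integer indices rather than chasing host pointers.
DeviceHalfedge upload(const HalfedgeSoA& h) {
    DeviceHalfedge d;
    d.num_halfedges = (int32_t)h.twin.size();
    d.num_faces     = (int32_t)h.face_halfedge.size();
    auto copy = [](const std::vector<int32_t>& src) -> int32_t* {
        int32_t* p = nullptr;
        cudaMalloc(&p, src.size() * sizeof(int32_t));
        cudaMemcpy(p, src.data(), src.size() * sizeof(int32_t), cudaMemcpyHostToDevice);
        return p;
    };
    d.twin = copy(h.twin);  d.next = copy(h.next);
    d.vert = copy(h.vert);  d.face = copy(h.face);
    d.face_halfedge = copy(h.face_halfedge);
    return d;
}

// Example device-side traversal: the k-th neighbor of triangle face f,
// the same twin/next walk a pointer-based halfedge mesh would perform.
__device__ inline int32_t neighbor_face(const DeviceHalfedge& d, int32_t f, int k) {
    int32_t h = d.face_halfedge[f];
    for (int i = 0; i < k; ++i) h = d.next[h];
    int32_t t = d.twin[h];
    return t >= 0 ? d.face[t] : -1;       // -1 on boundary
}
```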

Resources

Goals and Deliverables

The goal is to achieve a good speedup of VSA without degrading its output quality, and to enable processing of very large data sets.

Ideally, given large inputs, our CUDA VSA should be able to beat CPU-based greedy mesh decimation algorithms. Because the two algorithms perform different amounts of computation, VSA may not be able to compete with the greedy method if both are implemented in CUDA.

We are using C++ with Scotty3D as our main development platform. We will implement a CUDA-based parallel VSA and integrate it into Scotty3D. The CUDA VSA code will also be extracted into a standalone program that can be integrated into other modelling platforms.
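As a rough illustration of that standalone interface, the header sketch below assumes a plain indexed-triangle exchange format so that other platforms can call the CUDA VSA without depending on Scotty3D. All names (vsa::TriMesh, vsa::Options, vsa::simplify) are tentative placeholders, not an existing API.

```cpp
// Tentative standalone interface sketch for the CUDA VSA library.
#pragma once
#include <array>
#include <vector>

namespace vsa {

struct TriMesh {
    std::vector<std::array<float, 3>> positions;  // vertex positions
    std::vector<std::array<int, 3>>   triangles;  // vertex indices per face
};

struct Options {
    int  num_proxies    = 100;   // target number of planar proxies (output regions)
    int  max_iterations = 20;    // Lloyd iterations to run
    bool use_gpu        = true;  // fall back to the sequential path if false
};

// Simplify `input` with VSA and return the approximating polygon mesh
// (triangulated here for simplicity). Declaration only; the definition
// would live in the CUDA library.
TriMesh simplify(const TriMesh& input, const Options& opts);

} // namespace vsa
```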

Schedule

Nov 6 2017 - Environment set up
Nov 13 2017 - Working sequential algorithm on Scotty3D
Nov 20 2017 - Preliminary CUDA Integration
Nov 27 2017 - Support for Interactive Simplification
Dec 4 2017 - Parallel VSA finished
Dec 11 2017 - Prepare for presentation