What is it?

Training machine learning models requires large amounts of labelled data, which takes a long time to create manually.

For this project we wanted to try out Unity’s Perception package, which lets you generate “synthetic”, computer-generated training data automatically. Instead of hand-labelling data that we collect ourselves, the package uses 3D models inside a video game engine to create and label the data in a single step.


Why we built it

For this project we wanted to create a mobile application that would allow you to change the paint colour of your vehicle. To achieve this, we needed to train a machine learning model to produce a per-pixel mask covering only the body panels of a vehicle, disregarding the wheels, glass, trim, etc. There are already models that can mask an entire vehicle, but we wanted to mask only parts of it.
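To make the goal concrete, here is a minimal sketch of how a per-pixel body-panel mask could drive the paint-colour change. The function name, the blend parameter, and the toy 2x2 image are all illustrative assumptions, not part of the actual application; it simply shows that a boolean mask lets you recolour panel pixels while leaving wheels, glass and trim untouched.

```python
import numpy as np

def recolour_body_panels(image, panel_mask, new_colour, blend=0.6):
    """Tint only the masked body-panel pixels of an RGB image.

    image:      H x W x 3 uint8 array
    panel_mask: H x W boolean array (True where the model predicts body panel)
    new_colour: (r, g, b) target paint colour
    blend:      how strongly the new colour replaces the original (hypothetical knob)
    """
    out = image.astype(np.float32)
    colour = np.array(new_colour, dtype=np.float32)
    # Blend the target colour into masked pixels only; unmasked pixels
    # (wheels, glass, trim) are left untouched.
    out[panel_mask] = (1 - blend) * out[panel_mask] + blend * colour
    return out.clip(0, 255).astype(np.uint8)

# Toy example: a 2x2 grey image where only the top-left pixel is a body panel.
img = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
tinted = recolour_body_panels(img, mask, (255, 0, 0), blend=1.0)
```

In the real application the mask would come from the trained segmentation model rather than being written by hand, but the recolouring step itself stays this simple.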


What are the possible applications?

This project allowed us to create 4,000 labelled images for a semantic segmentation task in a matter of hours with one person. Doing this by hand would have taken days or weeks depending on how many people were working on the task.

In practice, the package’s authors still recommend mixing in hand-labelled real-world data as well; even so, this approach greatly reduces the time and money required to create datasets capable of training machine learning models.
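One simple way to do that mixing is to combine the two pools into a single shuffled training list. The sketch below is hypothetical: the 20% real-data fraction, the function name, and the file names are illustrative choices, not figures from the project or a Unity recommendation.

```python
import random

def mixed_training_set(synthetic, real, real_fraction=0.2, seed=0):
    """Combine synthetic and hand-labelled real examples into one shuffled list.

    real_fraction controls roughly what share of the final set is real data;
    the 0.2 default is an illustrative assumption.
    """
    rng = random.Random(seed)
    n_real = len(real)
    # Cap the synthetic portion so real examples make up ~real_fraction.
    n_synth = min(len(synthetic), int(n_real * (1 - real_fraction) / real_fraction))
    combined = list(real) + rng.sample(list(synthetic), n_synth)
    rng.shuffle(combined)
    return combined

# e.g. 4,000 generated images plus 100 hand-labelled real ones.
synthetic = [f"synthetic_{i:04d}.png" for i in range(4000)]
real = [f"real_{i:03d}.png" for i in range(100)]
train = mixed_training_set(synthetic, real)
```

The right fraction depends on how closely the synthetic renders match real-world imagery, which is exactly the kind of thing the dataset tweaking mentioned below is meant to explore.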

What is its status?

Currently this project is shelved; however, we will revisit it periodically to tweak our dataset and continue training the model until the results are good enough to turn into a prototype application.