What is it?

For this project we set out to build a way to generate a large set of synthetic ("fake") machine learning training data with minimal input.

Typically, when you want to undertake machine learning, you need to obtain images, label what each image contains, and provide co-ordinates inside each image showing where the object is. This becomes a huge amount of work once you get into the thousands of images, so what if there was an easier way?

This project does exactly that. You provide a 3D scanned object, and the application generates synthetic environments for the object to sit in, renders images of each scene, and automatically labels every image with the object's co-ordinates.
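To make the output concrete, here is a minimal sketch of what the auto-generated labels could look like. This is illustrative only: the function names, image size, and the COCO-style `[x, y, width, height]` bounding-box format are our assumptions, not the project's actual schema (the real pipeline renders scenes rather than drawing random boxes).

```python
import json
import random

def make_annotation(image_id, object_label, bbox):
    """Build one annotation record; bbox is [x, y, width, height] in pixels."""
    return {"image_id": image_id, "category": object_label, "bbox": bbox}

def generate_dataset(object_label, num_images, image_size=(640, 480), seed=0):
    """Simulate placing the scanned object at a random position and scale
    in each rendered image, and record the resulting bounding box."""
    rng = random.Random(seed)
    annotations = []
    for image_id in range(num_images):
        w = rng.randint(50, 200)                  # object width in pixels
        h = rng.randint(50, 200)                  # object height in pixels
        x = rng.randint(0, image_size[0] - w)     # keep box inside the frame
        y = rng.randint(0, image_size[1] - h)
        annotations.append(make_annotation(image_id, object_label, [x, y, w, h]))
    return annotations

# One record per rendered image, ready to feed to a detection trainer.
dataset = generate_dataset("scanned_object", num_images=1000)
print(json.dumps(dataset[0]))
```

The point is that because the renderer places the object itself, the bounding box is known exactly at generation time, so no hand labelling is ever needed.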


Why we built it

Tracking our own objects with machine learning has always been an area we've wanted to explore; however, the biggest challenge has always been obtaining our own training data.

To build your own dataset, you not only need many images of the object, but you also need to capture it in different environments, under different lighting conditions, and so on. Taking these pictures and hand-labelling them is a very time-consuming process. With our application you could create the same amount of training data in a matter of minutes.


What are the possible applications?

The main purpose of this project was to solve a major challenge in building custom machine learning datasets: object detection. Using a similar technique, however, it could be extended to other machine learning tasks such as image classification or image segmentation.

What is its status?

This project was built as a prototype and further development has ceased. Unity has since released an official project called "Perception" that does exactly this.