Earlier this month, Apple announced that it would begin sharing research papers from its artificial intelligence and machine learning teams, the first time in recent memory the famously secretive company has opened its research to the public. Now, a few weeks later, Apple has published its first such paper, focused on the company's work in intelligent image recognition.
The paper, titled “Learning from Simulated and Unsupervised Images through Adversarial Training,” describes a method for training systems to decipher and understand digital images more effectively. Apple has already shipped basic capabilities in this area, such as the facial recognition feature found in the Photos app in iOS 10.
In the paper, Apple weighs the upsides and downsides of using real images compared to “synthetic,” or computer-generated, ones. Real images must be annotated, an “expensive and time-consuming task” that requires humans to individually label objects in a picture. On the flip side, computer-generated images are much easier to work with “because the annotations are automatically available.”
However, using synthetic images alone can lead to worse results, because “synthetic data is often not realistic enough”: a model trained only on rendered images may learn details specific to them and fail to generalize to real photos.
The paper's abstract lays out the approach: “In this paper, we propose Simulated+Unsupervised (S+U) learning, where the goal is to improve the realism of synthetic images from a simulator using unlabeled real data. The improved realism enables the training of better machine learning models on large datasets without any data collection or human annotation effort.” The authors add: “We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study.”
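To make the idea concrete, here is a minimal sketch of the loss structure the paper describes: a “refiner” network improves a synthetic image so a discriminator scores it as real, while a self-regularization term keeps the refined image close to the original so the simulator's automatic annotations stay valid. The function names (`refiner_loss`, `self_regularization`) and the weight `lam` are illustrative assumptions, not Apple's actual code.

```python
import numpy as np

def self_regularization(refined, synthetic):
    # L1 distance between the refined and original synthetic image:
    # penalizes the refiner for drifting too far from its input,
    # which preserves the automatically generated labels.
    return np.abs(refined - synthetic).mean()

def refiner_loss(d_score_on_refined, refined, synthetic, lam=0.5):
    # Adversarial term: the refiner wants the discriminator to score
    # refined images as "real" (score near 1), so it minimizes
    # -log D(R(x)); lam weights the self-regularization term.
    adv = -np.log(d_score_on_refined + 1e-8)
    return adv + lam * self_regularization(refined, synthetic)

# Toy example: a refined image that stays close to the synthetic one,
# with the discriminator fairly fooled (score 0.8).
synthetic = np.zeros((4, 4))
refined = synthetic + 0.1
loss = refiner_loss(0.8, refined, synthetic)
```

In a full training loop, this loss would be minimized by gradient descent on the refiner's parameters while the discriminator is trained in alternation to tell refined images from unlabeled real ones.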