Apple is known to lag behind when it comes to understanding how you use your device, and Siri in its current state clearly shows that. With iOS 10, however, Apple is moving forward with machine learning, differential privacy, and an improved Siri.

Today, Steven Levy at Backchannel published an interview with a few Apple execs that takes a behind-the-scenes look at Apple’s artificially intelligent assistant.

The interview begins with Siri’s voice recognition upgrade in 2014, when recognition was moved “to a neural-net based system” for users in the US, and notes that another upgrade is coming in iOS 10.

Initial reviews were ecstatic, but over the next few months and years, users became impatient with its shortcomings. All too often, it erroneously interpreted commands. Tweaks wouldn’t fix it.

So Apple moved Siri voice recognition to a neural-net based system for US users on that late July day (it went worldwide on August 15, 2014). Some of the previous techniques remained operational — if you’re keeping score at home, this includes “hidden Markov models” — but now the system leverages machine learning techniques, including deep neural networks (DNN), convolutional neural networks, long short-term memory units, gated recurrent units, and n-grams. (Glad you asked.) When users made the upgrade, Siri still looked the same, but now it was supercharged with deep learning.
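Levy’s piece doesn’t spell out how those networks fit together, so as a rough illustration only: a neural acoustic model of this kind maps frames of audio features to per-frame phoneme probabilities. Here is a minimal PyTorch sketch of that idea, with every name and dimension invented for illustration rather than taken from Apple:

```python
import torch
import torch.nn as nn

class ToyAcousticModel(nn.Module):
    """Toy LSTM acoustic model: maps frames of audio features
    (e.g. filterbank energies) to per-frame phoneme logits.
    All dimensions are illustrative, not Apple's."""
    def __init__(self, n_features=40, hidden=256, n_phonemes=48):
        super().__init__()
        # A bidirectional LSTM reads the sequence of audio frames
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_phonemes)

    def forward(self, frames):  # frames: (batch, time, n_features)
        out, _ = self.rnn(frames)
        return self.classifier(out)  # (batch, time, n_phonemes)

model = ToyAcousticModel()
# One second of audio at a 10 ms frame rate -> 100 frames of 40 features
frames = torch.randn(1, 100, 40)
print(model(frames).shape)  # torch.Size([1, 100, 48])
```

A decoder would then search those per-frame probabilities to produce the final transcript, which is where techniques like the hidden Markov models and n-grams the quote mentions still come into play.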

Levy notes that Apple never publicized the upgrade (until now), but the improvements at the time were dramatic.

“This was one of those things where the jump was so significant that you do the test again to make sure that somebody didn’t drop a decimal place,” says Eddy Cue, Apple’s senior vice president of internet software and services.

Apple wanted to make it clear that Siri isn’t lagging behind; rather, it’s a real competitor to Google Now and Microsoft’s Cortana.

As we sat down, they handed me a dense, two-page agenda listing machine-learning-imbued Apple products and services — ones already shipping or about to — that they would discuss.

The message: We’re already here. A player. Second to none.

But we do it our way.

Apple SVP Phil Schiller’s take on AI:

“We’ve been seeing over the last five years a growth of this inside Apple,” says Phil Schiller. “Our devices are getting so much smarter at a quicker rate, especially with our Apple design A series chips. The back ends are getting so much smarter, faster, and everything we do finds some reason to be connected. This enables more and more machine learning techniques, because there is so much stuff to learn, and it’s available to [us].”

Federighi says there isn’t one dedicated machine learning team at Apple; instead, the company spreads machine learning work across the teams that need it.

“We don’t have a single centralized organization that’s the Temple of ML in Apple,” says Craig Federighi. “We try to keep it close to teams that need to apply it to deliver the right user experience.”

How many people at Apple are working on machine learning? “A lot,” says Federighi after some prodding.

The Apple Pencil for iPad Pro is a good example.

One example of this is the Apple Pencil that works with the iPad Pro. In order for Apple to include its version of a high-tech stylus, it had to deal with the fact that when people wrote on the device, the bottom of their hand would invariably brush the touch screen, causing all sorts of digital havoc. Using a machine learning model for “palm rejection” enabled the screen sensor to detect the difference between a swipe, a touch, and a pencil input with a very high degree of accuracy. “If this doesn’t work rock solid, this is not a good piece of paper for me to write on anymore — and Pencil is not a good product,” says Federighi. If you love your Pencil, thank machine learning.
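The piece doesn’t say how the palm-rejection model works internally, but the framing is a classifier over touch features. Here is a toy scikit-learn sketch in which the features (contact area, pressure, velocity) and all of the training data are made up purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Invented per-contact features: [contact area (mm^2), pressure, velocity (mm/s)]
# Labels: 0 = palm (reject), 1 = finger touch, 2 = pencil tip
palms   = np.column_stack([rng.normal(400, 80, 200),
                           rng.normal(0.2, 0.05, 200),
                           rng.normal(5, 2, 200)])
fingers = np.column_stack([rng.normal(80, 20, 200),
                           rng.normal(0.5, 0.10, 200),
                           rng.normal(40, 10, 200)])
pencils = np.column_stack([rng.normal(3, 1, 200),
                           rng.normal(0.8, 0.10, 200),
                           rng.normal(60, 15, 200)])

X = np.vstack([palms, fingers, pencils])
y = np.repeat([0, 1, 2], 200)

clf = GradientBoostingClassifier().fit(X, y)

# A large, slow, low-pressure contact should be rejected as a palm
print(clf.predict([[350.0, 0.25, 6.0]]))  # -> [0]
```

The real model runs on-device against raw sensor data under tight latency constraints; the sketch is only meant to show the classification framing behind “palm rejection.”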

Lastly, Siri will gain further upgrades when iOS 10 is released this fall.

With iOS 10, scheduled for full release this fall, Siri’s voice becomes the last of the four components to be transformed by machine learning. Again, a deep neural network has replaced a previously licensed implementation. Essentially, Siri’s remarks come from a database of recordings collected in a voice center; each sentence is a stitched-together patchwork of those chunks. Machine learning, says Gruber, smooths them out and makes Siri sound more like an actual person.
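What Gruber is describing is unit-selection concatenative synthesis: each sentence is assembled from recorded chunks, chosen so the joins sound smooth. Here is a toy sketch of that selection step, with an invented unit database and pitch values; real systems search over far richer costs, and the machine learning Gruber mentions can learn those costs rather than hand-tuning them:

```python
# Toy unit selection: pick one recorded take of each word so that
# adjacent units join smoothly (minimal pitch mismatch at the boundary).
# The takes and their boundary pitch values (Hz) are invented.
units = {
    "hello": [{"take": "hello_a", "start_f0": 180, "end_f0": 170},
              {"take": "hello_b", "start_f0": 210, "end_f0": 200}],
    "world": [{"take": "world_a", "start_f0": 175, "end_f0": 160},
              {"take": "world_b", "start_f0": 205, "end_f0": 190}],
}

def select_units(words):
    """Greedily choose the take whose starting pitch best matches
    the ending pitch of the previously chosen take."""
    chosen = [units[words[0]][0]]
    for word in words[1:]:
        prev_end = chosen[-1]["end_f0"]
        best = min(units[word], key=lambda u: abs(u["start_f0"] - prev_end))
        chosen.append(best)
    return [u["take"] for u in chosen]

print(select_units(["hello", "world"]))  # ['hello_a', 'world_a']
```

Real unit-selection systems typically replace this greedy pass with a Viterbi-style search over the whole sentence; the smoothing Gruber credits to machine learning is what makes the stitched result sound like one continuous voice.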

The entire piece can be found here.
