When the iPhone 3GS first got Voice Control, it's fair to say it sucked. Badly. My view of it was so negative that by the time the iPhone 4 came out, I'd virtually forgotten it existed.
I used to try using it to call my wife, but because her name isn't a traditional English or American one, it could never figure it out. At one point I changed her name in my contacts list to "My Wife" just to make sure the call always went through.
One of the few Android features I've been green-eyed over is that its users get a great speech recognition system as standard. In complete contrast to my experience on iOS, I found myself preferring to speak text messages rather than type them on my N1's virtual keyboard. Android's native voice system simply trounces that of iOS.
But is all that set to change? Recent news suggests Apple has negotiated a deal with Nuance, a company that specializes in voice recognition. The deal reportedly involves, once again, the giant data center in North Carolina.
According to speculation, the center runs Nuance's hardware and software but is owned by Apple. This would, of course, let Apple steer clear of relying on third-party services for its own developments.
You may remember that, last year, Cupertino purchased Siri, a voice recognition app. Siri acts as a "personal assistant" and can handle much more complicated requests than "call My Wife!" It can search for local restaurants that serve specific foods, or find out what's playing at the local theater.
It’s now expected that this will become a standard feature of iOS 5, which will be unveiled at WWDC in June this year.
What do you, our readers, think? Does iOS need a more advanced speech recognition tool? Comment below.