Rumor: Apple Data Center For Speech Recognition Tech; Announcement At WWDC



Nuance, the company behind the Dragon Dictation app, is apparently set to be the subject of a new, and rather important, partnership with Apple that is expected to be announced at WWDC in early June, according to TechCrunch. In further detail, a large portion of Apple’s massive North Carolina data center will reportedly be used to host and support many of the services that the new speech recognition technology will rely on.

The report comes as a follow-up to one from last week which suggested that Nuance voice recognition technology would form a core part of the new iOS 5 operating system. The technology is reportedly already in use in the NC data center, at both a hardware and a software level, although details on what it is being used for remain sketchy.

The article suggests that Apple is establishing this technology partnership as a key part of their data center for a number of reasons:

“First, Apple will be able to process this voice information for iOS users faster. Second, it will prevent this data from going through third-party servers. And third, by running it on their own stack, Apple can build on top of the technology, and improve upon it as they see fit.”

It is unclear at this stage how the implementation of the technology fits in with Apple’s acquisition of Siri, the personal organizer application. It is worth speculating, however, that many of the features core to the Siri app could appear in the next version of iOS, which is expected to be previewed at WWDC, the same event at which the Nuance/Apple partnership is due to be announced.

Hearing The Voices…


  • Slphilips

    This somehow seems backward.
    Will the speech recognition technology employ the services? Or will various services employ speech recognition technology?
    And how could that technology even begin to take a substantial portion of the data center’s capacity? (Though perhaps many soon-to-be-unveiled services that USE speech recognition could.)
    Also, unless I’m missing something, wouldn’t the use of Nuance at the data center imply the use of Mac or Windows software? It seems to me that there are already “industrial strength” speech recognition technologies that Nuance couldn’t touch.
    I hope you can find more detail to clarify this. Right now it just doesn’t feel right, and I’ll stick with rumors that the Nuance connection is to improve SR on iOS – or even the Mac.

  • Slphilips

    Since I can’t edit, I’ll clarify that I would also presume that the data center would use an OS other than Mac or Windows.
    Thank you.