Apple's Siri May Soon Employ Neural Network Algorithms

According to Wired, Siri may soon undergo a long overdue overhaul that would boost the iOS digital assistant's accuracy in understanding human language. This would come via neural network algorithms and architecture, technology that is already being developed and utilized by many of the major players in the tech industry.

Neural networks may soon power Siri

Though they seemingly peaked in popularity thirty years ago, neural networks have enjoyed a renaissance of research in recent years. The term refers to a machine learning model inspired by biological central nervous systems, namely the brain, in which the network consists of interconnected neurons. Neural networks have proven more efficient than traditional rule-based architectures at tasks such as speech recognition and computer vision, which has caught the attention of some big names in Silicon Valley.
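As a rough illustration of the model described above (not Apple's or anyone else's actual system), here is a minimal sketch of a tiny neural network in Python. Each "neuron" computes a weighted sum of its inputs plus a bias and passes the result through an activation function; the weights below are hand-set for clarity rather than learned, and they wire the network to compute XOR, a function a single neuron cannot represent.

```python
import numpy as np

def step(z):
    """Threshold activation: a neuron fires (1) when its weighted input is positive."""
    return (z > 0).astype(float)

def layer(x, weights, biases):
    """One layer of neurons: each computes a weighted sum plus bias, then activates."""
    return step(x @ weights + biases)

# Hand-set weights for illustration: hidden neuron 1 acts as OR, hidden neuron 2 as AND.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output neuron combines them: OR but not AND, i.e. XOR.
W2 = np.array([[1.0],
               [-1.0]])
b2 = np.array([-0.5])

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
hidden = layer(inputs, W1, b1)
output = layer(hidden, W2, b2)
print(output.ravel())  # [0. 1. 1. 0.]
```

In a real speech recognizer the weights are not set by hand but learned from thousands of hours of audio, and the networks are far deeper and wider; the layered weighted-sum-plus-nonlinearity structure, however, is the same.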

Five years ago, a group of Microsoft researchers led by Peter Lee, who heads the software giant's research division, tested the neural network model on speech recognition. The results showed a 25% increase in accuracy. According to Lee, "We published those results, then the world changed."

Now, many companies are experimenting with neural networks. Microsoft uses them in its recently announced Skype Translator, software that will translate spoken languages in real time. Google has used the architecture to improve Android's speech recognition. IBM, too, is experimenting with the technology. Apple, however, has been conspicuously absent from the renewed interest in neural networks, or so it has seemed. Judging by recent activity, it may be changing its tune.

Apple has previously licensed speech recognition software from Nuance, a well-known company in the field. According to artificial intelligence researchers in the know, however, this is about to change. It now appears that Apple is rapidly building a speech recognition team. According to Abdel-rahman Mohamed, a postdoctoral researcher at the University of Toronto, "Apple is not hiring only in the managerial level, but hiring also people on the team-leading level and the researcher level... they're building a very strong team for speech recognition research." New hires include Alex Acero, a top manager at Microsoft who is now a senior director of Apple's Siri group, and Li Deng and Dong Yu, two speech recognition researchers at Microsoft. Apple has also poached one of Nuance's speech researchers, Gunnar Evermann, who is now a Siri manager. Arnab Ghoshal, a researcher from the University of Edinburgh, is yet another addition.

At this point there is no telling when Siri's upgrade will arrive, but Lee estimates it will take Apple about six months to start using neural nets.
