AT&T Opens Up Watson Speech-Recognition Software API to App Developers

In a blog post today, AT&T SVP for technology and network operations John Donovan officially announced that the API for Watson, the company's proprietary speech-recognition software that transcribes spoken words into text, is now open and available to app developers.

Donovan noted that the Watson API can be used for app development in seven different consumer-facing contexts:

• Web Search – Search from within an app with the power of your voice. Web Search is trained to recognize several million mobile queries.

• Business Search – Trained on tens of millions of local business entries, this context lets you transcribe your search query to let you find what's in the area, from donuts to doctors.

• Voicemail to Text – No need to scribble down a message: this context, trained on a massive set of data acquired from call centers, mobile applications, and the web, turns your voicemail into shareable text.

• SMS – Tuned to transcribe text, this context can deliver your spoken message as text through a messaging service of your choice.

• Question and Answer – Trained on over 10 million questions, this context accurately transcribes your question and returns the correct answer.

• TV – Searching for show titles, movies and actors? This context transcribes your spoken search queries to enable you to search the AT&T U-verse program guide.

• Generic – Automatically recognizes and transcribes English and Spanish. This context can also be used for general dictation, communication, and search.
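The seven contexts above map naturally to a per-request parameter: the app records audio, names the context it wants, and gets text back. As a rough sketch of what such a request could look like (the endpoint URL, header names, and context identifiers below are assumptions for illustration, not documented details of AT&T's API):

```python
# Hypothetical sketch of a context-specific speech-to-text request.
# Nothing here is sent over the network; the function only assembles
# the pieces a client would need, so the assumed values are easy to swap.

def build_speech_request(audio_path, context="Generic", access_token="YOUR_TOKEN"):
    """Assemble a speech-to-text request for one of the seven contexts."""
    valid_contexts = {
        "WebSearch", "BusinessSearch", "VoicemailToText",
        "SMS", "QuestionAndAnswer", "TV", "Generic",
    }
    if context not in valid_contexts:
        raise ValueError(f"unknown context: {context}")
    return {
        # Assumed endpoint, for illustration only.
        "url": "https://api.att.com/speech/v1/speechToText",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            # Assumed header name; selects which trained vocabulary to use.
            "X-SpeechContext": context,
            "Content-Type": "audio/wav",
        },
        "audio_file": audio_path,
    }

req = build_speech_request("voicemail.wav", context="VoicemailToText")
print(req["headers"]["X-SpeechContext"])  # VoicemailToText
```

The interesting design choice is that the client, not the server, picks the vocabulary: a messaging app would always ask for the SMS context, while a TV remote app would ask for the TV context, trading generality for accuracy on its own domain.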


Through the end of the year, developers paying AT&T's $99 annual fee will have free, unlimited access to Watson's application programming interfaces. Beginning in 2013, a low-cost, point-based payment structure will be introduced. [attinovationspace via GigaOm]


This is actually quite similar to a lot of what goes on when people process language, which makes it really cool!

Have you ever been approached by someone using vocabulary you weren't expecting, even about something you know really well? Sometimes you don't fully comprehend what the other person said, and you have to put your mind into the context of what you'll be talking about to make sure you're absorbing it all. This happens a lot with bilingual people, too, when they have to think or speak in a different language.

I know little about the brain and how it processes things, but there seems to be a portion of the brain that is active when this kind of "switch" occurs.

The fact that they are using context-specific "vocabularies" for their speech recognition software makes a lot of sense. Now let's hope this becomes common enough that we don't have to be stuck with AT&T APIs just to use it. :P