Siri co-founder and CTO Tom Gruber is excited about the “big think small screen” concept, and he expects to translate that excitement to others when he gives a keynote address at WebMediaBrands’ Web 3.0 conference next month.

So, what exactly is the idea? Roughly, it amounts to a combination of semantic web technology, greater bandwidth, and big servers that, thanks to the cloud computing surge, together put an increasing amount of intelligence into consumers’ hands, quite literally, through mobile devices.

“The idea is the trends have come together where you have semantic technologies that are usually big server things, with lots of computers and time involved in pre-processing and organizing things and making sense of data,” he says. “Then the 3G network gives us a pipeline to brilliant smart phones like iPhones, and cloud computing is the ecosystem that makes this possible. Moore’s law, cloud computing, and the network bandwidth aspect mean we now can have the lab supercomputer-based ‘Gee Whiz’ technology and stick it in our phone, and, by the way, it’s mostly free.”

Critical to turning that ‘Gee Whiz’ awe into ‘Gee, look what I can do Whiz’ practicality is the smart interface that lets people put that intelligence to use on small form factors. Siri is built on that idea of intelligence at the interface, automating what a consumer can do with a 24-inch screen, lots of time, and both hands flying on the keyboard: researching a bunch of different web sites, combining the results into the answer they need, and then communicating that information to whoever needs to know it. “We automate that down to a one-minute interaction with your phone, where you call your virtual assistant up,” Gruber says.

The semantic technology threads here consist, of course, of better search, based on search engines being able to really understand what you’re trying to do (e.g., ‘I want a romantic place for a date this Saturday’) and deliver links that accurately meet those requirements. But it doesn’t stop at content concentration. “It’s not just taking a concentrated map of input to content,” Gruber says, “but concentrating content results” in the form of intelligent mash-ups that automatically make connections, take action, and communicate information based on dimensions such as personal data, theme or task awareness, and time and location awareness (finding a lunch-meeting locale near your office, scheduling it, inviting attendees, and sending them directions), much the way a real live assistant working the Internet could.
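
As a concrete illustration, a mash-up along those lines might chain a few services together roughly like the sketch below. Every service and function name here is an invented placeholder for the kind of component Gruber describes, not Siri’s actual API:

```python
# Hypothetical mash-up sketch: "directory", "calendar", and "mail" stand in
# for whatever location, scheduling, and messaging services are available.
from dataclasses import dataclass


@dataclass
class Place:
    name: str
    address: str


def plan_lunch_meeting(office_location: str, attendees: list[str],
                       directory, calendar, mail) -> Place:
    """Chain location, scheduling, and messaging services for one request."""
    # 1. Location awareness: search near a known personal datum (the office).
    candidates = directory.search(category="lunch", near=office_location)
    choice = candidates[0]  # a real system would rank by personal preferences

    # 2. Task awareness: turn the chosen place into a scheduled event.
    event = calendar.create_event(title="Lunch meeting",
                                  location=choice.address,
                                  invitees=attendees)

    # 3. Communication: send directions to everyone involved.
    for person in attendees:
        mail.send(to=person, subject=event.title,
                  body=f"Directions to {choice.name}: {choice.address}")
    return choice
```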

The next level beyond joining things based on associations, class, and clustering is, he says, true API integration. “That API level is the new thing on the semantic stack, even above semantic web standards that just address content tagging,” Gruber says. In Siri, that could show up, for instance, as a dining reservation service that combines semantically with the other kinds of services you need to find the restaurant where you want to reserve a table. “The programmable web and people in that value chain, like Mashery, have a good handle on the future, on the fact that APIs are growing greatly and they are inherently semantic levels of things.”
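
What that API-level composition could look like is sketched below, with a shared semantic type letting one service’s output feed another’s input. The SearchAPI and BookingAPI interfaces are invented for illustration, not real Siri or partner endpoints:

```python
# Illustrative only: SearchAPI and BookingAPI are invented interfaces meant
# to show semantic composition at the API level, not real service endpoints.
from typing import Protocol


class Restaurant(Protocol):
    """The shared semantic type both services agree on."""
    name: str
    cuisine: str


class SearchAPI(Protocol):
    def find(self, cuisine: str, near: str) -> list[Restaurant]: ...


class BookingAPI(Protocol):
    def reserve(self, restaurant: Restaurant, party_size: int,
                time: str) -> str: ...


def book_dinner(search: SearchAPI, booking: BookingAPI, cuisine: str,
                near: str, party_size: int, time: str) -> str:
    """Compose two independent services through the shared Restaurant type."""
    matches = search.find(cuisine=cuisine, near=near)
    if not matches:
        raise LookupError(f"No {cuisine} restaurants found near {near}")
    # Because both APIs speak the same semantic type, one service's output
    # flows straight into the other -- the "API level" of the stack.
    return booking.reserve(matches[0], party_size=party_size, time=time)
```

The design point is that the shared type acts as a semantic contract: the composition works without either service knowing the other exists, which is what makes APIs “inherently semantic levels of things.”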

Siri, Gruber reports, should launch in the first quarter of next year, and the wait can be attributed to how much effort goes into pulling together everything from semantic data to API services to speech-to-text to dimensional awareness in order to deliver the world-in-your-hand experience he describes above. “Siri has been incubating for a long time relative to classic Web 2.0 services,” he says. “It’s interesting why. We have been combining so many technologies….It’s a phase shift to get all those parts working together.” He likens the Siri virtual personal assistant to the iPhone, as something that can’t just be “eked out. The iPhone with just a web browser at the start would have been cool, but it wouldn’t have had the same effect. Hopefully we will have a full-blown experience when we come out.”
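
For a rough sense of why so many parts must mesh, here is a minimal sketch of the kind of pipeline he describes, from speech to semantic interpretation to API dispatch. Every stage name is a stand-in, not one of Siri’s real components:

```python
# Rough pipeline sketch: speech_to_text, parse_intent, and services are
# injected stand-ins, not Siri's real components.
def assistant_pipeline(audio: bytes, speech_to_text, parse_intent,
                       services) -> str:
    # 1. Speech to text: raw audio becomes a natural-language request.
    utterance = speech_to_text(audio)   # e.g. "book a table for two tonight"

    # 2. Semantic interpretation: map the request to a task and parameters.
    intent = parse_intent(utterance)    # e.g. {"task": "reserve", "party": 2}

    # 3. API dispatch: route the structured task to the matching service.
    handler = services[intent["task"]]
    args = {k: v for k, v in intent.items() if k != "task"}
    return handler(**args)
```

A failure at any one stage breaks the whole interaction, which is one way to read Gruber’s “phase shift” remark: the pieces only deliver value once all of them work together.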
