Signe Brewster of Gigaom recently wrote, “In 2012, Google hired Ray Kurzweil to build a computer capable of thinking as powerfully as a human. It would require at least one hundred trillion calculations per second — a feat already accomplished by the fastest supercomputers in existence. The more difficult challenge is creating a computer that has a hierarchy similar to the human brain. At the Google I/O conference Wednesday, Kurzweil described how the brain is made up of a series of increasingly more abstract parts. The most abstract — which allows us to judge if something is good or bad, intelligent or unintelligent — is an area that has been difficult to replicate with a computer. A computer can calculate 10 x 20 or tell the difference between a person and a table, but it can’t judge if a person is kind or mean. To get there, humans will need to build computers that can build abstract consciousness from a more concrete level. Humans will program them to recognize patterns, and then from those patterns they will need to be smart enough to learn to understand more.”
Another announcement by Google this week – one that didn’t get quite as much play as the launch at I/O of Google Play Music All Access and improvements to its search, maps and Google+ services – was this: support for JSON-LD markup in Gmail.
Manu Sporny, who has been instrumental in JSON-LD’s development and is one of the authors of the draft, heralds the news here in his blog, noting that it means that Gmail now will be able to recognize people, places, events and a variety of other Linked Data objects, and that actions may be taken on the Linked Data objects embedded in an e-mail. “For example, if someone sends you an invitation to a party, you can do a single-click response on whether or not you’ll attend a party right from your inbox. Doing so will also create a reminder for the party in your calendar,” he writes.
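Sporny’s one-click RSVP scenario maps onto the schema.org Event and RsvpAction types. The snippet below is a rough sketch of what such markup might look like when embedded in an e-mail’s `<script type="application/ld+json">` block; the event details, URL, and handler shape are illustrative assumptions, not Google’s exact markup:

```json
{
  "@context": "http://schema.org",
  "@type": "Event",
  "name": "Spring Launch Party",
  "startDate": "2013-06-15T19:00:00-07:00",
  "location": {
    "@type": "Place",
    "name": "Example Hall, 123 Main St."
  },
  "action": {
    "@type": "RsvpAction",
    "handler": {
      "@type": "HttpActionHandler",
      "url": "https://example.com/rsvp?eventId=123&attending=true"
    }
  }
}
```

Because JSON has no comment syntax, the assumptions are flagged here rather than inline: everything other than the @context, @type, and property names drawn from schema.org is invented for illustration. A mail client that understands markup along these lines can render the RSVP as a button and, on click, both call the handler URL and drop the event into the user’s calendar.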
The news was greeted with enthusiasm on the W3C JSON-LD mailing list as, in Sporny’s words, “pretty big validation of the technology.”
While noting that Google followed the standard closely, Sporny does point out some issues with the implementation, the most significant being that Google isn’t using the JSON-LD @context parameter correctly in its markup examples.
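To make the @context issue concrete (this is a sketch of the class of problem Sporny describes, not a verbatim copy of Google’s examples): in JSON-LD, @context must be an inline context object or an IRI that a processor can dereference, so a bare token such as "schema.org" in that position is not a valid context reference. A conforming version would look like:

```json
{
  "@context": "http://schema.org/",
  "@type": "Person",
  "name": "Manu Sporny"
}
```

A JSON-LD processor expands short terms like name against the context; if the @context value cannot be resolved as an IRI, that expansion, and with it the Linked Data interpretation of the document, breaks down.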
“With more features in the Knowledge Graph and more languages, with conversational voice search and hot-wording coming to Chrome on desktops and laptops, and with new Now functionality like reminders….search is becoming a really beautiful and ubiquitous experience that intelligently answers your questions and assists you throughout the day across all screens.”
That’s how Google Fellow Amit Singhal summed up the evolving search experience at today’s Google I/O event. Here’s more about the latest features:
- Google’s Knowledge Graph, now some 570 million entities strong and growing, is taking it to the stats. Users will now get important statistics powered by the Knowledge Graph, he said. “Already you can find answers to questions like what is the population of India,” he told the audience, “but starting today we will anticipate your next question,” which may be how that population compares to the population of other countries. So you’ll get the answer alongside a trend line, shown in comparison with China and the U.S., the two countries whose populations are most often compared to India’s. The Knowledge Graph is also boosting its language support, adding Polish, Turkish, and simplified and traditional Chinese to the existing eight languages.
- Users in the Gmail search trial already have the capability of finding answers – like when their upcoming flight leaves or where their restaurant reservation is – without having to sift through email, docs and calendar data. But, said Singhal, things can get better when it comes to letting users get those answers in the most natural way possible, which means Google has been working hard on technologies like voice recognition and natural language understanding. To that end, conversational search, already available on Android and iOS, is coming to all desktops and laptops through Chrome, he said.
- Joining conversational search is hot-wording, a new interface, or, as he calls it, a “no interface,” where users can ask their search questions without clicking on the mic. Just preface a voice question with “OK Google,” and Google will speak back the answer, drawing among other sources on its Knowledge Graph for the response. Google product manager Johanna Wright gave a demo of the voice experience courtesy of Chrome on a mobile device, working her way from planning a day trip to Santa Cruz through to images of its beach boardwalk, asking “OK Google, how far from here to it?,” where Google, in speaking back the answer, recognized that “it” referred to the boardwalk and that “here” was her current location.
- Enter Google Now: Singhal talked up anticipation (it’s more fun if you pronounce it like Tim Curry in the Rocky Horror Picture Show number), and the usefulness of having the right answer suggested at the right time, even before a user asks. That’s what is set to happen with an on-the-way feature that lets users set reminders in Google Now to show up when they need them. Also launching on the Google Now front are other new cards: public transit commute time cards and more cards for music albums, TV shows, and video games. Google is now “even more useful as an assistive tool,” he said.
Of the new age of search, Singhal said it’s not just around the corner; it will be some time before this becomes the predominant search experience. “There are lots of complex and scientific problems to solve, but our investment and commitment to getting there sooner rather than later is immense.”