Jamie Carter of Tech Radar recently wrote, “With iOS devices now allowing the sending of voice messages and predictions for self-driving cars and voice-activated doors, lights and elevators (cue the internet of things), it’s clear that the future will be spoken, not written. The technology behind this shift in how we interact with our surroundings is natural language processing, a technology that enables computers to understand the meaning of our words and recognise the habits of our speech.” Read more
Posts Tagged ‘Google Now’
Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.
Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.
Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.
Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:
Phil Archer, Data Activity Lead, W3C:
For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.
I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion, alongside the power and functionality of geospatial information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!
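To give a flavor of what GeoSPARQL adds, here is a minimal query sketch. The `geo:` and `geof:` prefixes and the `geof:sfWithin` function come from the OGC GeoSPARQL specification, but the dataset, property usage and polygon coordinates are purely illustrative assumptions:

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

# Find places whose geometry falls within a (made-up) bounding polygon
SELECT ?place ?wkt
WHERE {
  ?place geo:hasGeometry ?geom .
  ?geom  geo:asWKT      ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((-0.2 51.4, 0.0 51.4, 0.0 51.6, -0.2 51.6, -0.2 51.4))"^^geo:wktLiteral))
}
```

The point of the workshop is precisely that writing queries like this against real published data is still harder than it should be.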
[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
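The appeal is that an ordinary JSON object becomes RDF just by adding an `@context`. A minimal sketch (the schema.org context URL is real; the person and URL are invented for illustration):

```json
{
  "@context": "http://schema.org/",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Web Developer",
  "url": "http://example.org/jane"
}
```

A JSON-unaware developer can consume this as plain JSON, while a Linked Data consumer can interpret `name` and `jobTitle` as full schema.org property URIs.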
Interested in how schema.org has trended in the couple of years since its birth? If you were at The International Semantic Web Conference event in Sydney a couple of weeks back, you may have caught Google Fellow Ramanathan V. Guha — the mind behind schema.org — present a keynote address about the initiative.
Of course, Australia’s a far way to go for a lot of people, so The Semantic Web Blog is happy to catch everyone up on Guha’s thoughts on the topic.
We caught up with him when he was back stateside:
The Semantic Web Blog: Tell us a little bit about the main focus of your keynote.
Guha: The basic discussion was a progress report on schema.org – its history and why it came about a couple of years ago. Other than a couple of panels at SemTech we’ve maintained a rather low profile and figured it might be a good time to talk more about it, and to a crowd that is different from the SemTech crowd.
The short version is that the goal, of course, is to make it easier for mainstream webmasters to add structured data markup to web pages, so that they wouldn’t have to track down many different vocabularies or think about what Yahoo or Microsoft or Google understands. Before, webmasters had to champion internally which vocabularies to use and how to mark up a site; we have reduced that burden, and it’s no longer a question of which search engine to cater to.
It’s now a little over two years since launch and we are seeing adoption way beyond what we expected. The aggregate search engines see about 15 percent of the pages we crawl carrying schema.org markup. This is the first time we have seen markup at approximately the scale of the web….Now over 5 million sites are using it. That’s helped by mainstream platforms like Drupal and WordPress adopting it so that it becomes part of the regular workflow. Read more
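The kind of markup Guha describes can be as simple as a few attributes sprinkled into existing HTML. A hypothetical microdata example using real schema.org types (`Event`, `Place`) with invented event details:

```html
<!-- Illustrative schema.org markup in microdata; the event itself is made up -->
<div itemscope itemtype="http://schema.org/Event">
  <span itemprop="name">Semantic Web Meetup</span>
  <time itemprop="startDate" datetime="2014-03-05">March 5, 2014</time>
  <span itemprop="location" itemscope itemtype="http://schema.org/Place">
    <span itemprop="name">London</span>
  </span>
</div>
```

Because platforms like Drupal and WordPress can emit this automatically from their templates, webmasters get structured data without hand-authoring it.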
Barbara Starr of Search Engine Land recently wrote, “When it comes to search, we are accustomed to queries that are initiated client-side and not server-side. But, Google Now and similar services are altering this long-standing trend. Search, by definition, implies user-initiated actions. How is this changed by technology such as Google Now and Google’s Knowledge Graph? First, what is Google Now? Available within the Google Search mobile app, Google Now not only answers user-generated queries but also uses predictive technology to provide the user with information he or she might need throughout the day in the form of ‘cards.’ According to Google: ‘Google Now cards are displayed when you’re most likely to need them. Most are based on information available to your Google account, such as your current location, recent searches, or calendar entries’.” Read more
The next few days will see Google upping the search ante again, whether you’re looking for information in Gmail, Google Calendar or Google+. In Google Search, users will be able to ask questions like what is their flight status or when an expected package will arrive, without having to trawl through their emails or delivery tracking information, according to the company’s blog.
Essentially, Google Now capabilities for Android, iPhones and iPads are coming to Google Search, for all U.S. English-speaking users on tablets, smartphones and desktops too. Both voice and typed search queries are supported. According to the blog, users will be able to get information on their upcoming flights and live status on current flights; see dining plans or hotel stays by querying for their reservations; see what’s on the charge card and order status by asking about their purchases; view their upcoming schedules by asking about tomorrow’s plans; or explore images – by what’s in them or their relationship to trips or events – that they’ve uploaded to Google Plus.
Google is pulling from its swath of connections “trying to understand you,” says David Amerland, author of the new book, Google Semantic Search.
There are new Motorola Droid devices in town: The three Verizon Android 4.2 smartphones unveiled at a press event yesterday include the Motorola Droid Mini, Ultra and Maxx. The line includes what the company touts as the longest-lasting 4G LTE smartphone in the Maxx, with the vendor claiming 48 hours on a single charge, and what it says is the thinnest 4G LTE smartphone around in the Ultra. The smartphones reportedly all come with a unique Kevlar fiber 3D unibody design and a few months’ free Google Music All Access subscription, too. But what will catch the eyes of readers of this blog is the proprietary Motorola X8 Mobile Computing System that’s behind the sleek-looking handsets.
In addition to the graphics and application processor cores found within the eight-core System are two new low-power cores, one to power contextual computing and one aimed at natural language processing. Read more
“With more features in the Knowledge Graph and more languages, with conversational voice search and hot-wording coming to Chrome on desktops and laptops, and with new Now functionality like reminders….search is becoming a really beautiful and ubiquitous experience that intelligently answers your questions and assists you throughout the day across all screens.”
That’s how Google Fellow Amit Singhal summed up the evolving search experience at today’s Google I/O event. Here’s more about the latest features:
- Google’s Knowledge Graph, now some 570 million entities strong and growing, is taking it to the stats. Now, users will get important stats powered by the Knowledge Graph, he said. “Already you can find answers to questions like what is the population of India,” he told the audience, “but starting today we will anticipate your next question,” which may be how that population compares to the population of other countries. So, you’ll get the answer alongside the trend line and see it in comparison to the populations of the two countries most often compared to India: China and the U.S. The Google Knowledge Graph is also boosting its language support, adding Polish, Turkish, and simplified and traditional Chinese to the existing eight.
- Users in the Gmail search trial already have the capability of finding answers – like when is their upcoming flight or restaurant reservation — without having to sift through email, docs and calendar data. But, said Singhal, things can get better when it comes to letting users get those answers in the most natural way possible, which means Google has been working hard on technologies like voice recognition and natural language understanding. To that end, conversational search, already available on Android and iOS, is coming to all desktops and laptops through Chrome, he said.
- Joining conversational search is hot-wording, a new interface or, as he calls it, a “no interface,” where users can ask their search questions without clicking on the mic. Just preface a voice question with “OK Google,” and Google will speak the answer back to you, drawing, among other sources, on its Knowledge Graph for the response. Google product manager Johanna Wright gave a demo of the voice experience via Chrome on a mobile device, working her way through planning a day trip to Santa Cruz through to images of its beach boardwalk, asking “OK Google, how far from here to it?,” where Google, in speaking back the answer, recognized that “it” referred to the boardwalk and that “here” was her current location.
- Enter Google Now: Singhal talked up anticipation (it’s more fun if you pronounce it like Tim Curry in the Rocky Horror Picture Show number), and the usefulness of having the right answer suggested at the right time, even before a user asks. That’s what is set to happen with an on-the-way feature that lets users set reminders in Google Now to show up when they need them. Also launching on the Google Now front are other new cards: public transit commute time cards and more cards for music albums, TV shows, and video games. Google is now “even more useful as an assisted tool,” he said.
Of the new age of search, Singhal said it’s not around the corner, that it will be some time before this becomes the predominant search experience. “There are lots of complex and scientific problems to solve, but our investment and commitment to getting there sooner rather than later is immense.”
Stuart Dredge of The Guardian writes, “Google has launched its Google Now service for iOS devices, as an update to its existing Google Search app. Accessed by swiping upwards from the bottom of the app’s homescreen, Google Now learns about its user through their activities and their history in various Google services. It then serves up weather forecasts, traffic reports, boarding passes, sports scores and other information when they may be relevant. On iOS, it’s the sole new feature in version 3.0.0 of the Google Search app. Available for Android devices since the Android 4.1 Jelly Bean software was released in 2012, Google Now’s iOS incarnation has been subject to speculation this year.” Read more
The NY Times reports today that Google acknowledged it had violated people’s privacy during its StreetView mapping project. Thirty-eight states had brought a case against Google on the grounds that the project resulted in people’s passwords and other personal information being unknowingly recorded by the search giant. Google has agreed to settle it by paying a $7 million fine as well as by becoming more aggressive in ensuring that its employees’ efforts don’t violate privacy and informing the public about how to avoid having their privacy compromised.
In its discussion of the settlement, the article brings up that the way is now paved for another privacy battle, this time over Google Glass. Concerns are that Google Glass eyewear also can be used to record photos, video and audio of the wearer’s surroundings, without the permission of the individuals featured in those surroundings. With Google Glass, users can use their voice to input commands to take a picture or make a video, as well as to take steps less likely to compromise privacy, such as searching for facts about landmarks or events.
How that privacy question plays out is yet to be seen. But concerns aren’t stopping the project – which was demonstrated at last week’s SXSW conference – from moving ahead. Google yesterday announced that the glasses will accommodate frames and lenses that match users’ eye prescriptions, for example.
Getting Google Glass to respond to voice commands and searches appears to leverage capabilities it has developed for its Voice Search App for Android, as well as its semantically-driven Knowledge Graph database of hundreds of millions of entities and billions of facts, and their relationships to each other.