Stephanie Mlot of PC Mag reports, “Rovi announced today that it will acquire Veveo in order to boost its search and analytics tools. The $62 million cash purchase comes with Veveo’s Knowledge Graph-driven semantic tech and natural-language controls, the latter of which likely means integrated voice search. Massachusetts-based Veveo—not to be confused with YouTube’s Vevo music service—launched 10 years ago with the hope of bridging connected devices with intelligent discovery solutions. Over the last decade, the company has acquired 50 patents and works with the likes of AT&T, Cablevision, and Verizon. Now, it will bring its semantic solutions to Rovi, which licenses data for TV guides, and intends to use Veveo’s technology to tailor entertainment suggestions to each individual user.” Read more
Last week news came from SindiceTech about the availability of its SindiceTech Freebase Distribution for the cloud (see our story here). SindiceTech has finalized its separation from the university setting in which it incubated, the former DERI institute (now part of the Insight Center for Data Analytics), and is re-launching its activities, with more new solutions and capabilities on the way.
“The first thing was to launch the Knowledge Graph distribution in the cloud,” says CEO Giovanni Tummarello. “The Freebase distribution showcases how it is possible to quickly have a really large Knowledge Graph in one’s own private cloud space.” The distribution comes instrumented with some of the tools SindiceTech has developed to help users both understand and make use of the data, he says, noting that “the idea of the Knowledge Graph is to have a data integration space that makes it very simple to add new information, but all that power is at risk of being lost without the tools to understand what is in the Knowledge Graph.”
Included in the first round of the distribution’s tools for composing queries and understanding the data as a whole are the Data Types Explorer (in both tabular and graph versions), and the Assisted SPARQL Query Editor. The next releases will increase the number of tools and provide updated data. “Among the tools expected is an advanced Knowledge Graph entity search system based on our newly released SIREn search system,” he says.
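A distribution like this is typically queried through a SPARQL endpoint loaded with the Freebase RDF data. As a rough illustration only — the endpoint URL, the Freebase-style predicate, and the entity ID below are assumptions for the sketch, not details of SindiceTech's tooling — a query listing an entity's types might be composed like this:

```python
# Sketch: composing a SPARQL query against a private Freebase triplestore.
# ENDPOINT, the fb: predicate, and the machine ID are illustrative
# assumptions, not specifics of the SindiceTech distribution.

ENDPOINT = "http://localhost:8890/sparql"  # hypothetical local endpoint

def types_query(entity_id: str, limit: int = 10) -> str:
    """Return a SPARQL query listing the types of a Freebase entity."""
    return f"""
    PREFIX fb: <http://rdf.freebase.com/ns/>
    SELECT ?type WHERE {{
        fb:{entity_id} fb:type.object.type ?type .
    }} LIMIT {limit}
    """.strip()

# The resulting query string can be sent to ENDPOINT with any HTTP client.
print(types_query("m.02mjmr"))
```

Tools like the Assisted SPARQL Query Editor mentioned above exist precisely because composing such queries by hand requires knowing which predicates and types are in the graph.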
With the support of Google Developers, SindiceTech has announced the availability of its Freebase Distribution for the cloud. According to SindiceTech, “Freebase is an amazing data resource at the core of Google’s ‘Knowledge Graph’. Freebase data is available for full download but today, using it ‘as a whole’ is all but simple. The SindiceTech Freebase distribution solves that by providing all the Freebase knowledge preloaded in an RDF specific database (also called triplestore) and equipped with a set of tools that make it much easier to compose queries and understand the data as a whole.”
Your Own Private Freebase
Kerstin Recker recently wrote for Wired, “Search has changed dramatically over the past year and semantic technology has been at the center of it all. Consumers increasingly expect search engines to understand natural language and perceive the intent behind the words they type in, and search engine algorithms are rising to this challenge. This evolution in search has dramatic implications for marketers, consumers, technology developers and content creators — and it’s still the early days for this rapidly changing environment. Here is an overview of how search technology is changing, how these changes may affect you and what you can do to market your business more effectively in the new era of search.” Read more
Tom Simonite of the MIT Technology Review recently wrote, “For all its success, Google’s famous PageRank algorithm has never understood a word of the billions of Web pages it has directed people to over the years. That’s why in 2010 Google acquired Metaweb, a company building a database intended to give computers the ability to understand the world. Two years later the company’s technology resurfaced as the Knowledge Graph. John Giannandrea, vice president of engineering at Google and a Metaweb cofounder, says that will lead to Google’s future products being able to truly understand the people who use them and the things they care about. He told MIT Technology Review’s Tom Simonite how a data store designed to link together all the knowledge on Earth might do that.” Read more
Google’s Knowledge Graph took on some new work this week, driving popups of information about some of the website sources that users see in their search results.
According to a posting at Google’s Search blog, clicking on the name of the information source that appears next to a result link delivers details about that source. “You’ll see this extra information when a site is widely recognized as notable online, when there is enough information to show or when the content may be handy for you,” reports Bart Niechwiej, the software engineer who wrote up the news.
The feature’s been getting a lot of buzz. Much of the information informing Google’s Knowledge Graph comes from Wikipedia, as well as from Freebase and the CIA World FactBook. And when it comes to the sources you’re likely to see pop up in most search results, Wikipedia will often be among them. In fact, observers like Matt McGee over at Search Engine Land have noted of the new feature that “the popups rely heavily on Wikipedia.”
Ron Callari of InventorSpot recently wrote, “In the foreseeable future, search on smartphones will allow for intuitive logic, not just text-matching based on keywords. But what will our smartphones offer in the Web 3.0 world of Semantic Technology, Augmented Reality and the Internet of Things? Semantic search is already being addressed by the major search engines and social networks to understand the intent of the user. All the major players are competing for dominance of semantic computing because it’s been identified as the solution to the demanding needs of big data.” Read more
Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.
Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.
Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.
Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:
Phil Archer, Data Activity Lead, W3C:
For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.
I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use Geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of Geospatial Information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!
[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
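Archer's point that JSON-LD "is JSON so Web developers love it, and it's RDF" is easy to see in a small example: a JSON-LD document is an ordinary JSON object whose @context maps plain keys onto RDF vocabulary terms. A minimal sketch (the schema.org terms and example URLs here are illustrative choices, not from the article):

```python
import json

# A minimal JSON-LD document: ordinary JSON to a web developer,
# RDF to a Linked Data consumer. The @context maps each plain key
# to a term in the schema.org vocabulary (an illustrative choice).
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/",
}

# Any standard JSON tooling handles it unchanged:
serialized = json.dumps(doc, indent=2)
parsed = json.loads(serialized)
print(parsed["name"])  # prints "Alice"
```

Because the @context can be ignored by plain-JSON consumers and interpreted by RDF-aware ones, the same payload serves both audiences, which is exactly why "everyone should be happy."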