Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.

Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side, I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.

Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists so it can be used by a broader group of data users.

Amit Sheth, LexisNexis Ohio Eminent Scholar, Kno.e.sis Center, Wright State University:

For the new year, here are my top three predictions.

● Growing pains for Linked Data: I expect growing pains for Linked Data. The easy part, publishing (and sometimes simply dumping) data and performing simplistic, not especially useful alignments between Linked Data sets, has been widely practiced, and more datasets, especially open government data, will continue to appear. Now comes the hard part: understanding the quality of that data and producing alignments and mappings that capture the richer relationships real-world applications need.

● Continued slow progress for OWL, and even RDF: I expect continued slow-paced advances for OWL-reliant approaches to the Semantic Web as a few more people learn the tools of the trade. To my surprise, the growth of RDF is also underwhelming. I think several factors are at play: a lack of skilled people, a fear that RDF-based solutions will not scale, and the growing popularity of graph databases, which many feel can adequately provide the needed functionality with perceived ease of use and scalability. This is likely to persist in the coming year.

● Breakout year for Smart Data: I expect this to be a breakout year for Smart Data (the 2004-2005 vision, revisited in 2013), in which recent progress in semantic annotation and knowledge-based tagging (see the sketch below) will enable the enrichment of a wide variety of data (especially traditional unstructured text, semi-structured data, and social and sensor data), affording semantics-enhanced search, integration, personalization, analysis, and advertising.
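
To make the knowledge-based tagging idea concrete, here is a minimal, illustrative sketch in Python. The tiny in-memory knowledge base, the surface forms and the annotate function are all invented for illustration; a real Smart Data pipeline would draw on large curated vocabularies and ontologies.

```python
import re

# Hypothetical, toy knowledge base: surface forms mapped to (entity, type).
# A real system would back this with large curated vocabularies and ontologies.
KNOWLEDGE_BASE = {
    "flu": ("Influenza", "Disease"),
    "tamiflu": ("Oseltamivir", "Drug"),
    "cdc": ("Centers for Disease Control and Prevention", "Organization"),
}

def annotate(text):
    """Return semantic annotations (surface form, entity, type, span) found in raw text."""
    annotations = []
    for surface_form, (entity, entity_type) in KNOWLEDGE_BASE.items():
        for match in re.finditer(r"\b" + re.escape(surface_form) + r"\b",
                                 text, flags=re.IGNORECASE):
            annotations.append({
                "surface": match.group(0),
                "entity": entity,
                "type": entity_type,
                "span": match.span(),
            })
    return annotations

# Enrich a short, unstructured social post with semantic tags.
tweet = "CDC says the flu season is early; stocking up on Tamiflu."
for annotation in annotate(tweet):
    print(annotation)
```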

Nova Spivack, technology futurist, serial entrepreneur, angel investor and CEO, Bottlenose:

I think we’re starting to see the beginnings of a new attempt at what the Semantic Web was trying to do. The real goal of the Semantic Web movement was to make applications smarter, by enabling them to access external sources of intelligence. Programmers could simply enable their apps to understand RDF, OWL and SPARQL, and their apps could suck in knowledge and intelligence capabilities on an as-needed basis. Now we’re starting to see a second wave emerge — cognition-as-a-service (CaaS).

It won’t be as democratic as the vision of the Semantic Web, but may get more widely adopted. The Semantic Web sought to enable a universal reasoning client, just as the browser is a universal content rendering client. Any app that contained the basic capabilities of reasoning and operating on RDF and OWL would be able to reason about any topic, by pulling in the appropriate ontologies and knowledge bases (in RDF and OWL).
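
As a rough illustration of that vision of pulling in knowledge on an as-needed basis, the short sketch below queries a public SPARQL endpoint with the SPARQLWrapper Python library. DBpedia is used here only as a convenient open Linked Data endpoint; the specific resource and property are chosen for illustration.

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# DBpedia serves here purely as an example of an open Linked Data endpoint.
endpoint = SPARQLWrapper("http://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Semantic_Web> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
endpoint.setReturnFormat(JSON)

# Pull the knowledge in at the moment the app needs it.
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```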

In the CaaS approach, instead of a universal reasoning capability based on open ontologies and shared linked data sets, there will be highly proprietary silos of intelligence and knowledge that provide their capabilities as services via APIs to apps. Developers will be able to integrate with these to reason about particular subjects or in particular ways. Examples of this are Wolfram’s upcoming APIs, or those announced for IBM Watson.
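
There is no standard CaaS API, so the following is only a hypothetical sketch of what integrating such a service could look like from the application side; the URL, key, parameters and response fields are all invented, not taken from Wolfram's or IBM Watson's actual interfaces.

```python
import requests

# Entirely hypothetical CaaS endpoint and payload; real providers
# (e.g. Wolfram, IBM Watson) each define their own proprietary APIs.
CAAS_URL = "https://api.example-caas.com/v1/understand"
API_KEY = "your-api-key-here"

def ask_caas(text):
    """Send raw text to a (hypothetical) cognition service and return its analysis."""
    response = requests.post(
        CAAS_URL,
        headers={"Authorization": "Bearer " + API_KEY},
        json={"text": text, "tasks": ["entities", "sentiment", "intent"]},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

analysis = ask_caas("I paddled out today, and dude, I look like a lobster.")
print(analysis)
```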

In CaaS-enabled apps the intelligence is still largely outside, but it’s proprietary and can be monetized by API providers. Because creating intelligent services and APIs is costly, and there is no open-source alternative, businesses can be formed with barriers to entry around them that provide commercial advantages, ultimately bringing about wider venture capital interest, and broader creation and adoption of CaaS than the Semantic Web saw. I wrote about this here.

Dominiek ter Heide, CTO, Bottlenose:

The CaaS approach will make some of the original Semantic Web goals a reality. CaaS makes apps smarter and will increase the effectiveness of their users. These apps will be domain-specific and will have a lot of vertical knowledge embedded in them. CaaS will mix in recommender systems, machine learning and the pragmatic parts of the Semantic Web, like graph databases.

Furthermore, I personally think the longer-term trajectory of the “Intelligent Web” will diverge significantly from that of the traditional Semantic Web. We live in a world where vast amounts of data are being created, much of it small snippets of conversation or attention such as tweets on Twitter or comments on Facebook. This is making the “Now” more relevant than ever, and any successful semantic web needs to be able to harness this data. A lot of the top-down expert knowledge from Wikipedia can only be very narrowly used; to a certain extent it has become Obsoledge (obsolete knowledge). This means ontologies need some real rethinking. Here’s a nice article about some of this.

[Also] being able to understand human emotions and intent is key to building the intelligent web. In the example from the article I mentioned, this sentence is referenced: “I paddled out today, and dude, I look like a lobster.” Yes, you could build a better ontology to identify that “lobster” means being sunburned. But the fact that “paddled” in this case means paddle-surfing can only be known by having a machine understand this particular person. For that you would need to build a highly dynamic ontology of all of his activity. At that point it is no longer an ontology, but a system that continuously learns and builds out a cognitive map of that person. Personalization and understanding people are key.
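
As a toy illustration of that point (all names and data invented), the sketch below disambiguates a word by consulting a continuously updated profile of one person's activities rather than a static, general-purpose ontology.

```python
from collections import Counter

class PersonModel:
    """A toy 'cognitive map': running counts of the activities a person mentions."""

    def __init__(self):
        self.activity_counts = Counter()

    def observe(self, activity):
        """Update the model with one more observed mention of an activity."""
        self.activity_counts[activity] += 1

    def most_likely(self, candidates):
        """Pick the candidate sense this person talks about most often."""
        return max(candidates, key=lambda c: self.activity_counts[c])

# Hypothetical senses of "paddled" and an invented history for one user.
PADDLE_SENSES = ["paddle_surfing", "canoeing", "table_tennis"]

user = PersonModel()
for mention in ["paddle_surfing", "paddle_surfing", "canoeing", "paddle_surfing"]:
    user.observe(mention)

print(user.most_likely(PADDLE_SENSES))  # -> paddle_surfing
```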

[Additionally] deriving structured information from Wikipedia documents – a common practice – is fundamentally flawed. Not only does this create a web of boring facts, it also assumes that documents are somehow the source of knowledge. They’re not: they are only a small sliver of what matters, and it’s the underlying conversation and activity that matter. XML, RDF, OWL, all of those are also very document-centric. We need to move away from documents. A new and better approach is to derive meaning from small forms of content, but this of course requires new tools and methodologies.

2014 is the year when we will start seeing “Big Data” married to the “Semantic Web.” That means harnessing tools like Hadoop, Cassandra and Elasticsearch to turn unstructured chaos into higher-level insights. One project to watch in particular is TitanDB, a graph database that can scale horizontally on top of Big Data storage engines.
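
As a small, hedged sketch of that pipeline, the code below stores a semantically enriched post in Elasticsearch via the elasticsearch-py client and then retrieves it by extracted entity rather than by raw keyword. The index name and document are invented, parameter names vary slightly across client versions, and TitanDB, Hadoop or Cassandra would slot in analogously for graph and batch workloads.

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Hypothetical local cluster; in practice documents would arrive from a
# streaming or Hadoop/Cassandra-backed enrichment pipeline.
es = Elasticsearch(["http://localhost:9200"])

doc = {
    "text": "CDC says the flu season is early; stocking up on Tamiflu.",
    # Semantic enrichment (e.g. from a knowledge-based tagger) kept alongside raw text.
    "entities": ["Influenza", "Oseltamivir",
                 "Centers for Disease Control and Prevention"],
    "source": "twitter",
}

# Index the enriched document (exact parameter names differ across client versions).
es.index(index="enriched-posts", id="1", body=doc)

# Query by extracted entity rather than by raw keyword.
hits = es.search(index="enriched-posts",
                 body={"query": {"match": {"entities": "Influenza"}}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["text"])
```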

One less obvious problem is information retrieval. Keyword search is now fundamentally broken: the more information is out there, the worse keyword search performs. Advanced query systems like Facebook’s Graph Search or Wolfram Alpha are only marginally better than keyword search, and even conversation engines like Siri have a fundamental problem: no one knows what questions to ask. We need a web in which information (both questions and answers) finds you, based on how your attention, emotions and thinking interconnect with the rest of the world. CaaS apps will take a hybrid approach, alerting you to things that are relevant and also allowing you to do advanced search. Some of these thoughts I’ve written up here.
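
A crude sketch of that “information finds you” idea: score each incoming item against a profile of the user's current attention and surface only the items above a threshold. The profile, items, scoring function and threshold here are all invented for illustration.

```python
# Toy attention profile: topics the user has recently engaged with, weighted.
attention_profile = {"semantic web": 0.9, "surfing": 0.6, "sunburn": 0.2}

incoming_items = [
    "Semantic Web conference announces its 2014 program",
    "Ten tips for avoiding sunburn while surfing",
    "Celebrity gossip roundup for the week",
]

def relevance(item, profile):
    """Sum the weights of the profile topics mentioned in the item."""
    text = item.lower()
    return sum(weight for topic, weight in profile.items() if topic in text)

# Alert only on items that cross a (hypothetical) relevance threshold.
for item in incoming_items:
    score = relevance(item, attention_profile)
    if score >= 0.5:
        print(f"ALERT ({score:.1f}): {item}")
```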

 

The discussion doesn’t have to end here. Share with us your own thoughts on what’s ahead for 2014!