In mid-July Dataversity.net, the sister site of The Semantic Web Blog, hosted a webinar on Understanding The World of Cognitive Computing. Semantic technology naturally came up during the session, which was moderated by Steve Ardire, an advisor to cognitive computing, artificial intelligence, and machine learning startups. You can find a recording of the event here.

A more detailed discussion of the session at large can be found here, but below are some excerpts on how the worlds of cognitive computing and semantic technology interact.

One of the panelists, IBM Big Data Evangelist James Kobielus, discussed what he sees as missing from general discussions of cognitive computing. “How do we normally perceive branches of AI, and clearly the semantic web and semantic analysis related to natural language processing and so much more has been part of the discussion for a long time,” he said. When it comes to finding the sense in multi-structured – including unstructured – content that might be text, audio, images or video, “what’s absolutely essential is that as you extract the patterns you are able to tag the patterns, the data, the streams, really deepen the metadata that gets associated with that content and share that metadata downstream to all consuming applications so that they can fully interpret all that content, those objects…[in] whatever the relevant context is.”

Kobielus noted that it’s in the semantic web community where the standards and technologies – RDF, OWL, ontologies and taxonomies – to support that have resided, and that this needs to become a bigger part of the overall cognitive computing discussion.

“Many people [who aren’t experts in the area] think of cognitive computing as structured data and decision automation on structured data, like standard business applications,” he said. But many, if not most, of the new applications of cognitive computing draw on completely unstructured sources, “and sense needs to be extracted from the content and mapped into semantic vocabularies of one sort or another and managed in repositories. For cognitive computing to achieve its promise we need a thick metadata layer that incorporates semantic tagging formats.”
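As a rough illustration of the kind of semantic tagging Kobielus describes – not an example from the webinar itself – the sketch below records entities extracted from a piece of unstructured content as RDF-style (subject, predicate, object) triples and serializes them in Turtle, the text format standardized for RDF, so that downstream consumers can interpret the content. The namespace, predicate names, and entity labels are all hypothetical.

```python
# Sketch of a semantic-tagging step: extracted entities become RDF-style
# triples attached to a content item, then get serialized as Turtle.
# The "ex:" vocabulary below is a made-up example namespace.

def tag_content(content_id, extracted_entities):
    """Return triples linking a content item to its extracted entity tags.

    extracted_entities is a list of (entity, entity_type) pairs, as might
    come out of an upstream NLP/entity-extraction step.
    """
    triples = []
    for entity, entity_type in extracted_entities:
        triples.append((content_id, "mentions", entity))
        triples.append((entity, "type", entity_type))
    return triples

def to_turtle(triples, prefix="http://example.org/vocab#"):
    """Serialize triples in abbreviated Turtle syntax under one prefix."""
    lines = [f"@prefix ex: <{prefix}> ."]
    lines += [f"ex:{s} ex:{p} ex:{o} ." for s, p, o in triples]
    return "\n".join(lines)

# Example: tags for a hypothetical news clip mentioning two entities
triples = tag_content("doc42", [("IBM", "Organization"), ("San_Jose", "Place")])
print(to_turtle(triples))
```

A real pipeline would draw the predicates and classes from a shared ontology (e.g. an OWL vocabulary) rather than an ad hoc namespace, which is precisely what makes the metadata interpretable by applications that never saw the original content.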

Another panelist, Tony Sarris, founder and principal at semantic technology consultancy N2Semantics, added that the industry should be cognizant that there are multiple approaches to getting to cognitive computing. A lot of the focus is on machine learning, especially as things move toward really analyzing and building explicit knowledge models, but other areas that belong in the cognitive computing mix include constructive ontologies and constructive knowledge modeling, “whether it’s done by groups or individuals or crowd-sourced in the case of the semantic web.

“The Linked Open Data models are very valuable ontologies that can be used for cognitive computing today, and practical business value can come from that,” he said. “I think we have to bring to bear a lot of different approaches, and the use case will dictate the approach, how much we want to invest in that, and what the business return is on it.”

You’ll also have the opportunity to dive deep into both worlds at the upcoming Cognitive Computing Forum and the Semantic Technology Business Conference, both taking place in San Jose next month.