Posts Tagged ‘Semantic Web’

Jack Flanagan of Real Business reports, “The future of the web is semantic – at least according to French tech startup Sépage, which specialises in semantic technologies for travel websites. However, the little-known, little-understood technology is still crossing the distance between science and business. Real Business sought comment from Sépage on what this is, and how they’ve built it.” Sépage told Real Business, “We believe the potential is immense. Most of today’s digital marketing approaches aren’t actually personalised, even though that’s what they claim; comparing your basket to thousands of others and clustering you in groups of ‘similar individuals’ can’t really be called personalisation.” Read more
A Drupal ++ platform for semantic web biomedical data – that’s how Sudeshna Das describes eXframe, a reusable framework for creating online repositories of genomics experiments. Das – who among other titles is affiliate faculty of the Harvard Stem Cell Institute – is one of the developers of eXframe, which leverages Stéphane Corlosquet’s RDF module for Drupal to produce, index (into an RDF store powered by the ARC2 PHP library) and publish semantic web data in the second generation version of the platform.
“We used the RDF modules to turn eXframe into a semantic web platform,” says Das. “That was key for us because it hid all the complexities of semantic technology.”
One instance of the platform today can be found in the repository for stem cell data that is part of the Stem Cell Commons, the Harvard Stem Cell Institute’s community for stem cell bioinformatics. But Das notes that the real value of the platform’s reusability, with repositories that automatically produce Linked Data as well as a SPARQL endpoint, is that new repository instances can be built with much less effort. Working off Drupal as its base, eXframe has been customized to support biomedical data and to integrate biomedical ontologies and knowledge bases.
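As a rough illustration of what “automatically producing Linked Data” from repository records means, here is a minimal Python sketch that flattens a metadata record into subject–predicate–object triples. The URIs, field names, and the `record_to_triples` helper are all invented for this example; they are not eXframe’s actual schema or code.

```python
# Minimal sketch: turning a flat repository record into RDF-style triples.
# All URIs and field names here are hypothetical, not eXframe's real schema.

def record_to_triples(subject_uri, record):
    """Flatten a flat metadata dict into (subject, predicate, object) triples."""
    return [(subject_uri, predicate, value) for predicate, value in record.items()]

experiment = {
    "http://purl.org/dc/terms/title": "Stem cell differentiation assay",
    "http://purl.org/dc/terms/creator": "Example Lab",
}

triples = record_to_triples("http://example.org/experiment/42", experiment)
for s, p, o in triples:
    print(s, p, o)
```

In a real deployment, triples like these would be loaded into an RDF store (the article mentions one powered by the ARC2 PHP library) and exposed through a SPARQL endpoint.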
Exchange Magazine recently wrote, “Sir Tim Berners-Lee invented the World Wide Web 25 years ago. So it’s worth a listen when he warns us: There’s a battle ahead. Eroding net neutrality, filter bubbles and centralizing corporate control all threaten the web’s wide-open spaces. It’s up to users to fight for the right to access and openness. The question is, What kind of Internet do we want? Tim Berners-Lee invented the World Wide Web. He leads the World Wide Web Consortium (W3C), overseeing the Web’s standards and development.” Read more
XSB and SemanticWeb.Com Partner In App Developer Challenge To Help Build The Industrial Semantic Web
An invitation was issued to developers at last week’s Semantic Technology and Business Conference: XSB and SemanticWeb.com have joined to sponsor the Semantic Web Developer Challenge, which asks participants to build sourcing and product life cycle management applications leveraging XSB’s PartLink Data Model.
XSB is developing PartLink as a project for the Department of Defense Rapid Innovation Fund. It uses semantic web technology to create a coherent Linked Data model for all part information in the Department of Defense’s supply chain – some 40 million parts strong.
“XSB recognized the opportunity to standardize and link together information about the parts, manufacturers, suppliers, materials, [and] technical characteristics using semantic technologies. The parts ontology is deep and detailed, with 10,000 parts categories and 1,000 standard attributes defined,” says Alberto Cassola, vp of sales and marketing at XSB, a leading provider of master data management solutions to large commercial and government entities. PartLink’s Linked Data model, he says, “will serve as the foundation for building the industrial semantic web.”
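To make the “link together parts, manufacturers, and materials” idea concrete, here is a hypothetical fragment of such a graph in Python. The part number, URIs, category, and attribute names are invented for illustration and are not drawn from the actual PartLink model.

```python
# Illustrative only: a hypothetical fragment of a parts graph in the spirit
# of PartLink. The URIs, category, and attribute names are invented.

part = "http://example.org/part/washer-001"
triples = [
    (part, "rdf:type", "ex:LockWasher"),            # one of ~10,000 categories
    (part, "ex:manufacturer", "http://example.org/mfr/acme"),
    (part, "ex:material", "ex:StainlessSteel"),
    (part, "ex:insideDiameterMM", "6.5"),           # one of ~1,000 standard attributes
]

# Because entities are identified by shared URIs, data about the same part,
# manufacturer, or material can be merged from different supply-chain sources.
materials = {o for s, p, o in triples if p == "ex:material"}
print(materials)
```

The payoff of this style of modeling is that a query about, say, all stainless-steel parts from one manufacturer can span data sets that were never designed together.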
Mark Albertson of the Examiner recently wrote, “It was an unusual sight to be sure. Standing on a convention center stage together were computer engineers from the four largest search providers in the world (Google, Yahoo, Microsoft Bing, and Yandex). Normally, this group couldn’t even agree on where to go for dinner, but this week in San Jose, California they were united by a common cause: the Semantic Web… At the Semantic Technology and Business Conference in San Jose this week, researchers from around the world gathered to discuss how far they have come and the mountain of work still ahead of them.” Read more
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business Conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME), which the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
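The MARC-to-RDF shift can be sketched in miniature: a tagged MARC field becomes a triple whose predicate comes from an ontology. This toy mapping is illustrative only; the predicate URI and the one-field lookup table are simplifications, not the real BIBFRAME conversion rules (BIBFRAME, for instance, models titles as structured entities rather than plain strings).

```python
# Toy sketch of the MARC-to-RDF idea: one MARC field mapped to one triple.
# The lookup table and flat-literal output are illustrative simplifications.

MARC_TO_PREDICATE = {
    "245": "http://id.loc.gov/ontologies/bibframe/title",  # MARC 245 = title statement
}

def marc_field_to_triple(work_uri, tag, value):
    """Map a single MARC field to an RDF-style triple, or None if unmapped."""
    predicate = MARC_TO_PREDICATE.get(tag)
    if predicate is None:
        return None
    return (work_uri, predicate, value)

triple = marc_field_to_triple("http://example.org/work/1", "245", "Weaving the Web")
print(triple)
```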
If you’re interested in Linked Data, no doubt you’re planning to listen in on next week’s Semantic Web Blog webinar, Getting Started With The Linked Data Platform (register here), featuring Arnaud Le Hors, Linked Data Standards Lead at IBM and chair of the W3C Linked Data Platform WG and the OASIS OSLC Core TC. It also may be on your agenda to attend this month’s Semantic Technology & Business Conference, where speakers including Le Hors, Manu Sporny, Sandro Hawke, and others will be presenting Linked Data-focused sessions.
In the meantime, though, you might enjoy reviewing the results of the LOD2 Project, the European Commission co-funded effort whose four-year run, begun in 2010, aimed to advance RDF data management; to extract, create, and enrich structured RDF data; to interlink data from different sources; and to author, explore, and visualize Linked Data. To that end, why not take a stroll through the recently released Linked Open Data – Creating Knowledge Out of Interlinked Data, edited by LOD2 Project participants Sören Auer of the Institut für Informatik III at Rheinische Friedrich-Wilhelms-Universität Bonn; Volha Bryl of the University of Mannheim; and Sebastian Tramp of the University of Leipzig?
Is SPARQL the SQL for NoSQL? The question will be discussed at this month’s Semantic Technology & Business Conference in San Jose by Arthur Keen, vp of solution architecture at startup SPARQL City.
It’s not the first time that the industry has considered common database query languages for NoSQL (see this story at our sister site Dataversity.net for some perspective on that). But as Keen sees it, SPARQL has the legs for the job. “What I know about SPARQL is that for every database [SQL and NoSQL alike] out there, someone has tried to put SPARQL on it,” he says, whereas other common query language efforts may be limited in database support. A factor in SPARQL’s favor is query portability across NoSQL systems. Additionally, “you can achieve much higher performance using declarative query languages like SPARQL because they specify the ‘What’ and not the ‘How’ of the query, allowing optimizers to choose the best way to implement the query,” he explains.
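Keen’s “what, not how” point can be illustrated with a tiny pattern matcher over triples. The query states only the shape of the data wanted; the engine is free to pick any strategy (indexes, join order, parallelism) to find it. This is plain illustrative Python, not SPARQL itself, and the data is invented.

```python
# Sketch of the declarative idea behind SPARQL: a query states *what*
# pattern to match, and the engine decides *how* to find it. This tiny
# matcher is illustrative Python, not SPARQL; the triples are invented.

triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "ex:knows", "ex:carol"),
    ("ex:alice", "ex:age", "30"),
]

def match(pattern, data):
    """Match one (s, p, o) pattern; None acts as a variable (wildcard)."""
    return [t for t in data
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Who knows whom?" -- the what; the scan below is just one possible how.
print(match((None, "ex:knows", None), triples))
```

Because the pattern says nothing about execution, an optimizer behind the same query interface could swap in a smarter plan without changing any client code, which is the portability argument Keen makes for SPARQL across SQL and NoSQL stores alike.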
Context is king – at least when it comes to enterprise search. “Organizations are no longer satisfied with a list of search results — they want the single best result,” wrote Gartner in its latest Magic Quadrant for Enterprise Search report, released in mid-July. The report also says that the research firm estimates the enterprise search market to reach $2.6 billion in 2017.
The leaders list this time around includes Google with its Search Appliance, which Google touts as benefitting from Google.com’s continually evolving technology, thanks to machine learning from billions of search queries. Also on that part of the quadrant is HP Autonomy, which Gartner says is “exceptionally good at handling searches driven by queries that include surmised or contextual information;” and Coveo and Perceptive Software, both of which are quoted as offering “considerable flexibility for the design of conversational search capabilities, to reduce the ambiguity of results.”
In mid-July Dataversity.net, the sister site of The Semantic Web Blog, hosted a webinar on Understanding The World of Cognitive Computing. Semantic technology naturally came up during the session, which was moderated by Steve Ardire, an advisor to cognitive computing, artificial intelligence, and machine learning startups. You can find a recording of the event here.
Here, you can find a more detailed discussion of the session at large, but below are some excerpts related to how the worlds of cognitive computing and semantic technology interact.
One of the panelists, IBM Big Data Evangelist James Kobielus, discussed his thinking around what’s missing from general discussions of cognitive computing to make it a reality. “How do we normally perceive branches of AI, and clearly the semantic web and semantic analysis related to natural language processing and so much more has been part of the discussion for a long time,” he said. When it comes to finding the sense in multi-structured – including unstructured – content that might be text, audio, images or video, “what’s absolutely essential is that as you extract the patterns you are able to tag the patterns, the data, the streams, really deepen the metadata that gets associated with that content and share that metadata downstream to all consuming applications so that they can fully interpret all that content, those objects…[in] whatever the relevant context is.”
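The enrichment pipeline Kobielus describes, extracting patterns, attaching them as metadata, and sharing that metadata downstream, can be sketched as follows. The tagging rule here is a trivial keyword match and the field names are invented; real cognitive systems would use NLP or other pattern extraction.

```python
# Hedged sketch of the metadata-enrichment idea: extracted patterns become
# tags attached to the content object, so downstream consumers receive the
# content together with its metadata. The keyword match is illustrative only.

def enrich(content, known_topics):
    """Attach topic tags found in the text as metadata on the content."""
    tags = [t for t in known_topics if t in content["text"].lower()]
    return {**content, "metadata": {"tags": tags}}

doc = {"id": "clip-1", "text": "Panel on the Semantic Web and NLP"}
enriched = enrich(doc, ["semantic web", "nlp", "video"])

# A downstream consuming application interprets the content via its metadata.
print(enriched["metadata"]["tags"])
```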