Posts Tagged ‘Eric Prud’hommeaux’
Today, the World Wide Web Consortium announced that R2RML has achieved Recommendation status. As stated on the W3C website, R2RML is “a language for expressing customized mappings from relational databases to RDF datasets. Such mappings provide the ability to view existing relational data in the RDF data model, expressed in a structure and target vocabulary of the mapping author’s choice.” In the life cycle of W3C standards creation, today’s announcement means that the specifications have gone through extensive community review and revision and that R2RML is now considered stable enough for widespread distribution in commodity software.
Richard Cyganiak, one of the Recommendation’s editors, explained why R2RML is so important. “In the early days of the Semantic Web effort, we’ve tried to convert the whole world to RDF and OWL. This clearly hasn’t worked. Most data lives in entrenched non-RDF systems, and that’s not likely to change.”
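To give a sense of what such a mapping looks like, here is a minimal R2RML sketch in Turtle syntax. The table and column names (EMP, EMPNO, ENAME) and the ex: vocabulary are illustrative assumptions, not part of any particular deployment:

```turtle
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.com/ns#> .

# Map each row of the relational EMP table to an ex:Employee resource.
<#EmployeeMap>
    rr:logicalTable [ rr:tableName "EMP" ];
    # Build the subject IRI from the EMPNO primary-key column.
    rr:subjectMap [
        rr:template "http://example.com/employee/{EMPNO}";
        rr:class ex:Employee
    ];
    # Expose the ENAME column as an ex:name property.
    rr:predicateObjectMap [
        rr:predicate ex:name;
        rr:objectMap [ rr:column "ENAME" ]
    ] .
```

A row (7369, "SMITH") would then be viewable as the triples `<http://example.com/employee/7369> a ex:Employee; ex:name "SMITH"` without moving any data out of the relational store.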
SPARQL is the standardized query language for RDF, in the same way that SQL is the standardized query language for relational databases. If this is the first time you are looking at SPARQL but you are familiar with SQL, you will notice some similarities, because the two languages share several keywords such as SELECT and WHERE. SPARQL also has keywords that will be new to you if you come from the SQL world, such as OPTIONAL and FILTER.
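The overlap and the differences are easiest to see side by side. The following sketch assumes some RDF data using the FOAF vocabulary (the foaf:name and foaf:age properties); the data itself is hypothetical:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Roughly analogous to SQL's: SELECT name, age FROM people WHERE age > 30
SELECT ?name ?age
WHERE {
    ?person foaf:name ?name .   # triple pattern: bind each person's name
    ?person foaf:age  ?age .    # triple pattern: bind each person's age
    FILTER (?age > 30)          # restrict solutions, like a SQL WHERE condition
}
```

SELECT and WHERE will look familiar to a SQL user, while the triple patterns and FILTER are where SPARQL departs from the relational model.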
Recall that an RDF statement is a triple comprised of a subject, a predicate, and an object. A SPARQL query consists of a set of triple patterns in which the subject, predicate, and/or object can be variables. The idea is to match the triple patterns in the SPARQL query against the existing RDF triples and find solutions for the variables. A SPARQL query is executed on an RDF dataset, which can be a native RDF database, or on a Relational Database to RDF (RDB2RDF) system, such as Ultrawrap. These databases have SPARQL endpoints which accept queries and return results via HTTP.
A basic example
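The original example does not appear in this archive, so here is a minimal sketch in its spirit, assuming a tiny hypothetical dataset containing the single triple `ex:alice foaf:name "Alice"`:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# The subject and object positions hold variables; the predicate is fixed.
SELECT ?person ?name
WHERE {
    ?person foaf:name ?name .
}
```

Matching this pattern against the dataset yields one solution: ?person bound to ex:alice and ?name bound to "Alice". Every triple in the dataset whose predicate is foaf:name would contribute one row to the result.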
Date: May 26, 2010, 11:00AM (1 hour)
Register: View the recorded webcast
In 2008-2009, Lee Feigenbaum gave a series of presentations (PART I; PART II) for Semantic Universe on SPARQL, the query language of the Semantic Web. This year, SPARQL is going through some major updates with the release of SPARQL 1.1, and in this webcast Lee has agreed to revisit the topic and discuss some of what has changed. Lee will use real queries that can be run against real data on the Web to demonstrate the structure and features of SPARQL.
Supplementary materials:
"What's coming in SPARQL 1.1?" http://www.slideshare.net/LeeFeigenbaum/sparql2-status
"SPARQL By Example" http://www.cambridgesemantics.com/2008/09/sparql-by-example
"SPARQL Cheat Sheet" http://www.slideshare.net/LeeFeigenbaum/sparql-cheat-sheet
* At SemTech 2010, June 21-25, Lee and his colleague Eric Prud'hommeaux will give a full half-day, hands-on tutorial on SPARQL 1.1.
Lee Feigenbaum has been using Semantic Web technologies to architect and develop enterprise middleware and applications since 2003. He brings this expertise to his role as Cambridge Semantics’s VP of Technology and Standards, where he is responsible for the design and development of the Anzo family of semantic applications and middleware. Lee is the author of Glitter, a pluggable SPARQL engine designed to query multiple data sources. Lee served as Chair of the W3C RDF Data Access Working Group, publishing the SPARQL query language and protocol specifications. Lee co-authored "The Semantic Web in Action," a December 2007 article in Scientific American. Before joining Cambridge Semantics, Lee spent five years as an engineer with IBM’s Advanced Internet Technology Group. There, his experiences spanned knowledge management and annotation systems, instant-messaging software, and Web-based client application runtimes. Lee writes about Semantic Web technologies at his blog, TechnicaLee Speaking.
The HCLS IG gave a tutorial at the C-SHALS Conference last week. It was very well attended and consisted of participants from pharma, payers, health care organizations, technology companies, and academia.
The first half of the tutorial began with a primer on the Semantic Web that was delivered by Lee Feigenbaum. He did an excellent job of introducing the technology, and answering a broad range of good questions from the participants.
The second half of the tutorial began with Eric Prud’hommeaux (W3C) introducing HCLS. He highlighted that the mission of the group is to develop, advocate for, and support the use of Semantic Web technologies for biological science, translational medicine, and health care; and described the strong need for interoperability within these domains. He highlighted that almost 100 individuals are now participating in the interest group.
The tutorial then provided an overview of the activities being undertaken by the different tasks within HCLS. Vipul Kashyap (Cigna) described how the Clinical Observations Interoperability task built a demo that enables querying across electronic health records that are in different formats. John Madden (Duke) presented on work within the Terminology task to represent SNOMED within Semantic Web representations, and compared the benefits of SKOS and OWL. Susie Stephens (Lilly) presented on ongoing work within the Linking Open Drug Data task to bring publicly available datasets about drugs into the Linked Data cloud. She also briefly introduced the new Pharma Ontology task, which has the goal of creating a high-level, patient-centric ontology for translational medicine. Tim Clark (Harvard) represented the Scientific Discourse task and described their approach for integrating knowledge relating to hypotheses derived from literature and experiments using the SWAN, SIOC, and myExperiment ontologies. The tutorial concluded with Kei Cheung (Yale) providing a description of the accomplishments on aTags and federated query within the BioRDF task.
Slides are available from the tutorial on the HCLS Wiki.