Posts Tagged ‘SPARQL 1.1’

W3C’s Semantic Web Activity Folds Into New Data Activity

The World Wide Web Consortium has headline news today: the Semantic Web and eGovernment Activities are being merged and superseded by the Data Activity, where Phil Archer serves as Lead. Two new working groups have also been chartered: CSV on the Web and Data on the Web Best Practices.

What’s driving this? First, Archer explains, the Semantic Web technology stack is now mature, and it’s time to allow those updated standards to be used. With RDF 1.1, the Linked Data Platform, SPARQL 1.1, RDB To RDF Mapping Language (R2RML), OWL 2, and Provenance all done or very close to it, it’s the right time “to take that very successful technology stack and try to implement it in the wider environment,” Archer says, rather than continue tinkering with the standards.

The second reason, he notes, is that a large community exists "that sees Linked Data, let alone the full Semantic Web, as an unnecessarily complicated technology. To many developers, data means JSON — anything else is a problem. During the Open Data on the Web workshop held in London in April, Open Knowledge Foundation co-founder and director Rufus Pollock said that if he suggested to the developers that they learn SPARQL he'd be laughed at – and he's not alone," Archer says. "We need to end the religious wars, where they exist, and try to make it easier to work with data in the format that people like to work in."

The new CSV on the Web Working Group is an important step in that direction, following on the heels of efforts such as R2RML. It's about providing metadata for CSV files, such as column headings, data types, and annotations, and using that metadata to make it easy to convert CSV into RDF (or other formats), easing data integration. “The working group will define a metadata vocabulary and then a protocol for how to link data to metadata (presumably using HTTP Link headers) or embed the metadata directly. Since the links between data and metadata can work in either direction, the data can come from an API that returns tabular data just as easily as it can a static file,” says Archer. “It doesn’t take much imagination to string together a tool chain that allows you to run SPARQL queries against ‘5 Star Data’ that’s actually published as a CSV exported from a spreadsheet.”
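To make the idea concrete, here is a minimal sketch of the kind of CSV-to-RDF conversion that such metadata would drive, written in Python with the rdflib library; the file name, column names, and example.org vocabulary are all invented for illustration and are not part of the working group's output.

```python
import csv
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary and file; the working group's real metadata
# vocabulary did not exist yet when this was written.
EX = Namespace("http://example.org/vocab#")

g = Graph()
with open("stations.csv", newline="") as f:   # e.g. columns: name,capacity
    for i, row in enumerate(csv.DictReader(f)):
        station = URIRef(f"http://example.org/station/{i}")
        g.add((station, RDF.type, EX.Station))
        g.add((station, EX.name, Literal(row["name"])))
        g.add((station, EX.capacity, Literal(row["capacity"], datatype=XSD.integer)))

# The spreadsheet-style rows are now RDF and can be queried with SPARQL.
print(g.serialize(format="turtle"))
```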

Read more

A Look Into Learning SPARQL With Author Bob DuCharme

The second edition of Bob DuCharme’s Learning SPARQL debuted this summer. The Semantic Web Blog connected with DuCharme – director of digital media solutions at TopQuadrant, author of other works including XML: The Annotated Specification, and a familiar voice both at the Semantic Technology & Business Conference and on our Semantic Web Blog podcasts – to learn more about the latest version of the book.

Semantic Web Blog: In the two years or so since the first edition was published, what have been the most significant changes in the ‘SPARQL space’ – or the semantic web world at large – that make this the right time for an expanded edition of Learning SPARQL?

DuCharme: The key thing is that SPARQL 1.1 is now an actual W3C Recommendation. It was great to see it so widely implemented so early in its development process, which justified the release of the book’s first edition so long before 1.1 was set in stone, but now that it’s a Recommendation we can release an edition of the book that is no longer describing a moving target. Not much in SPARQL has changed since the first edition – the VALUES keyword replaced BINDINGS, with some tweaks, and some property path syntax details changed – but it’s good to know that nothing in 1.1 can change now.
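For readers who haven't seen it yet, VALUES inlines a fixed table of bindings directly into a query. Below is a minimal sketch using made-up data and the Python rdflib library, which supports SPARQL 1.1:

```python
from rdflib import Graph

# Tiny made-up graph, just to show the SPARQL 1.1 VALUES keyword
# (the replacement for the earlier BINDINGS keyword).
g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:book1 ex:title "Learning SPARQL" .
    ex:book2 ex:title "XML: The Annotated Specification" .
    ex:book3 ex:title "Some Other Book" .
""", format="turtle")

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?book ?title WHERE {
        VALUES ?book { ex:book1 ex:book2 }   # only these two are considered
        ?book ex:title ?title .
    }
""")

for book, title in results:
    print(book, title)
```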

Read more

Eleven SPARQL 1.1 Specifications are W3C Recommendations

The W3C has announced that eleven specifications of SPARQL 1.1 have been published as Recommendations. SPARQL is the Semantic Web query language. We caught up with Lee Feigenbaum, VP Marketing & Technology at Cambridge Semantics Inc., to discuss the significance of this announcement. Feigenbaum is a SPARQL expert who currently serves as Co-Chair of the W3C’s SPARQL Working Group, leading the design of SPARQL.

Feigenbaum says, “SPARQL 1.1 is a huge leap forward in providing a standard way to access and update Semantic Web data. By reaching W3C Recommendation status, Semantic Web developers, vendors, publishers and consumers have a stable, well-vetted, and interoperable set of standards they can rely on for the foreseeable future.”

Read more

W3C Advances SPARQL 1.1 to ‘Proposed Recommendation’

Ivan Herman reports, “The W3C SPARQL Working Group has published today a set of eleven documents, advancing most of SPARQL 1.1 to Proposed Recommendation. Building on the success of SPARQL 1.0, SPARQL 1.1 is a full-featured standard system for working with RDF data, including a query/update language, two HTTP protocols (one full-featured, one using basic HTTP verbs), three result formats, and other features which allow SPARQL endpoints to be combined and work together. Most features of SPARQL 1.1 have already been implemented by a range of SPARQL suppliers, as shown in our table of implementations and test results.” Read more
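As a rough illustration of the protocol side, the sketch below sends a query to a public SPARQL endpoint over a plain HTTP GET (DBpedia is used here only as an example) and asks for one of the standard result formats; Python's requests library is assumed.

```python
import requests

# The query travels as the standard "query" parameter; the Accept header
# selects one of the SPARQL 1.1 result formats (JSON here).
query = "SELECT ?type WHERE { <http://dbpedia.org/resource/Lyon> a ?type } LIMIT 5"

resp = requests.get(
    "http://dbpedia.org/sparql",                      # example public endpoint
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()

for binding in resp.json()["results"]["bindings"]:
    print(binding["type"]["value"])
```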

Stardog Meets SPARQL

Kendall Clark recently discussed what users can expect from Stardog next. Clark wrote, “The most pressing need in Stardog is support for SPARQL 1.1. We got stuck between the devil and the deep blue sea—trying to push the 1.0 release before the SPARQL Working Group was completely finished with the SPARQL 1.1 spec. We were motivated to avoid reimplementing any parts of SPARQL 1.1 because the spec shifted. So we decided on SPARQL 1.0 for the Stardog 1.0 release. Then we told everyone that SPARQL 1.1 would be the highest priority item for the post-1.0 release cycle. And so it’s been.” Read more

Somewhere Over the Semantic Horizon

What’s on the horizon for the semantic web? It was a question pondered by expert panelists last week at the Semantic Technology and Business Conference in San Francisco.

Siri – and its possible clones and descendants – came up as a signpost on the road ahead, pushing the notion of personal assistance ever forward. “Siri made a huge splash. It opened eyes to the idea of not just using semantics to move information around but using a natural language system with some semantic interpretation to perform actions, like putting reminders on your phone,” noted Mark Greaves, director of knowledge systems at Vulcan.

Read more

Highlights from WWW 2012 Conference

This year’s World Wide Web Conference – the 21st – was held in Lyon, France. The conference is a unique forum for discussion about how the Web is evolving. There were hundreds of talks over three days. Let me summarize some of the Semantic Web presentations I was able to attend.

NautiLOD

Programmers use the wget tool daily to specify and retrieve data on the Web. However, wget is limited: it cannot dig into the semantics of Web data to do the job. What if you were to add semantics to wget? This is the question that Valeria Fionda, Claudio Gutierrez and Giuseppe Pirró asked themselves, and they took it a step further: imagine a semantic wget on top of Linked Data. They wanted to create a language to declaratively specify portions of the Web of Data, define routes, and instruct agents that can do things for you on the Web – all by exploiting the semantics of information (RDF data) found in online data sources. For example: find all the Wikipedia pages of directors who have been influenced by Stanley Kubrick and send them to my email, or retrieve information about David Lynch from different information providers – and these only hint at what can be done. The researchers developed a simple, generic declarative language, NautiLOD, and implemented it in swget (semantic wget). swget comes in two flavors: a simple command-line tool (to give the Web back to users) and a GUI. This is not a fantasy anymore. Check it out for yourself (http://swget.wordpress.com).
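The excerpt doesn't show NautiLOD's own syntax, but the flavor of the Kubrick example can be approximated with an ordinary SPARQL 1.1 property-path query against DBpedia. In the sketch below, the property names are assumptions about that dataset, the email step is left out, and the SPARQLWrapper Python package is assumed:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Not NautiLOD: a SPARQL 1.1 property-path approximation of "Wikipedia
# pages of people influenced, directly or transitively, by Stanley Kubrick".
query = """
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX dbr:  <http://dbpedia.org/resource/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?wikipediaPage WHERE {
    ?person dbo:influencedBy+ dbr:Stanley_Kubrick ;
            foaf:isPrimaryTopicOf ?wikipediaPage .
}
LIMIT 25
"""

endpoint = SPARQLWrapper("http://dbpedia.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], row["wikipediaPage"]["value"])
```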

Read more

New Last Call Working Drafts for SPARQL 1.1

The W3C’s SPARQL Working Group has published a second round of last call working drafts for five SPARQL 1.1 documents. The documents include SPARQL 1.1 Update, SPARQL 1.1 Service Description, SPARQL 1.1 Query Language, SPARQL 1.1 Protocol, and SPARQL 1.1 Entailment Regimes. Comments are welcome on each of the five documents through February 6, 2012. Read more

SPARQL Working Group Publishes Working Draft on SPARQL 1.1

The W3C reports that “The SPARQL Working Group has published a Last Call Working Draft of SPARQL 1.1 Query Results JSON Format. This was also the First Public Draft of that document. This document describes the representation of SELECT and ASK query results using JSON. Comments are welcome through 26 October.” The project page notes, “Publication as a Last Call Working Draft indicates that the SPARQL Working Group believes it has addressed all substantive issues and that the document is stable. The Working Group expects to advance this specification to Recommendation Status.” Read more
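For a sense of what that format looks like, here is a small hand-written SELECT result and the few lines of Python needed to walk it; the data is made up.

```python
import json

# A hand-written SELECT result in the SPARQL 1.1 Query Results JSON Format:
# "head" lists the projected variables, and each entry in "results.bindings"
# is one solution, mapping variable names to typed values.
sample = """
{
  "head": { "vars": ["name"] },
  "results": {
    "bindings": [
      { "name": { "type": "literal", "value": "Tim Berners-Lee" } },
      { "name": { "type": "literal", "value": "Ada Lovelace" } }
    ]
  }
}
"""

doc = json.loads(sample)
for solution in doc["results"]["bindings"]:
    print(solution["name"]["value"])
```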

Introduction to: SPARQL

Hello, my name is SPARQL
SPARQL is the standardized query language for RDF, the same way SQL is the standardized query language for relational databases. If this is your first look at SPARQL but you're familiar with SQL, you will see some similarities, because it shares several keywords such as SELECT, WHERE, etc. It also has new keywords that you have never seen if you come from the SQL world, such as OPTIONAL, FILTER, and much more.

Recall that RDF data consists of triples, each comprising a subject, predicate and object. A SPARQL query consists of a set of triple patterns in which the subject, predicate and/or object may be variables. The idea is to match the triple patterns in the SPARQL query against the existing RDF triples and find solutions for the variables. A SPARQL query is executed against an RDF dataset, which can live in a native RDF database or in a Relational Database to RDF (RDB2RDF) system such as Ultrawrap. These databases expose SPARQL endpoints, which accept queries and return results via HTTP.
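As a quick, self-contained illustration of that matching process, here is a sketch using a tiny made-up graph and the Python rdflib library:

```python
from rdflib import Graph

# A tiny made-up graph: each line below is one RDF triple.
g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .
    ex:bob   ex:knows ex:carol .
""", format="turtle")

# The triple pattern "?person ex:knows ?friend" is matched against the data;
# each solution binds ?person and ?friend to resources in the graph.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person ?friend WHERE { ?person ex:knows ?friend . }
""")

for person, friend in results:
    print(person, friend)
```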

A basic example

Read more
