Greg Slabodkin of Health Data Management recently wrote, “At a minimum, there are three types of interoperability required to achieve an interoperable health IT ecosystem, according to Doug Fridsma, M.D., ONC’s outgoing chief science officer. Speaking this week at AHIMA’s 2014 conference in San Diego, Fridsma made the case that health IT requires all three types of interoperability–semantic, syntactic, and information exchange. ‘If you exchange the information and the codes don’t match or it’s a proprietary set of codes, you’ve got the information but you have no idea what those codes mean,’ he argued. ‘Semantic interoperability is about the vocabularies and syntactic interoperability is about the structure’.” Read more
There is no doubt about it: Schema.org is a big success. It has motivated hundreds of thousands of Web site owners to add structured data markup to their HTML templates and brought the idea of exchanging structured data over the WWW from the labs and prototypes to real business.
Unfortunately, support for information about the sale and rental of vehicles, namely cars, motorbikes, trucks, boats, and bikes, has been insufficient for quite a while. Besides two bare classes, http://schema.org/Vehicle and http://schema.org/Car, with no additional properties, there was nothing in the vocabulary that would help mark up granular vehicle information on new or used car listing sites or in car rental offers.
Recently, Mirek Sopek, Karol Szczepański and I released a fully-fledged extension proposal for schema.org that fixes this shortcoming and paves the way for much better automotive Web sites for marketing with structured data.
This proposal builds on the following vehicle-related extensions for GoodRelations, the e-commerce model of schema.org:
- Vehicle Sales Ontology (VSO), http://purl.org/vso/ns
- Volkswagen Vehicles Ontology (VVO), http://purl.org/vvo/ns
- Used Cars Ontology (UCO), http://purl.org/uco/ns
It adds the core classes, properties, and enumerated values for describing cars, trucks, buses, bikes, and boats and their features. For the commercial aspects of related offers, http://schema.org/Offer already provides the necessary level of detail, so our proposal does not add new elements for commercial features.
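To give a flavor of what such markup might look like, here is a sketch of a used-car listing as JSON-LD, combining the existing http://schema.org/Offer with vehicle-level properties in the spirit of the proposal. The vehicle property names (fuelType, vehicleTransmission, mileageFromOdometer) are illustrative, not normative; the exact names in the final vocabulary may differ, and the listing itself is invented.

```python
import json

# A hypothetical used-car listing marked up as JSON-LD: the Offer part
# uses existing schema.org terms; the vehicle properties are sketched
# along the lines of the extension proposal (names are illustrative).
listing = {
    "@context": "http://schema.org",
    "@type": "Offer",
    "price": "11900",
    "priceCurrency": "EUR",
    "itemOffered": {
        "@type": "Car",
        "name": "Example Compact 1.4",
        "fuelType": "Gasoline",
        "vehicleTransmission": "Manual",
        "mileageFromOdometer": {
            "@type": "QuantitativeValue",
            "value": 52000,
            "unitCode": "KMT"  # UN/CEFACT common code for kilometres
        }
    }
}

# This is the string a site would embed in a <script> tag of type
# application/ld+json inside its listing page template.
markup = json.dumps(listing, indent=2)
print(markup)
```

Note how the commercial data (price, currency) stays entirely within the existing Offer vocabulary, while only the description of the car itself needs new terms.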
Almost exactly 10 years after the publication of RDF 1.0 (10 Feb 2004, http://www.w3.org/TR/rdf-concepts/), the World Wide Web Consortium (W3C) has announced today that RDF 1.1 has become a “Recommendation.” In fact, the RDF Working Group has published a set of eight Resource Description Framework (RDF) Recommendations and four Working Group Notes. One of those notes, the RDF 1.1 primer, is a good starting place for those new to the standard.
Markus Lanthaler, one of the specification editors, said of the recommendation, “Semantic Web technologies are often criticized for their complexity–mostly because RDF is being conflated with RDF/XML. Thus, with RDF 1.1 we put a strong focus on simplicity. The new specifications are much more accessible and there’s a clear separation between RDF, the data model, and its serialization formats. Furthermore, the primer provides a great introduction for newcomers. I’m convinced that, along with the standardization of Turtle (and previously JSON-LD), this will mark an important point in the history of the Semantic Web.”
We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:
The W3C Documentation for The Data Catalog Vocabulary (DCAT), says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web….By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
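A minimal DCAT entry can make the quoted description concrete. In the sketch below, the dcat: and dct: terms are from the Recommendation, while the dataset, its title, and the URLs are invented for illustration.

```python
import json

# One dataset described with DCAT, serialized as JSON-LD. Standard
# vocabulary namespaces; the example dataset and URLs are made up.
catalog_entry = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/"
    },
    "@id": "http://example.org/dataset/budget-2013",
    "@type": "dcat:Dataset",
    "dct:title": "City Budget 2013",
    "dcat:keyword": ["budget", "finance"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:downloadURL": "http://example.org/files/budget-2013.csv",
        "dct:format": "text/csv"
    }
}

doc = json.dumps(catalog_entry, indent=2)
print(doc)
```

An aggregator harvesting entries like this from several catalogs can merge them directly, which is the federated-search scenario the documentation describes.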
Meanwhile, The RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
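In Data Cube terms, each cell of a statistical table becomes a qb:Observation attached to a qb:DataSet, with dimensions locating it and a measure carrying the value. The qb: terms below are standard; the dataset and the ex: dimension and measure properties are invented for the sketch.

```python
import json

# One statistical observation in RDF Data Cube terms, as JSON-LD.
# qb: terms are from the Recommendation; the ex: dataset, dimensions
# (refYear, refArea), and measure (population) are hypothetical.
observation = {
    "@context": {
        "qb": "http://purl.org/linked-data/cube#",
        "ex": "http://example.org/stats#"
    },
    "@id": "ex:obs-2013-anytown",
    "@type": "qb:Observation",
    "qb:dataSet": {"@id": "ex:population-dataset"},
    "ex:refYear": 2013,       # dimension: which year
    "ex:refArea": "Anytown",  # dimension: which area
    "ex:population": 52345    # measure: the observed value
}

doc = json.dumps(observation, indent=2)
print(doc)
```

Because every observation carries its own dimension values as URIs and literals, observations from independently published cubes can be linked and compared, which is exactly the multi-dimensional linking scenario the quoted text describes.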
Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”
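A small sketch shows the kind of organizational structure the ontology captures. The org: class and property names below are from the vocabulary; the organization itself and its labels are invented.

```python
import json

# An organization and one of its units, described with the W3C
# Organization Ontology (org:) and SKOS labels. The company and
# unit are fictional.
org_graph = {
    "@context": {
        "org": "http://www.w3.org/ns/org#",
        "skos": "http://www.w3.org/2004/02/skos/core#"
    },
    "@id": "http://example.org/org/acme",
    "@type": "org:Organization",
    "skos:prefLabel": "ACME Corp.",
    "org:hasUnit": {
        "@id": "http://example.org/org/acme-research",
        "@type": "org:OrganizationalUnit",
        "skos:prefLabel": "Research Division"
    }
}

print(json.dumps(org_graph, indent=2))
```

Domain-specific extensions would add their own classifications (for example, government agency types) on top of this core, as the Recommendation intends.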
JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data into web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON or working with JSON-based systems.
SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…
Manu Sporny (Digital Bazaar), told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”
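The “smooth upgrade path” the specification promises can be sketched in a few lines: an ordinary JSON document becomes JSON-LD by adding an @context that maps its keys to Linked Data term URIs. The person and homepage below are invented; the @context keywords are from the specification.

```python
import json

# Plain JSON, as a deployed system might already emit it.
plain = {"name": "Alice", "homepage": "http://example.org/alice"}

# Upgraded to JSON-LD: a @context maps each key to a term URI
# (schema.org and FOAF here), and "@type": "@id" marks the homepage
# value as a link rather than a string.
linked = dict(plain)
linked["@context"] = {
    "name": "http://schema.org/name",
    "homepage": {
        "@id": "http://xmlns.com/foaf/0.1/homepage",
        "@type": "@id"
    }
}

print(json.dumps(linked, indent=2))
```

Existing consumers that ignore unknown keys keep working on the upgraded document, while Linked Data tooling can now interpret it unambiguously.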
EU Initiative OpenCube partner consortium to develop software tools for publishing and reusing Linked Open Statistical Data
Thermi, Thessaloniki, Greece, January 14th, 2014 – A consortium of partners headed by the Centre for Research and Technology – Hellas (CERTH), recently launched the OpenCube project, an EU Initiative for Publishing and Enriching Linked Open Statistical Data for the Development of Data Analytics and Enhanced Visualization Services. The project intends to make Linked Open Statistical Data (LOSD) more accessible to publishers and users and to facilitate mining these data so as to enable the extraction of interesting and previously hidden insights. As part of the project, these innovative new technologies will be tested at four pilot sites: three government agencies from across Europe and a large financial institution.
Linked Statistical Data
Governments, organizations and companies are increasingly releasing their data for others to reuse. A major part of open data concerns statistics, such as population figures and economic and social indicators. Analysis of statistical open data can create value for citizens and businesses in areas ranging from business intelligence to epidemiological studies and evidence-based policy-making.
Recently, Linked Data has emerged as a promising paradigm for using the web as a platform for data integration, and Linked Statistical Data has been proposed as the most suitable way to publish open statistical data on the web. However, publishing and mining LOSD poses particular challenges and calls for appropriate tools and methods.
“There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new order of things…. Whenever his enemies have the ability to attack the innovator, they do so with the passion of partisans, while the others defend him sluggishly, so that the innovator and his party alike are vulnerable.”
–Niccolò Machiavelli, The Prince (1513)
The Semantic Web is not here yet.
Neither, for that matter, are flying cars, a cure for cancer, human travel to Mars, or a host of other futuristic ideas that still have merit.
A problem with many of these articles is that they conflate the Vision of the Semantic Web with the practical technologies associated with the standards. While the Whole Enchilada has yet to emerge (and may never do so), the individual technologies are finding their way into ever more systems in a wide variety of industries. These are not all necessarily on the public Web; they are simply Webs of Data. There are plenty of examples of this happening, and I won’t reiterate them here.
Instead, I want to highlight some other things that are going on in this discussion that are largely left out of these narrowly-focused, provocative articles.
First, the Semantic Web has had a name attached to its vision for quite some time. As such, it is easy to remember, and easy to remember that it Hasn’t Gotten Here Yet. Every year or so, we have another round of articles that are more about cursing the darkness than lighting candles.
In that same timeframe, however, we’ve seen the ascent and burnout of Service-Oriented Architectures (SOA), Enterprise Service Buses (ESBs), various MVC frameworks, server-side architectures, and more. Everyone likes to announce a $20 million ESB sale to a client. No one generally reports the $100 million write-downs on failed initiatives when they surface in annual reports a few years later. So we are left with a skewed perspective on the efficacy of these big “conventional” initiatives.
Silver Spring, MD (PRWEB) October 30, 2013 — PhUSE and CDISC are happy to announce the completion of Phase I of the FDA/PhUSE Semantic Technology Working Group Project. The PhUSE Semantic Technology Working Group aims to investigate how formal semantic standards can support the clinical and non-clinical trial data life cycle from protocol to submission. This deliverable includes a draft set of existing CDISC standards represented in RDF. Read more