Rancho Cordova, CA (PRWEB) April 01, 2014 – The pharmaceutical community, health care organizations, and software providers are coming together at the OASIS open standards consortium to define a machine-readable content classification standard for the interoperable exchange of clinical trial data via content management systems. The work of the new OASIS Electronic Trial Master File (eTMF) Standard Technical Committee will promote interoperability across diverse computing platforms and cloud networks within the clinical trials community.
Almost exactly 10 years after the publication of RDF 1.0 (10 Feb 2004, http://www.w3.org/TR/rdf-concepts/), the World Wide Web Consortium (W3C) announced today that RDF 1.1 has become a “Recommendation.” In fact, the RDF Working Group has published a set of eight Resource Description Framework (RDF) Recommendations and four Working Group Notes. One of those notes, the RDF 1.1 Primer, is a good starting place for those new to the standard.
Markus Lanthaler, a co-editor of the RDF 1.1 Concepts specification, said of the recommendation, “Semantic Web technologies are often criticized for their complexity – mostly because RDF is being conflated with RDF/XML. Thus, with RDF 1.1 we put a strong focus on simplicity. The new specifications are much more accessible and there’s a clear separation between RDF, the data model, and its serialization formats. Furthermore, the primer provides a great introduction for newcomers. I’m convinced that, along with the standardization of Turtle (and previously JSON-LD), this will mark an important point in the history of the Semantic Web.”
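To make that separation between data model and serialization concrete, here is a minimal, hypothetical example: a two-triple RDF graph written in Turtle, one of the newly standardized serializations. All example.org names are invented for illustration; schema.org supplies the vocabulary.

```turtle
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/> .

# Two triples in Turtle:
ex:alice schema:name  "Alice" ;
         schema:knows ex:bob .

# The same abstract graph in N-Triples, another RDF 1.1 serialization:
# <http://example.org/alice> <http://schema.org/name> "Alice" .
# <http://example.org/alice> <http://schema.org/knows> <http://example.org/bob> .
```

Both forms denote the identical graph; only the concrete syntax differs, which is exactly the separation Lanthaler describes.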
We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:
The W3C documentation for the Data Catalog Vocabulary (DCAT) says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web….By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
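As a sketch of how such a description looks in practice, a catalog entry might be written in Turtle along these lines (the catalog, dataset, and file URLs are hypothetical):

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<http://example.org/catalog> a dcat:Catalog ;
    dct:title "Example City Open Data Catalog" ;
    dcat:dataset <http://example.org/dataset/budget-2014> .

<http://example.org/dataset/budget-2014> a dcat:Dataset ;
    dct:title "City Budget 2014" ;
    dcat:keyword "budget", "finance" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/files/budget-2014.csv> ;
        dcat:mediaType "text/csv"
    ] .
```

A harvester that understands DCAT can aggregate such descriptions from many catalogs without site-specific code, which is what makes the federated search scenario above feasible.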
Meanwhile, the RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
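To give a flavor of the vocabulary, a single observation in a hypothetical population cube could look like this in Turtle. The dimension and measure properties under `ex:` are invented, and the data structure definition they would normally be declared in is omitted for brevity:

```turtle
@prefix qb: <http://purl.org/linked-data/cube#> .
@prefix ex: <http://example.org/ns#> .

ex:dataset-population a qb:DataSet .

ex:obs1 a qb:Observation ;
    qb:dataSet    ex:dataset-population ;
    ex:refArea    ex:Thessaloniki ;   # dimension: where
    ex:refPeriod  "2014" ;            # dimension: when
    ex:population 100000 .            # measure: the value (placeholder figure)
```

Each cell of a statistical table becomes one `qb:Observation`, so individual figures can be linked to and annotated like any other web resource.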
Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”
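A minimal sketch of the Organization Ontology in use, with all organization and person names invented for illustration:

```turtle
@prefix org:  <http://www.w3.org/ns/org#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/ns#> .

ex:acme a org:Organization ;
    org:hasUnit ex:research .

ex:research a org:OrganizationalUnit ;
    org:unitOf ex:acme .

ex:alice a foaf:Person ;
    org:memberOf ex:research .
```

Domain-specific extensions can then layer classifications and roles on top of these core terms, as the W3C description notes.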
JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data into web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
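Because a JSON-LD document is ordinary JSON, existing JSON tooling handles it unchanged; the Linked Data semantics live in the `@context`, which maps short keys to IRIs. A minimal sketch using Python's standard `json` module (the example.org identifiers are invented; schema.org supplies the vocabulary):

```python
import json

# A JSON-LD document: plain JSON plus an @context mapping keys to IRIs.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice/",
}

# Any JSON library can round-trip it; no RDF-specific code is needed
# until an application chooses to interpret the @context.
restored = json.loads(json.dumps(doc))
assert restored == doc
```

This is the “smooth upgrade path” the specification mentions: existing JSON systems keep working, and Linked Data consumers can additionally resolve the keys to full IRIs.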
SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…
Manu Sporny (Digital Bazaar), told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”
EU Initiative OpenCube partner consortium to develop software tools for publishing and reusing Linked Open Statistical Data
Thermi, Thessaloniki, Greece, January 14th, 2014 – A consortium of partners headed by the Centre for Research and Technology – Hellas (CERTH), recently launched the OpenCube project, an EU Initiative for Publishing and Enriching Linked Open Statistical Data for the Development of Data Analytics and Enhanced Visualization Services. The project intends to make Linked Open Statistical Data (LOSD) more accessible to publishers and users, and to facilitate mining these data for interesting, previously hidden insights. As part of the project, these innovative new technologies will be tested at four pilot sites: three government agencies from across Europe and a large financial institution.
Linked Statistical Data
Governments, organizations and companies are increasingly releasing their data for others to reuse. A major part of open data concerns statistics, such as population figures and economic and social indicators. Analysis of statistical open data can create value for citizens and businesses in areas ranging from business intelligence to epidemiological studies and evidence-based policy-making.
Linked Data has recently emerged as a promising paradigm for using the web as a platform for data integration, and Linked Statistical Data has been proposed as the most suitable way to publish open statistical data on the web. However, publishing and mining LOSD pose particular challenges and require appropriate tools and methods.
“There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new order of things…. Whenever his enemies have the ability to attack the innovator, they do so with the passion of partisans, while the others defend him sluggishly, so that the innovator and his party alike are vulnerable.”
–Niccolò Machiavelli, The Prince (1513)
The Semantic Web is not here yet.
Then again, neither are flying cars, the cure for cancer, humans traveling to Mars, or a host of other futuristic ideas that still have merit.
A problem with many of these articles is that they conflate the Vision of the Semantic Web with the practical technologies associated with the standards. While the Whole Enchilada has yet to emerge (and may never do so), the individual technologies are finding their way into ever more systems in a wide variety of industries. These are not all necessarily on the public Web; they are simply Webs of Data. There are plenty of examples of this happening and I won’t reiterate them here.
Instead, I want to highlight some other things that are going on in this discussion that are largely left out of these narrowly-focused, provocative articles.
First, the Semantic Web has a name attached to its vision and it has for quite some time. As such, it is easy to remember and it is easy to remember that it Hasn’t Gotten Here Yet. Every year or so, we have another round of articles that are more about cursing the darkness than lighting candles.
In that same timeframe, however, we’ve seen the ascent and burnout of Service-Oriented Architectures (SOA), Enterprise Service Buses (ESBs), various MVC frameworks, server-side architectures, and the like. Vendors are happy to announce a $20 million ESB sale to a client; hardly anyone reports on the $100 million write-downs on failed initiatives when they surface in annual reports a few years later. So we are left with a skewed perspective on the efficacy of these big “conventional” initiatives.
Silver Spring, MD (PRWEB) October 30, 2013 — PhUSE and CDISC are happy to announce the completion of Phase I of the FDA/PhUSE Semantic Technology Working Group Project. The PhUSE Semantic Technology Working Group aims to investigate how formal semantic standards can support the clinical and non-clinical trial data life cycle from protocol to submission. This deliverable includes a draft set of existing CDISC standards represented in RDF.
The W3C announced today that three specifications have reached recommendation status:
RDFa 1.1 Core – Second Edition
XHTML+RDFa 1.1 – Second Edition
HTML+RDFa 1.1
As the W3C website explains, “The last couple of years have witnessed a fascinating evolution: while the Web was initially built predominantly for human consumption, web content is increasingly consumed by machines which expect some amount of structured data. Sites have started to identify a page’s title, content type, and preview image to provide appropriate information in a user’s newsfeed when she clicks the ‘Like’ button. Search engines have started to provide richer search results by extracting fine-grained structured details from the Web pages they crawl. In turn, web publishers are producing increasing amounts of structured data within their Web content to improve their standing with search engines.”
“A key enabling technology behind these developments is the ability to add structured data to HTML pages directly. RDFa (Resource Description Framework in Attributes) is a technique that allows just that: it provides a set of markup attributes to augment the visual information on the Web with machine-readable hints.”
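As an illustration of those attributes, a hypothetical article snippet marked up with RDFa 1.1 and schema.org terms might look like this (the headline, author name, and byline are invented):

```html
<!-- RDFa 1.1 attributes: vocab, typeof, property -->
<div vocab="http://schema.org/" typeof="BlogPosting">
  <h2 property="headline">RDFa Reaches Recommendation</h2>
  <p>
    by <span property="author" typeof="Person">
         <span property="name">Jane Example</span>
       </span>
    on <time property="datePublished" datetime="2013-08-22">August 22, 2013</time>
  </p>
</div>
```

A crawler can extract the same headline, author, and date triples from this markup regardless of how the page is styled, which is what enables the richer search results and newsfeed previews described above.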
Manu Sporny, the editor of the HTML+RDFa 1.1 specification, told us that, “The release of RDFa 1.1 for HTML5 establishes it as the first HTML-based Linked Data technology to achieve recognition as an official Web standard by the World Wide Web Consortium.”
Danny Bradbury of our sister site, CoinDesk, recently wrote about Manu Sporny’s presentation at the Inside Bitcoins Conference. Bradbury writes, “A representative working loosely with the Web’s standards body set out his vision for a web-based payments standard at the Inside Bitcoins conference today. Manu Sporny, who works with the World Wide Web Consortium (W3C), is part of a working group on Web Payments. He advocated a standard payment mechanism that would be currency-agnostic, and which would do away with traditional online payment methods such as entering credit card data, or making electronic payments through proprietary networks such as PayPal. ‘Credit card numbers are effectively passwords to your bank account. You’re giving that password away to every merchant you do business with,’ said Sporny.”
Ivan Herman of the W3C reports, “The W3C RDF Working Group has published two Last Call Working Drafts: (1) A Last Call Working Draft of RDF 1.1 Concepts and Abstract Syntax. The Resource Description Framework (RDF) is a framework for representing information in the Web. Comments are welcome through 6 September. (2) A Last Call Working Draft of RDF 1.1 Semantics. This document describes a precise semantics for the Resource Description Framework 1.1 and RDF Schema. It defines a number of distinct entailment regimes and corresponding patterns of entailment. It is part of a suite of documents which comprise the full specification of RDF 1.1. Comments are welcome through 6 September.”