Posts Tagged ‘W3C’

W3C Publishes Linked Data Platform Best Practices and Guidelines

The W3C’s Linked Data Platform (LDP) Working Group has published a document outlining best practices and guidelines for implementing Linked Data Platform servers and clients. The document was edited by Cody Burleson of Base22, and by Miguel Esteban Gutiérrez and Nandana Mihindukulasooriya of the Ontology Engineering Group, Universidad Politécnica de Madrid.

For those new to LDP, SemanticWeb.com has recently published the following materials:

WEBINAR: “Getting Started with the Linked Data Platform (LDP)” with LDP Working Group Chair Arnaud Le Hors of IBM (who presented LDP work at the SemTechBiz conference last week).

ARTICLE: “Introduction to: Linked Data Platform” by Cody Burleson, Base22

Those ready to dive into the nuts and bolts of the document will find detailed guidance on topics such as:

  • Predicate URIs
  • Use of relative URIs
  • Hierarchy and container URIs
  • Working with fragments
  • Working with standard datatypes
  • Representing relationships between resources
  • Finding established vocabularies

…and much more. See the full document at http://www.w3.org/TR/ldp-bp/

SemanticWeb.com congratulates the Working Group on this step and looks forward to reporting on use cases and implementations of LDP.

WEBINAR: Getting Started with the Linked Data Platform (LDP)

In case you missed Monday’s webinar, “Getting Started with the Linked Data Platform (LDP),” delivered by Arnaud Le Hors of IBM, the recording and slides are now available (and posted below). The webinar was co-produced by SemanticWeb.com and DATAVERSITY.net and runs for one hour, including a Q&A session with the live audience.

The presenter will also deliver a session that offers a deeper dive into LDP at the upcoming Semantic Technology & Business Conference: “The W3C Linked Data Platform.” Immediately following that session, Sandro Hawke of the W3C staff will present “Building Social Applications with the W3C Linked Data Platform (LDP).”

Registration for the conference is now open.

If you watch this webinar, please use the comments section below to share your questions, comments, and ideas for webinars you would like to see in the future.

About the Webinar

Linked Data Platform (LDP), the latest W3C standard for Linked Data, brings REST to Linked Data. LDP defines a standard way to access, create, and update RDF resources over HTTP. With this new capability, businesses can use Linked Data for data integration in read/write mode.
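Concretely, “bringing REST to Linked Data” means applying ordinary HTTP verbs to RDF resources. The Python sketch below lists the core interactions an LDP client performs; the URLs, the `Slug` value, and the Turtle payload are hypothetical examples for illustration, not prescribed by the specification.

```python
# A minimal sketch of the HTTP requests an LDP client issues.
# All URLs and payloads here are hypothetical examples.

def ldp_requests(container_url: str, resource_url: str, turtle: str) -> dict:
    """Return the core LDP interactions as (method, url, headers, body) tuples."""
    return {
        # Read an RDF resource: a plain HTTP GET with content negotiation.
        "read": ("GET", resource_url, {"Accept": "text/turtle"}, None),
        # Create a resource: POST a representation to an LDP container.
        "create": ("POST", container_url,
                   {"Content-Type": "text/turtle", "Slug": "product1"}, turtle),
        # Update a resource: PUT a complete replacement representation.
        "update": ("PUT", resource_url, {"Content-Type": "text/turtle"}, turtle),
        # Remove it again with DELETE.
        "delete": ("DELETE", resource_url, {}, None),
    }

reqs = ldp_requests(
    "http://example.org/products/",          # hypothetical container
    "http://example.org/products/product1",  # hypothetical member resource
    "<> a <http://example.org/ns#Product> .",
)
for name, (method, url, headers, body) in reqs.items():
    print(name, method, url)
```

The point of the sketch is that nothing beyond standard HTTP is required on the client side — which is exactly what makes LDP approachable for mainstream web developers.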

This webinar will introduce you to the new standard, explaining what’s in it and how it fits with other standards like SPARQL. You will come away with a basic understanding of what you can expect to do with this technology, so you can plan how best to leverage it in your future business applications.

(Presentation Video and Slides after the jump…)

The Video:

Read more

Introduction to: Linked Data Platform

In its ongoing mission to lead the World Wide Web to its full potential, the W3C recently released the first specification for an entirely new kind of system. Linked Data Platform 1.0 defines a read-write Linked Data architecture, based on HTTP access to web resources described in RDF. To put that more simply, it proposes a way to work with pure RDF resources almost as if they were web pages.

Because the Linked Data Platform (LDP) builds upon the classic HTTP request and response model, and because it aligns well with things like REST, Ajax, and JSON-LD, mainstream web developers may soon find it much easier to leverage the power and benefits of Linked Data. It’s too early to know how big an impact it will make, but I’m confident that LDP is going to be an important bridge across the ever-shrinking gap between today’s Web of hyperlinked documents and the emerging Semantic Web of Linked Data. In today’s post, I’m going to introduce you to this promising newcomer by covering the most salient points of the LDP specification in simple terms. So, let’s begin with the obvious question…

 

What is a Linked Data Platform?

A Linked Data Platform is any client, server, or client/server combination that conforms in whole or in sufficient part to the LDP specification, which defines techniques for working with Linked Data Platform Resources over HTTP. That is to say, it allows Linked Data Platform Resources to be managed using HTTP methods (GET, POST, PUT, etc.). A resource is either something that can be fully represented in RDF or otherwise something like a binary file that may not have a useful RDF representation. When both are managed by an LDP, each is referred to as a Linked Data Platform Resource (LDPR), but further distinguished as either a Linked Data Platform RDF Source (LDP-RS) or a Linked Data Platform Non-RDF Source (LDP-NR).
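One way to picture the LDP-RS/LDP-NR distinction is as a decision a server makes per resource: does this thing have a full RDF representation or not? The sketch below is purely illustrative — the media-type list is my own assumption for the example, not something the specification defines.

```python
# Sketch: how an LDP server might classify the resources it manages,
# distinguishing RDF sources (LDP-RS) from non-RDF sources (LDP-NR).
# The media-type list is illustrative, not normative.

RDF_MEDIA_TYPES = {"text/turtle", "application/ld+json", "application/rdf+xml"}

def classify_ldpr(content_type: str) -> str:
    """Every managed resource is an LDPR; the subtype depends on whether
    it can be fully represented in RDF."""
    if content_type in RDF_MEDIA_TYPES:
        return "LDP-RS"  # Linked Data Platform RDF Source
    return "LDP-NR"      # Linked Data Platform Non-RDF Source (e.g. a binary file)

print(classify_ldpr("text/turtle"))
print(classify_ldpr("image/png"))
```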

Read more

Government Linked Data Goes With George Thomas

[EDITOR'S NOTE: Thank you to John Breslin for authoring this guest post remembering our friend and colleague, George Thomas.]

When writing about a person’s significant achievements, it would be so much better if the person themselves could hear the good things you were saying about them. Unfortunately, the person I am writing about, George Thomas, passed away last week after a long battle with cancer. However, I think it is important to note the huge impact that George had on Government Linked Data, Linked Data in general, and on his friends and colleagues in the Semantic Web space. If there’s one name that Government Linked Data ‘goes with’, it would be George Thomas.

Although I only physically met George a handful of times, I would count him as one of those who influenced me the most – through his visionary ideas, his practical nature, his inspiring talks at conferences like SemTechBiz, and his willingness to build bridges between people, communities, and of course data.

For those who may not have met him, George worked in the US Government for the past 12 years – most recently as an enterprise architect in the US Department of Health and Human Services (HHS) – and previously he held Chief Architect/CTO roles in other agencies and various private companies.

I first came across George when he was Chief Architect at the CIO’s office in the US General Services Administration. He had given a presentation about how Semantic Web technologies similar to SIOC could potentially be used to “track the dollar instead of the person” on Recovery.gov. Later on, DERI’s Owen Sacco and I collaborated with George on a system to create and enforce fine-grained access control policies (using PPO/PPM) for the HHS’s Government Linked Data on IT investments and assets stored in multiple sources. (George also sang DERI’s praises in a blog post on Data.gov – “Linked Data Goes With DERI” – echoed in this article’s title.)

Read more

Schema.org Takes Action

This week saw schema.org introduce vocabulary that enables websites to describe the actions they enable and how those actions can be invoked, in the hope that these additions will help unleash new categories of applications, according to a new post by Dan Brickley.

This represents an expansion of the vocabulary’s focus from describing entities to taking action on them. The work has been in progress for the last couple of years, Brickley explains here, building on the http://schema.org/Action types added last August by providing a way to describe the capability to perform actions in the future.

The three action status types are PotentialActionStatus, for a description of an action that is supported; ActiveActionStatus, for an in-progress action; and CompletedActionStatus, for an action that has already taken place.
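In markup, a page pairs an action with one of these status values. Here is a minimal hypothetical JSON-LD example built with Python’s standard `json` module — the movie and the surrounding structure are invented for illustration.

```python
import json

# Hypothetical example: a schema.org WatchAction annotated with one of the
# three action status values described above.
action = {
    "@context": "http://schema.org",
    "@type": "WatchAction",
    "actionStatus": "CompletedActionStatus",  # the action already took place
    "object": {"@type": "Movie", "name": "Example Movie"},
}
print(json.dumps(action, indent=2))
```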

Read more

The Web Is 25 — And The Semantic Web Has Been An Important Part Of It

NOTE: This post was updated at 5:40pm ET.

Today the Web celebrates its 25th birthday, and we celebrate the Semantic Web’s role in that milestone. And what a milestone it is: As of this month, the Indexed Web contains at least 2.31 billion pages, according to WorldWideWebSize.  

The Semantic Web Blog reached out to the World Wide Web Consortium’s current and former semantic leads to get their perspective on the roads The Semantic Web has traveled and the value it has so far brought to the Web’s table: Phil Archer, W3C Data Activity Lead coordinating work on the Semantic Web and related technologies; Ivan Herman, who last year transitioned roles at the W3C from Semantic Activity Lead to Digital Publishing Activity Lead; and Eric Miller, co-founder and president of Zepheira and the leader of the Semantic Web Initiative at the W3C until 2007.

While The Semantic Web came to the attention of the wider public in 2001, with the publication in Scientific American of “The Semantic Web” by Tim Berners-Lee, James Hendler and Ora Lassila, Archer points out that “one could argue that the Semantic Web is 25 years old,” too. He cites Berners-Lee’s March 1989 paper, Information Management: A Proposal, which includes a diagram showing relationships that are immediately recognizable as triples. “That’s how Tim envisaged it from Day 1,” Archer says.

Read more

The Audible Web?

Reuven Cohen of Forbes recently wrote, “New audible interaction methods and API standards could be poised to usher in a new generation of web technology. Technology specifically tailored to interact with us as individuals rather than having us adapt to interact with the web. At the heart of this transformation is a new crop of technologies focused on natural language interaction through the use of verbal commands. In its most simple form, speech recognition is the ability to translate spoken words into text. The technology is certainly not a new concept; it has been around for almost 60 years. In 1954, the so-called Georgetown-IBM experiment was an influential demonstration of the first machine-based translation program.” Read more

RDF 1.1 is a W3C Recommendation

Almost exactly 10 years after the publication of RDF 1.0 (10 Feb 2004, http://www.w3.org/TR/rdf-concepts/), the World Wide Web Consortium (W3C) has announced today that RDF 1.1 has become a “Recommendation.” In fact, the RDF Working Group has published a set of eight Resource Description Framework (RDF) Recommendations and four Working Group Notes. One of those notes, the RDF 1.1 primer, is a good starting place for those new to the standard.

SemanticWeb.com caught up with Markus Lanthaler, co-editor of the RDF 1.1 Concepts and Abstract Syntax document, to discuss this news.

Lanthaler said of the recommendation, “Semantic Web technologies are often criticized for their complexity–mostly because RDF is being conflated with RDF/XML. Thus, with RDF 1.1 we put a strong focus on simplicity. The new specifications are much more accessible and there’s a clear separation between RDF, the data model, and its serialization formats. Furthermore, the primer provides a great introduction for newcomers. I’m convinced that, along with the standardization of Turtle (and previously JSON-LD), this will mark an important point in the history of the Semantic Web.”

Read more

New Vocabularies Are Now W3C Recommendations

We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:

The W3C documentation for the Data Catalog Vocabulary (DCAT) says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web….By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
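For a sense of what DCAT looks like in practice, here is a hypothetical dataset description serialized as JSON-LD. The dataset, its title, and the URLs are invented for illustration; the `dcat:` and `dct:` terms come from the published vocabularies.

```python
import json

# Illustrative DCAT description of a dataset and one of its distributions.
catalog_entry = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@id": "http://example.org/catalog/dataset-001",  # hypothetical dataset
    "@type": "dcat:Dataset",
    "dct:title": "Example traffic counts 2013",
    "dcat:keyword": ["traffic", "transport"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:downloadURL": "http://example.org/files/traffic-2013.csv",
        "dcat:mediaType": "text/csv",
    },
}
print(json.dumps(catalog_entry, indent=2))
```

Because the description is just RDF, an aggregator can harvest entries like this from many catalogs and offer federated search across all of them.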

Meanwhile, The RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
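A single cell of such a statistical table becomes a `qb:Observation`. The sketch below shows one observation as JSON-LD; the `ex:` dimension and measure properties, and the figures themselves, are hypothetical stand-ins for what a real data structure definition would declare.

```python
import json

# Sketch of one RDF Data Cube observation (one cell of a statistical table).
# The ex: properties and values are hypothetical examples.
observation = {
    "@context": {
        "qb": "http://purl.org/linked-data/cube#",
        "ex": "http://example.org/ns#",
    },
    "@id": "ex:obs-2013-dublin",
    "@type": "qb:Observation",
    "qb:dataSet": {"@id": "ex:population-dataset"},
    "ex:refArea": {"@id": "ex:Dublin"},  # dimension: where
    "ex:refPeriod": "2013",              # dimension: when
    "ex:population": 527612,             # measure: the observed value
}
print(json.dumps(observation, indent=2))
```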

Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”

 

JSON-LD is an official Web Standard

JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data into web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
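That “smooth upgrade path” is easy to see in miniature: an ordinary JSON object becomes Linked Data once an `@context` maps its keys to IRIs. A small sketch — the mapping to FOAF terms is just one illustrative choice:

```python
import json

# An ordinary JSON object, as any web API might return it today.
plain = {"name": "Markus Lanthaler", "homepage": "http://www.markus-lanthaler.com/"}

# Upgrading to JSON-LD: add an @context mapping the existing keys to IRIs
# (here, hypothetically, to FOAF terms). The original data is untouched.
linked = dict(plain)
linked["@context"] = {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"},
}
print(json.dumps(linked, indent=2))
```

Existing JSON consumers can keep reading the same keys, while Linked Data tooling can now interpret them as globally identified properties.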

SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…

Manu Sporny (Digital Bazaar) told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”

Read more
