Posts Tagged ‘Linked Data Platform’

Introduction to: Linked Data Platform

In its ongoing mission to lead the World Wide Web to its full potential, the W3C recently released the first specification for an entirely new kind of system. Linked Data Platform 1.0 defines a read-write Linked Data architecture, based on HTTP access to web resources described in RDF. To put that more simply, it proposes a way to work with pure RDF resources almost as if they were web pages.

Because the Linked Data Platform (LDP) builds upon the classic HTTP request and response model, and because it aligns well with things like REST, Ajax, and JSON-LD, mainstream web developers may soon find it much easier to leverage the power and benefits of Linked Data. It’s too early to know how big an impact it will actually make, but I’m confident that LDP is going to be an important bridge across the ever-shrinking gap between today’s Web of hyperlinked documents and the emerging Semantic Web of Linked Data. In today’s post, I’m going to introduce you to this promising newcomer by covering the most salient points of the LDP specification in simple terms. So, let’s begin with the obvious question…

 

What is a Linked Data Platform?

A Linked Data Platform is any client, server, or client/server combination that conforms in whole or in sufficient part to the LDP specification, which defines techniques for working with Linked Data Platform Resources over HTTP. That is to say, it allows Linked Data Platform Resources to be managed using HTTP methods (GET, POST, PUT, etc.). A resource is either something that can be fully represented in RDF, or something, such as a binary file, that may not have a useful RDF representation. When managed by an LDP, each is referred to as a Linked Data Platform Resource (LDPR), but further distinguished as either a Linked Data Platform RDF Source (LDP-RS) or a Linked Data Platform Non-RDF Source (LDP-NR).
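To make that request/response pattern concrete, here is a minimal sketch of the headers an LDP client would send to read and create an LDP-RS. The container URL, resource body, and helper functions are hypothetical illustrations, and no network request is actually performed; the shape of the headers follows the LDP specification.

```python
# Illustrative sketch only: the container URL, the Turtle body, and the
# helper functions are hypothetical, and nothing is sent over the network.
# The headers show how an LDP client reads (GET) and creates (POST) a
# Linked Data Platform RDF Source (LDP-RS).

CONTAINER = "http://example.org/containers/docs/"  # hypothetical LDP container

def build_get(url):
    """Request description for retrieving an LDP-RS as Turtle."""
    return {
        "method": "GET",
        "url": url,
        "headers": {"Accept": "text/turtle"},
    }

def build_post(container, slug, turtle_body):
    """Request description for creating a new member resource in a container."""
    return {
        "method": "POST",
        "url": container,
        "headers": {
            "Content-Type": "text/turtle",
            "Slug": slug,  # client's hint for the new resource's URI
            # Declare the kind of resource being created:
            "Link": '<http://www.w3.org/ns/ldp#Resource>; rel="type"',
        },
        "body": turtle_body,
    }

doc = '<> <http://purl.org/dc/terms/title> "My first LDP resource" .'
req = build_post(CONTAINER, "my-doc", doc)
print(req["headers"]["Content-Type"])  # text/turtle
```

On success, a conforming server would answer the POST with `201 Created` and a `Location` header naming the new resource, which can then be fetched or updated with further GET and PUT requests.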

Read more

Hello 2014 (Part 2)


Courtesy: Flickr/faul

Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.

Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.

Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.

Read more

Hello 2014


Courtesy: Flickr/Wonderlane

Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:

Phil Archer, Data Activity Lead, W3C:

For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.

I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use Geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of Geospatial Information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!

[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
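The appeal Archer describes is easy to see in practice. Below is a minimal, hypothetical JSON-LD document (the people and URIs are invented for illustration): to an ordinary web developer it is plain JSON that round-trips through standard JSON tooling, but the `@context` maps its keys to RDF terms, so the same document is also RDF.

```python
import json

# A minimal, hypothetical JSON-LD document. To ordinary JSON tooling it is
# just a dict; the "@context" (here, schema.org) is what also makes it RDF.
person = {
    "@context": "http://schema.org/",
    "@id": "http://example.org/people/alice",
    "@type": "Person",
    "name": "Alice",
    "knows": {"@id": "http://example.org/people/bob"},
}

# Round-trips through any standard JSON library, no RDF stack required:
serialized = json.dumps(person)
parsed = json.loads(serialized)
print(parsed["@type"])  # Person
```

An RDF-aware consumer reading the same bytes would interpret `name` as `http://schema.org/name` and `knows` as a link between two resources, which is exactly the "everyone should be happy" outcome Archer is hoping for.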

Read more

Good-Bye 2013

Courtesy: Flickr/MadebyMark

As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.

Read on:

 

Phil Archer, Data Activity Lead, W3C:

The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.

I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.

Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier and allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.

Christine Connors, founder and information strategist, TriviumRLG:

What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.

There’s also been a huge increase in attention from the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field and via schema.org.

Read more

Keep It Simple, Smarty: How the BBC Is Expanding Their Linked Data Platform

At the Semantic Technology and Business Conference last June in San Francisco, David Rogers of the BBC was on hand to educate attendees on the origins and progress of the BBC’s impressive Linked Data Platform. In his presentation, Rogers — who serves as Senior Technical Architect for BBC Future Media (News & Knowledge) — explained how the news giant’s use of semantic technologies has evolved since they first turned to Linked Data to better report on the 2010 World Cup. Currently, the BBC Linked Data Platform works with content across the company, including news, sports, music, location, and learning.

Rogers started things off with a little history. Leading up to the 2010 World Cup, the BBC wanted to create a sports website, and they found that RDF triplestores were the best way to connect and organize their player, team, and tournament information. Pleased with what they’d accomplished, the BBC amped things up a few notches for the 2012 Olympics. Everything was scaled up and the data became more dynamic, all while relying on the simplest metadata possible. Before the Olympics, the BBC Sports website had information on 300 athletes. By the end, 1,100 athletes were covered semantically, allowing the BBC’s vast pool of reporters to all draw upon the same data and interlink their content in an easily discoverable manner. Read more
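The interlinking Rogers describes comes down to simple triples. The sketch below is not the BBC’s actual platform; it is an illustrative in-memory set of (subject, predicate, object) triples, with invented URIs, showing how minimal metadata makes every article that mentions an athlete discoverable from that athlete’s identifier.

```python
# Not the BBC's actual system: an illustrative in-memory set of RDF-style
# (subject, predicate, object) triples. All URIs are hypothetical.
triples = {
    ("ex:athlete/bolt", "ex:competesFor", "ex:team/jamaica"),
    ("ex:athlete/bolt", "ex:competesIn", "ex:event/100m"),
    ("ex:article/42", "ex:mentions", "ex:athlete/bolt"),
    ("ex:article/42", "ex:mentions", "ex:event/100m"),
}

def match(s=None, p=None, o=None):
    """Query by pattern: None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Every piece of content that mentions this athlete is discoverable
# from his URI, with no per-article indexing logic:
mentions = match(p="ex:mentions", o="ex:athlete/bolt")
print(len(mentions))  # 1
```

Because reporters tag content against the same shared identifiers, a new article only needs to assert its own `mentions` triples to become reachable from every related athlete, team, and event page.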

Linked Data at the BBC: The Latest Advances

Oli Bartlett of the BBC recently discussed the latest uses of linked data at the BBC. He writes, “The Linked Data Platform is one of the legacies of the BBC Sport 2012 Olympics website. You may have read my blog post on the work we did for the Olympic Data Service. One aspect of the service delivered the semantic framework for the 10,000 athlete pages and a page per event, discipline, country and venue. This framework provides the semantic graph of data (the linked data containing the athletes, events and venues and their associations with each other) and the APIs on this data. It was all built on the Dynamic Semantic Publishing (DSP) platform which facilitates the publication of automated metadata driven web pages and had originally been developed for the football World Cup in 2010.” Read more

Linked Data Platform 1.0 Working Draft Published

The W3C has published a Working Draft of the Linked Data Platform 1.0.

Earlier this year, the Linked Data Working Group was formed in response to a member submission (Full Disclosure: SemanticWeb.com was a co-sponsor of that submission).  The original proposal put forward by IBM stated the need as, “We believe that Linked Data has the potential to solve some important problems that have frustrated the IT industry for many years, or at least to make significant advances in that direction. But this potential will be realized only if we can establish and communicate a much richer body of knowledge about how to exploit these technologies. In some cases, there also are gaps in the Linked Data standards that need to be addressed.”

As a Working Draft, the document published yesterday is open for public review, but the Linked Data Platform work already represents a significant move forward in the creation of a standard for building enterprise systems around Linked Data. The document lays out “a set of best practices and a simple approach for a read-write Linked Data architecture, based on HTTP access to web resources that describe their state using RDF.”

Read more

Announcement of Linked Data Basic Profile 1.0 Submission

In December 2011, the World Wide Web Consortium held “The Linked Enterprise Data Patterns workshop,” and that workshop ended with unanimous agreement that “the W3C should create a Working Group to produce a W3C Recommendation which defines a Linked Data Platform.”

In response to that call, SemanticWeb.com has joined experts from fellow W3C member organizations IBM, DERI, EMC, Oracle, Red Hat, and Tasktop in submitting the “Linked Data Basic Profile 1.0” specification as a W3C Member Submission. This specification defines a set of best practices and a simple approach for a read-write Linked Data architecture, based on HTTP access to web resources that describe their state using RDF. The specification builds on the four principles Tim Berners-Lee used to define “Linked Data” and provides some new rules as well as clarifications and extensions to achieve greater interoperability between Linked Data implementations.

Read more here