Posts Tagged ‘Standards’

It’s Time for Developers to Take Linked Data Seriously

Candice McMillan of Programmable Web reports, “If you aren’t familiar with linked data or the Semantic Web, it’s probably a good time to get better acquainted with the concepts, as it seems to be a popular theme that is rippling through the Web development world, with promises of exciting possibilities. At this year’s APIcon in London, Markus Lanthaler, inventor of Hydra and one of the core designers of JSON-LD, talked about the importance of optimizing data architecture for an integrated future, focusing on linked data as a promising solution.” Read more

WEBINAR: Getting Started with the Linked Data Platform (LDP)

In case you missed Monday’s webinar, “Getting Started with the Linked Data Platform (LDP),” delivered by Arnaud Le Hors of IBM, the recording and slides are now available (and posted below). The webinar was co-produced by SemanticWeb.com and DATAVERSITY.net and runs for one hour, including a Q&A session with the audience that attended the live broadcast.

The presenter will also deliver a session that offers a deeper dive into LDP at the upcoming Semantic Technology & Business Conference: “The W3C Linked Data Platform.” Immediately following that session, Sandro Hawke, W3C staff, will present “Building Social Applications with the W3C Linked Data Platform (LDP).”

Registration for the conference is now open.

If you watch this webinar, please use the comments section below to share your questions, comments, and ideas for webinars you would like to see in the future.

About the Webinar

Linked Data Platform (LDP), the latest W3C standard for Linked Data, brings REST to Linked Data. LDP defines a standard way to access, create, and update RDF resources over HTTP. With this new capability, businesses can use Linked Data for data integration in read/write mode.

This webinar will introduce you to this new standard, explaining what’s in it and how it fits with other standards like SPARQL. You will come away with a basic understanding of what you can expect to do with this new technology, so you can plan how best to leverage it in your future business applications.
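To give a flavor of the read/write interaction style LDP standardizes, a client might create a new RDF resource by POSTing Turtle to an LDP container. (The host, container path, and vocabulary below are purely illustrative, not taken from the specification.)

```http
POST /products/ HTTP/1.1
Host: example.org
Content-Type: text/turtle
Slug: widget-17

<> a <http://example.org/vocab#Product> ;
   <http://purl.org/dc/terms/title> "Widget 17" .
```

A conforming server would answer with `201 Created` and a `Location` header naming the new resource, which the client can then read with GET or modify with PUT — plain HTTP verbs, applied to RDF resources.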

(Presentation Video and Slides after the jump…)

The Video:

Read more

New Vocabularies Are Now W3C Recommendations

We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:

The W3C documentation for the Data Catalog Vocabulary (DCAT) says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web…. By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
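A minimal sketch of what a DCAT description looks like in Turtle — the catalog and dataset URIs here are hypothetical, while the `dcat:` and `dct:` terms are from the standard vocabularies:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<http://example.org/catalog> a dcat:Catalog ;
    dct:title "Example Open Data Catalog" ;
    dcat:dataset <http://example.org/dataset/budget-2014> .

<http://example.org/dataset/budget-2014> a dcat:Dataset ;
    dct:title "City Budget 2014" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/files/budget-2014.csv> ;
        dct:format "text/csv"
    ] .
```

Because every catalog publishes the same predicates, an aggregator can harvest dataset metadata from many sites without per-site parsing logic.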

Meanwhile, the RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
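In Data Cube terms, a statistical table becomes a set of observations, each tied to its dataset and qualified by dimensions and measures. A rough sketch, using a hypothetical population dataset and made-up `ex:` properties (only the `qb:` terms are from the vocabulary):

```turtle
@prefix qb:  <http://purl.org/linked-data/cube#> .
@prefix ex:  <http://example.org/ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# One cell of a statistical table, as a Data Cube observation:
ex:obs1 a qb:Observation ;
    qb:dataSet    ex:populationDataset ;
    ex:refArea    ex:Wales ;            # dimension: where
    ex:refYear    "2013"^^xsd:gYear ;   # dimension: when
    ex:population 3064000 .             # measure: the observed value
```

Because each observation is a first-class RDF resource, individual cells of the cube can be linked to related datasets and concepts elsewhere on the web.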

Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”
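A brief sketch of how the Organization Ontology models a structure — the `ex:` resources are hypothetical, while the `org:` terms come from the ontology itself:

```turtle
@prefix org:  <http://www.w3.org/ns/org#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/ns#> .

ex:acme  a org:Organization .
ex:sales a org:OrganizationalUnit ;
    org:unitOf ex:acme .

ex:alice a foaf:Person .

# Membership reifies the person-organization-role relationship:
[ a org:Membership ;
  org:member       ex:alice ;
  org:organization ex:sales ;
  org:role         ex:salesManager ] .
```

Modeling membership as its own resource is what leaves room for domain-specific extensions — start dates, appointment details, and the like — without changing the core ontology.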

 

JSON-LD is an official Web Standard

JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data to web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
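To see that “smooth upgrade path” in miniature: an ordinary JSON object becomes Linked Data just by adding an `@context` that maps its keys to IRIs and an `@id` that names the thing described. A small sketch using only the standard library — the schema.org mappings and example.org identifier are illustrative choices, not requirements:

```python
import json

# A plain JSON record, as any existing system might produce it...
person = {
    "name": "Manu Sporny",
    "homepage": "http://manu.sporny.org/",
}

# ...upgraded to JSON-LD: @context maps each key to an IRI
# (schema.org used here purely as an example vocabulary) and
# @id gives the described thing a global identifier.
person_ld = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/manu",
    **person,
}

doc = json.dumps(person_ld, indent=2)
print(doc)
```

Existing JSON consumers can keep reading `name` and `homepage` as before; Linked Data tooling additionally sees unambiguous IRIs for every term.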

SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…

Manu Sporny (Digital Bazaar) told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”

Read more

Linked Data for Librarians: BIBFRAME Isn’t the Whole Story


Dorothea Salo of Library Journal recently wrote, “American catalogers and systems librarians can be forgiven for thinking that all the linked-data action lies with the BIBFRAME development effort. BIBFRAME certainly represents the lion’s share of what I’ve bookmarked for next semester’s XML and linked-data course. All along, I’ve wondered where the digital librarians, metadata librarians, records managers, and archivists—information professionals who describe information resources but are at best peripheral to the MARC establishment—were hiding in the linked-data ferment, as BIBFRAME certainly isn’t paying them much attention. After attending Semantic Web in Libraries 2013 (acronym SWIB because the conference takes place in Germany, where the word for libraries is “bibliotheken”), I know where they are and what they’re making: linked data that lives in the creases, building bridges across boundaries and canals through liminal spaces.” Read more

PhUSE and CDISC Announce Draft RDF Representation of Existing CDISC Data Standards


Silver Spring, MD (PRWEB) October 30, 2013 — PhUSE and CDISC are happy to announce the completion of Phase I of the FDA/PhUSE Semantic Technology Working Group Project. The PhUSE Semantic Technology Working Group aims to investigate how formal semantic standards can support the clinical and non-clinical trial data life cycle from protocol to submission. This deliverable includes a draft set of existing CDISC standards represented in RDF. Read more

FIBO Summit Opening Remarks by EDMC Managing Director Mike Atkin

[Editor’s Note: As our own Jennifer Zaino recently reported, the Enterprise Data Management (EDM) Council, a not-for-profit trade association dedicated to addressing the practical business strategies and technical implementation realities of enterprise data management, held a two-day FIBO Technology Summit in conjunction with MediaBistro’s Semantic Technology & Business (SemTechBiz) Conference, June 7th and 8th in San Francisco, California. SemTechBiz was chosen for the summit because of its close proximity to the leading minds in Silicon Valley.
 
In morning and afternoon sessions, led by distinguished academic and industry leaders, 60 top developers discussed four key technology challenges and developed plans that will lead to solutions critical to simultaneously lowering the cost of operations in financial institutions and ensuring the transparency required by regulations put in place since the beginning of the financial crisis of 2008.
 
Michael Atkin, EDM Council Managing Director, began the deliberations with the following charge to the assembled experts.]

I spent the majority of my professional life as the scribe, analyst, advocate, facilitator and therapist for the information industry. I started with the traditional publishers and then moved on to my engagement in the financial information industry. I watched the business of information evolve through lots of IT revolutions … from microfiche to Boolean search to CD-ROM to videotext to client-server architecture to the Internet and beyond.

At the baseline of everything was the concept of data tagging – as the key to search, retrieval and data value.  I saw the evolution from SGML (which gave rise to the database industry).  I witnessed the separation of content from form with the development of HTML.  And now we are standing at the forefront of capturing meaning with formal ontologies and using inference-based processing to perform complex analysis.

I have been both a witness to (and an organizer of) the information industry for the better part of 30 years. It is my clear opinion that this development – and by that I mean the tagging of meaning and semantic processing – is the most important development I have witnessed. It is about the representation of knowledge. It is about complex analytical processing. It is about the science of meaning. It is about the next phase of innovation for the information industry.

Let me see if I can put all of this into perspective for you, because my goal is to enlist you in our journey. Read more

Meritora, First Commercial Implementation of Universal Payment Standard PaySwarm, Goes Live

Today sees the launch of Meritora, the first commercial implementation of the universal payment standard PaySwarm (initially discussed on this blog here and here). Meritora is the creation of Digital Bazaar, the company founded and led by CEO Manu Sporny, whose W3C credentials include founding both the Web Payments Community Group and the JSON-LD Community Group, as well as chairing the RDF Web Applications Working Group. The service is designed to ease what is still a surprisingly arduous task: buying and selling on the web. It is starting with a simple asset-hosting feature that helps vendors sell digital content on WordPress-powered sites, plus support for decentralized web app stores, so that app creators can put their work on their own web sites, set a price, and let it be bought there, at a web app store, or anywhere on the web.

The name Meritora points to the service’s underlying purpose of rewarding greatness, coming from the bases ‘merit’ and ‘ora,’ the latter of which has been used across a number of cultures to express a unit of value, Sporny says (noting that it means ‘golden’ in Esperanto, and was also used as a unit of currency among Anglo-Saxons). That’s a big name to live up to, but the service hopes to do so by making Web payments work simply, securely, quickly, with low fees and no vendor lock-in for buyers and sellers on the digital content scene.

There’s Linked Data to thank for what Meritora, and PaySwarm, can do, with Sporny describing the system as “the world’s first payment solution where the core of the technology is powered by Linked Data.”

Read more

Eleven SPARQL 1.1 Specifications are W3C Recommendations

The W3C has announced that eleven specifications of SPARQL 1.1 have been published as Recommendations. SPARQL is the Semantic Web query language. We caught up with Lee Feigenbaum, VP Marketing & Technology at Cambridge Semantics Inc., to discuss the significance of this announcement. Feigenbaum is a SPARQL expert who currently serves as Co-Chair of the W3C’s SPARQL Working Group, leading the design of SPARQL.

Feigenbaum says, “SPARQL 1.1 is a huge leap forward in providing a standard way to access and update Semantic Web data. By reaching W3C Recommendation status, Semantic Web developers, vendors, publishers and consumers have a stable, well-vetted, and interoperable set of standards they can rely on for the foreseeable future.”
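Part of that leap forward is that SPARQL 1.1 adds an update language alongside querying, so data can be written as well as read. A small illustration of each — the resource URI is hypothetical, while the syntax and the `foaf:` vocabulary are standard:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# New in SPARQL 1.1: an update language for writing data
INSERT DATA {
  <http://example.org/people/lee> a foaf:Person ;
      foaf:name "Lee Feigenbaum" .
}
```

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Also new in 1.1: aggregates in queries
SELECT (COUNT(?p) AS ?people)
WHERE { ?p a foaf:Person }
```

Other 1.1 additions include subqueries, property paths, federated query, and standard HTTP protocols for querying and updating graph stores.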

Read more

Introduction to: OWL Profiles

OWL, the Web Ontology Language, has been standardized by the W3C as a powerful language to represent knowledge (i.e., ontologies) on the Web. OWL has two functionalities. The first is to express knowledge in an unambiguous way. This is accomplished by representing knowledge as a set of concepts within a particular domain and the relationships between those concepts. If we only take this functionality into account, then the goal is very similar to that of UML or Entity-Relationship diagrams. The second functionality is to be able to draw conclusions from the knowledge that has been expressed – in other words, to infer implicit knowledge from the explicit knowledge. We call this reasoning, and it is what distinguishes OWL from UML and other modeling languages.

OWL evolved from several proposals and became a W3C standard in 2004. It was subsequently extended by a second standard version, OWL 2, which reached Recommendation status in 2009. With OWL, you have the possibility of expressing all kinds of knowledge. The basic building blocks of an ontology are concepts (a.k.a. classes) and the relationships between the classes (a.k.a. properties). For example, if we were to create an ontology about a university, the classes would include Student, Professor, and Course, while the properties would include isEnrolled, because a Student is enrolled in a Course, and isTaughtBy, because a Course is taught by a Professor.
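The university example above can be written down directly in OWL; here is a minimal sketch in Turtle, using a hypothetical `example.org` namespace:

```turtle
@prefix :     <http://example.org/university#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Student   a owl:Class .
:Professor a owl:Class .
:Course    a owl:Class .

# A Student is enrolled in a Course
:isEnrolled a owl:ObjectProperty ;
    rdfs:domain :Student ;
    rdfs:range  :Course .

# A Course is taught by a Professor
:isTaughtBy a owl:ObjectProperty ;
    rdfs:domain :Course ;
    rdfs:range  :Professor .
```

Given these axioms, a reasoner can already infer implicit facts: if we assert only that `:alice :isEnrolled :logic101`, the domain and range axioms let it conclude that Alice is a Student and logic101 is a Course.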

Read more
