Posts Tagged ‘Richard Wallis’
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME), which the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
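The shift BIBFRAME represents, from MARC's record-oriented fields to RDF's statement-based model, can be sketched in a few lines. The vocabulary namespace and identifiers below are placeholders for illustration, not the official BIBFRAME terms:

```python
# Illustrative only: hand-written RDF statements showing the kind of
# (subject, predicate, object) modeling BIBFRAME brings, versus MARC's
# record-based fields. Namespace and URIs are placeholders.
def ntriple(subject, predicate, obj):
    """Serialize one RDF statement as an N-Triples line.
    Objects already wrapped in <...> are treated as URIs; anything
    else is emitted as a plain literal."""
    o = obj if obj.startswith("<") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {o} ."

BF = "http://example.org/bibframe/"          # placeholder vocabulary
work = "http://example.org/works/moby-dick"  # placeholder identifier

lines = [
    ntriple(work, BF + "title", "Moby Dick"),
    ntriple(work, BF + "creator", "<http://example.org/agents/melville>"),
]
print("\n".join(lines))
```

Each statement stands alone, which is what lets independently published library data be linked and merged on the web.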
During the recent Semantic Technology and Business Conference in San Francisco, a motley crew of expert presenters got up in front of a packed room, took a deep breath, and spoke passionately about the semantic projects nearest and dearest to their hearts while the unforgiving clock ticked their five precious minutes away. At the conference I shared highlights from some of those aptly named Lightning Sessions. Here are a few more snappy sessions that captivated the room that day:
Semantic Technology to Shed Light on Big Dark Data with Ben Zamanzadeh, DataPop
DataPop is a startup in the field of semantic advertising. The company seeks to use semantic technology to create actionable insights for clients. As Ben put it, “Ad data is still dark data. Consumer actions are very hard to understand and even harder to predict.” The talk description explains DataPop’s approach: “DataPop’s Semantic Advertising Technology uses Machine Learned Semantic Models to build and analyze advertising campaigns that surpasses conventional advertising capabilities. Composite Semantic Data Models are used to translate Big piles of Data into meaningful entities, then Inference Engines transcribe information such that decisions and strategies can be formed. Semantic Methods has made it possible for us to explain the reasoning behind ‘why’ things happen.” Read more
At the Semantic Technology and Business Conference in San Francisco Monday, OCLC technology evangelist Richard Wallis broke the news that content negotiation has been implemented for the publication of Linked Data for WorldCat resources. Last June, WorldCat.org began publishing Linked Data for its bibliographic treasure trove, a global catalog of more than 290 million library records and some 2 billion holdings, leveraging schema.org to describe the assets.
“Now you can use standard Linked Data technologies to bring back information in RDF/XML, JSON, or Turtle,” Wallis said. Or triples. “People can start playing with this today.” As he writes in his blog post discussing the news, users can manually specify their preferred serialization format to work with or display, or do it from within a program by setting the HTTP Accept header to the desired format when accessing the URI.
“Two hundred ninety million records on the web of Linked Data is a pretty good chunk of stuff when you start talking content negotiation,” Wallis told the Semantic Web Blog.
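The content negotiation Wallis describes can be sketched as follows. This builds, but does not send, the HTTP request; the MIME types are the standard ones for each serialization, and the OCLC number is purely illustrative, since the post does not spell out exactly which types the WorldCat service accepts:

```python
# Sketch of requesting WorldCat Linked Data in different serializations
# via HTTP content negotiation. MIME types are the conventional ones;
# the service's exact expectations, and the OCLC number, are assumptions.
import urllib.request

FORMATS = {
    "rdfxml": "application/rdf+xml",
    "json": "application/ld+json",   # JSON-LD's registered type
    "turtle": "text/turtle",
    "triples": "text/plain",         # N-Triples' historical MIME type
}

def build_request(oclc_number: str, fmt: str) -> urllib.request.Request:
    """Build (but do not send) a GET whose Accept header asks the server
    to return the resource in the chosen RDF serialization."""
    url = f"http://www.worldcat.org/oclc/{oclc_number}"
    return urllib.request.Request(url, headers={"Accept": FORMATS[fmt]})

req = build_request("41266045", "turtle")
print(req.get_header("Accept"))  # text/turtle
```

Sending the request with `urllib.request.urlopen(req)` would then return the record in the negotiated serialization rather than the HTML page.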
Richard Wallis of DataLiberate recently wrote, “Back in September I formed a W3C Group – Schema Bib Extend. To quote an old friend of mine ‘Why did you go and do that then?‘ Well, as I have mentioned before Schema.org has become a bit of a success story for structured data on the web. I would have no hesitation in recommending it as a starting point for anyone, in any sector, wanting to share structured data on the web. This is what OCLC did in the initial exercise to publish the 270+ million resources in WorldCat.org as Linked Data. At the same time, I believe that summer 2012 was a bit of a watershed for Linked Data in the library world. Over the preceding few years we have had various national libraries publishing linked data (British Library, Bibliothèque nationale de France, Deutsche National Bibliothek, National Library of Sweden, to name just a few). Read more
Richard Wallis has followed up his recent announcement that WorldCat data can now be downloaded as RDF triples with an explanation of how to put that data into a triple store. He begins: “Step 1: Choose a triplestore. I followed my own advice and chose 4Store. The main reasons for this choice were that it is open source yet comes from an environment where it was the base platform for a successful commercial business, so it should work. Also in my years rattling around the semantic web world, 4Store has always been one of those tools that seemed to be on everyone’s recommendation list.” Read more
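The import step can be sketched against 4store's HTTP interface. This assumes a backend has already been created and its HTTP server started (roughly `4s-backend-setup demo`, `4s-backend demo`, `4s-httpd -p 8000 demo`); the endpoint URL, graph name, and sample triple are illustrative assumptions, not taken from the post:

```python
# Sketch of loading downloaded WorldCat triples into a running 4store
# instance over HTTP. The request is built but not sent here, so the
# snippet runs without a live server; endpoint and graph are assumptions.
import urllib.request

def build_import_request(ntriples: str, graph: str,
                         endpoint: str = "http://localhost:8000"):
    """Build an HTTP PUT that replaces the named graph with the given
    N-Triples payload (4store's /data/ convention)."""
    url = f"{endpoint}/data/{graph}"
    return urllib.request.Request(
        url,
        data=ntriples.encode("utf-8"),
        headers={"Content-Type": "text/plain"},  # N-Triples' historical type
        method="PUT",
    )

sample = ('<http://www.worldcat.org/oclc/41266045> '
          '<http://schema.org/name> "Example title" .\n')
req = build_import_request(sample, "worldcat")
print(req.get_method(), req.full_url)
```

With a live backend, `urllib.request.urlopen(req)` performs the load, after which the triples are queryable through the server's SPARQL endpoint.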
Richard Wallis has written an article about the latest updates to WorldCat.org. He writes, “After we experimentally added RDFa embedded linked data, using Schema.org markup and some proposed Library extensions, to WorldCat pages, one of the most frequent questions I was asked was: where can I get my hands on some of this raw data? We are taking the application of linked data to WorldCat one step at a time so that we can learn from how people use and comment on it. So at that time if you wanted to see the raw data the only way was to use a tool [such as the W3C RDFa 1.1 Distiller] to parse the data out of the pages, just as the search engines do.” Read more
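That “parse the data out of the pages” step can be sketched with a minimal scraper. Real RDFa processing (prefix resolution, chaining, datatypes) is far more involved, and the HTML snippet below is invented for illustration; this only collects the flat property/value pairs a distiller would surface from markup shaped like WorldCat's:

```python
# Minimal sketch of pulling RDFa property/value pairs out of HTML, the
# way the W3C Distiller or a search engine would. Not a conforming RDFa
# parser; the snippet and values are invented for illustration.
from html.parser import HTMLParser

class RDFaScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.triples = []     # collected (property, value) pairs
        self._pending = None  # property awaiting its element's text

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a:
            if "content" in a:            # value given inline, e.g. <meta>
                self.triples.append((a["property"], a["content"]))
            else:                         # value is the element's text
                self._pending = a["property"]

    def handle_data(self, data):
        if self._pending and data.strip():
            self.triples.append((self._pending, data.strip()))
            self._pending = None

html = '''
<div typeof="schema:Book">
  <span property="schema:name">Weaving the Web</span>
  <meta property="schema:bookFormat" content="schema:EBook"/>
</div>
'''
scraper = RDFaScraper()
scraper.feed(html)
print(scraper.triples)
```

Feeding a full page through a real extractor instead yields the same statements as proper RDF, ready for a triple store.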