Posts Tagged ‘linked data’

The Importance of the Semantic Web To Our Cultural Heritage

Earlier this year The Semantic Web Blog reported that the Getty Research Institute has released the Art & Architecture Thesaurus (AAT) as Linked Open Data. One of the external advisors to that work was Vladimir Alexiev, who leads the Data and Ontology Management group at Ontotext and works on many projects related to cultural heritage.

Ontotext’s OWLIM family of semantic repositories supports large-scale knowledge bases of rich semantic information and powerful reasoning. The company, for example, built the first working implementation of search over CIDOC CRM, one of the rich ontologies used for cultural heritage.

We caught up with Alexiev recently to gain some insight into semantic technology’s role in representing the cultural heritage sphere. Here are some of his thoughts about why it’s important for cultural institutions to adopt Linked Open Data and semantic technologies to enhance our digital understanding of cultural heritage objects and information:
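
For readers who want to experiment with the AAT release themselves, the data can be queried programmatically. Below is a minimal Python sketch using the SPARQLWrapper library against the Getty's public SPARQL endpoint at vocab.getty.edu; the SKOS-based query shape is our assumption about how the vocabulary is published, not an official Getty sample.

from SPARQLWrapper import SPARQLWrapper, JSON

# Minimal sketch: find AAT concepts whose preferred label mentions
# "painting". Assumes the Getty publishes AAT concepts with standard
# SKOS properties; the label filter is illustrative and unoptimized.
endpoint = SPARQLWrapper("http://vocab.getty.edu/sparql")
endpoint.setQuery("""
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?concept ?label WHERE {
      ?concept skos:inScheme <http://vocab.getty.edu/aat/> ;
               skos:prefLabel ?label .
      FILTER(CONTAINS(LCASE(STR(?label)), "painting"))
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["concept"]["value"], row["label"]["value"])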

Read more

IBM Watson is Going to the Dermatologist

Neal Ungerleider of Fast Company reports, “Big Blue wants you to get to know Watson better. And now that means you could soon encounter the super-computer in a most intimate place–your dermatologist’s office. Early last year IBM announced plans to invest $1 billion into its cognitive-computing platform Watson. That money included $100 million in venture capital for companies developing new ways to use Watson. Today IBM reveals that one of the companies they are investing in will bring artificial intelligence into dermatologists’ offices. Modernizing Medicine, a Florida-based firm which produces iPad software for electronic medical record-keeping, is partnering with IBM to integrate Watson into their software package for dermatologists.” Read more

How The Huffington Post Uses Semantic Technology

Alastair Reid of Journalism.co.uk reports, “In the last two and a half years, The Huffington Post has launched in 11 markets and doubled traffic to its sites from 45 million to 90 million unique monthly visitors. Jimmy Maymann, chief executive of The Huffington Post, shared those figures while speaking at the Reuters Institute Big Data for Media conference in London today. For Maymann, the key is using data to improve reader experience, a tactic that will bring both editorial and business benefits. ‘Because of how media has changed in the last five years with social and search we’ve gone from producing 500 to 1,600 news stories every day,’ Maymann told delegates, and editors have access to data that can inform newsroom decisions in a real-time analytics dashboard. The content is ‘optimised’ by data, he said, so the editor can understand reader habits better and respond accordingly.” Read more

WorldCat Releases 197 Million Nuggets of Linked Data

Richard Wallis of OCLC reports on his Data Liberate blog, “A couple of months back I spoke about the preview release of Works data from WorldCat.org. Today OCLC published a press release announcing the official release of 197 million descriptions of bibliographic Works. A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work. The description format is based upon some of the properties defined by the CreativeWork type from the Schema.org vocabulary. In the case of a WorldCat Work description, it also contains [Linked Data] links to individual, OCLC-numbered, editions already shared from WorldCat.org.” Read more
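
To make the shape of such a Work description concrete, here is a purely illustrative Python sketch using rdflib: the Work URI, OCLC number and literal values below are invented, but the structure, a schema:CreativeWork with links out to individual editions, follows the description in the excerpt.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Purely illustrative: the Work URI, OCLC number and values are invented.
SCHEMA = Namespace("http://schema.org/")
g = Graph()
work = URIRef("http://worldcat.org/entity/work/id/12345")  # hypothetical Work ID
g.add((work, RDF.type, SCHEMA.CreativeWork))
g.add((work, SCHEMA.name, Literal("An Example Work")))
g.add((work, SCHEMA.author, Literal("A. N. Author")))
g.add((work, SCHEMA.about, Literal("Linked data")))
# Link from the Work down to one OCLC-numbered edition (hypothetical number).
g.add((work, SCHEMA.workExample, URIRef("http://www.worldcat.org/oclc/0000000")))
print(g.serialize(format="turtle"))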

University Libraries Adopts VIVO Application for Faculty Collaboration

Marketing and Communications, April 10, 2014 — The Texas A&M University Libraries is preparing to launch VIVO, a web-based community of research profiles to enhance faculty collaboration. By providing standard research profiles for all university faculty and graduate students, researchers can discover and contact individuals with similar interests whether they are across campus or at another VIVO institution. The data entry and standardization will continue through the summer with the VIVO debut planned for Open Access Week in October 2014.  Read more

Rewarding Improved Access to Linked Data

A new paper in the Semantic Web journal proposes a rating system, the Five Stars of Linked Data Vocabulary Use. The paper was written by Krzysztof Janowicz, Pascal Hitzler, Benjamin Adams, Dave Kolas, and Charles Vardeman II. The abstract states, “In 2010 Tim Berners-Lee introduced a 5 star rating to his Linked Data design issues page to encourage data publishers along the road to good Linked Data. What makes the star rating so effective is its simplicity, clarity, and a pinch of psychology — is your data 5 star?” Read more

Why Librarians Should Embrace Linked Data

David Stuart of Research Information recently wrote, “If libraries are to realise the value of the data they have been building and refining over many years, then it is not enough for them to just embrace the web of documents, they must also embrace the web of data. The associated technologies may seem complex and impenetrable but the idea of libraries embracing the web of data doesn’t have to mean that every librarian has to embrace every bit of technology. The web of data refers to the publication of data online in a machine-readable format, so that individual pieces of information can be both linked to and read automatically.” Read more
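
As a toy illustration of what machine-readable publication buys a library, the Python sketch below (using rdflib, with hypothetical URIs throughout) parses a catalogue record published as Turtle and queries it directly; no screen-scraping is involved, and the creator URI is something other datasets could link to.

from rdflib import Graph

# A catalogue record published as Turtle; all URIs are hypothetical.
record = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<http://example.org/catalogue/book/42>
    dcterms:title "A Sample Title" ;
    dcterms:creator <http://example.org/authority/person/7> .
"""
g = Graph()
g.parse(data=record, format="turtle")
# Any program can now read individual pieces of information automatically.
for row in g.query("SELECT ?title WHERE { ?b <http://purl.org/dc/terms/title> ?title }"):
    print(row.title)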

Smart Search Relies On Getting That Product Data Right

Supply chain and product standards organization GS1 – which this week joined the World Wide Web Consortium (W3C) to contribute to work on improving global commerce and logistics – has now released the GTIN (Global Trade Item Number) Validation Guide. In the United States the GTIN, the GS1-developed numbering sequence within bar codes for identifying products at point of sale, is known as the Universal Product Code (UPC).

The guide is part of the organization’s effort to drive awareness about “the business importance of having accurate product information on the web,” says Bernie Hogan, Senior Vice President, Emerging Capabilities and Industries. The guide has been endorsed by players including Google, eBay and Walmart, which are among the companies that require onboarding suppliers to use GTINs. They also support extending the GTIN further into the online space, to help ensure more accurate and consistent product descriptions that link to images and promotions, and to help customers better find, compare and buy products.

“This is an effort to help clean up the data and get it more accurate,” he says. “That’s so foundational to any kind of commerce, because if it’s not the right number, you can have the best product data and images and the consumer still won’t find it.” The search hook, indeed, is the link between the work GS1 is doing to encourage the use of its standards online for improved product identification data and semantic web efforts such as schema.org, which The Semantic Web discussed with Hogan here.
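
For a sense of how this plays out in page markup, here is a hedged sketch of a schema.org Product description carrying a GTIN, expressed as JSON-LD from Python; the product details and the 13-digit code are invented, and this is one plausible shape rather than a GS1-endorsed template.

import json

# Invented example product; "gtin13" is a real schema.org property, but
# the code below is a placeholder, not a valid assigned GTIN.
product = {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "Example Digital Camera",
    "gtin13": "0000000000000",  # hypothetical 13-digit GTIN
    "image": "http://example.com/images/camera.jpg",
    "offers": {
        "@type": "Offer",
        "price": "199.00",
        "priceCurrency": "USD",
    },
}
print(json.dumps(product, indent=2))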

Read more

RDF 1.1 and the Future of Government Transparency

Following the newly minted “recommendation” status of RDF 1.1, Michael C. Daconta of GCN has asked, “What does this mean for open data and government transparency?” Daconta writes, “First, it is important to highlight the JSON-LD serialization format.  JSON is a very simple and popular data format, especially in modern Web applications.  Furthermore, JSON is a concise format (much more so than XML) that is well-suited to represent the RDF data model.  An example of this is Google adopting JSON-LD for marking up data in Gmail, Search and Google Now.  Second, like the rebranding of RDF to ‘linked data’ in order to capitalize on the popularity of social graphs, RDF is adapting its strong semantics to other communities by separating the model from the syntax.  In other words, if the mountain won’t come to Muhammad, then Muhammad must go to the mountain.” Read more
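
Daconta’s point about JSON-LD separating the model from the syntax is easy to demonstrate. The Python sketch below, which assumes an rdflib version with JSON-LD support (bundled from rdflib 6 onward), parses a tiny JSON-LD document and re-serializes the very same triples as N-Triples.

from rdflib import Graph

# A tiny JSON-LD document; the dataset URI is a made-up example.
doc = """
{
  "@context": {"name": "http://schema.org/name"},
  "@id": "http://example.org/dataset/1",
  "name": "An open government dataset"
}
"""
g = Graph()
g.parse(data=doc, format="json-ld")   # JSON syntax in...
print(g.serialize(format="nt"))       # ...the same RDF triples out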

CHAIN-REDS Project Enhances Semantic Search And Extends Reproducibility Of Scientific Data

The CHAIN-REDS FP7 project, co-funded by the European Commission, aims to build a knowledge base of information, gathered both from dedicated surveys and from other web and document sources, covering well over half of the countries in the world, and presents it to visitors through geographic maps and tables. Earlier this month, its Knowledge Base and Semantic Search Engine for exploring the more than 30 million documents in its Open Access Document Repositories (OADR) and Data Repositories (DR) became available as a smartphone and tablet app, and the results of its Semantic Search Engine are now also ranked according to the January 2014 Ranking Web of Repositories, so users conducting searches should see results from the highest-ranked repositories first.

The project has its roots in using semantic web technologies to correlate the data used to write scientific papers with the documents themselves whenever possible, says Prof. Roberto Barbera of the Department of Physics and Astronomy at the University of Catania, and, where available, with applications that can be used to analyse the information. To those ends, the CHAIN-REDS consortium semantically enriched its repositories and built its search engine on the resulting Linked Data. Users searching for information can retrieve papers and data and, if applications are available, can be redirected to them on the project’s cloud infrastructure to reproduce and reanalyse the data.
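
As a purely hypothetical sketch of the kind of query such enrichment enables, consider the Python snippet below; the endpoint URL and the dcterms:references link are placeholders we invented, since the excerpt does not describe CHAIN-REDS’ actual vocabulary or endpoint.

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary; the shape mirrors the idea of
# following links from a paper to the dataset it was written from.
sparql = SPARQLWrapper("http://example.org/chain-reds/sparql")
sparql.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?paper ?dataset WHERE {
      ?paper dcterms:title ?title ;
             dcterms:references ?dataset .
      FILTER(CONTAINS(LCASE(STR(?title)), "climate"))
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["paper"]["value"], "->", row["dataset"]["value"])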

“There is a huge effort in the scientific world about the reproducibility of science,” says Barbera.

Read more
