Posts Tagged ‘linked open data’

Easing The Way To Linked Open Data In The Geosciences Domain

The OceanLink Project is bringing semantic technology to the geosciences domain, and it is doing so without forcing that community to become semantic technology experts in order to realize value from the implementation. Project lead Tom Narock of Marymount University, who recently participated in an online webinar discussing how semantics is being used to integrate ocean science data repositories, library holdings, conference abstracts, and funded research awards, noted that this effort is “tackling a particular problem in ocean sciences, but [can be part of a] more general change for researchers in discovering and integrating interdisciplinary resources, [when you] need to do federated and complex searches of available resources.”

The project has an interest in using more formal, stronger semantics – working with OWL, RDF, and reasoners – but also acknowledges that a steeper learning curve comes with the territory. How to balance that with what the community is able to implement and use? The answer: “In addition to exposing our data using semantic technologies, a big part of Oceanlink is building cyber infrastructure that will help lessen the burden on our end users.”
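For readers curious what such a federated search can look like in practice, here is a minimal sketch in Python using SPARQLWrapper. The endpoint URLs and vocabulary terms are placeholders for illustration only, not OceanLink's actual services or schema.

```python
# A minimal sketch of a federated SPARQL query of the kind described above.
# The endpoint URLs and vocabulary terms are illustrative assumptions,
# not OceanLink's actual infrastructure or data model.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT ?dataset ?title WHERE {
  # Query one repository directly...
  ?dataset a <http://example.org/ocean#Dataset> ;
           dcterms:title ?title .
  # ...and pull related records from a second repository in the same query.
  SERVICE <http://example.org/library/sparql> {
    ?record dcterms:references ?dataset .
  }
}
LIMIT 10
"""

sparql = SPARQLWrapper("http://example.org/ocean/sparql")  # hypothetical endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```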

Read more

The Importance of the Semantic Web To Our Cultural Heritage

Earlier this year The Semantic Web Blog reported that the Getty Research Institute has released the Art & Architecture Thesaurus (AAT) as Linked Open Data. One of the external advisors to its work was Vladimir Alexiev, who leads the Data and Ontology Management group at Ontotext and works on many projects related to cultural heritage.

Ontotext’s OWLIM family of semantic repositories supports large-scale knowledge bases of rich semantic information and powerful reasoning. The company, for example, delivered the first working implementation of CIDOC CRM search; CIDOC CRM is one of the rich ontologies used for cultural heritage.

We caught up with Alexiev recently to gain some insight into semantic technology’s role in representing the cultural heritage sphere. Here are some of his thoughts about why it’s important for cultural institutions to adopt Linked Open Data and semantic technologies to enhance our digital understanding of cultural heritage objects and information:

Read more

Studio Ousia Envisions A World Of Semantic Augmented Reality

Image courtesy: Flickr/by Filter Forge

Ikuya Yamada, co-founder and CTO of Studio Ousia, the company behind Linkify – the technology to automatically extract certain keywords and add intelligent hyperlinks to them to accelerate mobile search – recently sat down with The Semantic Web Blog to discuss the company’s work, including its vision of Semantic AR (augmented reality).

The Semantic Web Blog: You spoke at last year’s SEEDS Conference on the subject of linking things and information and the vision of Semantic AR, which includes the idea of delivering additional information to users before they even launch a search for it. Explain your technology’s relation to that vision of finding and delivering the information users need while they are consuming content – even just looking at a word.

Yamada: The main focus of our technology is extracting accurately only a small amount of interesting keywords from text [around people, places, or things]. …We also develop a content matching system that matches those keywords with other content on the web – like a singer [keyword] with a song or a location [keyword] with a map. By combining keyword extraction and the content matching engine, we can augment text using information on the web.

Read more

194 Million Linked Open Data Bibliographic Work Descriptions Released by OCLC

Yesterday, Richard Wallis gave a peek into some exciting new developments in OCLC’s Linked Open Data (LOD) efforts. While these have not yet been formally announced by OCLC, they represent significant advancements in WorldCat LOD. Our reporting to date on LOD at WorldCat is here.

Most significantly, OCLC has now released 194 Million Linked Open Data Bibliographic Work descriptions. According to Wallis, “A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work.” In his post, he uses the example of “Zen and the Art of Motorcycle Maintenance” as a Work.
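For those who want to explore the data directly, here is a minimal sketch of dereferencing a WorldCat Work URI for its RDF description. The URI pattern and the serialization requested below are assumptions to check against OCLC's documentation, and the Work id is a placeholder.

```python
# A minimal sketch of fetching the RDF description of a WorldCat Work
# via HTTP content negotiation. The URI pattern and requested format
# are assumptions; the Work id below is a placeholder, not a real record.
import requests
from rdflib import Graph

work_uri = "http://worldcat.org/entity/work/id/12345"  # placeholder Work id

response = requests.get(work_uri, headers={"Accept": "text/turtle"})
response.raise_for_status()

graph = Graph()
graph.parse(data=response.text, format="turtle")

# Print a few triples from the Work description.
for subject, predicate, obj in list(graph)[:10]:
    print(subject, predicate, obj)
```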

Read more

First of Four Getty Vocabularies Made Available as Linked Open Data

Jim Cuno, the President and CEO of the Getty, announced yesterday that the Getty Research Institute has released the Art & Architecture Thesaurus (AAT)® as Linked Open Data. Cuno said, “The Art & Architecture Thesaurus is a reference of over 250,000 terms on art and architectural history, styles, and techniques. It’s one of the Getty Research Institute’s four Getty Vocabularies, a collection of databases that serves as the premier resource for cultural heritage terms, artists’ names, and geographical information, reflecting over 30 years of collaborative scholarship.”

The data set is available for download at vocab.getty.edu under an Open Data Commons Attribution License (ODC BY 1.0). Vocab.getty.edu offers a SPARQL endpoint, as well as links to the Getty’s Semantic Representation documentation, the Getty Ontology, links for downloading the full data sets, and more.
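As a quick illustration of what the endpoint makes possible, here is a minimal sketch of a query against vocab.getty.edu from Python. The label properties assumed here (plain SKOS) should be checked against the Getty's Semantic Representation documentation, which describes the actual SKOS-XL based modelling.

```python
# A minimal sketch of querying the Getty AAT SPARQL endpoint at vocab.getty.edu.
# The use of plain skos:prefLabel here is an assumption; the Getty's own
# documentation describes the exact properties in its representation.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?concept ?label WHERE {
  ?concept skos:inScheme <http://vocab.getty.edu/aat/> ;
           skos:prefLabel ?label .
  FILTER (lang(?label) = "en")
}
LIMIT 10
"""

sparql = SPARQLWrapper("http://vocab.getty.edu/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["concept"]["value"], "-", row["label"]["value"])
```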

Read more

Ivan Herman Discusses Lead Role At W3C Digital Publishing Activity — And Where The Semantic Web Can Fit In Its Work

There’s a (fairly) new World Wide Web Consortium (W3C) activity, the Digital Publishing Activity, and it’s headed up by Ivan Herman, formerly the Semantic Web Activity Lead there. The Semantic Web Activity was subsumed in December by the W3C Data Activity, with Phil Archer taking on the role of Lead (see our story here).

Begun last summer, the Digital Publishing Activity has, as Herman describes it, “millions of aspects, some that have nothing to do with the semantic web.” But some, happily, do – and those are extremely important to the publishing community as well.

Read more

Dandelion’s New Bloom: A Family Of Semantic Text Analysis APIs

Dandelion, the service from SpazioDati whose goal is to deliver linked and enriched data for apps, recently introduced a new suite of products related to semantic text analysis.

Its dataTXT family of semantic text analysis APIs includes dataTXT-NEX, a named entity recognition API that links entities in the input sentence with Wikipedia and DBpedia and, in turn, with the Linked Open Data cloud, and dataTXT-SIM, an experimental semantic similarity API that computes the semantic distance between two short sentences. dataTXT-CL (now in beta) is a categorization service that classifies short sentences into user-defined categories, says SpazioDati CEO Michele Barbera.

“The advantage of the dataTXT family compared to existing text analysis tools is that dataTXT relies neither on machine learning nor NLP techniques,” says Barbera. “Rather it relies entirely on the topology of our underlying knowledge graph to analyze the text.” Dandelion’s knowledge graph merges several open community data sources (such as DBpedia) with private data collected and curated by SpazioDati. The graph itself is still in private beta and not yet publicly accessible, though the plan is to gradually open up portions of it via the service’s upcoming Datagem APIs, “so that developers will be able to access the same underlying structured data by linking their own content with dataTXT APIs or by directly querying the graph with the Datagem APIs; both of them will return the same resource identifiers,” Barbera says. (See the Semantic Web Blog’s initial coverage of Dandelion here, including additional discussion of its knowledge graph.)
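To give a feel for how an entity extraction API of this kind is typically called, here is a rough sketch in Python. The endpoint URL, parameter names, and response shape are assumptions for illustration only and should be checked against the dataTXT documentation.

```python
# A rough sketch of calling a named entity extraction API such as dataTXT-NEX.
# The endpoint URL, parameter names, and response fields are assumptions
# made for illustration; consult the real API documentation before use.
import requests

API_ENDPOINT = "https://api.dandelion.eu/datatxt/nex/v1"  # assumed endpoint
API_TOKEN = "YOUR_API_TOKEN"                               # placeholder credential

text = "Bob Dylan played in Trento last night."

response = requests.get(
    API_ENDPOINT,
    params={"text": text, "token": API_TOKEN},
)
response.raise_for_status()

# Assumed response shape: a list of annotations, each linking a spotted
# keyword in the text to a Wikipedia/DBpedia resource.
for annotation in response.json().get("annotations", []):
    print(annotation.get("spot"), "->", annotation.get("uri"))
```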

Read more

Redlink Brings The Semantic Web To Integrators

A cloud-based platform for semantic enrichment, linked data publishing and search technologies is underway now at startup Redlink, which bills it as the world’s first project of its kind.

The company has its heritage in the European Commission-funded IKS (Interactive Knowledge Stack) Open Source Project, which was created to provide a stack of semantic features for use in content management systems and was the birthplace of Apache Stanbol, as well as in the Linked Media Framework project, from which Apache Marmotta derived. The founding developers of those open source projects are Redlink’s founders. Apache Stanbol, which provides a set of reusable components for semantic content management (including adding semantic information to “non-semantic” pieces of content), and Apache Marmotta, which provides Linked Data platform capabilities, are core to the platform, as is Apache Solr for enterprise search.
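For a sense of how Stanbol-style enrichment is typically used, here is a minimal sketch of posting text to a Stanbol enhancer endpoint from Python. The host, port, and path assume a default local Stanbol launcher rather than Redlink's hosted platform, and may need adjusting for a real deployment.

```python
# A minimal sketch of sending plain text to an Apache Stanbol enhancer
# endpoint, which returns RDF "enhancements" (detected entities, topics, etc.).
# The host, port, and path below assume a default local Stanbol launcher;
# adjust them for an actual deployment.
import requests

STANBOL_ENHANCER = "http://localhost:8080/enhancer"  # assumed default location

text = "Apache Stanbol adds semantic information to non-semantic content."

response = requests.post(
    STANBOL_ENHANCER,
    data=text.encode("utf-8"),
    headers={
        "Content-Type": "text/plain",
        "Accept": "text/turtle",  # ask for the enhancements as Turtle RDF
    },
)
response.raise_for_status()

print(response.text)  # RDF describing the extracted annotations
```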

Read more

Perfect Memory’s Goal: Help You To Make More From Your Media Content

There’s a growing focus on the opportunity for semantic technology to help out with managing media assets – and with making money off of them, too. Last week, The Semantic Web Blog covered the EU-funded project Media Mixer for repurposing and reusing media fragments across borders on the Web. Also hailing from Europe – France, to be exact – is Perfect Memory, which aims to support the content management, automatic indexing, and asset monetization of large-scale multimedia.

Perfect Memory, which was a finalist at this spring’s SemTechBiz semantic start-up competition, has implemented its platform at Belgian radio and television broadcaster RTBF for its GEMS semantic-based multimedia browser prototype, which was a runner-up at IBC 2013 this fall. In September it also received a 600,000 euro investment from SOFIMAC Partners to extend its development efforts, platform, and market segments, as well as to protect its innovations with patent filings.

“Our idea is to reinvent media asset management systems,” says Steny Solitude, CEO of Perfect Memory.

Read more

Fighting Global Hunger with Semantics, And How You Can Help

Hunger is a critical issue affecting approximately 870 million people worldwide. With new technologies, research, and telecommunication, we as a global population have the power to significantly reduce the levels of hunger around the world. But in order to accomplish this, the people who have control of the aforementioned research and technology will need to share their data and combine forces to create direct solutions to this global problem.

This is precisely what the good people at the International Food Policy Research Institute (IFPRI) are working toward. What the IFPRI has to offer is data – data on every country around the world, data about malnutrition, child mortality rates, ecology, rainfall, and much more. With the help of Web Portal Specialists like Soonho Kim, they are working on making that data open and easily accessible, but they are currently facing a number of challenges along the way. Soonho spoke to an intimate group of semantic technology experts at the recent Semantic Technology Conference, sharing the successes of the IFPRI thus far and the areas where they could use some help.

Read more
