Posts Tagged ‘linked open data’
Earlier this year The Semantic Web Blog reported that the Getty Research Institute had released the Art & Architecture Thesaurus (AAT) as Linked Open Data. One of the external advisors on that work was Vladimir Alexiev, who leads the Data and Ontology Management group at Ontotext and works on many projects related to cultural heritage.
Ontotext’s OWLIM family of semantic repositories supports large-scale knowledge bases of rich semantic information and powerful reasoning. The company, for example, delivered the first working implementation of search over CIDOC CRM, one such rich ontology for cultural heritage.
We caught up with Alexiev recently to gain some insight into semantic technology’s role in representing the cultural heritage sphere. Here are some of his thoughts about why it’s important for cultural institutions to adopt Linked Open Data and semantic technologies to enhance our digital understanding of cultural heritage objects and information:
Ikuya Yamada, co-founder and CTO of Studio Ousia – the company behind Linkify, a technology that automatically extracts certain keywords from text and adds intelligent hyperlinks to them to accelerate mobile search – recently sat down with The Semantic Web Blog to discuss the company’s work, including its vision of Semantic AR (augmented reality).
The Semantic Web Blog: You spoke at last year’s SEEDS Conference on the subject of linking things and information and the vision of Semantic AR, which includes the idea of delivering additional information to users before they even launch a search for it. Explain your technology’s relation to that vision of finding and delivering the information users need while they are consuming content – even just looking at a word.
Yamada: The main focus of our technology is extracting accurately only a small amount of interesting keywords from text [around people, places, or things]. …We also develop a content matching system that matches those keywords with other content on the web – like a singer [keyword] with a song or a location [keyword] with a map. By combining keyword extraction and the content matching engine, we can augment text using information on the web.
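The two-stage pipeline Yamada describes – extract a small set of interesting keywords, then match each keyword to related content – can be sketched with a toy example. This is purely illustrative; Linkify’s actual implementation is proprietary, and the entity dictionary and content index below are hypothetical stand-ins.

```python
# Illustrative sketch only -- not Linkify's implementation.
# Stage 1: extract only the "interesting" keywords found in the text.
# Stage 2: match each keyword to related content based on its type.

# Tiny hypothetical entity dictionary standing in for a real knowledge base.
ENTITIES = {
    "Adele": "singer",
    "Tokyo": "location",
    "Mount Fuji": "location",
}

# Hypothetical content index: which kind of content to attach per entity type.
CONTENT_BY_TYPE = {
    "singer": "link to a song",
    "location": "link to a map",
}

def extract_keywords(text):
    """Return only the dictionary entities that actually appear in the text."""
    return [name for name in ENTITIES if name in text]

def match_content(keywords):
    """Pair each extracted keyword with related content for its entity type."""
    return {kw: CONTENT_BY_TYPE[ENTITIES[kw]] for kw in keywords}

text = "Adele performed in Tokyo last night."
keywords = extract_keywords(text)
print(match_content(keywords))
# {'Adele': 'link to a song', 'Tokyo': 'link to a map'}
```

Combining the two stages is what lets the text itself be augmented: each matched keyword becomes a hyperlink to its content before the user ever launches a search.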
Ivan Herman Discusses Lead Role At W3C Digital Publishing Activity — And Where The Semantic Web Can Fit In Its Work
There’s a (fairly) new World Wide Web Consortium (W3C) activity, the Digital Publishing Activity, and it’s headed up by Ivan Herman, formerly the Semantic Web Activity Lead there. The Semantic Web Activity was subsumed in December by the W3C Data Activity, with Phil Archer taking over as Lead (see our story here).
Begun last summer, the Digital Publishing Activity has, as Herman describes it, “millions of aspects, some that have nothing to do with the semantic web.” But some, happily, that do – and that are extremely important to the publishing community, as well.
Dandelion, the service from SpazioDati whose goal is to deliver linked and enriched data for apps, recently introduced a new suite of products related to semantic text analysis.
Its dataTXT family of semantic text analysis APIs includes dataTXT-NEX, a named entity recognition API that links entities in the input sentence to Wikipedia and DBpedia – and, in turn, to the Linked Open Data cloud – and dataTXT-SIM, an experimental semantic similarity API that computes the semantic distance between two short sentences. dataTXT-CL (now in beta) is a categorization service that classifies short sentences into user-defined categories, says SpazioDati CEO Michele Barbera.
“The advantage of the dataTXT family compared to existing text analysis tools is that dataTXT relies neither on machine learning nor NLP techniques,” says Barbera. “Rather it relies entirely on the topology of our underlying knowledge graph to analyze the text.” Dandelion’s knowledge graph merges several Open Community Data sources (such as DBpedia) with private data collected and curated by SpazioDati. The graph is still in private beta and not yet publicly accessible, though the plan is to gradually open up portions of it via the service’s upcoming Datagem APIs, “so that developers will be able to access the same underlying structured data by linking their own content with dataTXT APIs or by directly querying the graph with the Datagem APIs; both of them will return the same resource identifiers,” Barbera says. (See the Semantic Web Blog’s initial coverage of Dandelion here, including additional discussion of its knowledge graph.)
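Barbera’s point – scoring text with graph topology rather than machine learning – can be illustrated with a toy sketch in the spirit of dataTXT-SIM: link each sentence’s entities to nodes of a knowledge graph, then measure similarity by the overlap of their graph neighborhoods. This is not Dandelion’s implementation; the graph, the naive string-match entity linker, and the Jaccard score below are all assumptions chosen for illustration.

```python
# Illustrative sketch, not Dandelion's implementation: similarity of two short
# texts computed purely from the topology of a (toy) knowledge graph, with no
# machine learning. All entities and edges here are hypothetical.

# Toy knowledge graph: each entity maps to its directly connected neighbors.
GRAPH = {
    "Rome": {"Italy", "Colosseum"},
    "Milan": {"Italy"},
    "Italy": {"Rome", "Milan"},
    "Paris": {"France"},
    "France": {"Paris"},
    "Colosseum": {"Rome"},
}

def link_entities(text):
    """Naive entity linking: graph node names that appear in the text."""
    return {node for node in GRAPH if node in text}

def expand(entities):
    """An entity set together with its one-hop graph neighborhood."""
    return entities | {n for e in entities for n in GRAPH[e]}

def similarity(text_a, text_b):
    """Jaccard overlap of the expanded entity neighborhoods of two texts."""
    a, b = expand(link_entities(text_a)), expand(link_entities(text_b))
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# "Rome" and "Milan" never co-occur, yet they score > 0 because both link to
# the "Italy" node -- the overlap comes from the graph, not from word statistics.
print(similarity("I visited Rome last summer", "Milan is lovely in spring"))  # 0.25
print(similarity("I visited Rome last summer", "Paris has great cafes"))      # 0.0
```

The design point is that relatedness emerges from shared graph structure: no training data or corpus statistics are involved, only the edges of the knowledge graph.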
There’s a growing focus on the opportunity for semantic technology to help with managing media assets – and with making money off of them, too. Last week, The Semantic Web Blog covered Media Mixer, the EU-funded project for repurposing and reusing media fragments across borders on the Web. Also hailing from Europe – France, to be exact – is Perfect Memory, which aims to support content management, automatic indexing, and monetization of large-scale multimedia assets.
Perfect Memory, a finalist at this spring’s SemTechBiz semantic start-up competition, has implemented its platform at Belgian radio and television broadcaster RTBF for GEMS, a semantic multimedia browser prototype that was a runner-up at IBC 2013 this fall. In September it also received a €600K investment from SOFIMAC Partners to extend its development efforts, platform, and market segments, as well as to protect its innovations with patent filings.
“Our idea is to reinvent media asset management systems,” says Steny Solitude, CEO of Perfect Memory.
Hunger is a critical issue affecting approximately 870 million people worldwide. With new technologies, research, and telecommunications, we as a global population have the power to significantly reduce the levels of hunger around the world. But to accomplish this, the people who control that research and technology will need to share their data and combine forces to create direct solutions to this global problem.
This is precisely what the good people at the International Food Policy Research Institute (IFPRI) are working toward. What the IFPRI has to offer is data: data on every country around the world – on malnutrition, child mortality rates, ecology, rainfall, and much more. With the help of Web Portal Specialists like Soonho Kim, the institute is working to make that data open and easily accessible, but it currently faces a number of challenges along the way. Soonho spoke to an intimate group of semantic technology experts at the recent Semantic Technology Conference, sharing IFPRI’s successes thus far and the areas where it could use some help.