Posts Tagged ‘DBpedia’

Art Lovers Will See There’s More To Love With Linked Data

The team behind the data integration tool Karma this week presented at LODLAM (Linked Open Data in Libraries, Archives & Museums), illustrating how to map museum data to the Europeana Data Model (EDM) or CIDOC CRM (Conceptual Reference Model). This came on the heels of its earning the best-in-use paper award at ESWC2013 for its publication about connecting Smithsonian American Art Museum (SAAM) data to the LOD cloud.

The work of Craig Knoblock, Pedro Szekely, Jose Luis Ambite, Shubham Gupta, Maria Muslea, Mohsen Taheriyan, and Bo Wu at the Information Sciences Institute, University of Southern California, Karma lets users integrate data from a variety of data sources (hierarchical and dynamic ones too) — databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs — by modeling it according to an ontology of their choice. A graphical user interface automates much of the process. Once the model is complete, users can publish the integrated data as RDF or store it in a database.

The Smithsonian project builds on the group’s work on Karma for mapping structured sources to RDF. For that project (whose announcement we covered here), Karma converted more than 40,000 of the museum’s holdings, stored in more than 100 tables in a SQL Server database, to LOD, leveraging EDM, the metamodel used in the Europeana project to represent data from Europe’s cultural heritage institutions.
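Karma’s own modeling engine is far more involved, but the end product it describes (one museum record expressed as EDM-style RDF) can be pictured with a minimal rdflib sketch in Python; the record, URIs and column names below are invented for illustration.

# A minimal illustration (not Karma itself) of mapping one museum record
# to EDM-style RDF with rdflib. The record and URIs are hypothetical.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DC

EDM = Namespace("http://www.europeana.eu/schemas/edm/")

record = {  # one row as it might come out of a SQL table
    "object_id": "1986.92.92",
    "title": "Portrait of a Lady",
    "creator": "Unknown artist",
    "image_url": "http://example.org/media/1986.92.92.jpg",
}

g = Graph()
g.bind("edm", EDM)
g.bind("dc", DC)

cho = URIRef("http://example.org/saam/object/" + record["object_id"])
g.add((cho, RDF.type, EDM.ProvidedCHO))          # the cultural heritage object
g.add((cho, DC.title, Literal(record["title"])))
g.add((cho, DC.creator, Literal(record["creator"])))
g.add((cho, EDM.isShownBy, URIRef(record["image_url"])))  # link to its image

print(g.serialize(format="turtle"))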

Read more

Self Medicating? Stay Safe With Semantic Tech’s Help

It’s pretty common these days for people to hit the web in search of medication advice to deal with symptoms they’re experiencing.  The trouble is, most people don’t approach the process in a truly safe manner.

Semantic technology can help rectify the situation. In fact, it’s already doing so in France, where Olivier Curé, an associate professor of computer science at the University of Paris-Est, created a web application based on pharmacology expert and educator Jean-Paul Giroud’s book Médicaments sans ordonnance: Les bons et les mauvais!, co-authored with Catherine Cupillard. Three big French insurance companies make the app available to their customers, both to save on reimbursing purchases of drugs that won’t actually help a given condition and to steer customers toward appropriate drugs at pharmacies with which the insurers may have arrangements to supply them at lower cost. An iPhone version of the app was just released to accompany the web version.

Read more

Dandelion Geo And Linked Data Marketplace Private Beta On The Way

This week Dandelion, which bills itself as the one-stop shop for smart, high-quality Geo and Linked Data from trusted sources, starts its private beta. The service comes from SpazioDati and promises end users quality, normalized, linked and enriched data for their apps and reports; developers a simple API for any language on any platform; and corporate and government entities a way to publish and profit from their data.

That company is the creation of four Italian entrepreneurs – CEO Michele Barbera, president Gabriele Antonelli, partnerships director Andrea Di Benedetto, and Luca Pieraccini – who experienced first-hand the frustration of trying to find and leverage useful data for the custom web and mobile apps they were developing while running and working in small IT consulting companies. In an attempt to reverse the ratio of finding and cleaning data to actually building apps, says Barbera, the founders began participating in several EU-funded research projects and in the Open Data movement in Europe and Italy, including founding the non-profit Linked Open Data Italy. They also started experimenting with Semantic Web technologies.

“Open Data helps us to find valuable data and to build value-added web and mobile apps,” says Barbera. “So, let’s say that we solved partly the first problem of finding data, but not the second one, normalizing and cleaning data, since it is still very difficult to merge different data sources to put data in context.”

Read more

Forage Through More Than A Century Of Nobel Prize Awards

When the Nobel Prize winners for 2013 are announced in the fall, perhaps there also will be some challenges issued to the worldwide community of data enthusiasts to see what they can do with open Linked Data about the prizes that have been awarded since the beginning of the 20th century.

Right now that’s just on the wish lists of Matthias Palmér and Hannes Ebner, co-founders of MetaSolutions AB, a spin-off from the Royal Institute of Technology in Stockholm and Uppsala University focused on semantic and scalable web apps. But a solid start has been made through their work with Nobel Media AB, which develops and manages programs, productions and media rights of the Nobel Prize within the areas of digital and broadcast media, including the Nobelprize.org domain, on the Nobel Prize Linked Data set.
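The article doesn’t detail the data set’s vocabulary, but to give a flavor of what open Linked Data about the prizes makes possible, here is a sketch of the kind of SPARQL query a data enthusiast might run from Python. The endpoint URL and the nobel: terms are assumptions rather than details taken from the article.

# A sketch of the kind of query a data enthusiast might run against the
# Nobel Prize Linked Data. Endpoint URL and vocabulary are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://data.nobelprize.org/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX nobel: <http://data.nobelprize.org/terms/>
    PREFIX foaf:  <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?year WHERE {
        ?laureate a nobel:Laureate ;
                  foaf:name ?name ;
                  nobel:laureateAward ?award .
        ?award nobel:year ?year .
    }
    ORDER BY ?year
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["year"]["value"], row["name"]["value"])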

Read more

Time To Take On A Taxonomy: Pingar Customizes and Automates The Task

There’s more than one way to get a taxonomy. A company can go out and buy one for its industry, for instance, but the risk is that the terms may not relate to how it talks about content in its own organization, and the hierarchy may not be the right fit either. That sets up two likely outcomes, says Chris Riley, VP of marketing at Pingar: you wind up having to customize it anyway, or you wind up with users who simply ignore it.

It’s possible to build one, but that’s a big job and a costly one, too – especially for many enterprises, where there hasn’t traditionally been a focus on structuring content and so the skills to do it aren’t necessarily in house. While industries like publishing, oil and gas, life sciences, and pharma have that bent, many other verticals do not. In fact, Riley notes, they may realize they have a content organization problem without realizing that the thing that would help them address it goes by the name ‘taxonomy.’

Pingar’s looking to help out those enterprises that want to bring organization to their content, whether or not they’re familiar with the concept of a taxonomy. It just launched its automated Taxonomy Generator Service that uses an organization’s own content to build a taxonomy that mirrors its own way of talking about things and its understanding of relationships between child and parent terms.
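Pingar hasn’t published the generator’s internals here, but the output it describes, parent and child terms that mirror an organization’s own vocabulary, can be pictured as a SKOS concept scheme. The sketch below is only an illustration of that representation, with invented terms; it is not Pingar’s Taxonomy Generator.

# Not Pingar's Taxonomy Generator: a minimal sketch of representing
# parent/child taxonomy terms as SKOS broader/narrower with rdflib.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/taxonomy/")

# A toy hierarchy (invented for illustration): parent term -> child terms.
hierarchy = {
    "Drilling": ["Offshore Drilling", "Directional Drilling"],
    "Refining": ["Distillation", "Catalytic Cracking"],
}

g = Graph()
g.bind("skos", SKOS)

def concept(label):
    """Mint a concept URI and attach its preferred label."""
    uri = EX[label.replace(" ", "_")]
    g.add((uri, RDF.type, SKOS.Concept))
    g.add((uri, SKOS.prefLabel, Literal(label, lang="en")))
    return uri

for parent, children in hierarchy.items():
    p = concept(parent)
    for child in children:
        c = concept(child)
        g.add((c, SKOS.broader, p))   # child points up to its parent
        g.add((p, SKOS.narrower, c))  # parent points down to the child

print(g.serialize(format="turtle"))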

Read more

EventMedia Live, Winner of ISWC Semantic Web Challenge, Starts New Project With Nokia Maps, Extends Architecture Flexibility

The winner of the Semantic Web Challenge at November’s International Semantic Web Conference (ISWC) was EventMedia Live, a web-based environment that exploits real-time connections to event and media sources to deliver rich content describing events that are associated with media, and interlinked with the Linked Data cloud.

This week, it will begin a one-year effort under a European Commission-funded project to align its work with the Nokia Maps database of places, so that mobile users of the app can quickly call up user-contributed pictures of those venues, surfaced with EventMedia’s help.

A project of EURECOM, a consortium combining seven European universities and nine international industrial partners, EventMedia Live has its origins in the “mismatch between those sites specializing in announcing upcoming events and those other sites where users share photos, videos and document those events,” explains Raphaël Troncy, assistant professor in EURECOM’s Multimedia Communications department and one of the project’s leaders.
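The article doesn’t spell out EventMedia’s data model, but the core idea of an event interlinked with its venue and with user media can be sketched roughly as RDF. The example below uses the LODE event vocabulary (an assumption on our part) and invented URIs.

# A sketch (not EventMedia's actual data) of interlinking an event, a venue,
# and a user photo, using the LODE event vocabulary; all URIs are invented.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

LODE = Namespace("http://linkedevents.org/ontology/")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("lode", LODE)
g.bind("foaf", FOAF)

event = EX["event/primavera-sound-2013"]
venue = EX["place/parc-del-forum"]
photo = URIRef("http://example.org/photos/12345.jpg")

g.add((event, RDF.type, LODE.Event))
g.add((event, LODE.atPlace, venue))      # link the event to its venue
g.add((event, LODE.illustrate, photo))   # link the event to user media
g.add((photo, RDF.type, FOAF.Image))

print(g.serialize(format="turtle"))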

Read more

Help DBpedia: Participate in the Evaluation Campaign

DBpedia has begun an Evaluation Campaign with the aim of “evaluating DBpedia resources in order to assess and thereby improve the quality of DBpedia.” It works like this: “First, please authenticate yourself with a Google account. This will not only help prevent spam but also help us keep track of how many resources you evaluated. After you click ‘Start’, you will be provided with a list of classes from DBpedia wherein you may choose the ones you are most familiar with. There are three options: (1) Any: where a completely random resource will be retrieved, (2) Class: where you have the option to choose any class from the DBpedia ontology and a random resource belonging to that class will be retrieved, (3) Manual: where you can manually put in the DBpedia URI of a resource of your choice.”
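The ‘Class’ option amounts to pulling resources of a chosen DBpedia class. As a rough illustration of what that looks like against DBpedia’s public SPARQL endpoint (this is not the campaign’s own tooling, and the class is just an example):

# A sketch of the 'Class' idea: fetch a few resources of a chosen DBpedia
# class from the public SPARQL endpoint (not the campaign's own tool).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?resource WHERE {
        ?resource a <http://dbpedia.org/ontology/Museum> .
    }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["resource"]["value"])

Read more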

Tuning In Social Media To Turn On What You’ll Like

A European Union-funded project to bring the web and TV closer together, called NoTube, wrapped up earlier this year. But its legacy lives on in the form of Beancounter.io from Sourcesense. The company, which was one of the project’s co-founders, had a role in the NoTube effort around integrating viewers’ social web activities as part of the platform to deliver TV content in personalized ways to users.

Leveraging the open source software, libraries and best practices that were outcomes of the project, Sourcesense has continued to move forward to deliver a commercial, scalable Web API platform that offers semantically enriched user profiles built from users’ activities performed on the Social Web. One of the first customers of its efforts is one of the largest Italian broadcast companies, RAI, which was also involved in the EU project.

Beancounter is powering a second-screen service on top of the platform that provides the broadcaster’s 5 million viewers with information on related content that may be of interest to them, based on profiling of their social activities (with their permission).
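Beancounter’s API isn’t shown in the article. Purely as an illustration of what a “semantically enriched user profile” might boil down to, social activities turned into weighted interests keyed by Linked Data entities, here is a toy sketch with invented activities and weights.

# Not Beancounter's API: a toy illustration of building a weighted interest
# profile from social activities, keyed by Linked Data entity URIs.
from collections import Counter

# Hypothetical activities already annotated with DBpedia entities.
activities = [
    {"type": "tweet", "entities": ["http://dbpedia.org/resource/Formula_One"]},
    {"type": "like",  "entities": ["http://dbpedia.org/resource/Formula_One",
                                   "http://dbpedia.org/resource/Ferrari"]},
    {"type": "share", "entities": ["http://dbpedia.org/resource/Juventus_F.C."]},
]

weights = {"tweet": 1.0, "like": 0.5, "share": 1.5}  # arbitrary example weights

profile = Counter()
for activity in activities:
    for entity in activity["entities"]:
        profile[entity] += weights[activity["type"]]

# Highest-weighted interests first.
for entity, score in profile.most_common():
    print(f"{score:.1f}  {entity}")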

Read more

Linking XBRL to RDF: The Road To Extracting Financial Data For Business Value

Dr. Graham G. Rong, founder of IKA LLC and senior industrial liaison officer at the MIT Corporate Relations Office, where he leads collaboration between the institute and industry, has been working on a semantic web approach to social and financial analysis based on digital financial data and other company-related information that can be found on the Internet. The approach first turns XBRL data from SEC reports into RDF format, and then links that with the relevant social information in the company’s ecosystem, to deliver more business value.

The project, which began at MIT (see our earlier story here), has advanced to the application stage, and the software is moving from a Java interface to a browser-based one. Rong says the team also is developing a web services API for the system.

“Current XBRL technology primarily collects financial data for reporting, and secondarily, as more XBRL-based financial data becomes available, it will need to effectively extract financial data for value,” says Rong. Semantic web technology lets the focus be on the latter.
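The project’s actual XBRL-to-RDF mapping isn’t published in this piece. As a rough sketch of the general idea, expressing one XBRL-style fact (concept, reporting entity, period, value) as RDF triples, here is a minimal rdflib example with an invented vocabulary.

# A minimal sketch (not the MIT project's code) of expressing one XBRL-style
# fact as RDF. The fin: vocabulary and all URIs are invented for illustration.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, XSD

FIN = Namespace("http://example.org/fin/")   # hypothetical vocabulary
EX = Namespace("http://example.org/facts/")

g = Graph()
g.bind("fin", FIN)

fact = EX["acme-2012-Revenues"]
g.add((fact, RDF.type, FIN.Fact))
g.add((fact, FIN.concept, Literal("us-gaap:Revenues")))        # reported concept
g.add((fact, FIN.entity, URIRef("http://example.org/company/ACME")))
g.add((fact, FIN.period, Literal("2012", datatype=XSD.gYear)))
g.add((fact, FIN.value, Literal("1250000000", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))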

Read more

Rhizomer Wants Users To Revel In Working With Linked Data

How can users – especially those who don’t have deep roots within the semantic web community – make Linked Data useful to them? It’s not always apparent, says Roberto Garcia, a mind behind the Rhizomik initiative that has produced a tool called Rhizomer. Its approach is to take advantage of the structures that organize data (schemas, thesauri, ontologies, and so on) and use them to drive the automatic generation of user interfaces tailored to each semantic data-set to be explored.

A project led by members of the GRIHO (Human-Computer Interaction and data integration) research group in the Computer Science and Industrial Engineering Department of the University of Lleida, where Garcia is an associate professor, the initiative also has led to projects including ReDeFer, a set of tools to move data in and out of the Semantic Web, and various ontologies for multimedia, e-business and news. As for Rhizomer, it accommodates publishing and exploration of Linked Data, with data-set exploration helped by features including an overview to get a full picture of the data-set at hand; zooming and filtering to zoom in on items of interest and filter out uninteresting items; details to arrive at concrete resources of interest; and visualizations tailored to the kind of resource at hand, as the site explains.

In other words, its features are “organized so they support the typical data analysis tasks,” he says. “We are more a contributor from the user perspective of how you interact with that data.”
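Rhizomer’s implementation isn’t shown here, but the “overview” feature it describes, summarizing a data set by the classes it contains, maps naturally onto a simple SPARQL aggregation; that is the kind of structure a faceted interface can be generated from. A hedged sketch, using DBpedia’s public endpoint as a stand-in data set (a query like this can be heavy on a large endpoint):

# A sketch of the 'overview' idea (not Rhizomer's code): ask an endpoint
# which classes the data set holds and how many instances each has.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?class (COUNT(?s) AS ?instances) WHERE {
        ?s a ?class .
    }
    GROUP BY ?class
    ORDER BY DESC(?instances)
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["instances"]["value"], row["class"]["value"])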

Read more
