Posts Tagged ‘linked open data’

Redlink Brings The Semantic Web To Integrators

A cloud-based platform for semantic enrichment, linked data publishing and search technologies is underway now at startup Redlink, which bills it as the world’s first project of its kind.

The company has its heritage in the European Commission-funded IKS (Interactive Knowledge Stack) open source project, created to provide a stack of semantic features for use in content management systems, and in the Linked Media Framework project. IKS was the birthplace of Apache Stanbol, and Apache Marmotta derived from the Linked Media Framework; the founding developers of those open source projects are the founders of Redlink. Both are core to the platform: Apache Stanbol provides a set of reusable components for semantic content management (including adding semantic information to “non-semantic” pieces of content), and Apache Marmotta provides Linked Data platform capabilities. Apache Solr rounds out the stack for enterprise search.
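The kind of enrichment Stanbol performs can be pictured as annotating plain text with links to known entities. The sketch below is a minimal dictionary-based illustration of that idea, not Stanbol’s actual API; the entity table and URIs are chosen for the example.

```python
# Minimal sketch of dictionary-based semantic enrichment: scan plain text
# for known entity names and emit annotations linking them to URIs.
# The entity table here is illustrative, not Stanbol's engine.

KNOWN_ENTITIES = {
    "Paris": "http://dbpedia.org/resource/Paris",
    "Apache Solr": "http://dbpedia.org/resource/Apache_Solr",
}

def enrich(text):
    """Return (start, end, surface_form, uri) annotations found in text."""
    annotations = []
    for name, uri in KNOWN_ENTITIES.items():
        start = text.find(name)
        while start != -1:
            annotations.append((start, start + len(name), name, uri))
            start = text.find(name, start + 1)
    return sorted(annotations)

for ann in enrich("Search in Apache Solr is fast; Paris hosts the meetup."):
    print(ann)
```

A production engine like Stanbol layers NLP, multiple enhancement engines, and RDF output on top of this basic annotate-and-link pattern.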

Read more

Perfect Memory’s Goal: Help You To Make More From Your Media Content

There’s a growing focus on the opportunity for semantic technology to help out with managing media assets – and with making money off of them, too. Last week, The Semantic Web Blog covered the EU-funded project Media Mixer for repurposing and reusing media fragments across borders on the Web. Also hailing from Europe – France, to be exact – is Perfect Memory, which aims to support the content management, automatic indexing and asset monetizing of large-scale multimedia.

Perfect Memory, which was a finalist at this spring’s SemTechBiz semantic start-up competition, has implemented its platform at Belgian TV and radio broadcaster RTBF for its GEMS semantic-based multimedia browser prototype, a runner-up at IBC 2013 this fall. In September it also received a €600K investment from SOFIMAC Partners to extend its development efforts, platform, and market segments, as well as protect its innovations with patent filings.

“Our idea is to reinvent media asset management systems,” says Steny Solitude, CEO of Perfect Memory.

Read more

Fighting Global Hunger with Semantics, And How You Can Help

Hunger is a critical issue affecting approximately 870 million people worldwide. With new technologies, research, and telecommunications, we as a global population have the power to significantly reduce the levels of hunger around the world. But to accomplish this, those who control that research and technology will need to share their data and combine forces to create direct solutions to this global problem.

This is precisely what the good people at the International Food Policy Research Institute (IFPRI) are working toward. What the IFPRI has to offer is data: data on every country around the world, data about malnutrition, child mortality rates, ecology, rainfall, and much more. With the help of web portal specialists like Soonho Kim, they are working on making that data open and easily accessible, but they are currently facing a number of challenges along the way. Soonho spoke to an intimate group of semantic technology experts at the recent Semantic Technology Conference, sharing the successes of the IFPRI thus far and the areas where they could use some help. Read more

Art Lovers Will See There’s More To Love With Linked Data

The team behind the data integration tool Karma this week presented at LODLAM (Linked Open Data in Libraries, Archives & Museums), illustrating how to map museum data to the Europeana Data Model (EDM) or CIDOC CRM (Conceptual Reference Model). This came on the heels of its earning the best-in-use paper award at ESWC2013 for its publication about connecting Smithsonian American Art Museum (SAAM) data to the LOD cloud.

The work of Craig Knoblock, Pedro Szekely, Jose Luis Ambite, Shubham Gupta, Maria Muslea, Mohsen Taheriyan, and Bo Wu at the Information Sciences Institute, University of Southern California, Karma lets users integrate data from a variety of data sources (hierarchical and dynamic ones too) — databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs — by modeling it according to an ontology of their choice. A graphical user interface automates much of the process. Once the model is complete, users can publish the integrated data as RDF or store it in a database.
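The mapping step Karma automates can be sketched in miniature: take tabular records and emit RDF-style triples according to a chosen vocabulary. The column-to-property map and URIs below are simplified placeholders for illustration, not Karma’s output or the full EDM model.

```python
# Sketch of ontology-driven mapping from tabular records to RDF triples,
# in the spirit of what Karma does interactively. The column mapping and
# base URI are placeholders, not actual SAAM/EDM data.

DC = "http://purl.org/dc/terms/"

COLUMN_MAP = {          # spreadsheet column -> ontology property
    "title":  DC + "title",
    "artist": DC + "creator",
    "year":   DC + "date",
}

def rows_to_triples(rows, base_uri):
    """Yield (subject, predicate, object) triples for each record."""
    for i, row in enumerate(rows):
        subject = f"{base_uri}/object/{i}"
        for column, value in row.items():
            prop = COLUMN_MAP.get(column)
            if prop is not None:
                yield (subject, prop, value)

rows = [{"title": "Cape Cod Morning", "artist": "Edward Hopper", "year": "1950"}]
for triple in rows_to_triples(rows, "http://example.org/museum"):
    print(triple)
```

Karma’s contribution is learning this mapping semi-automatically from the data and a chosen ontology, rather than requiring it to be hand-written as above.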

The Smithsonian project builds on the group’s work on Karma for mapping structured sources to RDF. For the Smithsonian project (whose announcement we covered here), Karma converted more than 40,000 of the museum’s holdings, stored in more than 100 tables in a SQL Server database, to LOD, leveraging EDM, the metamodel used in the Europeana project to represent data from Europe’s cultural heritage institutions.

Read more

Self Medicating? Stay Safe With Semantic Tech’s Help

It’s pretty common these days for people to hit the web in search of medication advice to deal with symptoms they’re experiencing.  The trouble is, most people don’t approach the process in a truly safe manner.

Semantic technology can help rectify the situation. In fact, it’s already doing so in France, where Olivier Curé, an associate professor in computer science at the University of Paris-Est, created a web application based on Médicaments sans ordonnance: Les bons et les mauvais!, a book co-authored by pharmacology expert and educator Jean-Paul Giroud and Catherine Cupillard. Three big French insurance companies make the app available to their customers, in order to save costs on reimbursements for drugs that won’t actually help a customer’s condition, and to direct customers to the appropriate drugs at pharmacies with which the insurers may have relationships to supply them at lower costs. An iPhone version of the app was just released to accompany the web version.

Read more

Addressing Price-Performance And Curation Issues For Big Data Work In The Cloud

The cloud’s role in processing big semantic data sets was recently highlighted in early April when DERI and Fujitsu Laboratories announced a new data storage technology for storing and querying Linked Open Data that resides on a cloud-based platform (see our story here).

The cloud conversation, with storage as one key discussion point, will continue to be an active one in Big Data circles, whether users are working with massive, connected Linked Data sets or trying to run NLP across the Twitter firehose. CloudSigma, for example, recently publicly disclosed that it is using an all solid-state drive (SSD) solution for its public cloud offering that lets users purchase CPU, RAM, storage and bandwidth independently. The use of SSD, says CEO Robert Jenkins, avoids the problem that spinning disks have with the randomized, multi-tenant access of a public cloud that leads to storage bottlenecks and curbs performance.

That, combined with the company’s approach of letting customers size virtual machine resources as they like and leverage exposed advanced hypervisor settings to optimize for their particular applications, brings the use of public cloud infrastructure closer to what companies can get out of private cloud environments, he says, and at a price-performance win.

Read more

Fujitsu Labs And DERI To Offer Free, Cloud-Based Platform To Store And Query Linked Open Data

The Semantic Web Blog reported last year about a relationship formed between the Digital Enterprise Research Institute (DERI) and Fujitsu Laboratories Ltd. in Japan, focused on a project to build a large-scale RDF store in the cloud capable of processing hundreds of billions of triples. At the time, Dr. Michael Hausenblas, who was then a DERI research fellow, discussed Fujitsu Lab’s research efforts related to the cloud, its huge cloud infrastructure, and its identification of Big Data as an important trend, noting that “Linked Data is involved with answering at least two of the three Big Data questions” – that is, how to deal with volume and variety (velocity is the third).

This week, the DERI and Fujitsu Lab partners have announced a new data storage technology that stores and queries interconnected Linked Open Data, to be available this year, free of charge, on a cloud-based platform. According to a press release about the announcement, the data store technology collects and stores Linked Open Data that is published across the globe, and facilitates search processing through the development of a caching structure that is specifically adapted to LOD.
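The press release does not detail Fujitsu and DERI’s LOD-adapted caching structure, but the general pattern of caching query results in front of a large triple store can be sketched as follows; the class, TTL policy, and example query are purely illustrative.

```python
import time

# Generic sketch of a query-result cache of the kind that could sit in
# front of a Linked Open Data store. This is an illustration only; it is
# not the Fujitsu/DERI caching structure, whose design is not public here.

class QueryCache:
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._entries = {}   # query string -> (expiry time, result)

    def get(self, query):
        """Return a cached result, or None on a miss or stale entry."""
        entry = self._entries.get(query)
        if entry is None:
            return None
        expiry, result = entry
        if time.monotonic() > expiry:   # stale: evict and report a miss
            del self._entries[query]
            return None
        return result

    def put(self, query, result):
        self._entries[query] = (time.monotonic() + self.ttl, result)

cache = QueryCache(ttl_seconds=60.0)
query = "SELECT ?s WHERE { ?s a <http://example.org/City> }"
cache.put(query, ["ex:Paris"])
print(cache.get(query))
```

An LOD-specific cache would go further, e.g. keying on graph patterns and invalidating entries as remote datasets change, which is where the adaptation mentioned in the announcement presumably lies.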

Read more

Opening Up Publicly Funded Research in Europe

Anna Leach of the Wall Street Journal reports, “New scientific research must be published for free online, the vice-president of the European Commission said, in a move designed to increase the knowledge pool open to small business and lead to more innovative products. All scientists receiving European Union funding will have to publish their results in an open-access format, Neelie Kroes, the commissioner responsible for Europe’s digital agenda, said Monday in Stockholm. Ms. Kroes also launched the global Research Data Alliance — a group committed to pooling and co-ordinating scientific data so it can be shared better.”

Leach continues, “Opening up scientific research is good for small business, said Victor Henning, CEO of British startup Mendeley, which aims to make academic research more connected. He has noticed the demand for access to academic research from small businesses.” Read more

List of Thousands of Public Data Sources

A website called BigML (for Big Machine Learning) has compiled a great list of freely available public data sources. The article begins: “We love data, big and small and we are always on the lookout for interesting datasets. Over the last two years, the BigML team has compiled a long list of sources of data that anyone can use. It’s a great list for browsing, importing into our platform, creating new models and just exploring what can be done with different sets of data. In this post, we are sharing this list with you. Why? Well, searching for great datasets can be a time consuming task. We hope this list will support you in that search and help you to find some inspiring datasets.” Read more

Announcing the Launch of the GBPN Knowledge Platform

Martin Kaltenbock of the Semantic Web Company reports, “The brand new web-based GBPN Knowledge Platform was launched on 21 February 2013. It helps the building sector effectively reduce its impact on climate change! It has been designed as a participative knowledge hub and data hub, harvesting, sharing and curating best practice policies in building energy performance globally. Available in English and soon in Mandarin, this new web-based tool of the Global Buildings Performance Network (GBPN) aims to stimulate collective research and analysis from experts worldwide to promote better decision-making and help the building sector effectively reduce its impact on climate change.” Read more
