Lars Marius Garshol has posted an overview of RDF triple stores. He writes, “There’s a huge range of triple stores out there, and it’s not trivial to find the one most suited for your exact needs. I reviewed all those I could find earlier this year for a project, and here is the result. I’ve evaluated the stores against the requirements that mattered for that particular project. I haven’t summarized the scores, as everyone’s weights for these requirements will be different.”
Big Data and the Semantic Web are on a track to intersect. And businesses that want to be on track to profit from the explosion in data should start looking a little more closely at that intersection, and soon.
“We’ve got more data now than ever before coming at us, and it is coming faster and faster,” says Frank Coyle, director of the Software Engineering program in the Lyle School of Engineering at Southern Methodist University, whose research is in the area of web services and semantic web technologies. “So the semantic angle is how can you organize this data to take advantage of it, to do queries over it.” Those in the semantic web community say RDF is the way to go, he says, adding that people now use the term linked data as another way of describing semantic data. “If you take Big Data and link it, then you have semantics – you have meaning now introduced into the equation.”
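Coyle’s point about linking can be made concrete with a minimal sketch (all URIs below are invented examples, not real vocabularies): two data sets that independently use the same URI for an item can be merged into one graph, and a query can then follow the link between them.

```python
# Minimal sketch of "linking" two data sets as RDF-style triples.
# Triples are plain (subject, predicate, object) tuples; all URIs are made up.

sales_data = {
    ("http://example.org/store/42", "http://example.org/soldProduct",
     "http://example.org/product/tv-1"),
}
catalog_data = {
    ("http://example.org/product/tv-1", "http://example.org/name",
     "55-inch LED TV"),
}

# Because both sets use the same URI for the product, merging them links
# a sales fact to a human-readable product description.
graph = sales_data | catalog_data

def objects(graph, subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Follow the link: which product did store 42 sell, and what is it called?
for product in objects(graph, "http://example.org/store/42",
                       "http://example.org/soldProduct"):
    print(objects(graph, product, "http://example.org/name"))
```

This is the essence of what Coyle describes: once the data is expressed as triples with shared identifiers, meaning comes from the links rather than from any one table’s schema.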
Richard Wallis has followed up his recent announcement that WorldCat data can now be downloaded as RDF triples with an explanation of how to put that data into a triple store. He begins: “Step 1: Choose a triplestore. I followed my own advice and chose 4Store. The main reasons for this choice were that it is open source yet comes from an environment where it was the base platform for a successful commercial business, so it should work. Also in my years rattling around the semantic web world, 4Store has always been one of those tools that seemed to be on everyone’s recommendation list.”
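For readers new to the idea, the step after choosing a store is loading the triples. As a rough illustration of what that means, here is a toy in-memory store that parses a few simplified N-Triples lines (real stores such as 4Store accept data over HTTP or via bulk-import tools; the store class and the WorldCat-style URIs below are invented for the sketch):

```python
# A toy in-memory "triplestore" illustrating the load-and-query cycle.
# It handles only simplified N-Triples lines of the form '<s> <p> <o> .'
# where all three terms are URIs; real N-Triples is richer than this.

class TinyStore:
    def __init__(self):
        self.triples = set()

    def load_ntriples(self, text):
        """Parse simplified N-Triples lines (URI terms only) into the store."""
        for line in text.strip().splitlines():
            s, p, o = line.rstrip(" .").split(" ", 2)
            self.triples.add((s.strip("<>"), p.strip("<>"), o.strip("<>")))

    def match(self, s=None, p=None, o=None):
        """Return triples matching a pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

# Hypothetical WorldCat-style triples, not actual WorldCat data.
data = """\
<http://worldcat.example/book/1> <http://schema.org/author> <http://example.org/person/melville> .
<http://worldcat.example/book/1> <http://schema.org/about> <http://example.org/topic/whaling> .
"""

store = TinyStore()
store.load_ntriples(data)
print(len(store.match(s="http://worldcat.example/book/1")))
```

A production triplestore adds indexing, persistence, and a SPARQL endpoint on top of this basic pattern-matching idea.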
Yesterday The Semantic Web Blog discussed how personalized mobile assistance ranked high on semantic web experts’ lists of technologies with a bright future (see here). Sharing that vision is the team at Vital.AI, the NYC startup founded by Marc Hadfield. Its Thrive.AI app, also a contender at SemTech’s Startup Competition, is a personalized semantic shopping agent for the iPad, but the underlying Vital.AI platform on which it is built provides an integrated suite of components for a variety of knowledge-centric, intelligence-rich, Big Data-driven applications.
The e-commerce agent, Hadfield told attendees at SemTech in San Francisco last week, was the company’s own foray into figuring out what it needed to add to the platform to make it easier to build apps that bring semantic technologies and Big Data together.
The Semantic Technology & Business Conference has been underway since Sunday, with tutorials and lightning sessions catching audience interest. The conference presentations get underway today, most of them following on the heels of the opening keynotes given by Bart van Leeuwen, firefighter and architect at netage.nl; Jay Myers, web architect at Best Buy; and Steve Harris, CTO of Garlik, a part of Experian.
Best Buy, as readers of this blog know, has been diving deep into the semantic web waters under Myers’ direction for a few years now, and he shared that journey with the audience at SemTech.
Last week the New York City Council gave its nod of approval to legislation that would require city agencies to publish public data sets in a common format on an online portal for the public’s use. Mayor Bloomberg just signed off on it, with the Open Data Bill legislation to be phased in over six years.
But semantic tech startup Ontodia hopes to help speed up the development of the Big Apple as the Digital City of the Future with NYCFacets, a Smart Open Data Exchange for the developer community just released that catalogs all the NYC-related data sources already present in the New York City Open Data Catalogue.
“There are about 900 data sets in the New York City Open Data Catalogue,” says Ontodia co-founder Joel Natividad. Last year, while at TCG Software Services, he was part of a team that won the Large Organization Recognition Award at BigApps 2.0 – the city-sponsored contest for developers to use NYC Open Data – for helping create NYC Data Web, which integrates the NYC.gov data sets into a single web of data for developers. The team also included Revelytix and Spry. “Now that the Open Data Bill just passed, there will be a tsunami of data,” he says.
For all of you Java developers out there, Clark & Parsia has released an initial integration between Stardog and Spring. The Stardog website states, “Spring is a platform to build and run enterprise applications in Java. Stardog’s Spring support makes life easier for enterprise developers who need to work with Semantic Web technology—including RDF, SPARQL, and OWL—by way of Stardog. The Spring for Stardog source code is available on Github. A more feature-full version will be available in Stardog Enterprise Edition.”
What links two current trends: more and more objects, and even commercial transactions, on the web being described in a machine-readable, semantic format, and the endless streaming of all that data? Revenue-funded startup First Retail, whose principals Anne Jude Hunt and Simon G. Handley will be speaking at the upcoming Semantic Technology Conference in June, thinks the answer is semantic ETL.
Extract, transform, load (ETL) is a widely known concept in the well-charted terrain of the IT world: pull heterogeneous data from its sources, transform it into a common form, and load it into a data warehouse where it can be put to use.
Semantic ETL, says Hunt, is driven by the fact that people now want to work with growing loads of streaming data while it is still streaming, and that “people want intelligent data, machine-readable tags, [they want] to slice and dice it for BI in lots of different ways, so the traditional data warehouse and relational database approach is just not working for people.” Cleansed and integrated semantic data loaded into distributed, scalable triple stores can come to the rescue.
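A minimal sketch of that semantic-ETL idea, with hypothetical column names and URIs: extract tabular records, transform each row into triples, and load them into a store (here just an in-memory set standing in for a distributed, scalable triple store).

```python
# Semantic ETL in miniature: extract CSV rows, transform them into
# RDF-style triples, load them into a store. All URIs are made up.

import csv
import io

raw = """sku,price,category
tv-1,499,electronics
sofa-9,899,furniture
"""

BASE = "http://example.org/"

def extract(text):
    """Extract: read tabular records from a CSV source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: turn each row into subject-predicate-object triples."""
    triples = set()
    for row in rows:
        subject = BASE + "product/" + row["sku"]
        triples.add((subject, BASE + "price", row["price"]))
        triples.add((subject, BASE + "category",
                     BASE + "category/" + row["category"]))
    return triples

store = set()                        # stand-in for a real triple store
store |= transform(extract(raw))     # load step

# Slice the data without a fixed schema: everything about one product.
print(sorted(t for t in store if t[0] == BASE + "product/tv-1"))
```

The point of the triple representation is the last line: any new question is just a new pattern over the graph, rather than a schema change in a warehouse.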