Posts Tagged ‘SPARQL’

Hello 2014


Courtesy: Flickr/Wonderlane

Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:

Phil Archer, Data Activity Lead, W3C:

For me the new Working Groups (WGs) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with the Semantic Web.

I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of geographic information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!

[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
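A toy document makes Archer’s point concrete. To a web developer this is ordinary JSON; to an RDF tool, the @context maps the key name onto a full IRI, so the same bytes also parse as a triple. The identifiers below are our own illustration, not from the post:

```json
{
  "@context": { "name": "http://xmlns.com/foaf/0.1/name" },
  "@id": "http://example.org/people/alice",
  "name": "Alice"
}
```

Read with an RDF lens, this asserts the single triple <http://example.org/people/alice> foaf:name "Alice".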

Read more

Good-Bye 2013

Courtesy: Flickr/MadebyMark

As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.

Read on:


Phil Archer, Data Activity Lead, W3C:

The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.

I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.

Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier, and it allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.

Christine Connors, founder and information strategist, TriviumRLG:

What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.

There’s also been a huge increase in attention from the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field, and via schema.org.

Read more

How BestBuy is SPARQLing This Holiday Season


Jay Myers of BestBuy recently wrote, “Shortly before Black Friday, one of my colleagues approached me with a curious question. ‘Mr. SVP XYZ was talking today about us creating a promo page of ‘stocking stuffers’. Do you think you could produce a list of products that might be ‘stocking stuffers’?’. After some discussion, we agreed that these products would be under $20 and be 5”x5” or smaller to qualify as a stocking stuffer. In a couple hours time we had a SPARQL generated list of 190 products (thank you @bsletten) on a promo page for anyone who searched for the ‘stocking stuffers’ phrase. A classic last minute, rogue (skunkworks?) effort.” Read more
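BestBuy has long marked up its product pages with the GoodRelations vocabulary, so the query behind that list plausibly looked something like the sketch below. The actual query isn’t shown in the post; in particular, the dimension properties (ex:widthInches, ex:heightInches) are hypothetical placeholders:

```sparql
PREFIX gr: <http://purl.org/goodrelations/v1#>
PREFIX ex: <http://example.org/catalog#>  # hypothetical dimension vocabulary

SELECT ?product ?name ?price
WHERE {
  ?offer a gr:Offering ;
         gr:includes ?product ;
         gr:hasPriceSpecification/gr:hasCurrencyValue ?price .
  ?product gr:name ?name ;
           ex:widthInches ?w ;
           ex:heightInches ?h .
  FILTER (?price < 20 && ?w <= 5 && ?h <= 5)
}
```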

In Search Of Apps To Leverage Public BioMolecular Data In RDF Platform

The European Bioinformatics Institute (EMBL-EBI), part of the European Molecular Biology Laboratory (EMBL), Europe’s leading life sciences laboratory, this fall launched a new RDF platform hosting data from six of the public database archives it maintains. That includes peer-reviewed and published data, submitted through large-scale experiments, from databases covering genes and gene expression, proteins (with SIB), pathways, samples, biomodels and molecules with drug-like properties. And next week, during a competition at SWAT4LS in Edinburgh, it’s hoping to draw developers with innovative use case ideas for life-sciences apps that can leverage that data to the benefit of bioinformaticians or bench biologists.

“We need developers to build apps on top of the platform, to build apps to pull in data from these and other sources,” explains Andy Jenkinson, Technical Project Manager at EMBL-EBI. “There is the potential using semantic technology to build those apps more rapidly,” he says, as it streamlines integrating biological data, which is a huge challenge given the data’s complexity and variety. And such apps will be a great help for lab scientists who don’t know anything about working directly with RDF data and SPARQL queries.

Read more

HealthCare.gov: Progress Made But Back-End Struggles Continue

The media has been reporting over the last few hours on the Obama administration’s self-imposed deadline for fixing HealthCare.gov. According to these reports, the site is now working more than 90 percent of the time, up from 40 percent in October; pages are loading in less than a second, down from about eight; 50,000 people can use the site simultaneously; it supports 800,000 visitors a day; and page-load failures are down to under 1 percent.

There’s also word, however, that while the front-end may be improved, there are still problems on the back-end. Insurance companies continue to complain they aren’t getting information correctly to support signups. “The key question,” according to CBS News reporter John Dickerson this morning, “is whether that link between the information coming from the website getting to the insurance company – if that link is not strong, people are not getting what was originally promised in the entire process.” If insurance companies aren’t getting the right information for processing plan enrollments, individuals going to the doctor’s after January 1 may find that they aren’t, in fact, covered.

Jeffrey Zients, the man spearheading the website fix, at the end of November did point out that work remains to be done on the backend for tasks such as coordinating payments and application information with insurance companies. Plans are for that to be in effect by mid-January.

As it turns out, among components of its backend technology, according to this report in the NY Times, is the MarkLogic Enterprise NoSQL database, which in its recent Version 7 release also added the ability to store and query data in RDF format using SPARQL syntax.

Read more

Linked Data: “The Gift That Keeps On Giving”

K. Krasnow Waterman started the New York Semantic Technology & Business Conference off on the right foot Wednesday, highlighting the highly practical virtues of the semantic web and Linked Data for all.

Krasnow Waterman pointed to four big benefits of a semantic web world, including easing the path to:

  • Analyzing data from multiple sources;
  • Understanding context, from the meaning of a particular term to the context of relationships;
  • Linking data; and
  • Applying rules, with no limit on what can be said or linked to, within program code that can be reasoned over as data flows through systems.

In her life as CEO of LawTechIntersect, which offers data/technology management and policy consulting, she noted that she’s now “talking to people about forestalling their platform upgrades, to go back instead and embed the tagging needed for semantic processing,” she told attendees at SemTech. “It’s the gift that keeps on giving. Do it and you can compute forevermore with that data.”

Read more

YarcData Software Update Points Out That The Sphere Of Semantic Influence Is Growing

Recent updates to YarcData’s software for its Urika analytics appliance reflect the fact that the enterprise is starting to understand the impact that semantic technology has on turning Big Data into actual insights.

The latest update includes integration with more enterprise data discovery tools, including the visualization and business intelligence tools Centrifuge Visual Network Analytics and TIBCO Spotfire, as well as those based on SPARQL and RDF, JDBC, JSON, and Apache Jena. The goal is to streamline the process of getting data in and then being able to provide connectivity to the tools analysts use every day.

As customers see the value of using the appliance to gain business insight, they want to be able to more tightly integrate this technology into wider enterprise workflows and infrastructures, says Ramesh Menon, YarcData vice president, solutions. “Not only do you want data from all different enterprise sources to flow into the appliance easily, but the value of results is enhanced tremendously if the insights and the ability to use those insights are more broadly distributed inside the enterprise,” he says. “Instead of having one analyst write queries on the appliance, 200 analysts can use the appliance without necessarily knowing a lot about the underlying, or semantic, technology. They are able to use the front end or discovery tools they use on a daily basis, not have to leave that interface, and still get the benefit of the Urika appliance.”

Read more

Hooray For Semantic Tech In The Film Industry

Image courtesy popturfdotcom/Flickr

The story below features an interview with Kurt Cagle, Information Architect Avalon Consulting, LLC, who is speaking this week at the Semantic Technology And Business Conference in NYC. You can save $200 when you register for the event before October 2.


New York has a rich history in the film industry.  The city was the capital of film production from 1895 to 1910. In fact, a quick trip from Manhattan to Queens will take you to the former home of the Kaufman Astoria Studios, now the site of the American Museum of the Moving Image. Even after the industry moved shop to Hollywood, New York continued to hold its own, as evidenced by this Wikipedia list of films shot in the city.


This week, at the Semantic Technology & Business Conference, a session entitled Semantics Goes Hollywood will offer a perspective on the technology’s applicability to the industry for both its East and West Coast practitioners (and anyone in between). For that matter, even people in industries of completely different stripes stand to gain value: As Kurt Cagle, Information Architect at Avalon Consulting, LLC, who works with many companies in the film space, explains, “A lot of what I see is not really a Hollywood-based problem at all – it’s a data integration problem.”


Here’s a spotlight on some of the points Cagle will discuss when he takes the stage:


  • Just like any enterprise, studios that have acquired other film companies face the challenge of ensuring that their systems can understand the information that’s stored in the systems of the companies they bought. Semantic technology can come to the fore here as it has for industries that might not have the same aura of glamour surrounding them. “Our data models may not be completely in sync but you can represent both and communicate both into a single composite data system, and a language like SPARQL can query against both sets to provide information without having to do a huge amount of re-engineering,” Cagle says.
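Cagle’s point can be sketched in a few lines of SPARQL: two acquired catalogs record the same fact under different in-house vocabularies (studioA: and studioB: below are hypothetical), yet one query reads them as a single result set:

```sparql
PREFIX studioA: <http://example.org/studioA#>  # hypothetical legacy vocabulary
PREFIX studioB: <http://example.org/studioB#>  # hypothetical acquired vocabulary

SELECT ?film ?title
WHERE {
  { ?film studioA:filmTitle ?title }
  UNION
  { ?film studioB:title ?title }
}
```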

Read more

A Look Into Learning SPARQL With Author Bob DuCharme

The second edition of Bob DuCharme’s Learning SPARQL debuted this summer. The Semantic Web Blog connected with DuCharme – who is director of digital media solutions at TopQuadrant, the author of other works including XML: The Annotated Specification, and a welcome speaker both at the Semantic Technology & Business Conference and on our Semantic Web Blog podcasts – to learn more about the latest version of the book.

Semantic Web Blog: In what I believe has been two years since the first edition was published, what have been the most significant changes in the ‘SPARQL space’ – or the semantic web world at large — that make this the right time for an expanded edition of Learning SPARQL?

DuCharme: The key thing is that SPARQL 1.1 is now an actual W3C Recommendation. It was great to see it so widely implemented so early in its development process, which justified the release of the book’s first edition so long before 1.1 was set in stone, but now that it’s a Recommendation we can release an edition of the book that is no longer describing a moving target. Not much in SPARQL has changed since the first edition – the VALUES keyword replaced BINDINGS, with some tweaks, and some property path syntax details changed – but it’s good to know that nothing in 1.1 can change now.
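For readers who haven’t followed the spec’s evolution, the VALUES keyword DuCharme mentions inlines a small table of bindings directly into a query; earlier SPARQL 1.1 drafts spelled it BINDINGS. The names below are illustrative:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?name
WHERE {
  VALUES ?name { "Alice" "Bob" }  # restrict matches to these two literals
  ?person foaf:name ?name .
}
```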

Read more

Fighting Global Hunger with Semantics, And How You Can Help

Hunger is a critical issue affecting approximately 870 million people worldwide. With new technologies, research, and telecommunication, we as a global population have the power to significantly reduce the levels of hunger around the world. But in order to accomplish this, the people who have control of the aforementioned research and technology will need to share their data and combine forces to create direct solutions to this global problem.

This is precisely what the good people at the International Food Policy Research Institute (IFPRI) are working toward. What the IFPRI has to offer is data: data on every country around the world, data about malnutrition, child mortality rates, ecology, rainfall, and much more. With the help of web portal specialists like Soonho Kim, they are working on making that data open and easily accessible, but they are currently facing a number of challenges along the way. Soonho spoke to an intimate group of semantic technology experts at the recent Semantic Technology Conference, sharing the successes of the IFPRI thus far and the areas where they could use some help. Read more
