Aaron Bradley recently posted a roundtable discussion about JSON-LD which includes: “JSON-LD is everywhere. Okay, perhaps not everywhere, but JSON-LD loomed large at the 2014 Semantic Web Technology and Business Conference in San Jose, where it was on many speakers’ lips, and could be seen in the code examples of many presentations. I’ve read much about the format – and have even provided a thumbnail definition of JSON-LD in these pages – but I wanted to take advantage of the conference to learn more about JSON-LD, and to better understand why this very recently-developed standard has been such a runaway hit with developers. In this quest I could not have been more fortunate than to sit down with Gregg Kellogg, one of the editors of the W3C Recommendation for JSON-LD, to learn more about the format, its promise as a developmental tool, and – particularly important to me as a search marketer – its role in the evolution of schema.org.”
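Part of JSON-LD's appeal to developers is that a JSON-LD document is ordinary JSON with an `@context` that maps plain keys to unambiguous vocabulary terms. A minimal sketch (the person and URL below are illustrative, not from the interview):

```python
import json

# A minimal JSON-LD document: plain JSON plus an @context that maps
# short keys to schema.org terms, making the data self-describing.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"}
    },
    "@type": "http://schema.org/Person",
    "name": "Gregg Kellogg",
    "homepage": "http://example.org/gregg"
}

# To any ordinary JSON tool this is just JSON; a JSON-LD processor
# additionally knows that "name" means http://schema.org/name.
serialized = json.dumps(doc, indent=2)
parsed = json.loads(serialized)
print(parsed["name"])
```

That round-trip through a stock JSON parser is the point: existing web tooling works unchanged, and the semantics ride along in the context.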
Dominik Schweiger, Zlatko Trajanoski and Stephan Pabinger recently wrote, “The Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. Results: SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers.” Read more
Solution demonstrates 10x+ the performance while running on 100x the data
San Diego – August 20, 2014 – SPARQL City, which introduced its scalable graph analytic engine to market earlier this year, today announced that it has successfully run the SP2 SPARQL benchmark at 100 times the data volume of other graph solution providers, while still delivering an order of magnitude better performance on average compared to published results.
SPARQL City ran the SP2 Benchmark against 2.5 billion triples/edges on a sixteen-node cluster on Amazon EC2. Average query response time for the set of seventeen queries was about 6 seconds, with query 4, the most data-intensive query, which involves the entire dataset, taking approximately 34 seconds to run. By comparison, the best reported query 4 result from other graph solution providers has been around 15 seconds, but that was when running against 25 million triples/edges, or 1/100th of the data volume in SPARQL City’s benchmark test. This level of performance, combined with the ability to easily scale out the solution on a cluster when required, makes easy-to-use interactive graph analytics on very large datasets possible for the first time. Detailed benchmark results can be found on our website.
WASHINGTON, D.C. – SYSTAP, LLC. today announced that Syapse, the leading provider of software for enabling precision medicine, has selected Bigdata® as its backend semantic database. Syapse, which launched the Precision Medicine Data Platform in 2011, will use the Bigdata® database as a key element of their semantic platform. The Syapse Precision Medicine Data Platform integrates medical data, omics data, and biomedical knowledge for use in the clinic. Syapse software is delivered as a cloud-based SaaS, enabling access from anywhere with an internet connection, regular software updates and new features, and online collaboration and delivery of results, with minimal IT resources required. Syapse applications comply with HIPAA/HITECH, and data in the Syapse platform are protected according to industry standards.
Syapse’s Precision Medicine Data Platform features a semantic layer that provides powerful data modeling, query, and integration functionality. According to Syapse CTO and Co-Founder, Tony Loeser, Ph.D., “We have adopted SYSTAP’s graph database, Bigdata®, as our RDF store. Bigdata’s exceptional scalability, query performance, and high-availability architecture make it an enterprise-class foundation for our semantic technology stack.”
Is SPARQL the SQL for NoSQL? The question will be discussed at this month’s Semantic Technology & Business Conference in San Jose by Arthur Keen, VP of solution architecture at startup SPARQL City.
It’s not the first time that the industry has considered common database query languages for NoSQL (see this story at our sister site Dataversity.net for some perspective on that). But as Keen sees it, SPARQL has the legs for the job. “What I know about SPARQL is that for every database [SQL and NoSQL alike] out there, someone has tried to put SPARQL on it,” he says, whereas other common query language efforts may be limited in database support. A factor in SPARQL’s favor is query portability across NoSQL systems. Additionally, “you can achieve much higher performance using declarative query languages like SPARQL because they specify the ‘What’ and not the ‘How’ of the query, allowing optimizers to choose the best way to implement the query,” he explains.
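Keen's "what, not how" point is easiest to see in a concrete query. The generic example below (not from SPARQL City or the SP2 benchmark) declares only the graph pattern to match; join order, index selection, and execution strategy are left entirely to the engine's optimizer:

```python
# A SPARQL query is declarative: it states the shape of the answer,
# not the steps to compute it. An optimizer may evaluate the two
# triple patterns in either order, on any storage layout.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name
WHERE {
  ?person a foaf:Person ;
          foaf:name ?name .
}
LIMIT 10
"""
print(query)
```

The same text could be sent unmodified to any SPARQL endpoint, which is the portability argument: the query carries no assumptions about the underlying store.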
There’s a chance to learn everything you should know about RDF to get the most value from the W3C standard model for data interchange at the 10th annual Semantic Technology & Business Conference in San Jose next month. David Booth, senior software architect at Hawaii Resource Group, will be hosting a session explaining how the standard’s unique capabilities can have a profound effect on projects that seek to connect data coming in from multiple sources.
“One of the assumptions that people make looking at RDF is that it is analogous to any other data format, like JSON or XML,” says Booth, who is working on a contract the Hawaii Resource Group has with the U.S. Department of Defense to use semantic web technologies to achieve healthcare data interoperability. “It isn’t.” RDF, he explains, isn’t just another data format – rather, it’s about the information content that is encoded in the format.
“The focus is different. It is on the meaning of data vs. the details of syntax,” he says.
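Booth's distinction can be made concrete: the same RDF statement can be written in entirely different syntaxes while denoting exactly the same triple. A small sketch (the people and URIs are invented for illustration):

```python
# One RDF statement ("alice knows bob") in two different syntaxes.
# The serializations differ completely, but both encode the identical
# (subject, predicate, object) triple - the meaning, not the format.

# Turtle serialization:
turtle = (
    "@prefix foaf: <http://xmlns.com/foaf/0.1/> .\n"
    "<http://example.org/alice> foaf:knows <http://example.org/bob> .\n"
)

# JSON-LD serialization of the same statement:
json_ld = {
    "@id": "http://example.org/alice",
    "http://xmlns.com/foaf/0.1/knows": {"@id": "http://example.org/bob"}
}

# The abstract triple both documents encode:
triple = (
    "http://example.org/alice",
    "http://xmlns.com/foaf/0.1/knows",
    "http://example.org/bob",
)
print(triple)
```

Integrating data from multiple sources means merging sets of such triples, which is why the format a given source happens to use is a secondary concern.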
Straight out of Google I/O this week, came some interesting announcements related to Semantic Web technologies and Linked Data. Included in the mix was a cool instructional video series about how to “Build a Small Knowledge Graph.” Part 1 was presented by Jarek Wilkiewicz, Knowledge Developer Advocate at Google (and SemTechBiz speaker).
Wilkiewicz fits a lot into the seven-and-a-half-minute piece, in which he presents a (sadly) hypothetical example of an online music store that he creates with his Google colleague Shawn Simister. During the example, he demonstrates the power and ease of leveraging multiple technologies, including the schema.org vocabulary (particularly the recently announced ‘Actions’), the JSON-LD syntax for expressing the machine-readable data, and the newly launched Cayley, an open source graph database (more on this in the next post in this series).
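To give a flavor of what combining schema.org, Actions, and JSON-LD looks like in a page, here is a hedged sketch of that kind of markup; the track, band, and store URL are hypothetical, not taken from the video:

```python
import json

# A sketch of schema.org markup for a music store page: a
# MusicRecording whose potentialAction (a ListenAction) tells
# machines how a user could act on the item, expressed as JSON-LD.
markup = {
    "@context": "http://schema.org",
    "@type": "MusicRecording",
    "name": "Example Track",
    "byArtist": {"@type": "MusicGroup", "name": "Example Band"},
    "potentialAction": {
        "@type": "ListenAction",
        "target": "http://example-music-store.com/listen/example-track"
    }
}

# Such markup is typically embedded in the page inside a script tag:
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(markup, indent=2)
    + "\n</script>"
)
print(script_tag)
```

Because the block is plain JSON-LD, the same data can feed both a search engine crawler and a graph store such as Cayley.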
Do you still remember a time when a utility company worker came to your house to read your electric meter? For many of us, this is already in the past. Smart meters send information directly to the utility company and, as a result, it knows our up-to-the-minute power usage patterns. And, while we don’t yet talk to our ovens or refrigerators through the Internet, many people routinely control thermostats from their smart phones. The emerging Internet of Things is real and we interact with it on a daily basis.
The term Internet of Things refers to devices we wouldn’t traditionally expect to be smart or connected, such as a smoke detector or other home appliance. They are being made ‘smart’ by enabling them to send data to an application. From smart meters to sensors used to track goods in a supply chain, the one thing these devices have in common is that they send data – data that can then be used to create more value by doing things better, faster, cheaper, and more conveniently.
The physical infrastructure needed for these devices to work is largely in place or being put in place quickly. We get immediate first-order benefits simply by installing new equipment. For example, having a smart meter provides cost savings because there is no need for a person to come to our houses. Similarly, the ability to change thermostat settings remotely can lower our heating costs. However, far greater changes and benefits are projected, or are already beginning to be delivered, from interconnecting the data sent by smart devices:
- Health: Connecting vital measurements from wearable devices to the vast body of medical information will help to improve our health, fitness and, ultimately, save lives.
- Communities: Connecting information from embedded devices and sensors will enable more efficient transportation. When a sprinkler system meter understands weather data, it will use water more efficiently. Once utilities start connecting and correlating data from smart meters, they might deliver electricity more efficiently and be more proactive in handling infrastructure problems.
- Environment: Connecting readings from fields, forests, oceans, and cities about pollution levels, soil moisture, and resource extraction will allow for closer monitoring of problems.
- Goods and services: Connecting data from sensors and readers installed throughout factories and supply chains will more precisely track materials and speed up and smooth out the manufacture and distribution of goods.
Following the newly minted “recommendation” status of RDF 1.1, Michael C. Daconta of GCN has asked, “What does this mean for open data and government transparency?” Daconta writes, “First, it is important to highlight the JSON-LD serialization format. JSON is a very simple and popular data format, especially in modern Web applications. Furthermore, JSON is a concise format (much more so than XML) that is well-suited to represent the RDF data model. An example of this is Google adopting JSON-LD for marking up data in Gmail, Search and Google Now. Second, like the rebranding of RDF to ‘linked data’ in order to capitalize on the popularity of social graphs, RDF is adapting its strong semantics to other communities by separating the model from the syntax. In other words, if the mountain won’t come to Muhammad, then Muhammad must go to the mountain.” Read more
The Google Webmaster Central blog reports, “When music lovers search for their favorite band on Google, we often show them a Knowledge Graph panel with lots of information about the band, including the band’s upcoming concert schedule. It’s important to fans and artists alike that this schedule be accurate and complete. That’s why we’re trying a new approach to concert listings. In our new approach, all concert information for an artist comes directly from that artist’s official website when they add structured data markup.” Read more
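The structured data markup Google describes is schema.org's MusicEvent type, published as JSON-LD on the artist's own site. A hedged sketch of what such a listing might look like (the band, venue, and date are invented for illustration, not from Google's announcement):

```python
import json

# An illustrative schema.org MusicEvent, the kind of structured data
# an artist's official site could publish so that a concert listing
# can be picked up for a Knowledge Graph panel.
event = {
    "@context": "http://schema.org",
    "@type": "MusicEvent",
    "name": "Example Band Live",
    "startDate": "2014-09-20",
    "location": {
        "@type": "Place",
        "name": "Example Arena",
        "address": "San Jose, CA"
    },
    "performer": {"@type": "MusicGroup", "name": "Example Band"}
}
print(json.dumps(event, indent=2))
```

Because the data comes straight from the artist's site, corrections to a date or venue propagate by simply editing the markup at the source.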