Posts Tagged ‘JSON’

Benjamin Young of Cloudant reports, “Data is often stored and distributed in esoteric formats… Even when the data is available in a parse-able format (CSV, XML, JSON, etc), there is often little provided with the data to explain what’s inside. If there is descriptive meta data provided, it’s often only meant for the next developer to read when implementing yet-another-parser for said data. Really, it’s all quite abysmal… Enter, JSON-LD! JSON-LD (JSON Linked Data) is a simple way of providing semantic meaning for the terms and values in a JSON document. Providing that meaning with the JSON means that the next developer’s application can parse and understand the JSON you gave them.”
Last week SindiceTech announced the availability of its SindiceTech Freebase Distribution for the cloud (see our story here). SindiceTech has finalized its separation from the university setting in which it incubated (the former DERI institute, now part of the Insight Center for Data Analytics) and is re-launching its activities, with more new solutions and capabilities on the way.
“The first thing was to launch the Knowledge Graph distribution in the cloud,” says CEO Giovanni Tummarello. “The Freebase distribution showcases how it is possible to quickly have a really large Knowledge Graph in one’s own private cloud space.” The distribution comes instrumented with some of the tools SindiceTech has developed to help users both understand and make use of the data, he says, noting that “the idea of the Knowledge Graph is to have a data integration space that makes it very simple to add new information, but all that power is at risk of being lost without the tools to understand what is in the Knowledge Graph.”
Included in the first round of the distribution’s tools for composing queries and understanding the data as a whole are the Data Types Explorer (in both tabular and graph versions), and the Assisted SPARQL Query Editor. The next releases will increase the number of tools and provide updated data. “Among the tools expected is an advanced Knowledge Graph entity search system based on our newly released SIREn search system,” he says.
Elasticsearch 1.0 launches today, combining Elasticsearch real-time search and analytics, Logstash (which collects logs and other event data from your systems and stores them in a central place), and Kibana (for graphing and analyzing logs) in an end-to-end stack designed to be a complete platform for data interaction. This first major update of the solution, which delivers actionable insights in real time from almost any type of structured or unstructured data source, follows on the heels of the release of the commercial monitoring solution Elasticsearch Marvel, which gives users insight into the health of Elasticsearch clusters.
Organizations from Wikimedia to Netflix to Facebook today take advantage of Elasticsearch, which vice president of engineering Kevin Kluge says has been distinguished, since its open-source start four years ago, by its focus on real-time search in a distributed fashion. The native JSON and RESTful search tool “has intelligence where when it gets a new field that it hasn’t seen before, it discerns from the content of the field what type of data it is,” he explains. Users can optionally define schemas if they want, or stay more freeform and very quickly add new styles of data, while still profiting from easier management and administration, he says.
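The dynamic mapping behavior Kluge describes can be sketched in a few lines. The function below is not Elasticsearch code, just an illustrative toy showing the idea of inferring a field’s mapping type from the first value seen for it (the sample document and the date format checked are invented for illustration):

```python
from datetime import datetime

def infer_field_type(value):
    """Guess a mapping type from a JSON value, loosely mimicking
    dynamic mapping: booleans, numbers, dates, and plain strings
    are told apart by inspecting the value itself."""
    if isinstance(value, bool):          # must check before int: bool is an int subtype
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "double"
    if isinstance(value, str):
        try:
            # Treat ISO-8601-looking strings as dates (one format, for brevity).
            datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")
            return "date"
        except ValueError:
            return "string"
    return "object"

doc = {"user": "kimchy", "posts": 42, "joined": "2010-02-08T09:00:00"}
mapping = {field: infer_field_type(v) for field, v in doc.items()}
print(mapping)  # {'user': 'string', 'posts': 'long', 'joined': 'date'}
```

A real engine applies many more heuristics (numeric strings, multiple date formats, nested objects), but the principle is the same: the first occurrence of a field fixes its type in the index mapping.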
Models also exist for using JSON-LD to represent RDF in a manner that can be indexed by Elasticsearch. The BBC World Service Archive prototype, in fact, uses an index based on Elasticsearch and constructed from the RDF data held in a central triple store to make sure its search engine and aggregation pages are quick enough.
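The pipeline behind such a setup, triple store to flat JSON documents to full-text index, can be sketched in miniature. The triples and property names below are invented for illustration and are not the BBC’s actual schema:

```python
# Toy (subject, predicate, object) triples, as might be returned by a
# SPARQL query against a triple store. Invented data for illustration.
triples = [
    ("prog:1", "dc:title", "From Our Own Correspondent"),
    ("prog:1", "po:genre", "news"),
    ("prog:2", "dc:title", "The Documentary"),
]

def triples_to_docs(triples):
    """Group triples by subject into flat JSON-style documents, the
    shape a full-text engine can index (a minimal sketch of
    RDF-to-document conversion)."""
    docs = {}
    for s, p, o in triples:
        docs.setdefault(s, {"@id": s})[p] = o
    return list(docs.values())

for doc in triples_to_docs(triples):
    print(doc)
```

Each resulting document carries its subject IRI in `@id`, so search hits can be traced back to the original resources in the triple store.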
JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data into web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
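The “smooth upgrade path” the specification mentions can be shown with a small sketch: a plain JSON record becomes JSON-LD by adding a `@context` that maps each key to an IRI. The record below is invented for illustration (the vocabulary IRIs are real schema.org terms), and the toy `expand_term` function only hints at what a full JSON-LD processor’s expansion algorithm does:

```python
# A plain JSON record, as an API might already return it (invented example).
plain = {"name": "Manu Sporny", "homepage": "http://manu.sporny.org/"}

# The same record as JSON-LD: @context maps each term to an IRI,
# giving the keys unambiguous, machine-readable meaning.
linked = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "name": "Manu Sporny",
    "homepage": "http://manu.sporny.org/",
}

def expand_term(doc, term):
    """Resolve a term to its IRI via the document's @context
    (a toy fragment of JSON-LD expansion, for illustration only)."""
    ctx = doc.get("@context", {})
    mapping = ctx.get(term, term)
    return mapping["@id"] if isinstance(mapping, dict) else mapping

print(expand_term(linked, "name"))      # http://schema.org/name
print(expand_term(linked, "homepage"))  # http://schema.org/url
```

Existing consumers can keep reading `linked` as ordinary JSON and simply ignore `@context`, which is precisely the upgrade path the specification is describing.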
SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…
Manu Sporny (Digital Bazaar), told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”
Recent updates to YarcData’s software for its Urika analytics appliance reflect the fact that the enterprise is starting to understand the impact that semantic technology has on turning Big Data into actual insights.
The latest update includes integration with more enterprise data discovery tools, including the visualization and business intelligence tools Centrifuge Visual Network Analytics and TIBCO Spotfire, as well as those based on SPARQL and RDF, JDBC, JSON, and Apache Jena. The goal is to streamline the process of getting data in and then being able to provide connectivity to the tools analysts use every day.
As customers see the value of using the appliance to gain business insight, they want to be able to more tightly integrate this technology into wider enterprise workflows and infrastructures, says Ramesh Menon, YarcData vice president, solutions. “Not only do you want data from all different enterprise sources to flow into the appliance easily, but the value of results is enhanced tremendously if the insights and the ability to use those insights are more broadly distributed inside the enterprise,” he says. “Instead of having one analyst write queries on the appliance, 200 analysts can use the appliance without necessarily knowing a lot about the underlying, or semantic, technology. They are able to use the front end or discovery tools they use on a daily basis, not have to leave that interface, and still get the benefit of the Urika appliance.”
Today sees the launch of Meritora, the first commercial implementation of the universal payment standard PaySwarm (initially discussed in this blog here and here). The creation of Digital Bazaar, the company founded and CEO’d by Manu Sporny – whose W3C credentials include being founder of both the Web Payments Community Group and JSON-LD Community Group, as well as chair of the RDF Web Applications Working Group – Meritora is designed to ease what is still a surprisingly arduous task of buying and selling on the web. The service is starting with a simple asset hosting feature for helping vendors sell digital content on WordPress-powered sites, and support for decentralized web app stores so that app creators can put their work on their web sites, set a price for them, and let them be bought there, at a web app store, or anywhere on the web.
The name Meritora points to the service’s underlying purpose of rewarding greatness, coming from the bases ‘merit’ and ‘ora,’ the latter of which has been used across a number of cultures to express a unit of value, Sporny says (noting that it means ‘golden’ in Esperanto, and was also used as a unit of currency among the Anglo-Saxons). That’s a big name to live up to, but the service hopes to do so by making Web payments simple, secure, and quick, with low fees and no vendor lock-in for buyers and sellers on the digital content scene.
There’s Linked Data to thank for what Meritora, and PaySwarm, can do, with Sporny describing the system as “the world’s first payment solution where the core of the technology is powered by Linked Data.”
Singly “App Fabric” Platform Helps Developers Deeply Connect To Other Apps So Users Can Connect With All Their Data
Singly, which has as its mission connecting people more closely with their data everywhere it lives, now is opening up the beta of its development platform to help developers create the apps that can make that happen.
As co-founder and CEO Jason Cavnar describes Singly’s work, “it is an app fabric product” that gives developers a way to build applications without having to worry about making all the different connection points into the other applications they want their products to talk to. “That’s handled as a service for them. Like Amazon Web Services is for the infrastructure layer, we would like to be a trusted partner in the data layer,” he says.
“It’s really about a person’s life and experiences – sharing that wherever it is in other applications into a new one and that new one generating things to share back out,” says fellow co-founder and CTO Jeremie Miller, who invented Jabber/XMPP technologies and was the primary developer of jabberd 1.0, the first XMPP server. APIs are prominent in Singly’s approach to unlocking that data, but Miller sees some parallels between its own mission and that of the semantic web – a concept whose potential he’s always been excited about, he says, but which he doesn’t think has caught on as he’d hoped.