Posts Tagged ‘JSON’

How Job Sites Could Improve with Semantic Web Technologies

Kurt Cagle, a Principal Evangelist for Semantic Technologies at Avalon Consulting, recently wrote, “I’m not a recruiter. I have from time to time submitted resumés for jobs to Monster or LinkedIn or to individual company sites as a developer or architect, but even there I’ve discovered what millions of job hunters already know: submitting online resumés is a pain. Consider the process. You create a profile, identifying yourself to job submission system X. This site may or may not have a way of uploading a text resumé, but one thing you find in the data management space is that structure matters, and the farther you deviate from the structure, the harder it is for some OCR Artificial Intelligence to actually make sense of what you’ve written.” Read more

Semantic Web Job: Java JEE/C++ Developer

Net Consultants is looking to recruit a Java JEE/C++ Developer with experience in RDF. The position description states, “Contract for US Citizen working at Government contractor in Rancho Bernardo on an archived image library system.

Technical Qualifications/Experience Required:

  • C++ Developer
  • 3+ years Java JEE and C++ development
  • SQL/Oracle Development on Linux/Unix environment
  • Writing software for archiving and dissemination of large amounts of data. Understanding the topology for accessing, moving, and storing data and generating discrepancy reports
  • Some experience writing RESTful web services
  • Other tools desired: Jira/GreenHopper, Jenkins, Spring, Nexus, FishEye, JSON, RDF, MongoDB, Python, jQuery, Leaflet, Tomcat, JavaScript, Arbor.js
  • CORBA would be a plus (original application written in CORBA)”

Read more

Get The Scoop On The Critical ABCs of RDF

There’s a chance to learn everything you should know about RDF to get the most value from the W3C standard model for data interchange at the 10th annual Semantic Technology & Business Conference in San Jose next month. David Booth, senior software architect at Hawaii Resource Group, will be hosting a session explaining how the standard’s unique capabilities can have a profound effect on projects that seek to connect data coming in from multiple sources.

“One of the assumptions that people make looking at RDF is that it is analogous to any other data format, like JSON or XML,” says Booth, who is working on a contract the Hawaii Resource Group has with the U.S. Department of Defense to use semantic web technologies to achieve healthcare data interoperability. “It isn’t.” RDF, he explains, isn’t just another data format – rather, it’s about the information content that is encoded in the format.

“The focus is different. It is on the meaning of data vs. the details of syntax,” he says.
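Booth’s point can be made concrete with a small sketch. The snippet below (in Python, with illustrative IRIs and deliberately naive parsing, not a real RDF library) shows the same statement written in two different syntaxes reducing to one and the same triple — the information content survives the change of format:

```python
# The same fact rendered two ways. The RDF "information content" is the
# triple itself, independent of the serialization syntax.
turtle = '<http://example.org/booth> <http://schema.org/jobTitle> "architect" .'
jsonld = {
    "@id": "http://example.org/booth",
    "http://schema.org/jobTitle": "architect",
}

def triples_from_jsonld(doc):
    """Extract (subject, predicate, object) triples from a flat JSON-LD node."""
    subj = doc["@id"]
    return {(subj, pred, obj) for pred, obj in doc.items() if pred != "@id"}

def triples_from_turtle(text):
    """Naive parse of a single N-Triples-style line (illustration only)."""
    subj, pred, obj = text.rstrip(" .").split(None, 2)
    return {(subj.strip("<>"), pred.strip("<>"), obj.strip('"'))}

# Different syntaxes, identical information content:
assert triples_from_jsonld(jsonld) == triples_from_turtle(turtle)
```

A real toolchain would use an RDF library rather than string handling, but the equality at the end is exactly the sense in which RDF is “not just another data format.”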

Read more

“Webize” Your Data with JSON-LD


Benjamin Young of Cloudant reports, “Data is often stored and distributed in esoteric formats… Even when the data is available in a parse-able format (CSV, XML, JSON, etc), there is often little provided with the data to explain what’s inside. If there is descriptive meta data provided, it’s often only meant for the next developer to read when implementing yet-another-parser for said data. Really, it’s all quite abysmal… Enter, JSON-LD! JSON-LD (JSON Linked Data) is a simple way of providing semantic meaning for the terms and values in a JSON document. Providing that meaning with the JSON means that the next developer’s application can parse and understand the JSON you gave them.” Read more
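The mechanism Young describes is the `@context`: a mapping from a document’s short terms to shared vocabulary IRIs. A minimal Python sketch of the idea, assuming illustrative schema.org mappings (not a full JSON-LD processor):

```python
import json

# A plain JSON record as an API might return it:
record = {"name": "Benjamin Young", "employer": "Cloudant"}

# A hypothetical @context mapping the short terms to vocabulary IRIs:
context = {
    "name": "http://schema.org/name",
    "employer": "http://schema.org/worksFor",
}

# JSON-LD: the same document, now self-describing:
jsonld_doc = {"@context": context, **record}

def expand(doc):
    """Minimal sketch of JSON-LD expansion: replace terms with full IRIs."""
    ctx = doc.get("@context", {})
    return {ctx.get(k, k): v for k, v in doc.items() if k != "@context"}

print(json.dumps(expand(jsonld_doc), indent=2))
```

The next developer’s parser no longer has to guess what `name` means — the context says so explicitly.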

SindiceTech Relaunch Features SIREn Search System, PivotBrowser Relational Faceted Browser

Last week news came from SindiceTech about the availability of its SindiceTech Freebase Distribution for the cloud (see our story here). SindiceTech has finalized its separation from the university setting in which it incubated (the former DERI institute, now a part of the Insight Center for Data Analytics) and is relaunching its activities, with more new solutions and capabilities on the way.

“The first thing was to launch the Knowledge Graph distribution in the cloud,” says CEO Giovanni Tummarello. “The Freebase distribution showcases how it is possible to quickly have a really large Knowledge Graph in one’s own private cloud space.” The distribution comes instrumented with some of the tools SindiceTech has developed to help users both understand and make use of the data, he says, noting that “the idea of the Knowledge Graph is to have a data integration space that makes it very simple to add new information, but all that power is at risk of being lost without the tools to understand what is in the Knowledge Graph.”

Included in the first round of the distribution’s tools for composing queries and understanding the data as a whole are the Data Types Explorer (in both tabular and graph versions), and the Assisted SPARQL Query Editor. The next releases will increase the number of tools and provide updated data. “Among the tools expected is an advanced Knowledge Graph entity search system based on our newly released SIREn search system,” he says.

Read more

Elasticsearch 1.0 Takes Realtime Search To The Next Level

Elasticsearch 1.0 launches today, combining Elasticsearch realtime search and analytics, Logstash (which helps you take logs and other event data from your systems and store them in a central place), and Kibana (for graphing and analyzing logs) in an end-to-end stack designed to be a complete platform for data interaction. This first major update of the solution, which delivers actionable insights in real time from almost any type of structured and unstructured data source, follows on the heels of the release of the commercial monitoring solution Elasticsearch Marvel, which gives users insight into the health of Elasticsearch clusters.

Organizations from Wikimedia to Netflix to Facebook today take advantage of Elasticsearch, which VP of engineering Kevin Kluge says has been distinguished, since its open-source start four years ago, by its focus on realtime search in a distributed fashion. The native JSON and RESTful search tool “has intelligence where when it gets a new field that it hasn’t seen before, it discerns from the content of the field what type of data it is,” he explains. Users can optionally define schemas if they want, or be more freeform and very quickly add new styles of data while still profiting from easier management and administration, he says.
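The dynamic-mapping behavior Kluge describes — guessing a field’s type from its first value — can be sketched in a few lines of Python. The type names and rules below are simplified illustrations, not Elasticsearch’s actual mapping logic:

```python
from datetime import datetime

def infer_field_type(value):
    """Guess a field's type from a value, Elasticsearch-style (simplified)."""
    if isinstance(value, bool):      # check bool before int: True is an int in Python
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "double"
    if isinstance(value, str):
        try:
            datetime.strptime(value, "%Y-%m-%d")  # looks like a date?
            return "date"
        except ValueError:
            return "string"
    return "object"

doc = {"user": "kimchy", "age": 41, "joined": "2010-02-08", "active": True}
mapping = {field: infer_field_type(v) for field, v in doc.items()}
print(mapping)
# {'user': 'string', 'age': 'long', 'joined': 'date', 'active': 'boolean'}
```

This is why a new field “just works”: the first document that carries it implicitly defines its mapping, which users can later override with an explicit schema.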

Models also exist for using JSON-LD to represent RDF in a manner that can be indexed by Elasticsearch. The BBC World Service Archive prototype, in fact, uses an index based on Elasticsearch, constructed from the RDF data held in a central triple store, to make sure its search engine and aggregation pages are quick enough.
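The core move in such pipelines is grouping triples by subject into one JSON-style document per resource, which a document store like Elasticsearch can then index. A toy sketch in Python, with illustrative triples and property names (not the BBC’s actual data model):

```python
from collections import defaultdict

# Triples as they might come out of a triple store (illustrative values):
triples = [
    ("ex:prog1", "title", "World Service Archive"),
    ("ex:prog1", "topic", "radio"),
    ("ex:prog2", "title", "News Hour"),
]

# Group by subject: one flat document per resource, multi-valued fields as lists.
docs = defaultdict(dict)
for subject, prop, value in triples:
    docs[subject].setdefault(prop, []).append(value)

# Each entry is now a JSON-shaped document ready to index, keyed by resource:
print(dict(docs))
```

Each resulting document could be posted to an index as-is; a JSON-LD `@context` could be attached so the documents remain valid Linked Data.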

Read more

JSON-LD is an official Web Standard

JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data into web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
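The “smooth upgrade path” the specification mentions is concrete: a JSON-LD document is still plain JSON, so existing consumers keep working while Linked Data consumers gain the `@context`. A small Python illustration (the Dublin Core IRI and the example `@id` are illustrative):

```python
import json

# A JSON-LD payload: any existing JSON parser handles it unchanged.
payload = """{
  "@context": {"title": "http://purl.org/dc/terms/title"},
  "@id": "http://example.org/posts/1",
  "title": "JSON-LD is an official Web Standard"
}"""

doc = json.loads(payload)

print(doc["title"])             # legacy code reads fields exactly as before
print(doc["@context"]["title"])  # Linked Data tooling reads the term mapping
```

Nothing about the document breaks for a pre-JSON-LD consumer; the semantics are carried in keys (`@context`, `@id`) that ordinary JSON code simply ignores.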

SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…

Manu Sporny (Digital Bazaar) told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”

Read more

Get Your Big, Linked, Smart Data eBook


Attendees at the Semantic Technology & Business Conference in NYC earlier this month got first access to Big, Linked, Smart Data, an eBook of selections from The Semantic Web Blog. It’s built on the KEeReader, a browser-based e-reading platform that can identify concepts, entities, and relationships within content and lets users interact with them. Now, that Knowledge Enhanced eReader (KEeReader) is available to all on the bookshelf here.

The Semantic Web Blog introduced the KEeReader platform to our readers in this article, and its chief architect Eric Freese demonstrated it to conference attendees at SemTechNYC referencing content from Big, Linked, Smart Data and the authorized biography of Steve Jobs. (You can also find that on the bookshelf.)

Read more

YarcData Software Update Points Out That The Sphere Of Semantic Influence Is Growing

Recent updates to YarcData’s software for its Urika analytics appliance reflect the fact that the enterprise is starting to understand the impact that semantic technology has on turning Big Data into actual insights.

The latest update includes integration with more enterprise data discovery tools, including the visualization and business intelligence tools Centrifuge Visual Network Analytics and TIBCO Spotfire, as well as those based on SPARQL and RDF, JDBC, JSON, and Apache Jena. The goal is to streamline the process of getting data in and then being able to provide connectivity to the tools analysts use every day.

As customers see the value of using the appliance to gain business insight, they want to be able to more tightly integrate this technology into wider enterprise workflows and infrastructures, says Ramesh Menon, YarcData vice president, solutions. “Not only do you want data from all different enterprise sources to flow into the appliance easily, but the value of results is enhanced tremendously if the insights and the ability to use those insights are more broadly distributed inside the enterprise,” he says. “Instead of having one analyst write queries on the appliance, 200 analysts can use the appliance without necessarily knowing a lot about the underlying, or semantic, technology. They are able to use the front end or discovery tools they use on a daily basis, not have to leave that interface, and still get the benefit of the Urika appliance.”

Read more

Art Lovers Will See There’s More To Love With Linked Data

The team behind the data integration tool Karma this week presented at LODLAM (Linked Open Data in Libraries, Archives & Museums), illustrating how to map museum data to the Europeana Data Model (EDM) or CIDOC CRM (Conceptual Reference Model). This came on the heels of its earning the best-in-use paper award at ESWC2013 for its publication about connecting Smithsonian American Art Museum (SAAM) data to the LOD cloud.

The work of Craig Knoblock, Pedro Szekely, Jose Luis Ambite, Shubham Gupta, Maria Muslea, Mohsen Taheriyan, and Bo Wu at the Information Sciences Institute, University of Southern California, Karma lets users integrate data from a variety of data sources (hierarchical and dynamic ones too) — databases, spreadsheets, delimited text files, XML, JSON, KML and Web APIs — by modeling it according to an ontology of their choice. A graphical user interface automates much of the process. Once the model is complete, users can publish the integrated data as RDF or store it in a database.
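The essence of what Karma automates — modeling tabular source data against an ontology and publishing it as RDF — can be sketched in a few lines. The column-to-property mapping and IRIs below are illustrative stand-ins, not Karma’s actual API or the museum’s data model:

```python
import csv
import io

# A toy tabular source, as Karma might read from a spreadsheet or database:
source = io.StringIO("id,title,artist\n1,Sunset,Jane Doe\n")

# A hand-written mapping from columns to ontology properties (illustrative):
mapping = {
    "title": "http://purl.org/dc/terms/title",
    "artist": "http://purl.org/dc/terms/creator",
}

# Emit one N-Triples line per mapped cell:
triples = []
for row in csv.DictReader(source):
    subject = f"<http://example.org/artwork/{row['id']}>"
    for column, prop in mapping.items():
        triples.append(f'{subject} <{prop}> "{row[column]}" .')

print("\n".join(triples))
```

Karma’s contribution is learning most of such mappings automatically from the data and letting a curator refine them in a GUI, rather than writing them by hand as above.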

The Smithsonian project builds on the group’s work on Karma for mapping structured sources to RDF. For the Smithsonian project (whose announcement we covered here), Karma converted more than 40,000 of the museum’s holdings, stored in more than 100 tables in a SQL Server Database, to LOD, leveraging EDM, the metamodel used in the Europeana project to represent data from Europe’s cultural heritage institutions.

Read more
