Charles Silver of Algebraix recently shared his opinions on artificial intelligence's revived popularity and growing plausibility. Silver writes, “Just a few months ago, the phrase ‘artificial intelligence’ suddenly started being tossed around presentations, blogs, headlines, seminars — even a Facebook earnings meeting — as if it were the most benign concept in the world. AI could actually win an Oscar, thanks to Scarlett Johansson’s riveting voice-only performance as Samantha, the AI-enabled OS in the new movie ‘Her’. One reason for AI’s new respectability: Big steps have been made in solving the problems of artificial intelligence, especially in speech recognition and concept communication. Just think about how casually we now accept machines that can understand and talk, from Apple’s Siri to IBM’s ‘Jeopardy’-winning Watson.” Read more
JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data to web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…
Manu Sporny (Digital Bazaar) told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”
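The “smooth upgrade path” the specification describes is easy to see in practice: an existing JSON document becomes Linked Data simply by adding an “@context” that maps its keys to vocabulary terms. The sketch below illustrates the idea with invented schema.org mappings; it is not taken from the spec’s own examples.

```python
import json

# A plain JSON document, as an existing service might already emit it.
person = {
    "name": "Manu Sporny",
    "homepage": "http://manu.sporny.org/",
}

# Upgrading to JSON-LD: an "@context" maps each key to a vocabulary term
# (the schema.org mappings here are illustrative). Consumers that treat
# the document as ordinary JSON are undisturbed; Linked Data tools can
# now interpret the same keys as globally unambiguous IRIs.
person_ld = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    **person,
}

print(json.dumps(person_ld, indent=2))
```

Because the “@context” is just another JSON key, deployed systems can ignore it entirely, which is what lets JSON-LD slot into JSON-based storage engines and REST services without a migration.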
Drew Hendricks of Sys-Con recently wrote, “Over the course of the past decade, there’s been a lot of hype pertaining to the Internet of Things (IoT) and how China leads the U.S. in this technology – yet many who are active on the Internet are still unaware of its existence. In its simplest form, IoT is an evolving wireless network of objects and devices that will eventually all be connected with each other. Using RFID, Bluetooth, GPS and other emerging semantic technology, and working in tandem with cloud computing, Web portals and back-end systems, in essence our ‘things’ will be able to ‘talk’ with each other.” Read more
Menlo Park, California (PRWEB) December 23, 2013 — Applied Relevance announces Epinomy optimized for MarkLogic 7, a leading NoSQL database platform for managing big data. Epinomy is an advanced information management application for organizing, tagging and classifying structured and unstructured big data content. Epinomy’s semantic engine allows organizations to easily build ontologies and auto-tag documents with metadata, enabling information managers to harness the power of ‘triple stores’ so that users can quickly find all relevant structured and unstructured information. Read more
The European Bioinformatics Institute (EMBL-EBI), part of the European Molecular Biology Laboratory (EMBL), Europe’s leading life sciences laboratory, this fall launched a new RDF platform hosting data from six of the public database archives it maintains. That includes peer-reviewed and published data, submitted through large-scale experiments, from databases covering genes and gene expression, proteins (with SIB), pathways, samples, biomodels, and molecules with drug-like properties. And next week, during a competition at SWAT4LS in Edinburgh, it’s hoping to draw developers with innovative use-case ideas for life-sciences apps that can leverage that data to the benefit of bioinformaticians or bench biologists.
“We need developers to build apps on top of the platform, to build apps to pull in data from these and other sources,” explains Andy Jenkinson, Technical Project Manager at EMBL-EBI. “There is the potential using semantic technology to build those apps more rapidly,” he says, as it streamlines integrating biological data, which is a huge challenge given the data’s complexity and variety. And such apps will be a great help for lab scientists who don’t know anything about working directly with RDF data and SPARQL queries.
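The integration Jenkinson describes rests on RDF’s triple model: every fact is a subject–predicate–object statement, and SPARQL queries match patterns against those triples. The sketch below is purely illustrative — the gene and predicate names are invented, not EMBL-EBI’s actual vocabulary — and a real app would send the equivalent SPARQL query to the platform’s endpoint rather than match in memory.

```python
# A tiny in-memory RDF-style graph; all identifiers here are made up.
triples = [
    ("ex:geneA", "ex:expressedIn", "ex:liver"),
    ("ex:geneA", "ex:encodes", "ex:proteinX"),
    ("ex:geneB", "ex:expressedIn", "ex:kidney"),
]

def match(triples, s=None, p=None, o=None):
    """Return triples matching the pattern; None plays the role of a SPARQL variable."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Rough analogue of: SELECT ?gene WHERE { ?gene ex:expressedIn ex:liver }
liver_genes = [t[0] for t in match(triples, p="ex:expressedIn", o="ex:liver")]
print(liver_genes)  # ['ex:geneA']
```

Because every data source exposes the same triple shape, an app can join across gene, protein, and pathway databases with one query language — which is the rapid-integration advantage Jenkinson points to.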
Silver Spring, MD (PRWEB) October 30, 2013 — PhUSE and CDISC are happy to announce the completion of Phase I of the FDA/PhUSE Semantic Technology Working Group Project. The PhUSE Semantic Technology Working Group aims to investigate how formal semantic standards can support the clinical and non-clinical trial data life cycle from protocol to submission. This deliverable includes a draft set of existing CDISC standards represented in RDF. Read more
Recent updates to YarcData’s software for its Urika analytics appliance reflect the fact that enterprises are starting to understand the impact semantic technology can have on turning Big Data into actual insights.
The latest update includes integration with more enterprise data discovery tools, including the visualization and business intelligence tools Centrifuge Visual Network Analytics and TIBCO Spotfire, as well as those based on SPARQL and RDF, JDBC, JSON, and Apache Jena. The goal is to streamline the process of getting data in and then being able to provide connectivity to the tools analysts use every day.
As customers see the value of using the appliance to gain business insight, they want to be able to more tightly integrate this technology into wider enterprise workflows and infrastructures, says Ramesh Menon, YarcData’s vice president of solutions. “Not only do you want data from all different enterprise sources to flow into the appliance easily, but the value of results is enhanced tremendously if the insights and the ability to use those insights are more broadly distributed inside the enterprise,” he says. “Instead of having one analyst write queries on the appliance, 200 analysts can use the appliance without necessarily knowing a lot about the underlying, or semantic, technology. They are able to use the front end or discovery tools they use on a daily basis, not have to leave that interface, and still get the benefit of the Urika appliance.”
Larry Hardesty of RD Mag reports, “Researchers at Massachusetts Institute of Technology (MIT)’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute have developed new tools that allow people with minimal programming skill to rapidly build cellphone applications that can help with disaster relief. The tools are an extension of the App Inventor, open-source software that enables nonprogrammers to create applications for devices running Google’s Android operating system by snapping together color-coded graphical components. Based on decades of MIT research, the App Inventor was initially a Google product, but it was later rereleased as open-source software managed by MIT.” Read more
Ivan Herman of the W3C recently reported, “The RDF Working Group and the JSON-LD Community Group published the Candidate Recommendation of JSON-LD 1.0, and JSON-LD 1.0 Processing Algorithms and API. This signals the beginning of the call for implementations for JSON-LD 1.0. JSON-LD harmonizes the representation of Linked Data in JSON by describing a common JSON representation format for expressing directed graphs; mixing both Linked Data and non-Linked Data in a single document. The syntax is designed to not disturb already deployed systems running on JSON, but provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Linked Data Web services, and to store Linked Data in JSON-based storage engines.” Read more
Mankind has been trying to understand the nature of time since, well, since forever. How time works is a big question, with many different facets being explored by scientists, philosophers, even social-psychologists. Semantic technologists, however, are focusing a little more strategically, considering temporal data management for semantic data.
At the Semantic Technology and Business Conference in NYC, coming up in early October, Dean Allemang, principal consultant at Working Ontologist LLC, will be hosting a panel on the topic of managing time in Linked Data. Relational database systems have long been tuned to deal with bi-temporal data, which changes over two independent dimensions of time: valid (real-world) time and transactional (database) time. Not so with RDF databases. But many institutions, in fields ranging from finance to health care, have no desire to go back to relational systems.
“They’ll lose all the RDF powers they’re familiar with, all the semantic linkages,” says Allemang. “And if you want that kind of distributed data understood in your enterprise, a relational solution isn’t going to help.”