Technologies

Applied Relevance Announces Epinomy Version 7


Menlo Park, California (PRWEB) December 23, 2013 — Applied Relevance announces Epinomy optimized for MarkLogic 7, a leading NoSQL database platform for managing big data. Epinomy is an advanced information management application for organizing, tagging and classifying structured and unstructured big data content. Epinomy’s semantic engine allows organizations to easily build ontologies and auto-tag documents with metadata, enabling information managers to harness the power of ‘triple stores’ so that users can quickly search and find all relevant structured and unstructured information. Read more

In Search Of Apps To Leverage Public BioMolecular Data In RDF Platform

The European Bioinformatics Institute (EMBL-EBI), part of the European Molecular Biology Laboratory (EMBL), Europe’s leading life sciences laboratory, this fall launched a new RDF platform hosting data from six of the public database archives it maintains. That includes peer-reviewed and published data, submitted through large-scale experiments, from databases covering genes and gene expression, proteins (with SIB), pathways, samples, biomodels and molecules with drug-like properties. And next week, during a competition at SWAT4LS in Edinburgh, it’s hoping to draw developers with innovative use case ideas for life-sciences apps that can leverage that data to the benefit of bioinformaticians or bench biologists.

“We need developers to build apps on top of the platform, to build apps to pull in data from these and other sources,” explains Andy Jenkinson, Technical Project Manager at EMBL-EBI. “There is the potential using semantic technology to build those apps more rapidly,” he says, as it streamlines integrating biological data, which is a huge challenge given the data’s complexity and variety. And such apps will be a great help for lab scientists who don’t know anything about working directly with RDF data and SPARQL queries.
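For developers who want to experiment with the platform ahead of such apps, a SPARQL query can be issued from a few lines of Python. The sketch below uses the SPARQLWrapper library; the endpoint URL and the query are illustrative placeholders rather than the platform’s actual service addresses or vocabularies, so consult the EMBL-EBI documentation for the real ones.

```python
# Minimal sketch: querying an RDF platform's SPARQL endpoint from Python.
# The endpoint URL below is a placeholder, not the platform's real address.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://www.ebi.ac.uk/rdf/services/sparql"  # placeholder

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?resource ?label
    WHERE { ?resource rdfs:label ?label }
    LIMIT 10
""")

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["resource"]["value"], binding["label"]["value"])
```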

Read more

PhUSE and CDISC Announce Draft RDF Representation of Existing CDISC Data Standards


Silver Spring, MD (PRWEB) October 30, 2013 — PhUSE and CDISC are happy to announce the completion of Phase I of the FDA/PhUSE Semantic Technology Working Group Project. The PhUSE Semantic Technology Working Group aims to investigate how formal semantic standards can support the clinical and non-clinical trial data life cycle from protocol to submission. This deliverable includes a draft set of existing CDISC standards represented in RDF. Read more

YarcData Software Update Points Out That The Sphere Of Semantic Influence Is Growing

Recent updates to YarcData’s software for its Urika analytics appliance reflect the fact that the enterprise is starting to understand the impact that semantic technology has on turning Big Data into actual insights.

The latest update includes integration with more enterprise data discovery tools, including the visualization and business intelligence tools Centrifuge Visual Network Analytics and TIBCO Spotfire, as well as those based on SPARQL and RDF, JDBC, JSON, and Apache Jena. The goal is to streamline the process of getting data in and then being able to provide connectivity to the tools analysts use every day.

As customers see the value of using the appliance to gain business insight, they want to be able to more tightly integrate this technology into wider enterprise workflows and infrastructures, says Ramesh Menon, YarcData vice president, solutions. “Not only do you want data from all different enterprise sources to flow into the appliance easily, but the value of results is enhanced tremendously if the insights and the ability to use those insights are more broadly distributed inside the enterprise,” he says. “Instead of having one analyst write queries on the appliance, 200 analysts can use the appliance without necessarily knowing a lot about the underlying, or semantic, technology. They are able to use the front end or discovery tools they use on a daily basis, not have to leave that interface, and still get the benefit of the Urika appliance.”
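The connectivity Menon describes is straightforward to sketch from the analyst’s side: issue a SPARQL query and hand the results to a familiar tabular tool. The snippet below is a hedged illustration using Python, pandas and SPARQLWrapper; the endpoint URL and property names are hypothetical, and Urika’s own connectors and the BI tools named above are not shown.

```python
# Hedged sketch: pull SPARQL results into a pandas DataFrame, the kind of
# tabular object an analyst's everyday tools can consume. The endpoint and
# the example.org properties are hypothetical placeholders.
import pandas as pd
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://urika.example.com/sparql")  # hypothetical endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX ex: <http://example.org/>
    SELECT ?customer (COUNT(?order) AS ?orders)
    WHERE { ?order ex:placedBy ?customer }
    GROUP BY ?customer
""")

rows = sparql.query().convert()["results"]["bindings"]
df = pd.DataFrame([{k: v["value"] for k, v in row.items()} for row in rows])
print(df.head())
```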

Read more

Building a Disaster-Relief App Quickly with RDF


Larry Hardesty of RD Mag reports, “Researchers at Massachusetts Institute of Technology (MIT)’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute have developed new tools that allow people with minimal programming skill to rapidly build cellphone applications that can help with disaster relief. The tools are an extension of the App Inventor, open-source software that enables nonprogrammers to create applications for devices running Google’s Android operating system by snapping together color-coded graphical components. Based on decades of MIT research, the App Inventor was initially a Google product, but it was later rereleased as open-source software managed by MIT.” Read more

W3C Announces JSON-LD Feature Freeze, Call for Implementation

Ivan Herman of the W3C recently reported, “The RDF Working Group and the JSON-LD Community Group published the Candidate Recommendation of JSON-LD 1.0, and JSON-LD 1.0 Processing Algorithms and API. This signals the beginning of the call for implementations for JSON-LD 1.0. JSON-LD harmonizes the representation of Linked Data in JSON by describing a common JSON representation format for expressing directed graphs; mixing both Linked Data and non-Linked Data in a single document. The syntax is designed to not disturb already deployed systems running on JSON, but provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Linked Data Web services, and to store Linked Data in JSON-based storage engines.” Read more
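The “mixing Linked Data and non-Linked Data in a single document” idea is easy to see in a small example. The sketch below uses the pyld JSON-LD processor for Python; the document, context and property mappings are made up for illustration, not taken from the specifications.

```python
# Minimal sketch of a JSON-LD document: an ordinary JSON object in which only
# the keys mapped in "@context" carry Linked Data meaning. Property mappings
# here are illustrative.
from pyld import jsonld

doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/jane",
    "name": "Jane Doe",
    "homepage": "http://example.org/jane",
    "internal_note": "plain JSON field with no Linked Data mapping",
}

# Expansion resolves the context into full IRIs; keys that have no mapping
# (like "internal_note") are simply dropped by the processor.
print(jsonld.expand(doc))
```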

It’s Time To Take On Temporal Data Management For Semantic Data

Mankind has been trying to understand the nature of time since, well, since forever. How time works is a big question, with many different facets being explored by scientists, philosophers and even social psychologists. Semantic technologists, however, are focusing a little more strategically, considering temporal data management for semantic data.

At the Semantic Technology and Business Conference in NYC, coming up in early October, Dean Allemang, principal consultant at Working Ontologist LLC, will be hosting a panel on the topic of managing time in Linked Data. Relational database systems have long been tuned to deal with bi-temporal data, which changes over two dimensions of time independently – that is, valid (real world) and transactional (database) time. Not so with RDF databases. But many institutions, in fields ranging from finance to health care, have no desire to go back to relational systems.

“They’ll lose all the RDF powers they’re familiar with, all the semantic linkages,” says Allemang. “And if you want that kind of distributed data understood in your enterprise, a relational solution isn’t going to help.”
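One way to approximate bi-temporal behavior on top of RDF today (offered here only as a hedged sketch, not as the panel’s recommendation) is to place each load in its own named graph, representing transaction time, and to annotate those graphs with valid-time intervals. The example below uses Python’s rdflib; all URIs and property names are invented for illustration.

```python
# Hedged sketch of one bi-temporal modeling pattern for RDF: each load gets
# its own named graph (transaction time), and valid-time is recorded as
# annotations on that graph. URIs and properties are invented for this example.
from rdflib import Dataset, Namespace, URIRef, Literal
from rdflib.namespace import XSD

EX = Namespace("http://example.org/")
ds = Dataset()

# Facts captured in the 2013-10-01 load live in their own named graph.
load = URIRef("http://example.org/load/2013-10-01")
g = ds.graph(load)
g.add((EX.acme, EX.hasAddress, Literal("1 Main St")))

# Valid-time annotations say when the facts held in the real world.
ds.add((load, EX.validFrom, Literal("2012-01-01", datatype=XSD.date)))
ds.add((load, EX.validTo, Literal("2013-06-30", datatype=XSD.date)))

# SPARQL can then pick out the graphs whose validity interval covers a date.
query = """
PREFIX ex: <http://example.org/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?g ?company ?addr WHERE {
    ?g ex:validFrom ?from ; ex:validTo ?to .
    GRAPH ?g { ?company ex:hasAddress ?addr }
    FILTER (?from <= "2012-06-15"^^xsd:date && ?to >= "2012-06-15"^^xsd:date)
}
"""
for row in ds.query(query):
    print(row.g, row.company, row.addr)
```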

Read more

W3C Announces Three New RDFa Recommendations

The W3C announced today that three specifications have reached Recommendation status:

HTML+RDFa 1.1
http://www.w3.org/TR/2013/REC-html-rdfa-20130822/

RDFa 1.1 Core – Second Edition
http://www.w3.org/TR/2013/REC-rdfa-core-20130822/

XHTML+RDFa 1.1 – Second Edition
http://www.w3.org/TR/2013/REC-xhtml-rdfa-20130822/

As the W3C website explains, “The last couple of years have witnessed a fascinating evolution: while the Web was initially built predominantly for human consumption, web content is increasingly consumed by machines which expect some amount of structured data. Sites have started to identify a page’s title, content type, and preview image to provide appropriate information in a user’s newsfeed when she clicks the ‘Like’ button. Search engines have started to provide richer search results by extracting fine-grained structured details from the Web pages they crawl. In turn, web publishers are producing increasing amounts of structured data within their Web content to improve their standing with search engines.”

“A key enabling technology behind these developments is the ability to add structured data to HTML pages directly. RDFa (Resource Description Framework in Attributes) is a technique that allows just that: it provides a set of markup attributes to augment the visual information on the Web with machine-readable hints.”
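To make the idea concrete, here is a small, hedged illustration: an HTML fragment marked up with RDFa attributes, followed by the triples an RDFa processor would extract from it, built explicitly with Python’s rdflib. The page URL, headline and vocabulary choices are invented for the example.

```python
# Hedged illustration of what RDFa markup conveys. The HTML fragment marks up
# visible content with RDFa attributes; the graph below holds the triples an
# RDFa processor would extract from it. All names and URLs are example values.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

html_fragment = """
<div vocab="http://schema.org/" typeof="Article"
     about="http://example.org/posts/rdfa-recommendations">
  <h1 property="headline">Three New RDFa Recommendations</h1>
  <span property="author">W3C Staff</span>
</div>
"""

SCHEMA = Namespace("http://schema.org/")
post = URIRef("http://example.org/posts/rdfa-recommendations")

g = Graph()
g.add((post, RDF.type, SCHEMA.Article))
g.add((post, SCHEMA.headline, Literal("Three New RDFa Recommendations")))
g.add((post, SCHEMA.author, Literal("W3C Staff")))

print(g.serialize(format="turtle"))
```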

Manu Sporny, the editor of the HTML+RDFa 1.1 specification, told us that, “The release of RDFa 1.1 for HTML5 establishes it as the first HTML-based Linked Data technology to achieve recognition as an official Web standard by the World Wide Web Consortium.” Read more

An Easier Approach To Ontology Editing

It’s probably not news to most people that not everyone is an expert at using OWL for authoring or editing ontologies. Domain experts who don’t find the XML-based syntax for OWL particularly user-friendly need a hand, and that’s where Controlled Natural Language (CNL) tools come in.

One such tool for editing and manipulating ontologies is Fluent Editor from Cognitum. The vendor’s major product, now in Version 2, lets users edit ontologies, expressed with CNL, that are compatible with OWL 2 and SWRL (Semantic Web Rule Language). When the company debuted Version 1 a couple of years back, it discovered that “there are a lot of people interested in semantic technology,” says CEO Pawel Zarzycki. That includes business analysts and other domain experts who would like to express business rules and leverage a semantic system, with the computer as a supporting tool.

Read more

A Look Into Learning SPARQL With Author Bob DuCharme

The second edition of Bob DuCharme’s Learning SPARQL debuted this summer. The Semantic Web Blog connected with DuCharme – who is director of digital media solutions at TopQuadrant, the author of other works including XML: The Annotated Specification, and a welcome speaker both at the Semantic Technology & Business Conference and on our Semantic Web Blog podcasts – to learn more about the latest version of the book.

Semantic Web Blog: In the two years or so since the first edition was published, what have been the most significant changes in the ‘SPARQL space’ – or the semantic web world at large – that make this the right time for an expanded edition of Learning SPARQL?

DuCharme: The key thing is that SPARQL 1.1 is now an actual W3C Recommendation. It was great to see it so widely implemented so early in its development process, which justified the release of the book’s first edition so long before 1.1 was set in stone, but now that it’s a Recommendation we can release an edition of the book that is no longer describing a moving target. Not much in SPARQL has changed since the first edition – the VALUES keyword replaced BINDINGS, with some tweaks, and some property path syntax details changed – but it’s good to know that nothing in 1.1 can change now.
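For readers who followed the earlier drafts, the VALUES change DuCharme mentions is easy to see in a small query. The sketch below runs a SPARQL 1.1 VALUES clause (the successor to the old BINDINGS keyword) against a toy in-memory graph using Python’s rdflib; the data and property names are made up for illustration.

```python
# Small sketch of the SPARQL 1.1 VALUES keyword (which replaced BINDINGS),
# run over a toy rdflib graph. Data and properties are invented for this example.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .
    ex:carol ex:knows ex:dave .
    ex:eve   ex:knows ex:frank .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?person ?friend WHERE {
    VALUES ?person { ex:alice ex:carol }   # inline data block, SPARQL 1.1 style
    ?person ex:knows ?friend .
}
"""
for row in g.query(query):
    print(row.person, row.friend)
```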

Read more
