Coralville, Iowa (PRWEB) January 20, 2014–After five years of development, the last twenty months of it spent in quiet mode supported by clients, Structured Dynamics (SD) today released a new enterprise-ready version of its open-source Open Semantic Framework (OSF). “This new version 3.0 finally establishes the baseline foundation we set for ourselves five years ago,” said Frédérick Giasson, SD’s CTO.
OSF is a turnkey platform that brings interoperability to enterprise information assets via a layered architecture of semantic technologies. OSF can integrate information from documents to Web pages and standard databases, and its functions span information ingest, tagging, search, data management, and publishing. Read more
Washington, DC – January 21, 2014 – The new release (2.1) of Stardog, a leading RDF database, hits new scalability heights with a 50-fold increase over previous versions. Using commodity server hardware at the $10,000 price point, Stardog can manage, query, search, and reason over datasets as large as 50B RDF triples.
The new scalability puts Stardog into contention for the largest semantic technology, linked data, and other enterprise graph data projects. Stardog’s unique feature set at large scale, including reasoning and integrity constraint validation, means it will increasingly serve as the basis for complex software projects.
“We’re really happy about the new scalability of Stardog,” says Mike Grove, Clark & Parsia’s Chief Software Architect, “which makes us competitive with a handful of top graph database systems. And our feature set is unmatched by any of them.”
The scalability gains required engineering work to eliminate garbage collection pauses during query evaluation, which the 2.1 release also delivers. Along with a new hot backup capability, Stardog is more mature and production-capable than ever before.
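Stardog’s reasoning engine is its own implementation, but the general idea of reasoning over RDF triples can be illustrated independently. The sketch below is a minimal, hypothetical example of RDFS-style `subClassOf` inference in pure Python; the triples and class names are invented for illustration and have nothing to do with Stardog’s internals.

```python
# Minimal sketch of RDFS rdfs:subClassOf inference over a set of triples,
# illustrating the kind of forward-chaining reasoning an RDF database
# performs. All data here is invented for illustration.

TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

triples = {
    ("ex:Stardog", TYPE, "ex:GraphDatabase"),
    ("ex:GraphDatabase", SUBCLASS, "ex:Database"),
    ("ex:Database", SUBCLASS, "ex:Software"),
}

def infer_types(triples):
    """Forward-chain rdf:type through rdfs:subClassOf until fixpoint."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        # Rule: (s rdf:type C) and (C rdfs:subClassOf D) => (s rdf:type D)
        new = {
            (s, TYPE, sup)
            for (s, p, cls) in inferred if p == TYPE
            for (sub, q, sup) in inferred if q == SUBCLASS and sub == cls
        }
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

closure = infer_types(triples)
```

After the fixpoint, the closure contains the derived triple `("ex:Stardog", "rdf:type", "ex:Software")` even though it was never asserted; production systems apply the same rule semantics over billions of triples with far more sophisticated machinery.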
The Mike2.0 Governance Association has shared an article on Smart Data Collective about semantic business vocabularies and rules. The article states, “Classically business management establishes policies which are sent to an Information Technology department for incorporation into new and existing applications. It is then the job of systems analysts to stare at these goals and translate them into coding specifications for development and testing. Agile and other methodologies help speed this process internally to the IT department; however, until the fundamental dynamic between management and IT changes, this cycle remains slow, costly and mistake-prone.” Read more
Expert System Releases Complete Solution for Taxonomy Creation, Deployment and Document Categorization
CHICAGO, ILLINOIS–(Marketwired – Nov 5, 2013) - Expert System, the semantic technology company, today announces the launch of its Cogito Categorization Platform at the Taxonomy Boot Camp conference held in Washington, D.C. The platform leverages the Cogito semantic technology for an end-to-end taxonomy creation, deployment and document categorization solution.
Top-down or theoretical approaches to taxonomy development often result in unnecessary complexity and include nodes that are not representative of the content being categorized. The Cogito Categorization Platform uses intuitive semantic analysis to ensure a taxonomy driven by the most relevant concepts and topics in the actual content. Read more
Today, the World Wide Web Consortium announced that R2RML has achieved Recommendation status. As stated on the W3C website, R2RML is “a language for expressing customized mappings from relational databases to RDF datasets. Such mappings provide the ability to view existing relational data in the RDF data model, expressed in a structure and target vocabulary of the mapping author’s choice.” In the life cycle of W3C standards creation, today’s announcement means that the specifications have gone through extensive community review and revision and that R2RML is now considered stable enough for widespread distribution in commodity software.
Richard Cyganiak, one of the Recommendation’s editors, explained why R2RML is so important. “In the early days of the Semantic Web effort, we’ve tried to convert the whole world to RDF and OWL. This clearly hasn’t worked. Most data lives in entrenched non-RDF systems, and that’s not likely to change.”
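R2RML mappings are themselves RDF documents written in Turtle, with triples maps that pair a subject IRI template with column-to-predicate mappings. As a loose illustration of the relational-to-RDF view such a mapping describes, the pure-Python sketch below converts invented rows from a hypothetical `employee` table into triples; the table, columns, IRIs, and mapping structure are all assumptions for illustration, not R2RML syntax.

```python
# Hypothetical "employee" rows standing in for a relational table.
rows = [
    {"id": 1, "name": "Alice", "dept": "Engineering"},
    {"id": 2, "name": "Bob", "dept": "Sales"},
]

# A mapping in the spirit of R2RML: a subject IRI template plus
# column-to-predicate pairs. Real R2RML expresses this in Turtle
# (rr:subjectMap, rr:predicateObjectMap, etc.).
mapping = {
    "subject_template": "http://example.com/employee/{id}",
    "predicates": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "dept": "http://example.com/ns#department",
    },
}

def map_rows(rows, mapping):
    """Produce (subject, predicate, object) triples from table rows."""
    triples = []
    for row in rows:
        subject = mapping["subject_template"].format(**row)
        for column, predicate in mapping["predicates"].items():
            triples.append((subject, predicate, row[column]))
    return triples

for triple in map_rows(rows, mapping):
    print(triple)
```

Each row becomes a subject IRI with one triple per mapped column, which is the “view existing relational data in the RDF data model” that the Recommendation standardizes.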
If you would like your company to be considered for an interview, please email editor[ at ]semanticweb[ dot ]com.
In this segment of our “Innovation Spotlight” we spoke with Andreas Blumauer, the CEO of Semantic Web Company. Headquartered in Vienna, Austria, Semantic Web Company builds software that extracts meaning from big data using linked data technologies. In this interview Andreas describes some of their core products in more detail.
Sean: Hi Andreas. Can you give us a little background on your company? When did you get started in the Semantic Web?
Andreas: As an offspring of a ‘typical’ web agency from the early days of the internet, we became a specialized provider in 2004: the ‘Semantic Web School’ focused on research, consulting and training in the area of the semantic web. We quickly learned that the idea of a ‘semantic web’ could trigger a lot of great project visions, but also that most of the tools from those early days were rather scary for enterprises. In 2007 we saw that information professionals had begun to search for grown-up semantic web solutions to improve their information infrastructure. We were excited that ‘our’ main topics had obviously begun to play a role in the development of IT strategies in many organizations. We refocused on software development and renamed our company.
Google has announced the addition of a “Structured Data Dashboard” as a new feature in its Webmaster Tools offerings. The Dashboard gives webmasters greater visibility into the structured data that Google knows about for a given website. This will no doubt come as good news to people wanting confirmation that Google was consuming the structured data being published.
Google’s Rich Snippet Testing Tool has been around for a while and allows webmasters to see how their semantic markup might appear in a Rich Snippet. There are tools that allow developers to test semantic markup during the development process. However, until now there has not been a good way for a webmaster to see how (or even if) Google was consuming the structured markup in a given site.
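The structured markup these tools consume is typically schema.org vocabulary embedded in pages as microdata, RDFa, or JSON-LD. As a small, hypothetical illustration, the snippet below builds a schema.org `Product` description with Python’s standard `json` module; all values are invented, and this is only one of the serializations a page might use.

```python
import json

# A small schema.org "Product" description of the kind a Rich Snippet
# might be generated from; names and values are invented for illustration.
snippet = {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "27",
    },
}

# As JSON-LD this would be embedded in a page inside a
# <script type="application/ld+json"> element.
print(json.dumps(snippet, indent=2))
```

A crawler that understands the vocabulary can then surface the rating and review count directly in search results, which is what the testing tools preview.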
Last week, we published Under the Hood: A Closer Look at Information WorkBench, an interview with Peter Haase conducted by Kristen Milhollin as part of her series on Dynamic Semantic Publishing.
We are pleased to announce that FluidOps has created a viewer for the SemTechBiz conference program in Information Workbench. The viewer is a good example of a faceted, semantic viewer for the conference program data, including map, timeline, and graph views, and tying in data from disparate sources such as Facebook, Twitter, and the conference program itself.
You can view this event browser here:
Of course, we have the conference agenda-at-a-glance here, and will make the program available to conference attendees via the Guidebook app for mobile devices, but this is an interesting example of Semantic Technology at work in a human-friendly user interface.
Thanks to Peter Haase and the FluidOps team for this work!
fluid Operations’ Information Workbench is part of the semantic infrastructure supporting the BBC’s revolutionary coverage of the 2012 Olympic Games. Below is a conversation with fluid Operations Senior Architect for Research & Development Michael Schmidt in advance of his 2012 Semantic Technology and Business Conference presentation. This conversation is a supplement to the series “Dynamic Semantic Publishing for Beginners.”
Q. Is the Information Workbench a response to the need for more robust applications to help process “Big Data”? How is it different from other popular tools?
A. Dealing with Big Data involves a number of different challenges, including increasing volume (amount of data), complexity (of schemas and structures), and variety (range of data types, sources).
However, most Big Data solutions available on the market today focus on volume only, in particular supporting vertical scalability (greater operating capacity, efficiency, and speed). This means that such solutions mainly address the analysis of large volumes of similarly structured data sets. Yet the Big Data problem is not solved by technologies that merely help you process similarly structured data more quickly and efficiently.