The Mike2.0 Governance Association has shared an article on Smart Data Collective about semantic business vocabularies and rules. The article states, “Classically business management establishes policies which are sent to an Information Technology department for incorporation to new and existing applications. It is then the job of systems analysts to stare at these goats and translate them into coding specifications for development and testing. Agile and other methodologies help speed this process internally to the IT department, however until the fundamental dynamic between management and IT changes, this cycle remains slow, costly and mistake-prone.” Read more
Expert System Releases Complete Solution for Taxonomy Creation, Deployment and Document Categorization
CHICAGO, ILLINOIS–(Marketwired – Nov 5, 2013) - Expert System, the semantic technology company, today announces the launch of its Cogito Categorization Platform at the Taxonomy Boot Camp conference held in Washington, D.C. The platform leverages the Cogito semantic technology for an end-to-end taxonomy creation, deployment and document categorization solution.
Top-down or theoretical approaches to taxonomy development often result in unnecessary complexity and include nodes that are not representative of the content being categorized. Cogito Categorization Platform uses intuitive semantic analysis to ensure creation of a taxonomy driven by the most relevant concepts and topics in the actual content. Read more
Today, the World Wide Web Consortium announced that R2RML has achieved Recommendation status. As stated on the W3C website, R2RML is “a language for expressing customized mappings from relational databases to RDF datasets. Such mappings provide the ability to view existing relational data in the RDF data model, expressed in a structure and target vocabulary of the mapping author’s choice.” In the life cycle of W3C standards creation, today’s announcement means that the specifications have gone through extensive community review and revision and that R2RML is now considered stable enough for widespread distribution in commodity software.
Richard Cyganiak, one of the Recommendation’s editors, explained why R2RML is so important. “In the early days of the Semantic Web effort, we’ve tried to convert the whole world to RDF and OWL. This clearly hasn’t worked. Most data lives in entrenched non-RDF systems, and that’s not likely to change.”
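The core idea of R2RML is turning rows in a relational table into RDF triples. Real mappings are written in Turtle using the `rr:` vocabulary; as a rough illustration only, the effect of a simple template-and-column mapping can be sketched in plain Python. The table columns and IRIs below are hypothetical examples, not part of the R2RML specification.

```python
# Sketch of what an R2RML-style mapping produces: one subject IRI per row
# (like an rr:template), with a column value mapped to a literal object.
# All names and IRIs here are illustrative.

def rows_to_triples(rows, base_iri="http://example.com/employee/"):
    """Map relational rows to (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subject = base_iri + str(row["id"])            # template-style subject
        triples.append((subject,
                        "http://example.com/ns#name",  # fixed predicate
                        row["name"]))                  # column-valued object
    return triples

rows = [{"id": 7, "name": "Ada"}, {"id": 8, "name": "Grace"}]
for triple in rows_to_triples(rows):
    print(triple)
```

The appeal of the standard, as Cyganiak notes, is exactly this: the relational data stays where it is, and the mapping makes it viewable as RDF.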
If you would like your company to be considered for an interview, please email editor[ at ]semanticweb[ dot ]com.
In this segment of our “Innovation Spotlight” we spoke with Andreas Blumauer, the CEO of Semantic Web Company. Semantic Web Company is headquartered in Vienna, Austria, and its software extracts meaning from big data using linked data technologies. In this interview Andreas describes some of their core products to us in more detail.
Sean: Hi Andreas. Can you give us a little background on your company? When did you get started in the Semantic Web?
Andreas: As an offspring of a ‘typical’ web agency from the early days of the internet, we became a specialized provider in 2004: The ‘Semantic Web School’ focused on research, consulting and training in the area of the semantic web. We learned quickly how the idea of a ‘semantic web’ was able to trigger a lot of great project visions, but also that most of the tools from the early days of the semantic web were rather scary for enterprises. In 2007 we saw that information professionals had begun to search for grown-up semantic web solutions to improve their information infrastructure. We were excited that ‘our’ main topics obviously began to play a role in the development of IT strategies in many organizations. We refocused on the development of software and renamed our company.
Google has announced the addition of a “Structured Data Dashboard” as a new feature in its Webmaster Tools offerings. The Dashboard gives webmasters greater visibility into the structured data that Google knows about for a given website. This will no doubt come as good news to people wanting confirmation that Google is consuming the structured data being published.
Google’s Rich Snippet Testing Tool has been around for a while and allows webmasters to see how their semantic markup might appear in a Rich Snippet. There are tools that allow developers to test semantic markup during the development process. However, until now there has not been a good way for a webmaster to see how (or even if) Google was consuming the structured markup in a given site.
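To see what such development-time checks involve, consider how little is needed to inspect a page's microdata markup: scan the HTML for `itemprop` attributes. The sketch below uses only Python's standard library and a hypothetical snippet of schema.org markup; real validators like Google's tools do far more (type checking, nesting, rich-snippet rendering).

```python
# Minimal sketch: collect the microdata properties declared in a page.
# The HTML sample is a hypothetical schema.org Person item.
from html.parser import HTMLParser

class ItempropCollector(HTMLParser):
    """Collect itemprop attribute values from microdata markup."""
    def __init__(self):
        super().__init__()
        self.props = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "itemprop":
                self.props.append(value)

html = ('<div itemscope itemtype="http://schema.org/Person">'
        '<span itemprop="name">Alex</span></div>')
collector = ItempropCollector()
collector.feed(html)
print(collector.props)  # the properties this markup declares
```

A check like this tells you what you published; only the Dashboard tells you what Google actually consumed, which is the gap the new feature closes.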
Last week, we published Under the Hood: A Closer Look at Information WorkBench, an interview with Peter Haase conducted by Kristen Milhollin as part of her series on Dynamic Semantic Publishing.
We are pleased to announce that FluidOps has created a viewer for the SemTechBiz conference program in Information Workbench. The viewer is a good example of a faceted, semantic viewer for the conference program data, including map, timeline, and graph views, and tying in data from disparate sources such as Facebook, Twitter, and the conference program itself.
You can view this event browser here:
Of course, we have the conference agenda-at-a-glance here, and will make the program available to conference attendees via the Guidebook app for mobile devices, but this is an interesting example of Semantic Technology at work in a human-friendly user interface.
Thanks to Peter Haase and the FluidOps team for this work!
fluid Operations’ Information Workbench is part of the semantic infrastructure supporting the BBC’s revolutionary coverage of the 2012 Olympic Games. Below is a conversation with fluid Operations Senior Architect for Research & Development Michael Schmidt in advance of his 2012 Semantic Technology and Business Conference presentation. This conversation is a supplement to the series “Dynamic Semantic Publishing for Beginners.”
Q. Is the Information Workbench a response to the need for more robust applications to help process “Big Data”? How is it different than other popular tools?
A. Dealing with Big Data involves a number of different challenges, including increasing volume (amount of data), complexity (of schemas and structures), and variety (range of data types, sources).
However, most Big Data solutions available on the market today focus on volume only, in particular supporting vertical scalability (greater operating capacity, efficiency, and speed). This means that such solutions mainly address the analysis of large volumes of similarly structured data sets. Yet the Big Data problem cannot be fully solved by technologies that merely help you process similarly structured data more quickly and efficiently.
Paul Houle, founder of Ontology 2, says, “:BaseKB is an important milestone for both Freebase and the Semantic Web. :BaseKB opens Freebase to users of SPARQL and other RDF standards. The superior quality of Freebase data solves data quality problems that have, so far, frustrated Linked Data applications.”
The schema.org official blog has announced support for enumerated lists. Adding this support allows developers using schema.org to use selected externally maintained vocabularies in their schema.org markup. According to the W3C-hosted schema.org WebSchemas wiki, “This is in addition to the existing extension mechanisms we support, and the general ability to include whatever markup you like in your pages. The focus here is on external vocabularies which can be thought of as ‘supported’ (or anticipated) in some sense by schema.org.”
In other words, “Schema.org markup uses links into well-known authority lists to clarify which particular instance of a schema.org type (eg. Country) is being mentioned.”
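As a concrete illustration of that pattern, here is what a link into a well-known authority list looks like, expressed as JSON-LD built from a Python dict. The specific IRIs are illustrative examples, not schema.org requirements; the point is the `sameAs` link that disambiguates which Country is meant.

```python
# Sketch of schema.org markup that links into an external authority list
# to pin down a particular instance of a type. IRIs here are illustrative.
import json

markup = {
    "@context": "http://schema.org",
    "@type": "Country",
    "name": "France",
    "sameAs": "http://dbpedia.org/resource/France",  # external authority link
}
print(json.dumps(markup, indent=2))
```

Without the authority link, a consumer only knows the page mentions *a* country named “France”; with it, the reference is unambiguous.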
Yesterday, we announced RDFa.info, a new site devoted to helping developers add RDFa (Resource Description Framework-in-attributes) to HTML.
Building on that work, the team behind RDFa.info is announcing today the release of “PLAY,” a live RDFa editor and visualization tool. This release marks a significant step in providing tools for web developers that are easy to use, even for those unaccustomed to working with RDFa.
“Play” is an effort that serves several purposes. It is an authoring environment and markup debugger for RDFa that also serves as a teaching and education tool for web developers. As Alex Milowski, a member of the core RDFa.info team, said, “It can be used for purposes of experimentation, documentation (e.g. crafting an example that produces certain triples), and testing. If you want to know what markup will produce what kind of properties (triples), this tool is going to be great for understanding how you should be structuring your own data.”
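The markup-to-triples question Milowski describes can be sketched very crudely: pair each RDFa `property` attribute with the element text it wraps. A real RDFa processor handles vocabularies, prefixes, datatypes, and property chaining; this toy parser and its sample HTML are illustrative only.

```python
# Toy sketch of the RDFa idea behind a tool like "Play": which markup
# produces which (property, value) pairs. Not a conformant RDFa processor.
from html.parser import HTMLParser

class RDFaSketch(HTMLParser):
    def __init__(self):
        super().__init__()
        self._property = None
        self.pairs = []  # (property, literal value)

    def handle_starttag(self, tag, attrs):
        self._property = dict(attrs).get("property")

    def handle_data(self, data):
        if self._property and data.strip():
            self.pairs.append((self._property, data.strip()))
            self._property = None

html = ('<p vocab="http://schema.org/" typeof="Person">'
        '<span property="name">Alex Milowski</span></p>')
parser = RDFaSketch()
parser.feed(html)
print(parser.pairs)
```

“Play” answers the same question interactively, showing the resulting triples live as you edit the markup.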