Archives: May 2012

Semantic Web Jobs: Freedom Consulting Group

Freedom Consulting Group is looking for an Enterprise Data Modeler in Fort Meade, MD. Responsibilities of the position include the following: “Perform semantic data modeling for the NTOC domain logical data model. Skills needed across the positions include Semantics, Ontology, Epistemology, Resource Description Framework (RDF) Schema (RDFS), OWL-based technologies, Hadoop, SPARQL, XML, XSD, Data Warehouse, Oracle, SQL, and relational theory. Develop and maintain NTOC domain logical data model artifacts: Logical Data Models (LDM), Data Element Dictionaries (DED), Physical Data Models (PDM), Interface Control Documents (ICD), data format specifications, model release documentation, user guides, data harmonization and mappings.” Read more

Late Breaking Addition to #SemTechBiz SF: Tangible Semantics at Uma

SemTechBiz San Francisco is set to begin this Sunday, June 3. The conference will feature over 130 presentations and over 160 speakers covering such topics as linked data, social networks, content management, open government, semantic wikis, and much more. The already full agenda has been made even better with the late addition of a new presentation, Tangible Semantics, a case study to be presented by Christian Doegel, Founder and CEO of Uma Information Technology GmbH. Read more

Cambridge Semantics Helps Users Take First Steps Into The Semantic Web

Cambridge Semantics has a new way for users to get access to its Anzo solutions: Next week at the Semantic Technology & Business Conference in San Francisco it will announce a packaging of the technology, dubbed the Anzo Express Starter Edition, that can be downloaded for free by anyone. “This lets anyone really easily start with semantics without having to invest a lot of time and without learning every fundamental detail,” says Rob Gonzalez, Director of Product Management & Marketing and a frequent contributor to this blog.

The full Anzo semantic suite is a complete enterprise data management solution, with the ability to pull data in and out of relational databases for integration, to connect data within unstructured documents, and to provide analytics and enterprise security. The Starter Edition is a trimmed-down version more suitable for small groups, such as academic users or other researchers, who need a basic server and Excel integration for spreadsheet data sharing to get started.

Read more

DERI and Korean Institute Teaming Up for Semantic Technology Research

A new article reports, “The Digital Enterprise Research Institute (DERI) at NUI Galway has signed a collaborative research agreement with the Korea Institute of Science and Technology Information (KISTI) to progress research in the realm of semantic technologies. The agreement looks set to spawn close collaborations between researchers at both institutes and is expected to lead to a number of funded projects. Researchers at DERI and KISTI are already collaborating on a joint project in the area of semantic data integration and application.” Read more

Dynamic Semantic Publishing for Beginners, Part 1

Even as semantic web concepts and tools underpin revolutionary changes in the way we discover and consume information, people with only a casual interest in the semantic web have difficulty understanding how and why this is happening. One of the most exciting application areas for semantic technologies is online publishing, yet for thousands of small-to-medium-sized publishers, unfamiliar semantic concepts make it hard to grasp the relevance of these technologies. This three-part series is part of my own journey to better understand how semantic technologies are changing the landscape for publishers of news and information.

The 2010 World Cup was a notable first not only for Spain, but also for publishing and the BBC. The BBC’s coverage of the tournament marked a dramatic evolution in the way content can be delivered online. The new system was labeled dynamic semantic publishing (DSP) by the team of architects that created it, including Jem Rayfield and Paul Wilton. DSP was soon defined as “utilizing Linked Data technology to automate the aggregation and publication of interrelated content objects.”
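The automation in that definition can be illustrated with a small sketch. All data and function names below are hypothetical, and a real DSP system like the BBC’s works against a triple store with reasoning; the sketch only shows the core idea that topic pages are aggregated from semantic annotations rather than curated by hand:

```python
# Minimal sketch of dynamic semantic publishing: content objects carry
# concept annotations, and topic pages are aggregated from those
# annotations automatically. All data and names here are hypothetical.
from collections import defaultdict

# Each article is annotated with the concepts (teams, players) it mentions.
articles = [
    {"id": "a1", "title": "Spain win the final",
     "about": ["dbpedia:Spain_national_football_team"]},
    {"id": "a2", "title": "Iniesta's winning goal",
     "about": ["dbpedia:Andres_Iniesta", "dbpedia:Spain_national_football_team"]},
]

def aggregate_topic_pages(articles):
    """Build a topic page (list of article ids) for every annotated concept."""
    pages = defaultdict(list)
    for article in articles:
        for concept in article["about"]:
            pages[concept].append(article["id"])
    return dict(pages)

pages = aggregate_topic_pages(articles)
# The Spain team page lists both articles, with no manual curation.
print(pages["dbpedia:Spain_national_football_team"])  # ['a1', 'a2']
```

When a new annotated article is published, every relevant topic page updates on the next aggregation pass, which is the behavior that made the World Cup site feasible at scale.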

Read more

Under the Hood: A Closer Look at Information WorkBench

fluid Operations’ Information Workbench is part of the semantic infrastructure supporting the BBC’s revolutionary coverage of the 2012 Olympic Games.  Below is a conversation with fluid Operations Senior Architect for Research & Development Michael Schmidt in advance of his 2012 Semantic Technology and Business Conference presentation. This conversation is a supplement to the series “Dynamic Semantic Publishing for Beginners.”

Q. Is the Information Workbench a response to the need for more robust applications to help process “Big Data”? How is it different from other popular tools?

A. Dealing with Big Data involves a number of different challenges, including increasing volume (amount of data), complexity (of schemas and structures), and variety (range of data types, sources).

However, most Big Data solutions available on the market today focus on volume only, in particular supporting vertical scalability (greater operating capacity, efficiency, and speed). This means that such solutions mainly address the analysis of large volumes of similarly structured data sets. Yet the Big Data problem is not fully solved by technologies that only help you process similarly structured data more quickly and efficiently.

Read more

Dynamic Semantic Publishing for News Organizations

Paul Wilton was the technical and development lead for semantic publishing at BBC News and Sport Online during the 2010 World Cup.  Currently he is the technical architect at Ontoba.  In this interview, a supplement to “Dynamic Semantic Publishing for Beginners,” Paul describes the current landscape for DSP as it applies to news organizations.

Q. Are you seeing a wide disparity in the way that news organizations have approached the creation and use of semantically linked (or annotated) content?

A. Actually, the pattern and often the (general) technical architecture are surprisingly similar. Where things differ are the applications, the models used, and the instance data. This is undoubtedly bleeding-edge technology, and typically the impetus to begin investigating the use of linked data, RDF, and semantics in the technology stack has come from within the Information Architecture and R&D teams, not from the offices of the CTO/CIO. Maybe this is starting to change now.

Q. Do many news organizations have the resources (staff and/or Content Management Systems) that are able to publish and use semantic data?

A. Not in our experience, but this shouldn’t be a barrier to integrating semantic technologies and publishing linked data.

The key components to adopting semantic publishing – a semantic repository (triple store); appropriate linked data sets; and the ability to semantically annotate your content – can be built alongside an existing Content Management System. Read more
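The first component Paul lists, the semantic repository, can be reduced to a simple idea: store (subject, predicate, object) triples and query them by pattern. Production systems use a real triple store queried with SPARQL; the toy below (hypothetical annotation data, invented identifiers) only illustrates the pattern-matching model that sits alongside a CMS:

```python
# A toy in-memory triple store illustrating the "semantic repository"
# component. None is a wildcard, mirroring how SPARQL variables match
# any value in that position. All identifiers here are hypothetical.

def match(triples, s=None, p=None, o=None):
    """Return all triples matching the (s, p, o) pattern."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Hypothetical annotations linking CMS articles to linked-data concepts.
triples = [
    ("cms:article/42", "dc:subject", "dbpedia:World_Cup"),
    ("cms:article/43", "dc:subject", "dbpedia:BBC"),
    ("cms:article/43", "dc:creator", "staff:jsmith"),
]

# All articles annotated with the World Cup concept:
print(match(triples, p="dc:subject", o="dbpedia:World_Cup"))
```

Because annotations live in this separate repository rather than inside CMS records, they can be added to an existing publishing workflow without restructuring the CMS itself, which is Paul’s point about semantic publishing not requiring new infrastructure up front.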

2 RDF Candidate Recommendations Published

Ivan Herman of the W3C reports that two new Candidate Recommendations have been published by the RDB2RDF Working Group. The first is A Direct Mapping of Relational Data to RDF: “The need to share data with collaborators motivates custodians and users of relational databases (RDB) to expose relational data on the Web of Data. This document defines a direct mapping from relational data to RDF. This definition provides extension points for refinements within and outside of this document.” Read more
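The core convention of the Direct Mapping can be sketched in a few lines: each row becomes a subject IRI built from the table name and its primary key, and each column becomes a predicate IRI of the form `<Table#column>`. This is a simplified sketch with hypothetical data; the actual Candidate Recommendation also covers foreign-key references, blank nodes for keyless rows, and datatype mapping:

```python
# Simplified sketch of the W3C Direct Mapping convention: a row in
# table "People" with primary key ID=7 becomes the subject
# <base/People/ID=7>, and each column becomes a predicate <base/People#col>.
# Foreign keys, blank nodes, and RDF datatypes are omitted here.

def direct_map(base, table, pk, rows):
    """Translate relational rows into (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subject = f"<{base}{table}/{pk}={row[pk]}>"
        for column, value in row.items():
            triples.append((subject, f"<{base}{table}#{column}>", f'"{value}"'))
    return triples

rows = [{"ID": 7, "fname": "Bob"}]
for t in direct_map("http://example.com/db/", "People", "ID", rows):
    print(t)
```

The "extension points" the document mentions allow refinements of exactly these IRI-generation rules, which is what the companion R2RML mapping language customizes.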

Resolve Names in Freebase Data with :BaseKB

Ontology2 has announced the release of :BaseKB Early Access 2 (EA2), a tool for accessing Freebase data in RDF.

Paul Houle, founder of Ontology2, says, “:BaseKB is an important milestone for both Freebase and the Semantic Web. :BaseKB opens Freebase to users of SPARQL and other RDF standards. The superior quality of Freebase data solves data quality problems that have, so far, frustrated Linked Data applications.”

Read more

Flattr Gets an Unflattering Rejection from Apple

We recently covered the development of a web payment standard, PaySwarm. Another candidate in the field isn’t faring quite as well. According to Sarah Perez, “Social micro-payments platform Flattr is taking an unkind hit in terms of its future growth opportunities on mobile, the company details on its blog this morning. After being integrated into popular third-party podcast manager Instacast back in February, Apple decided at the beginning of May to reject the app from the iTunes App Store due to its Flattr integration. The result? The only way Instacast could get back into the app store was to change the user flow in the app to direct the actual ‘flattr’ (as the micro-payment process is called) to take place in the Safari web browser instead. Not an ideal user experience, Apple admits, but it’s as required by the App Store Review Guidelines.” Read more