On his personal website, Frederick Giasson reports, “We just released a new UMBEL web service endpoint and online tool: the Concept Tagger Plain. This plain tagger uses UMBEL reference concepts to tag an input text. The OBIE (Ontology-Based Information Extraction) method is used, driven by the UMBEL reference concept ontology. By plain we mean that the words (tokens) of the input text are matched to either the preferred labels or alternative labels of the reference concepts. The simple tagger is merely making string matches to the possible UMBEL reference concepts.”
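The idea of a “plain” tagger — matching input tokens against the preferred and alternative labels of reference concepts — can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not Giasson’s implementation; the concept IDs and labels below are invented.

```python
# Minimal sketch of a "plain" ontology-based tagger: tokens of the input
# text are string-matched against preferred and alternative labels of
# reference concepts. All concept IDs and labels here are invented.

# label -> reference concept, built from preferred and alternative labels
LABEL_INDEX = {
    "automobile": "umbel-rc:Automobile",   # preferred label (hypothetical)
    "car": "umbel-rc:Automobile",          # alternative label
    "driver": "umbel-rc:Driver",
}

def plain_tag(text):
    """Return (token, concept) pairs for tokens that match a concept label."""
    tags = []
    for token in text.lower().split():
        token = token.strip(".,;:!?")
        if token in LABEL_INDEX:
            tags.append((token, LABEL_INDEX[token]))
    return tags

print(plain_tag("The car and its driver."))
# [('car', 'umbel-rc:Automobile'), ('driver', 'umbel-rc:Driver')]
```

Because the match is purely string-based, no disambiguation is attempted — exactly the trade-off the “plain” in Concept Tagger Plain signals.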
LightReading reports, “Ontology Systems, the semantic search company for enterprise application data, today announces the launch of Intelligent 360 for Network Operators (i360-NetOps), a product to help operators gain a reliable, fast and holistic view of their network across all layers, technologies and vendors. i360-NetOps helps organisations to carry out network troubleshooting, navigate their infrastructure and the customers that depend on it, handle their change management and track the alignment and quality of the data that describes the network and its services.”
Cognitum’s year got off to a good start, with an investment from the Giza Polish Ventures Fund, and it plans to apply some of that funding to building its sales and development teams, demonstrating the approaches to and benefits of semantic knowledge engineering, and focusing on big implementations for recognizable customers. The company’s products include Fluent Editor 2 for editing and manipulating complex ontologies via controlled natural language (CNL) tools, and its NLP-fronted Ontorion Distributed Knowledge Management System for managing large ontologies in a distributed fashion (both systems are discussed in more detail in our story here). “The idea here is to open up semantic technologies more widely,” says CEO Pawel Zarzycki.
To whom? Zarzycki says the company currently has pilot projects underway in the banking sector, where banks see opportunities to leverage ontologies and semantic management frameworks that provide a more natural way of sharing and reusing knowledge and of expressing business rules for purposes such as lead generation and market intelligence. In the telco sector, another pilot project is underway to support asset management and impact assessment efforts, and in the legal arena, the Poland-based company is working with the Polish branch of international legal firm Eversheds on applying semantics to legal self-assessment issues. A semantic knowledge base can make it possible to automate the tasks behind assessing a legal issue, he says, opening the door to outsourcing this job directly to the entity pursuing the case, with the lawyer stepping in mostly at the review stage. That saves a lot of time and money.
Ontologies are getting a thumbs up to serve as the basis for the Office of Financial Research’s Instruments database. Last week, the Data & Technology Subcommittee of the OFR Financial Research Advisory Committee (FRAC) recommended that the OFR “adopt the goal of developing and validating a comprehensive ontology for financial instruments as part of its overall effort to meet its statutory requirement to ‘prepare and publish’ a financial instrument reference database.”
The Instruments database will define the official meaning of financial instruments for the financial system — derivatives, securities, and so on. The subcommittee recommends that the OFR conduct its own evaluation of private sector initiatives in this area, including the Financial Industry Business Ontology (FIBO), to assess whether and how an ontology can support transparency and financial stability analysis.
FIBO, which The Semantic Web Blog discussed in detail most recently here, is designed to improve visibility to the financial industry and the regulatory community by standardizing the language used to precisely define the terms, conditions, and characteristics of financial instruments; the legal and relationship structure of business entities; the content and time dimensions of market data; and more. The effort is spearheaded by the Object Management Group and the Enterprise Data Management (EDM) Council.
Everyone knows The Clapper for turning electric equipment on and off, right? Sing along: “Clap-on, clap-off….The Clapper.”
Things have come a long way since then, with security, energy management, and more coming along to help turn the average house into a smarter home. Now comes a chance to take things to another level, with semantic-based resource discovery and orchestration in home and building automation. Research led by Michele Ruta, assistant professor at Technical University of Bari, takes on the challenge of bringing together the worlds of semantic web and automation in order to improve what Ruta says are very poor user interaction scenarios.
How? In home automation solutions today, he says, the user is limited to very basic scenarios and very static interaction that requires pre-programming the capabilities the home can assume. “It should be possible to have dynamic interaction, more intelligent interaction with the user, and decisions should be done according to user interest, to a user’s profile,” Ruta says. Semantic technology can be called upon to annotate users’ profiles, interests, and needs against home automation profile options, and make the match between them, he says.
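The matching Ruta describes — annotating user profiles and home automation capabilities with concepts, then finding the capabilities that satisfy the profile — can be illustrated with a toy sketch. All names and concepts below are hypothetical, and real semantic matchmaking would use ontology reasoning rather than plain set containment; this only shows the shape of the idea.

```python
# Toy sketch of semantic matchmaking between a user profile and home
# automation capabilities: both are annotated with sets of concept
# identifiers (all names here are invented), and a capability "covers"
# the profile when its annotations include every requested concept.

USER_PROFILE = {"ex:EnergySaving", "ex:EveningLighting"}

CAPABILITIES = {
    "dimmer-scene": {"ex:EveningLighting", "ex:EnergySaving", "ex:MoodLighting"},
    "full-bright":  {"ex:EveningLighting"},
}

def matching_capabilities(profile, capabilities):
    """Return capability names whose annotations cover the whole profile."""
    return [name for name, concepts in capabilities.items()
            if profile <= concepts]  # subset test: every concept covered

print(matching_capabilities(USER_PROFILE, CAPABILITIES))
# ['dimmer-scene']
```

The point of the semantic layer is that this match can be computed dynamically as the profile changes, rather than pre-programmed as fixed scenarios.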
Look, up in the sky! It’s a bird, it’s a plane, no – it’s an Amazon drone!
Admittedly, Amazon Prime Air’s unmanned aerial vehicles in commercial use are still a little ways off. But such technology – along with other recent innovations, such as the use of unmanned aircraft in crop-dusting or even Department of Homeland Security border applications, or future capabilities to extend the notion of auto-piloting in passenger airplanes using autonomous machine logic to control airspace and spacing between planes – needs to be accounted for in terms of its impact on the airspace. The Next-Generation Air Transportation System is taking on the change in the management and operation of the national air transportation system.
And semantic technology, natural language processing, and machine learning, too, will have a hand in helping out, by fostering collaboration among the agencies that will be working together to develop the system, including the Federal Aviation Administration, the U.S. Air Force, U.S. Navy, and the National Aeronautics and Space Administration, under the coordination of the Joint Planning and Development Office. These agencies will need to leverage each other’s knowledge and research, as well as ensure – as necessary – data privacy.
We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:
The W3C documentation for the Data Catalog Vocabulary (DCAT) says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web….By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
Meanwhile, The RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
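The multidimensional model behind the Data Cube vocabulary — observations identified by a combination of dimension values, each carrying a measured value — can be sketched without any RDF machinery at all. The dimensions and figures below are invented purely for illustration.

```python
# Sketch of the multidimensional model underpinning the Data Cube
# vocabulary: each observation is identified by a tuple of dimension
# values (here year and region, both invented) and carries a measure.

observations = {
    # (year, region) -> measured value (example figures, not real data)
    (2012, "North"): 1200,
    (2012, "South"): 950,
    (2013, "North"): 1250,
}

def slice_by_region(obs, region):
    """A 'slice' fixes one dimension and leaves the others free."""
    return {year: value for (year, r), value in obs.items() if r == region}

print(slice_by_region(observations, "North"))
# {2012: 1200, 2013: 1250}
```

What the vocabulary adds on top of this structure is a standard RDF representation, so that the dimensions, measures, and slices become linkable to related datasets and concepts on the web.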
Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”
Industry leaders in sectors including banking and financial services look to have high hopes for semantic technology. They’re thinking about FIBO (Financial Industry Business Ontology) and leveraging semantic technology for more traditional types of data integration and analytics projects. At Cognizant, Thomas Kelly, a director in its Enterprise Information Management practice – and the author of this white paper on How Semantic Technology Drives Agile Business – sees a positive development: clients in the Fortune 500 space like these “are maturing in their use of semantic technology, from a project focus to more enterprise initiatives.”
The interest in FIBO, he says, is representative of an overall interest across industries in leveraging industry ontologies as mechanisms to help companies better standardize, align, and learn from the output of industry-wide efforts. The attention that industry analysts, including Gartner, have put on the semantic web in the last year – not to mention regulators beginning to consider its use in sharing information on a regulatory basis – has helped increase interest by commercial organizations, Kelly notes. That’s also evident in the life sciences sector, as another example, with the efforts of the FDA/PhUSE Semantic Technology Working Group Project to include a draft set of existing CDISC standards in RDF.
The pickup in attention to many things semantic ties to the different perspectives that organizations need to manage about their data, which include “how they currently think of their data, how it is currently perceived in managing business operations; and where they are looking to go in the future that makes it more inclusive of what’s going on in the world outside their walls – that is, how the rest of the industry looks at this data and uses it to support their business processes,” he says.
News came the other week that Senzari had announced the MusicGraph knowledge engine for music. The Semantic Web Blog had a chance to learn a little bit more about what’s underway thanks to a chat with Senzari’s COO Demian Bellumio.
MusicGraph used to go by the geekier name of Adaptable Music Parallel Processing Platform, or AMP3 for short, for helping users control their Internet radio. “We wanted to put more knowledge into our graph. The idea was we have really cool and interesting data that is ontologically connected in ways never done before,” says Bellumio. “We wanted to put it out in the world and let the world leverage it, and MusicGraph is a production of that vision.”
Since its announcement earlier this month about launching the consumer version on the Firefox OS platform, which lets users make complex queries about music and then learn about and listen to the results, Senzari has submitted its technology to be offered for the iOS, Android, and Windows Mobile platforms. “You can ask anything you can think of in the music realm. We connect about 1 billion different points to respond to these queries,” he says. Its data covers more than twenty million songs, connected to millions of individual albums and artists across all genres, with extracted information on everything from keys to concepts derived from lyrics.