In the video below, Dr. James Melton, a Lecturer in Comparative Politics at University College London, gives a presentation on Constitute. Constitute is a new way to explore the constitutions of the world. The origins of the project date back to 2005 with the Comparative Constitutions Project, which has the stated goal of cataloging the contents of all constitutions written in independent states since 1789. To date, that work has resulted in a collection of 900+ constitutions and 2,500+ amendments. A rigorous formal survey instrument including 669 questions was then applied to each of these “constitutional events,” resulting in the base data that the team had to work with. Melton and his group wanted to create a system that allowed for open sharing of this information, not just with researchers but with anyone who wants to explore the world’s constitutions. They also needed the system to be flexible enough to handle changes, since, as Melton points out, “…roughly 15% of the countries in the world change their constitution every single year.”
Posts Tagged ‘Juan Sequeda’
[Editor's note: this guest post was co-written by Héctor Pérez-Urbina (Clark & Parsia) and Juan Sequeda (Capsenta)]
Important enterprise business logic is often buried deep within a complex ecosystem of applications. Domain constraints and assumptions, as well as the main actors and the relations with one another, exist only implicitly in thousands of lines of code distributed across the enterprise.
Sure, there might be some complex UML diagrams somewhere accompanied by hundreds of pages of use case descriptions; but there is no common global representation of the domain that can be effectively shared by enterprise applications. When the domain inevitably evolves, applications must be updated one by one, forcing developers to dive into long-forgotten code to try to make sense of what needs to be done. Maintenance in this kind of environment is time-consuming, error-prone, and expensive.
The suite of semantic technologies, including OWL, allows the creation of rich domain models (a.k.a. ontologies) in which business logic can be captured and maintained. Crucially, unlike UML diagrams, OWL ontologies are machine-processable, so they can be directly exploited by applications.
Reasoning is the task of deriving implicit facts from a set of given explicit facts. These facts can be expressed in OWL 2 ontologies and stored in RDF triplestores. For example, the fact “a Student is a Person” can be expressed in an ontology, while the fact “Bob is a Student” can be stored in a triplestore. A reasoner is a software application that is able to reason. For example, a reasoner is able to infer the implicit fact “Bob is a Person.”
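The subclass-style inference above can be illustrated with a minimal sketch in Python (the class and instance names come from the example; the code is a toy forward-chaining loop, not a real OWL 2 reasoner):

```python
# Toy forward-chaining sketch of subclass reasoning.
# TBox (ontology): subclass axioms. ABox (data): type assertions.
subclass_of = {("Student", "Person")}   # "a Student is a Person"
types = {("Bob", "Student")}            # "Bob is a Student"

def infer_types(types, subclass_of):
    """Derive implicit type facts: if x is a C and C is a subclass of D,
    then x is also a D. Repeat until no new facts appear (a fixpoint)."""
    inferred = set(types)
    changed = True
    while changed:
        new = {(x, d) for (x, c) in inferred
                      for (c2, d) in subclass_of if c == c2}
        changed = not new <= inferred
        inferred |= new
    return inferred

print(infer_types(types, subclass_of))
# The result contains ("Bob", "Person") -- the implicit fact "Bob is a Person."
```

A production reasoner handles far more expressive axioms (and does so efficiently), but the principle is the same: compute facts entailed by the explicit ones.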
The reasoning tasks considered in OWL 2 are ontology consistency, class satisfiability, classification, instance checking, and conjunctive query answering.
OWL, the Web Ontology Language, has been standardized by the W3C as a powerful language to represent knowledge (i.e., ontologies) on the Web. OWL has two functionalities. The first is to express knowledge in an unambiguous way. This is accomplished by representing knowledge as a set of concepts within a particular domain and the relationships between these concepts. If we only take this functionality into account, then the goal is very similar to that of UML or Entity-Relationship diagrams. The second functionality is the ability to draw conclusions from the knowledge that has been expressed; in other words, to infer implicit knowledge from the explicit knowledge. We call this reasoning, and it is what distinguishes OWL from UML and other modeling languages.
OWL evolved from several proposals and became a standard in 2004. It was subsequently extended in 2009 by a second standard version, OWL 2. With OWL, you have the possibility of expressing all kinds of knowledge. The basic building blocks of an ontology are concepts (a.k.a. classes) and the relationships between the classes (a.k.a. properties). For example, if we were to create an ontology about a university, the classes would include Student, Professor, and Course, while the properties would include isEnrolled, because a Student is enrolled in a Course, and isTaughtBy, because a Course is taught by a Professor.
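The university example can be sketched in Turtle syntax as follows (the `ex:` namespace is hypothetical; the class and property names are the ones from the text):

```turtle
@prefix ex:   <http://example.org/university#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Student   a owl:Class .
ex:Professor a owl:Class .
ex:Course    a owl:Class .

ex:isEnrolled a owl:ObjectProperty ;
    rdfs:domain ex:Student ;
    rdfs:range  ex:Course .

ex:isTaughtBy a owl:ObjectProperty ;
    rdfs:domain ex:Course ;
    rdfs:range  ex:Professor .
```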
Triplestores are Database Management Systems (DBMS) for data modeled using RDF. Unlike Relational Database Management Systems (RDBMS), which store data in relations (or tables) and are queried using SQL, triplestores store RDF triples and are queried using SPARQL.
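To make the SQL/SPARQL contrast concrete, here is a sketch of a SPARQL query over hypothetical data (the `ex:` namespace and the `ex:name` and `ex:profSmith` identifiers are illustrative): instead of selecting rows from tables, it matches triple patterns against the graph.

```sparql
PREFIX ex: <http://example.org/university#>

# "Names of students enrolled in a course taught by profSmith"
SELECT ?name
WHERE {
  ?student a ex:Student ;
           ex:isEnrolled ?course ;
           ex:name       ?name .
  ?course  ex:isTaughtBy ex:profSmith .
}
```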
A key feature of many triplestores is the ability to do inference. It is important to note that a DBMS typically offers the ability to deal with concurrency, security, logging, recovery, and updates, in addition to loading and storing data. Not all triplestores offer all of these capabilities (yet).
Triplestores can be broadly classified into three categories: native triplestores, RDBMS-backed triplestores, and NoSQL triplestores.
If you are learning about the Semantic Web, one of the things you will hear is that the Semantic Web assumes the Open World. In this post, I will clarify the distinction between the Open World Assumption and the Closed World Assumption.
The Closed World Assumption (CWA) is the assumption that what is not known to be true must be false.
The Open World Assumption (OWA) is the opposite. In other words, it is the assumption that what is not known to be true is simply unknown.
Consider the following statement: “Juan is a citizen of the USA.” Now, what if we were to ask, “Is Juan a citizen of Colombia?” Under the CWA, the answer is no; under the OWA, the answer is “I don’t know.”
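The contrast can be shown with a toy sketch in Python over the single fact from the example (the triple encoding and function names are illustrative):

```python
# Toy illustration of query answering under the CWA vs. the OWA.
# The only known fact, as in the citizenship example above:
facts = {("Juan", "citizenOf", "USA")}

def ask_cwa(facts, query):
    """Closed world: anything not known to be true is false."""
    return "yes" if query in facts else "no"

def ask_owa(facts, query, negated=frozenset()):
    """Open world: a fact is false only if its negation is explicitly
    known; otherwise the answer is simply unknown."""
    if query in facts:
        return "yes"
    if query in negated:
        return "no"
    return "unknown"

q = ("Juan", "citizenOf", "Colombia")
print(ask_cwa(facts, q))   # -> no
print(ask_owa(facts, q))   # -> unknown
```

The same fact base yields different answers purely because of the assumption in force, which is exactly the point: the data does not change, the interpretation of missing data does.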
When do CWA and OWA apply?
Last week, the 11th International Semantic Web Conference (ISWC 2012) took place in Boston. It was an exciting week to learn about the advances of the Semantic Web and current applications.
The first two days, Sunday November 11 and Monday November 12, consisted of 18 workshops and 8 tutorials. The following three days (Tuesday November 13 – Thursday November 15) consisted of keynotes, presentations of academic and in-use papers, the Big Graph Data Panel, and industry presentations. It is basically impossible to attend all of the interesting presentations, so I am going to try my best to summarize and offer links to everything that I can.
SKOS, which stands for Simple Knowledge Organization System, is a W3C standard, based on other Semantic Web standards (RDF and OWL), that provides a way to represent controlled vocabularies, taxonomies and thesauri. Specifically, SKOS itself is an OWL ontology and it can be written out in any RDF syntax.
Before we dive into SKOS, what is the difference between Controlled Vocabulary, Taxonomy and Thesaurus?
A controlled vocabulary is a list of terms which a community or organization has agreed upon. For example: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday are the days of the week.
A taxonomy is a controlled vocabulary organized in a hierarchy. For example, we can have the terms Computer, Tablet, and Laptop, where the concepts Tablet and Laptop sit below Computer in the hierarchy because a Tablet and a Laptop are types of Computer.
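The Computer/Tablet/Laptop taxonomy can be sketched in SKOS as follows (the `ex:` namespace is hypothetical; note that SKOS expresses the hierarchy with `skos:broader` rather than subclassing):

```turtle
@prefix ex:   <http://example.org/devices#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

ex:Computer a skos:Concept ;
    skos:prefLabel "Computer"@en .

ex:Tablet a skos:Concept ;
    skos:prefLabel "Tablet"@en ;
    skos:broader   ex:Computer .

ex:Laptop a skos:Concept ;
    skos:prefLabel "Laptop"@en ;
    skos:broader   ex:Computer .
```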
Today, the World Wide Web Consortium announced that R2RML has achieved Recommendation status. As stated on the W3C website, R2RML is “a language for expressing customized mappings from relational databases to RDF datasets. Such mappings provide the ability to view existing relational data in the RDF data model, expressed in a structure and target vocabulary of the mapping author’s choice.” In the life cycle of W3C standards creation, today’s announcement means that the specifications have gone through extensive community review and revision and that R2RML is now considered stable enough for widespread distribution in commodity software.
Richard Cyganiak, one of the Recommendation’s editors, explained why R2RML is so important. “In the early days of the Semantic Web effort, we’ve tried to convert the whole world to RDF and OWL. This clearly hasn’t worked. Most data lives in entrenched non-RDF systems, and that’s not likely to change.”
Yesterday, the W3C announced the advancement to Proposed Recommendations of two Relational Database to RDF (RDB2RDF) documents: 1) R2RML: RDB to RDF Mapping Language and 2) A Direct Mapping of Relational Data to RDF. Additionally, two Working Group Notes were also published: R2RML and Direct Mapping Test Cases and RDB2RDF Implementation Report.
Given that a vast amount of data in enterprises and on the Web resides in relational databases, it is paramount to have methods that expose relational data as RDF so that Semantic Web applications can interact with relational databases. The R2RML and Direct Mapping standards bridge this gap. The Direct Mapping is an automatic default mapping, while R2RML is a mapping language with which users can customize the mappings. With these two standards, we will now be able to see more and more relational data in the Linked Data cloud and as part of Semantic Web applications.
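To give a flavor of a customized mapping, here is a sketch of an R2RML mapping for a hypothetical EMP table with EMPNO and ENAME columns (table, columns, and the `ex:` vocabulary are illustrative):

```turtle
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/ns#> .

<#EmployeeMap> a rr:TriplesMap ;
    rr:logicalTable [ rr:tableName "EMP" ] ;
    rr:subjectMap [
        rr:template "http://example.org/employee/{EMPNO}" ;
        rr:class ex:Employee
    ] ;
    rr:predicateObjectMap [
        rr:predicate ex:name ;
        rr:objectMap [ rr:column "ENAME" ]
    ] .
```

Each EMP row becomes an `ex:Employee` resource whose IRI is built from EMPNO, with its ENAME value exposed via `ex:name`; the Direct Mapping, by contrast, would derive all of these choices automatically from the schema.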
A little bit of RDB2RDF history
Tim Berners-Lee wrote a Design Issue on “Relational Database on the Semantic Web” dating back to 1998. During the 2000s, several tools, such as R2O, D2RQ, Virtuoso RDF Views, Triplify, and Ultrawrap, were built to expose relational databases as RDF and even allow SPARQL queries to be executed directly on the relational database.
In October 2007, the W3C organized a workshop, RDF Access to Relational Databases, to gauge interest in mapping relational databases to RDF. The outcome of this workshop was the formation of the RDB2RDF Incubator Group in 2008. The objective of this group was to classify existing approaches to mapping relational databases to RDF and then to decide whether a standard was necessary. The Incubator Group had a face-to-face meeting in October 2008 and concluded its work with two deliverables: a Survey of Current Approaches for Mapping of Relational Databases to RDF and the RDB2RDF XG Final Report. The conclusion was to recommend the formation of a Working Group to standardize a mapping language.