Learning

Introduction to: OWL Profiles

Name Tag: Hello, we are the OWL family

OWL, the Web Ontology Language, has been standardized by the W3C as a powerful language for representing knowledge (i.e., ontologies) on the Web. OWL serves two functions. The first is to express knowledge in an unambiguous way. This is accomplished by representing knowledge as a set of concepts within a particular domain and the relationships between those concepts. If we consider only this function, the goal is very similar to that of UML or Entity-Relationship diagrams. The second function is to draw conclusions from the knowledge that has been expressed; in other words, to infer implicit knowledge from the explicit knowledge. We call this reasoning, and it is what distinguishes OWL from UML and other modeling languages.

OWL evolved from several proposals and became a W3C standard in 2004. It was subsequently extended by a second standard version, OWL 2, in 2009. With OWL, you have the possibility of expressing all kinds of knowledge. The basic building blocks of an ontology are concepts (a.k.a. classes) and the relationships between the classes (a.k.a. properties). For example, if we were to create an ontology about a university, the classes would include Student, Professor, and Course, while the properties would include isEnrolled, because a Student is enrolled in a Course, and isTaughtBy, because a Course is taught by a Professor.
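To make this concrete, here is a minimal sketch of how the university classes and properties above might be declared as OWL statements, using Python's rdflib and a hypothetical namespace (http://example.org/university#) that is not from the original article:

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    UNI = Namespace("http://example.org/university#")  # hypothetical namespace

    g = Graph()
    g.bind("uni", UNI)

    # Declare the classes from the example
    for cls in (UNI.Student, UNI.Professor, UNI.Course):
        g.add((cls, RDF.type, OWL.Class))

    # isEnrolled: a Student is enrolled in a Course
    g.add((UNI.isEnrolled, RDF.type, OWL.ObjectProperty))
    g.add((UNI.isEnrolled, RDFS.domain, UNI.Student))
    g.add((UNI.isEnrolled, RDFS.range, UNI.Course))

    # isTaughtBy: a Course is taught by a Professor
    g.add((UNI.isTaughtBy, RDF.type, OWL.ObjectProperty))
    g.add((UNI.isTaughtBy, RDFS.domain, UNI.Course))
    g.add((UNI.isTaughtBy, RDFS.range, UNI.Professor))

    print(g.serialize(format="turtle"))

An OWL reasoner applied to such a graph could then infer, for example, that any individual appearing as the subject of isEnrolled is a Student, which is the kind of implicit knowledge the post describes.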

Read more

White Paper: “The Business Value of Semantic Technology”


Free Download at
http://bit.ly/WqS34V

“If you don’t understand what your software engineers are talking about, perhaps it’s because they are using a vocabulary they invented for the problem they are solving.” So begins a white paper titled “The Business Value of Semantic Technology,” by Chris Moran, CTO of Information Management Solutions Consultants, Inc.

Moran continues, “Engineers invent a vocabulary and data structure for each system they build and each problem they solve, and only the engineers who built the system understand this structure and vocabulary. Even other engineers must learn it in order to make the data usable. In most enterprises today, we have as many different ways to ask questions of our data as we have systems to store it. We have as many different vocabularies and data structures as we have systems. The problem is actually worse than it sounds….

Read more

Introduction to: Triplestores

Badge: Hello, my name is Triplestore

Triplestores are Database Management Systems (DBMS) for data modeled using RDF. Unlike Relational Database Management Systems (RDBMS), which store data in relations (or tables) and are queried using SQL, triplestores store RDF triples and are queried using SPARQL.

A key feature of many triplestores is the ability to do inference. It is important to note that a DBMS typically offers the capacity to deal with concurrency, security, logging, recovery, and updates, in addition to loading and storing data. Not all triplestores offer all of these capabilities (yet).
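As an illustration, here is a minimal sketch of the store-and-query workflow, using Python's rdflib as an in-memory triplestore; the FOAF triples below are made-up sample data, and a production triplestore would expose the same SPARQL interface over persistent storage:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, FOAF

    EX = Namespace("http://example.org/")  # hypothetical namespace

    # Load a few triples into the in-memory store
    g = Graph()
    g.add((EX.alice, RDF.type, FOAF.Person))
    g.add((EX.alice, FOAF.name, Literal("Alice")))
    g.add((EX.alice, FOAF.knows, EX.bob))

    # Query with SPARQL rather than SQL
    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name WHERE {
            ?person a foaf:Person ;
                    foaf:name ?name .
        }
    """)
    for row in results:
        print(row.name)  # -> Alice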

Triplestore Implementations

Triplestores can be broadly classified into three categories: native triplestores, RDBMS-backed triplestores, and NoSQL triplestores. Read more

Getting Started with the Semantic Web Using SPARQL with R

A new article on R Bloggers explains how to get “up and running on the Semantic Web” using SPARQL with R in under five minutes. The article states, “We’ll use data at the Data.gov endpoint for this example. Data.gov has a wide array of public data available, making this example generalizable to many other datasets. One of the key challenges of querying a Semantic Web resource is knowing what data is accessible. Sometimes the best way to find this out is to run a simple query with no filters that returns only a few results or to directly view the RDF. Fortunately, information on the data available via Data.gov has been cataloged on a wiki hosted by Rensselaer. We’ll use Dataset 1187 for this example. It’s simple and has interesting data – the total number of wildfires and acres burned per year, 1960-2008.” Read more
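The article's example uses R's SPARQL package, but the same exploratory workflow can be sketched in Python with SPARQLWrapper; the endpoint URL below is a placeholder standing in for the Data.gov endpoint described above, and the unfiltered LIMIT query reflects the article's advice for discovering what data an unfamiliar endpoint exposes:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Placeholder URL; substitute the actual Data.gov SPARQL endpoint
    endpoint = SPARQLWrapper("http://example.org/sparql")

    # A simple query with no filters and a small LIMIT is a quick way to see
    # what kinds of triples the endpoint actually holds
    endpoint.setQuery("""
        SELECT ?s ?p ?o
        WHERE { ?s ?p ?o }
        LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])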

Free Course on Semantic Web Technologies Starting February 4

The Hasso Plattner Institute is offering a free course in English introducing the Semantic Web. The article states, “The conventional Internet search reaches the limits of its power when the computer is expected to correctly interpret the meaning of information and not simply a character string. How the information expressed in natural language is extended in the so-called Semantic Web to enable a machine-readable interpretation of its meaning (semantics) is shown in the new open online course starting February 4th at http://www.openhpi.de. Registration for this free Hasso Plattner Institute (HPI) course is now open. This third course in the new Internet educational platform, launched in September 2012, lasts six weeks and is being held in English. Those who successfully complete the course will receive a certificate from the HPI.” Read more

New Year, New Skills: Get Ready For The Future With MOOCs


Was one of your New Year’s resolutions to build up your knowledge, skills and talents for the new digital world? If so, there are plenty of online options to help you achieve your goals, and at no cost to you, from the crop of MOOCs (massive open online courses) that’s sprung up.

The Semantic Web Blog scoured some of them to present possible courses of study to consider in pursuit of your goals:

Coursera:

  • Data scientists-in-training, Johns Hopkins Bloomberg School of Public Health assistant professor of biostatistics Jeff Leek wants to help you get a leg up on Big Data – and the job doors that understanding how to work with it opens up – with this applied statistics course focusing on data analysis. The course notes that there’s a shortage of individuals with the skills to find the right data to answer a question, understand the processes underlying the data, discover the important patterns in the data, and communicate results to have the biggest possible impact, so why not work to become one of them and land what Google chief economist Hal Varian reportedly calls the sexy job for the next ten years – statistician (really). The course starts Jan. 22.
  • We’ve seen a lot about robots in the news over the last month, from the crowd-funded humanoid service robot Roboy, the brainchild of the Artificial Intelligence Laboratory of the University of Zurich, to Vomiting Larry, a projectile vomiter developed to help scientists better understand the spread of noroviruses. If you’d like to learn about what’s behind robots that can act intelligently (sorry, Larry, but you might not qualify here), you want to learn more about AI. And you can, with a course starting Jan. 28 taught by Dr. Gerhard Wickler and Prof. Austin Tate, both of the University of Edinburgh.
  • Siri, where can I go to find out more about natural language processing? One option: Spend ten weeks starting February 11 learning about NLP with Michael Collins, the Vikram S. Pandit Professor of Computer Science at Columbia University. Students will have a chance to study mathematical and computational models of language, and the application of these models to key problems in natural language processing, with a focus on machine learning methods.

Read more

Semantic Technologist Gets In On The Ground Floor

One of the exciting things about being a semantic technologist is the opportunity to be in on the ground floor of things as companies revamp, revise, and renew their infrastructures for the Web 3.0 world.

That’s the position that Keith DeWeese finds himself in. DeWeese recently moved from The Tribune Company, where he led efforts in applying semantic technology to the publisher’s content (see story here), to Ascend Learning, a company that provides technology-based education products with a focus on the healthcare sector.

There, as principal content architect, he is again championing the power of semantic technology for online content. “What’s cool is that Ascend is in a state of redefining what it does, how it works, its whole platform,” DeWeese says. Ascend wants to take people from the beginning stages of their careers, when they’re learning the basics, and work with them throughout their lives, so that as they progress, become more knowledgeable about their profession or specialization, and work toward different exams, it has the tools to engage with them at each stage of that lifecycle.

“It’s really great because there’s an openness and willingness to try different approaches to making content available to end users.”

Read more

New “Linked Data” Book Launches – 50% Discount for Our Readers

This week, Manning Publications is launching the book “Linked Data,” by David Wood, Marsha Zaidman, Luke Ruth, and Michael Hausenblas.

As part of that launch, Manning is offering a one-day 50% discount for readers of SemanticWeb.com. The discount applies to all versions of “Linked Data”: eBook, print books, and Manning’s “MEAP” books (more on MEAP below). To claim the discount, use coupon code “12linksw” when ordering.

This offer expires at 11:59 pm (US EST) on December 6, so if you’re interested, act fast!

About the Book (description by David Wood):

The flexible, unstructured nature of the Web is being extended to act as a global database of structured data. Linked Data is a standards-driven model for representing structured data on the Web that gives developers, publishers, and information architects a consistent, predictable way to publish, merge and consume data. The Linked Data model offers the potential to standardize Web data in the same way that SQL standardized large-scale commercial databases. Linked Data has been adopted by many well-known institutions, including Google, Facebook, IBM, Oracle and government agencies, as well as popular Open Source projects such as Drupal.

Read more

Introduction to: Open World Assumption vs Closed World Assumption

Nametag: Hello, my name is O.W.A.

If you are learning about the Semantic Web, one of the things you will hear is that the Semantic Web assumes the Open World. In this post, I will clarify the distinction between the Open World Assumption and the Closed World Assumption.

The Closed World Assumption (CWA) is the assumption that what is not known to be true must be false.

The Open World Assumption (OWA) is the opposite. In other words, it is the assumption that what is not known to be true is simply unknown.

Consider the following statement: “Juan is a citizen of the USA.” Now, what if we were to ask, “Is Juan a citizen of Colombia?” Under the CWA, the answer is no. Under the OWA, the answer is “I don’t know.”
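As a rough sketch of the difference (the facts and function names below are made up for illustration), a closed-world check treats anything missing from the data as false, while an open-world answer distinguishes “unknown” from “false”:

    # The only known fact: Juan is a citizen of the USA.
    known_citizenships = {("Juan", "USA")}

    def cwa_is_citizen(person, country):
        # Closed World Assumption: what is not known to be true is false.
        return (person, country) in known_citizenships

    def owa_is_citizen(person, country):
        # Open World Assumption: what is not known to be true is unknown.
        if (person, country) in known_citizenships:
            return "yes"
        return "unknown"

    print(cwa_is_citizen("Juan", "Colombia"))  # False -> "no" under the CWA
    print(owa_is_citizen("Juan", "Colombia"))  # "unknown" under the OWA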

When do CWA and OWA apply?

Read more

Easy to Use Ontology?

Michael Uschold of Semantic Arts has offered an answer to the question, how can you ensure that an ontology is easy to use? Uschold responds, “This is a complex and multi-faceted issue. The answer depends on the audience, who have varying degrees of a) knowledge in the domain, b) technical background, c) awareness of what the ontology is for and d) need to directly work with the ontology. For everyone, and especially non-technical people, it is important for there to be natural language comments explaining the meaning of the concepts. It is helpful to have an overview of the ontology which has only the top few dozen classes and relationships (like a UML class diagram).”

He goes on, “It is good to have HTML documents that can be automatically generated from various tools. It should be possible to seamlessly move between levels of detail, from the very general to the very specific and back.” Read more
