Princeton Establishes New Center for Statistics and Machine Learning, Names John Storey Director

Daniel Day of News at Princeton reports, “Princeton University has established the Center for Statistics and Machine Learning. John Storey, a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, has been named the center’s director. The center will anchor the teaching of and research in statistics and machine learning on campus, Storey said, offering an undergraduate certificate as well as graduate training in the field.” Read more

Introduction to: Linked Data Platform

In its ongoing mission to lead the World Wide Web to its full potential, the W3C recently released the first specification for an entirely new kind of system. Linked Data Platform 1.0 defines a read-write Linked Data architecture, based on HTTP access to web resources described in RDF. To put that more simply, it proposes a way to work with pure RDF resources almost as if they were web pages.

Because the Linked Data Platform (LDP) builds upon the classic HTTP request and response model, and because it aligns well with things like REST, Ajax, and JSON-LD, mainstream web developers may soon find it much easier to leverage the power and benefits of Linked Data. It’s too early to know how big an impact it will actually make, but I’m confident that LDP is going to be an important bridge across the ever-shrinking gap between today’s Web of hyperlinked documents and the emerging Semantic Web of Linked Data. In today’s post, I’m going to introduce you to this promising newcomer by covering the most salient points of the LDP specification in simple terms. So, let’s begin with the obvious question…


What is a Linked Data Platform?

A Linked Data Platform is any client, server, or client/server combination that conforms in whole or in sufficient part to the LDP specification, which defines techniques for working with Linked Data Platform Resources over HTTP. That is to say, it allows Linked Data Platform Resources to be managed using HTTP methods (GET, POST, PUT, etc.). A resource is either something that can be fully represented in RDF or something, such as a binary file, that may not have a useful RDF representation. When managed by an LDP, each is referred to as a Linked Data Platform Resource (LDPR), but further distinguished as either a Linked Data Platform RDF Source (LDP-RS) or a Linked Data Platform Non-RDF Source (LDP-NR).
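To make that request/response model concrete, here is a minimal sketch, using only Python's standard library, of the headers a client might send when POSTing a new RDF source into an LDP container. The container URL, the Slug value, and the Turtle body are all invented for illustration; the spec itself is the normative reference for header semantics.

```python
# A minimal sketch (no live server): building the HTTP request a client
# might send to create a new RDF resource (LDP-RS) inside an LDP container.
# The container URL, slug, and Turtle body below are hypothetical.

def build_ldp_create_request(container_url, slug, turtle_body):
    """Return (request_line, headers, body) for an LDP resource-creation POST."""
    headers = {
        "Content-Type": "text/turtle",  # the new resource is pure RDF
        "Slug": slug,                   # hint for the new resource's URI
        # Declare the kind of resource being created:
        "Link": '<http://www.w3.org/ns/ldp#Resource>; rel="type"',
    }
    request_line = "POST %s HTTP/1.1" % container_url
    return request_line, headers, turtle_body

turtle = """@prefix dcterms: <http://purl.org/dc/terms/> .
<> dcterms:title "My first LDP resource" ."""

req_line, headers, body = build_ldp_create_request("/containers/notes/", "note1", turtle)
print(req_line)                    # POST /containers/notes/ HTTP/1.1
for k, v in sorted(headers.items()):
    print("%s: %s" % (k, v))
```

On success, an LDP server would respond with a 201 Created and a Location header giving the URI it minted for the new resource.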

Read more

New Open Source Graph Database Cayley Unveiled (Video – Part 2)

[Editor's note: This is Part 2 of a 3 part series. See Part 1 and Part 3]

Barak Michener, Software Engineer, Knowledge NYC, has posted on the Google Open Source Blog about “Cayley, an open source graph database”: “Four years ago this July, Google acquired Metaweb, bringing Freebase and linked open data to Google. It’s been astounding to watch the growth of the Knowledge Graph and how it has improved Google search to delight users every day. When I moved to New York last year, I saw just how far the concepts of Freebase and its data had spread through Google’s worldwide offices. I began to wonder how the concepts would advance if developers everywhere could work with similar tools. However, there wasn’t a graph available that was fast, free, and easy to get started working with. With the Freebase data already public and universally accessible, it was time to make it useful, and that meant writing some code as a side project.”

The post continues: “Cayley is a spiritual successor to graphd; it shares a similar query strategy for speed. While not an exact replica of its predecessor, it brings its own features to the table: RESTful API, multiple (modular) backend stores such as LevelDB and MongoDB, multiple (modular) query languages, easy to get started, simple to build on top of as a library, and of course open source. Cayley is written in Go, which was a natural choice. As a backend service that depends upon speed and concurrent access, Go seemed like a good fit.”
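Cayley's Gremlin-inspired query language works by following edges outward from a set of starting nodes. The toy Python below is not Cayley's actual API; it is a hedged sketch, with made-up data and names, of what a traversal along the lines of `g.V("alice").Out("follows").All()` is conceptually doing over a store of triples.

```python
# Illustrative only: a toy in-memory graph with a Gremlin-style "out"
# traversal, mimicking the flavor of Cayley's query interface. This is
# NOT Cayley's real API; the data and names are invented for the sketch.

class ToyGraph:
    def __init__(self, triples):
        self.triples = triples  # list of (subject, predicate, object)

    def out(self, starts, predicate):
        """Follow `predicate` edges outward from each node in `starts`."""
        return sorted({o for (s, p, o) in self.triples
                       if s in starts and p == predicate})

g = ToyGraph([
    ("alice", "follows", "bob"),
    ("bob", "follows", "carol"),
    ("alice", "follows", "dave"),
])

# Conceptually like g.V("alice").Out("follows").All() in Cayley:
print(g.out({"alice"}, "follows"))  # ['bob', 'dave']
```

A real graph database adds indexing so such traversals stay fast over billions of edges, which is where the "similar query strategy for speed" inherited from graphd comes in.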

Read more

New schema.org Technical Brief Available from LRMI and Cetis

The Learning Resource Metadata Initiative (LRMI) has released a technical briefing about schema.org. The paper was co-authored by Phil Barker and Lorna M. Campbell of Cetis, the Centre for Educational Technology, Interoperability and Standards.

LRMI, which we have reported on here, “has developed a common metadata framework for describing or ‘tagging’ learning resources on the web.”

The Cetis website says, “This briefing describes schema.org for a technical audience. It is aimed at people who may want to implement schema.org markup in websites or other tools they build but who wish to know more about the technical approach behind schema.org and how to implement it. We also hope that this briefing will be useful to those who are evaluating whether to implement schema.org to meet the requirements of their own organization.”

In making the announcement in a W3C list, Barker explained, “We often find that when explaining the technology approach of LRMI we are mostly talking about schema.org, so this briefing, which describes the schema.org specification for a technical audience should be of interest to anyone thinking about implementing or using LRMI in a website or other tool. It should also be of interest to people who plan to use schema.org for describing other types of resources.”
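For a flavor of what implementing schema.org markup involves, here is a hedged sketch that builds a learning-resource description as JSON-LD using schema.org terms. The resource and publisher are invented; `learningResourceType` is one of the LRMI properties adopted into schema.org.

```python
import json

# A hedged sketch of schema.org markup serialized as JSON-LD.
# The described resource is invented for illustration.
resource = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "name": "Introduction to Fractions",
    "learningResourceType": "lesson plan",  # LRMI property adopted by schema.org
    "typicalAgeRange": "9-11",
    "author": {"@type": "Organization", "name": "Example Publisher"},
}

markup = json.dumps(resource, indent=2)
print(markup)
```

Embedded in a page inside a `<script type="application/ld+json">` element, markup like this is what search engines consume; the briefing covers the microdata and RDFa alternatives as well.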

The technical brief can be downloaded from:


New Free Online Semantic Web Technologies Class Beginning in May, 2014


A new article reports that the Hasso Plattner Institute will launch a free online course on Semantic Web Technologies beginning on May 26, 2014. According to the article, “Anyone wishing to keep up with the current university knowledge on information technology will again have the opportunity in the coming year with the five free online courses to be offered by Hasso Plattner Institute (HPI). The new courses listed in the just released openHPI overview for 2014 are: Concepts in Parallel Computing, Networking via the Internet Protocol TCP/IP, Semantic Web Technologies, In-Memory Data Management and Introduction to Internet Security.” Read more

Semantic Web in Emergency Response Systems – UPDATE

Coordinated emergency response, built on Linked Data.

That is the vision of Bart van Leeuwen, Amsterdam firefighter and founder of the software company Netage. We’ve covered Bart’s work before here at SemanticWeb.com and at the Semantic Technology & Business Conference, and today, there is news that the work is advancing to a new stage.

In the Netherlands, there are 25 “Safety Regions” (pictured on the left). These organizations coordinate disaster management, fire services, and emergency medical teams. The regions are designed to enable various first responders to work together to deal with complex and severe crises and disasters.

Additionally, the Dutch Police serves as a primary partner organization in these efforts. The police force is a national organization, separate from the safety regions and divided into ten regions of its own. Read more

School Days May See Semantic Tech Help With Online Learning Assessments

It’s getting to be that time again – yup, school days are getting into full swing. Education, of course, is going through a lot of change these days. The Common Core State Standards initiative is changing what students must learn and what teachers must teach in the early grades, while school-specific online courses are now joined by massive open online courses (MOOCs) that are bringing new learning experiences on a large scale to everyone from high school and college students to adults who haven’t taken courses inside a live classroom for decades.

It’s under these circumstances that startup Cognii is hoping to make its mark by applying natural language processing and semantic technology to automate assessments for online learning and to grade essays for educators. Its initial focus is on the online education sector, though founder and CTO Dharmendra Kanejiya – whose background includes developing speech-recognition algorithms at Vlingo that were applied to Nuance Communications’ solutions when it acquired the company – says the technology can also have applicability in the real-world classroom.

Read more

Georgia Tech Embraces MOOC Model For MS In Computer Science

Now you can get a master’s degree in Computer Science from a prestigious university online. The New York Times has reported that the Georgia Institute of Technology is planning to offer the CS degree via the MOOC (massive open online course) model.

According to the Georgia Tech MS Computer Science program of study website, students can choose specializations in topics such as computational perception and robotics, which includes courses in artificial intelligence, machine learning, and autonomous multi-robot systems; interactive intelligence, which includes courses in knowledge-based AI and natural language; or machine learning, which offers electives covering theory, trading, and finance, among other options.

Read more

The Semantic Spin On YouTube’s GeekWeek

As you surely know by now, it’s GeekWeek on YouTube. But in case you haven’t been keeping up with every theme, today is Brainiac Tuesday, with its focus on science, education and knowledge – a particularly relevant topic for readers of this blog, we think.

We didn’t see any particularly semantic videos pointed out in the Tuesday Highlights. The recommendation of Wired and YouTube’s “How to Make a Giant Robot Mech” fed some hopes, but it looks like the big guy owes his smarts to a human pilot rather than artificial intelligence.

That’s not to say there isn’t good stuff among the pickings. Steve Spangler’s Favorite Experiments is a kick, for instance. And who knew that a volcano caused the French Revolution? But we’d like to hear it for semantic web, tech and related videos, too, on this Brainiac day.

To that end, here are a few of our own recommendations:

Introduction to: Reasoners

Reasoning is the task of deriving implicit facts from a set of given explicit facts. These facts can be expressed in OWL 2 ontologies and stored in RDF triplestores. For example, the fact “a Student is a Person” can be expressed in an ontology, while the fact “Bob is a Student” can be stored in a triplestore. A reasoner is a software application that is able to reason. For example, a reasoner is able to infer the following implicit fact: “Bob is a Person.”
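The inference just described can be sketched as a tiny forward-chaining loop: starting from the explicit type assertion and the subclass axiom, the loop keeps adding type assertions along the subclass hierarchy until nothing new follows. This is an illustration of the idea only, not how production OWL 2 reasoners are implemented.

```python
# A minimal sketch of the inference described above: given the explicit
# facts "Student is a subclass of Person" (ontology) and "Bob is a Student"
# (triplestore), derive the implicit fact "Bob is a Person".

def infer_types(subclass_of, type_assertions):
    """Close type assertions under the subclass hierarchy (RDFS-style)."""
    inferred = set(type_assertions)
    changed = True
    while changed:
        changed = False
        for (individual, cls) in list(inferred):
            for parent in subclass_of.get(cls, ()):
                if (individual, parent) not in inferred:
                    inferred.add((individual, parent))
                    changed = True
    return inferred

ontology = {"Student": ["Person"]}  # Student rdfs:subClassOf Person
facts = {("Bob", "Student")}        # Bob rdf:type Student

print(infer_types(ontology, facts))
# the result contains ("Bob", "Person") alongside the explicit fact
```

Real reasoners handle far richer axioms than subclassing, which is why OWL 2 defines the distinct reasoning tasks discussed next.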

Reasoning Tasks

Reasoning tasks considered in OWL 2 are: ontology consistency, class satisfiability, classification, instance checking, and conjunctive query answering.

Read more