Semantic-based cloud and data management vendor fluidOps has an interesting project underway with Google Glass, which would bring its technology to the wearable computer to help conference attendees explore information about the event they’re at and their fellow participants.

With funding from Germany’s Federal Ministry of Economics and Technology and in cooperation with the Department of Computer Science of the University of Freiburg, the project began this spring. The effort will build upon fluidOps’ existing Conference Explorer web app, which took second place in the Metadata Challenge held as part of the WWW conference last year. Conference Explorer is based on the company’s Information Workbench, a Web-based open platform for the development of Linked Data solutions, and was designed to give users ways to explore events and augment data about them with information from external sources like social networks.

The mobile conference assistant project goes by the name Durchblick, and for Google Glass or head-mounted displays “it takes the information corpus we already have and provides context-sensitive information to the end user,” Dr. Michael Schmidt, fluidOps’ architect, research and development, told The Semantic Web Blog at the Semantic Technology and Business conference held earlier this month.

The current version of the HTML application is not context-sensitive, but the goal is to achieve that and “display the right information for the right user at the right time on Google Glass, and develop a nice front end for Google Glass,” he says. “We have to focus on relevant information and find ways to deliver it to users without disturbing their current actions.”

App in Action

For example, users wearing Google Glass displays could get background information about a speaker as they watch his presentation, or gain insight into whom they know in common with the speaker, Schmidt says. The app leverages the vendor’s semantic technology, which makes it easy to integrate and correlate data from various sources, personal or public, for these jobs. Another possibility: scanning the QR code on a person’s badge could tell Google Glass who you are talking to and, within a few seconds, call up a few topics you could discuss with that person. Not only that, but pictures taken with Google Glass and uploaded to the conference website, annotated with semantically rich content about who took the image, who’s in it, and so on, could help conference hosts automatically create a photo album of the event, an otherwise laborious manual process, he notes.
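The badge-scan scenario can be sketched in a few lines. This is a hypothetical illustration only: the payload format, profile fields, and person IDs are invented for the example and do not reflect fluidOps’ actual data model, in which sources would be integrated semantically rather than stored in one dictionary.

```python
# Hypothetical sketch of the badge-scan flow: the QR code on a badge
# resolves to a person ID, which keys into already-integrated conference
# data (bio, talks, shared contacts). All names and IDs are illustrative.

# Toy "integrated" profile store: personal and public data already merged.
PROFILES = {
    "p42": {
        "name": "Jane Doe",
        "affiliation": "Example Labs",
        "talks": ["Linked Data at Scale"],
        "knows": {"p7", "p13", "p99"},
    },
}

MY_CONTACTS = {"p7", "p99", "p55"}  # the Glass wearer's own network

def badge_lookup(qr_payload: str) -> dict:
    """Resolve a scanned QR payload (e.g. 'conf://person/p42') to
    display info plus conversation starters (mutual contacts)."""
    person_id = qr_payload.rsplit("/", 1)[-1]
    profile = PROFILES[person_id]
    return {
        "name": profile["name"],
        "affiliation": profile["affiliation"],
        "talks": profile["talks"],
        "mutual_contacts": sorted(profile["knows"] & MY_CONTACTS),
    }

print(badge_lookup("conf://person/p42"))
```

The intersection of the scanned person’s contacts with the wearer’s own network is what would surface the “whom you know in common” insight on the Glass display.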

The project is currently focused on considering use cases, defining them in detail, and refining the architecture, with the aim of implementing the first prototypes soon.

In addition to leveraging the vendor’s semantic technologies to integrate data from heterogeneous sources, Schmidt says next-generation recommendation systems will be key to enabling context-sensitivity: knowledge of additional metadata about the user’s situation. Typical recommendation systems, like those you might see on e-commerce sites, deliver results based on things like what you bought in the past, “but here the situation is quite a bit different….On the one hand it’s using all the contextual information around you — what is the time, where you are – and it’s also learning based on your history,” he says. “We hope by cooperating with the university we will get better answers with them.”
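The blend of history and situational context that Schmidt describes might look something like the following. The features, weights, and data model here are purely illustrative assumptions, not the project’s actual design.

```python
# A minimal sketch of context-sensitive recommendation: each candidate
# talk is ranked by combining the user's topic history with situational
# context (time and location). Weights and features are assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    current_time: int   # minutes since conference start
    current_room: str

@dataclass
class Talk:
    title: str
    topic: str
    start_time: int
    room: str

def score(talk: Talk, ctx: Context, topic_history: dict) -> float:
    """Blend history (topics the user engaged with before) with
    context (is the talk starting soon, and nearby?)."""
    history_score = topic_history.get(talk.topic, 0.0)
    starts_soon = 0 <= talk.start_time - ctx.current_time <= 30
    time_score = 1.0 if starts_soon else 0.0
    room_score = 1.0 if talk.room == ctx.current_room else 0.0
    return 0.5 * history_score + 0.3 * time_score + 0.2 * room_score

history = {"linked-data": 0.9, "cloud": 0.4}
ctx = Context(current_time=60, current_room="Hall A")
talks = [
    Talk("Intro to SPARQL", "linked-data", 75, "Hall B"),
    Talk("Cloud Ops 101", "cloud", 65, "Hall A"),
]
ranked = sorted(talks, key=lambda t: score(t, ctx, history), reverse=True)
print([t.title for t in ranked])
```

Unlike a purchase-history-only recommender, the contextual terms here can override history: a talk on a less-favored topic still ranks competitively when it starts soon in the room the user is standing in.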

One problem at the moment is that there are still some unknowns about Google Glass. When it becomes generally available to the public, it might, for instance, come with prohibitions against facial scanning, which could be a useful component for this project to leverage. “There are still a lot of variables that we also don’t know about how it will develop, but in principle we are evaluating the functionality that they offer today with the API, and we have use cases in mind about how we can implement this and not raise any concerns,” says Schmidt.

fluidOps hopes to have a prototype it could show by the time of the next SemTechBiz West conference. Given that its core business is data-center focused, Schmidt thinks it is most likely the vendor will seek a partnership when it comes time to market the final product.

And speaking of the company’s focus on the data center, he sees opportunities for Google Glass and fluidOps’ technology to come together there, too. Imagine the mobile conference assistant app turned to helping data center administrators: scan a QR code on a server with Google Glass and get information on what’s running on it, how the cables are connected, what you can and cannot remove, and so on.

“That’s really supporting administration. In principle the basic technology is more or less the same. It’s not so much about recommendations, though, but integrated data,” Schmidt says, “where you have information about a service and can deliver that information to the front-end when a certain action applies.”
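The data-center scenario can be sketched the same way as the badge scan: a QR code resolves to integrated operational data, which is then condensed into a heads-up display. The schema, field names, and payload format below are illustrative assumptions, not the Information Workbench’s actual model.

```python
# Hedged sketch of the data-center use case: scanning a QR code on a
# server surfaces integrated data about running services, cabling, and
# whether the machine is safe to remove. All names are illustrative.

# Toy integrated inventory, as a semantic platform might expose it.
SERVERS = {
    "srv-017": {
        "services": ["billing-db", "auth-cache"],
        "cables": {"eth0": "switch-3/port-12", "eth1": "switch-4/port-02"},
        "removable": False,  # hosts a production database
    },
}

def scan_server(qr_payload: str) -> str:
    """Turn a scanned server QR payload (e.g. 'dc://server/srv-017')
    into short heads-up display text for an administrator."""
    server_id = qr_payload.rsplit("/", 1)[-1]
    info = SERVERS[server_id]
    status = "safe to remove" if info["removable"] else "DO NOT remove"
    lines = [f"{server_id}: {status}",
             "services: " + ", ".join(info["services"])]
    lines += [f"{port} -> {dest}" for port, dest in info["cables"].items()]
    return "\n".join(lines)

print(scan_server("dc://server/srv-017"))
```

As Schmidt notes, this variant leans on integrated data rather than recommendations: the value is in joining service, cabling, and inventory sources so the answer is ready the moment the scan action occurs.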