Last week, we published Under the Hood: A Closer Look at Information Workbench, an interview with Peter Haase conducted by Kristen Milhollin as part of her series on Dynamic Semantic Publishing.
We are pleased to announce that FluidOps has created a viewer for the SemTechBiz conference program in Information Workbench. It is a good example of a faceted, semantic browser for conference program data, offering map, timeline, and graph views and tying in data from disparate sources such as Facebook, Twitter, and the conference program itself.
You can view this event browser here:
Of course, we have the conference agenda-at-a-glance here, and will make the program available to conference attendees via the Guidebook app for mobile devices, but this is an interesting example of Semantic Technology at work in a human-friendly user interface.
Thanks to Peter Haase and the FluidOps team for this work!
I’m a fan of the waterfowl model of semantic technology. Clever semantics — as well as ‘advanced’ search boxes, arcane query syntax, and consumer interfaces that require user training — can paddle away as frantically as they like, but only while hidden well below the waterline. SPARQL, SKOS, and SQL really shouldn’t be visible to most users of a web site. Ontologies and XML are enabling technologies, not user interface features.
With this week’s unveiling of the Knowledge Graph, Google has taken another step toward realising the potential of their Metaweb acquisition. The company has also clearly demonstrated its continued enthusiasm for delivering additional user value without requiring changes in user behaviour (well, except that those of us outside the US have to remember to use google.com and not our local version, if we want to try this out).
For those who don’t remember, Metaweb was one of those companies that got people excited about the potential for semantic technologies to hit the big time. Founded way back in 2005, Metaweb attracted almost $60 million in investment for their “open, shared database of the world’s knowledge” (Freebase) before disappearing inside Google in 2010.
Yesterday, we announced RDFa.info, a new site devoted to helping developers add RDFa (Resource Description Framework-in-attributes) to HTML.
Building on that work, the team behind RDFa.info is announcing today the release of “Play,” a live RDFa editor and visualization tool. This release marks a significant step in providing tools for web developers that are easy to use, even for those unaccustomed to working with RDFa.
“Play” is an effort that serves several purposes. It is an authoring environment and markup debugger for RDFa that also serves as a teaching tool for web developers. As Alex Milowski, one of the core RDFa.info team members, said, “It can be used for purposes of experimentation, documentation (e.g. crafting an example that produces certain triples), and testing. If you want to know what markup will produce what kind of properties (triples), this tool is going to be great for understanding how you should be structuring your own data.”
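To give a sense of the kind of markup-to-triples relationship Milowski describes, here is an illustrative RDFa snippet (the vocabulary and values are invented for the example, not taken from the Play release):

```html
<!-- Illustrative RDFa 1.1 markup using the schema.org vocabulary -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Jane Example</span> maintains
  <a property="url" href="http://rdfa.info/">RDFa.info</a>.
</div>
```

Pasted into a tool like Play, markup along these lines would yield triples stating that some resource is a schema.org Person, has the name "Jane Example", and has the URL http://rdfa.info/ — making it easy to see exactly which attributes produce which properties.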
Today, May 9, 2012 is Global Accessibility Awareness Day (#GAAD). What started with a simple blog post by Los Angeles web developer Joe Devon has grown to include events around the world designed to increase awareness about web accessibility issues. To read more about the day and these various activities, see the official GAAD Website and Facebook page.
According to the US Centers for Disease Control and Prevention, “Today, about 50 million Americans, or 1 in 5 people, are living with at least one disability, and most Americans will experience a disability some time during the course of their lives.” In other parts of the world, this number may be significantly higher.
In the interest of full disclosure, Joe Devon is a personal friend of mine, and I must admit that if he were not, I likely wouldn’t have seen his blog post or explored the issues of accessibility as deeply as I have in recent weeks. But I have been exploring, and I’ve been surprised at what I’ve found. In my opinion, Semantic Technology and Assistive Technology are a natural fit for one another, but there seems to be very little discussion or work around the intersection of the two. I have looked, but have not found much collaboration between the two communities. I have also found few individuals who possess much knowledge about both Semantic Tech and Assistive Tech. Of course, if I’ve missed something, please let me know in the comments!
NOTE: This post is provided by guest author, Mr. Dennis E. Wisnosky, Chief Technical Officer and Chief Architect, Business Mission Area, U.S. Department of Defense. Dennis will be delivering a Special Presentation, “The Enterprise Information Web: Analytics, Efficiency and Security” at the June SemTechBiz Conference.
Semantic Technology brings a number of unique capabilities to data stores and applications. These capabilities show up both at the user interaction level, in what users can do with and expect from Semantic technologies, and at the system level, in what applications can do internally without rework or recoding. Semantic Technology, based upon W3C standards, provides capabilities significantly beyond those of proprietary approaches built on technologies founded a half century earlier.
1. User Interaction Capabilities
Access to Meaning
Semantic Technology is based upon the development of the ontology of a particular domain. That is: “what do I need to know to have an unambiguous understanding of a particular thing, organization, subject, etc.?” This knowing rests on a precise understanding of the meaning of the words used in the domain. A Semantic-Technology-based application depends on, and provides a user with access to, the defined meaning of the terms (the vocabulary, the words) used in the application. This means access both to a human-readable definition, such as one found in a dictionary, and to the formalized definition found in the ontology that frames the system executing the application. Such access should be presented in a human-consumable form. This is one of the areas in which formalisms such as Controlled Natural Language (CNL) are useful, translating technical forms of ontologies, such as the W3C-standard Web Ontology Language (OWL), into a form people can read.
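The pairing of human-readable and formal definitions can be sketched in a toy ontology fragment. Everything here — the namespace, class names, and property — is invented for illustration, not drawn from any actual DoD ontology:

```turtle
@prefix :     <http://example.org/domain#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:MilitaryOrganization a owl:Class ;
    rdfs:label "Military Organization" ;
    # The dictionary-style definition a user can be shown directly:
    rdfs:comment "An organization chartered to conduct military operations." ;
    # The formalized definition available to the reasoner:
    rdfs:subClassOf :Organization ,
        [ a owl:Restriction ;
          owl:onProperty :hasMission ;
          owl:someValuesFrom :MilitaryMission ] .
```

A CNL rendering of the formal axiom might read: “Every military organization is an organization that has some military mission” — the same definition, surfaced in a human-consumable form.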
“There’s our SPARQL endpoint.” Or “Just view the page in Tabulator.” I have lost count of the number of times that either of these have been the only response to an innocent request to see what some new piece of semantic wizardry can do. For a developer seeking to integrate one semantics-rich data set with another, SPARQL may very well be the tool for the job. And for someone (probably a developer, again) who wants to track the way that data is pulled together to build a page, Tabulator has a lot going for it. But as a shop window for the power of semantics? As a demonstration of what’s possible? Seriously, is it possible to pick worse ways to show off to the world?
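For that developer integration case, SPARQL really does earn its keep. A sketch of the kind of query involved, using SPARQL 1.1 federation (the join variable and the use of DBpedia here are illustrative, not a recipe):

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dbo:  <http://dbpedia.org/ontology/>

# Join people in a local dataset with facts from a remote endpoint.
SELECT ?name ?birthPlace
WHERE {
  ?speaker a foaf:Person ;
           foaf:name ?name .
  SERVICE <http://dbpedia.org/sparql> {
    ?dbPerson foaf:name ?name ;
              dbo:birthPlace ?birthPlace .
  }
}
```

That cross-dataset join is genuinely powerful — which is exactly why it deserves a friendlier shop window than a bare endpoint URL.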
In January’s episode of the Semantic Link, we were joined by serial entrepreneur Nova Spivack (perhaps best known to readers as the Founder and CEO of Twine) for a discussion about the importance of delivering a good user experience. In the time available, we only scratched the surface, and I’m sure it’s a topic to which we’ll return.
Kevin Fitchard recently asked the question, “Is Google scared of Siri? Is Yelp? Is Facebook? If they aren’t they should be, as should any mobile website, service or app that depends on advertising for revenues. Siri is just the beginning of a new wave of user interfaces (UIs) that will gradually shift our attention away from our phones’ screens, allowing us to interact with our devices in ways that don’t involve tapping keys and staring at pixels.”