Caroline Burle recently discussed open data for development in Latin America on the W3C blog. She writes, “Sharing governmental information in open, accessible and structured formats can substantially increase transparency and accountability in public policy design and implementation. Furthermore, it enables broad social engagement in the process. Hence, opening data and acknowledging the demands of the population that arise from this is essential to promoting social equality and effective public administration.”
Posts Tagged ‘development’
Looking Ahead to Berlin and NYC Semantic Technology & Business Conferences
Dates have been set for the Semantic Technology & Business Conferences in Berlin (September 18-19, 2013) and in New York City (October 1-3, 2013). The Calls For Presentations will open by Monday, June 17 at the latest. If you have an idea for a conference session, panel, keynote, or conference activity, be sure to watch this space and submit a proposal when the CFP goes live!
Hal Hodson recently covered the “semantic evolution” of Talis culminating in Kasabi. He writes, “In 1969, a group of libraries in Birmingham decided they needed to become more efficient. Calling themselves the Birmingham Libraries Cooperative Mechanisation Project (BLCMP), the group built a centralised database of ‘machine-readable’ bibliographic records, first using microfilm to store book data then, from 1982, using IBM mainframes with terminals at each library. BLCMP went on to become Talis, named after its integrated library system, and for many years it was a leader in the automated library management software market. But that is a mature market, and last year Talis divested its library division to focus on the company’s other passion: semantic technology.”
So many semantic web efforts have their beginnings as R&D projects. Moving these projects from the lab into the real world has its challenges. But they can be overcome, and Daniel Field, business consultant at IT services and consulting company Atos Origin, has advice on how to manage them so the investment pays off.
In many cases these projects begin with multiple partners from different backgrounds – universities, businesses (including sometimes competitors), and research centers working on commission for other parties. “So they’re starting from a diverse set of perspectives,” he says. And the focus on meeting a mission’s specific goals can take center stage without due consideration of next steps – how, that is, to ensure that the partners contributing to the effort are actually able to exploit its achievements.
One thing Field recommends in order to realize that value is that the business case for the project be made at the proposal stage and adhered to throughout the effort. Critical to this is dealing upfront with questions around licensing and intellectual property.
Social media intelligence and analytics provider ViralHeat is making its sentiment engine, which it claims is one of the biggest repositories of sentiment data on the market, available for free to developers as an API. The company has been building that engine in conjunction with its agency and big-brand customers (think the likes of Dell) over the last few years, and hopes the move will open the door to new applications of sentiment analytics, as well as deliver benefits that will profit its paying clients.
“The key for brands and agencies is sentiment,” says CEO Raj Kadam, and ViralHeat got started down that road with a keyword dictionary approach to analyzing social media that proved disappointing. That approach classified far too many mentions as neutral rather than positive or negative, and accuracy wasn’t a strong suit. That’s when it turned to its clients to take things up a few levels. “We scrapped that first approach and started building a really large-scale machine learning cluster focused on speed – we get hundreds of millions of mentions a week – and also on accuracy,” he says. Today, the technology runs mentions through its sentiment cluster and gets a sentiment score back, and from there humans play a role in further assessing the text and passing it back to continually train the engine.
Its speed, scalability, and training are what Kadam considers the features that differentiate its Python-built sentiment web service platform from other vendors’ offerings in the space, and it’s that same Sentiment API that it’s opening up to others. Kadam says the fact that it can quickly do its work, tagging the sentiment score and its accuracy probability on the fly, is one reason why it can open up the API. “If that cluster was really slow it would take us days and probably a large swath wouldn’t get tagged,” he says. He says the latency on competitive systems is “incredible. It’s like batch processing. You send the data in and wait a really long time to get results. Ours is just completely real time.”
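For a sense of why the keyword dictionary approach Kadam describes scrapping disappoints, here is a minimal sketch of that style of scorer. The word lists and function are invented for illustration (ViralHeat's actual lexicons and pipeline are not public): anything outside the lexicon falls through to neutral, which is exactly the failure mode the article describes.

```python
# Naive keyword-dictionary sentiment scorer, of the kind ViralHeat
# says it abandoned in favor of a machine-learning cluster.
# The lexicons below are tiny illustrative stand-ins.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"bad", "hate", "slow", "broken"}

def dictionary_sentiment(mention: str) -> str:
    words = mention.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"  # any mention with no lexicon hits lands here

print(dictionary_sentiment("support was great"))        # positive
print(dictionary_sentiment("Shipping took forever"))    # neutral: no lexicon hits
```

Real-world mentions are full of slang, sarcasm, and vocabulary no fixed list covers, so most land in the neutral bucket — which is why a trained model scoring each mention (with an accuracy probability attached, as the API does) is the stronger design.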
The day may come when you might not need a team of developers to write data-driven or data-aware apps that themselves can be described in just a few words. Ideally, that would mean companies would spend a lot less money on, and speed up, a long-winded process that encompasses everything from understanding requirements to discovering data sources and normalizing results, to managing data coordination across front-end and back-end teams.
That day isn’t here yet, but SemantiNet is trying to move things a step closer. This month the company introduced an open-ended alpha API whose centerpiece is the idea of the data flow graph.
Its purpose is to enable easy querying of a collection of web services, Wikipedia, Linked Data, and the unstructured web, culminating in “one-liner” search-bar apps, including mashups, built in minutes. Some examples: drawing out from DBpedia objects within a 50-kilometer radius of the Eiffel Tower that are somehow related to Napoleon and displaying the results as video; breaking down the revenue of the world’s major car companies listed in Wikipedia and providing a pie chart with that data, while also mashing into the results the companies’ ages in a table, their locations pinpointed on a map, and company snapshots called out in a graph; or finding out which pizza places close to your current location have lunch deals on. For good measure, throw in some tweets and analyze them for sentiment, just to make sure we’re talking about tasty pizza.
Or check out some of the output at left for the Keith Richards Guitar Gallery app, built by combining the unique DBpedia URI for Keith Richards; a fuzzy matching of the free-form text “instrument” against the predicates of dbpedia:Keith_Richards to get a list of the instruments he has played; and a rendering of that list using a SemantiNet template called videolist.html.
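Underneath the one-liner, the Keith Richards lookup boils down to a simple SPARQL pattern against DBpedia. Here is a sketch that builds that query; note the predicate URI (dbpedia-owl:instrument) is my assumption about what the fuzzy match on “instrument” resolves to — SemantiNet’s actual resolution pipeline is not described in the article.

```python
# Build the SPARQL query implied by the Guitar Gallery example:
# all objects of an "instrument" predicate for the unique DBpedia
# URI of Keith Richards. The predicate URI is an assumption.
SUBJECT = "<http://dbpedia.org/resource/Keith_Richards>"
PREDICATE = "<http://dbpedia.org/ontology/instrument>"  # assumed fuzzy-match result

def build_instrument_query(subject: str, predicate: str) -> str:
    return f"SELECT ?instrument WHERE {{ {subject} {predicate} ?instrument }}"

query = build_instrument_query(SUBJECT, PREDICATE)
print(query)
```

The result list from running such a query against the DBpedia endpoint would then be handed to a rendering template — videolist.html, in the article’s example — which is the part SemantiNet’s data flow graph abstracts away.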
Your guide to building working solutions for the Semantic Web.
We wrote Semantic Web Programming to offer a useful guide to getting the Semantic Web to do real work – such as data integration and rich data analysis. We are active developers in this space and see its potential directly. We outline the key concepts, tools, and methods you need to program the Semantic Web to achieve these goals. Our book is filled with practical, easy-to-follow examples using working code to illustrate how to take advantage of the many data sources and services available today, especially non-semantic ones like instant messaging, relational databases, and web services such as those offered by Facebook.
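To give a flavor of what integrating a non-semantic source means in practice, here is a minimal, library-free sketch that lifts a relational-style row into subject-predicate-object triples — the core move in semantic data integration. The base URIs and function here are illustrative inventions, not the book’s own code (the book’s working examples use real RDF toolkits).

```python
# Lift a row from a non-semantic source (e.g., a relational database)
# into RDF-style (subject, predicate, object) triples.
# The http://example.org/ URIs are illustrative, not a real vocabulary.
def row_to_triples(table: str, row: dict) -> list:
    base = "http://example.org/"
    subject = f"{base}{table}/{row['id']}"       # mint a URI for the row
    return [
        (subject, f"{base}schema/{column}", str(value))
        for column, value in row.items()
        if column != "id"                         # id is folded into the URI
    ]

for triple in row_to_triples("users", {"id": 42, "name": "Ada", "im": "ada@jabber.org"}):
    print(triple)
```

Once rows, instant messages, or web-service responses are expressed as triples like these, they can be merged into one graph and queried uniformly — which is the payoff the book is describing.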
The Semantic Web is nearing the point of widespread practical adoption:
• The core specifications have stabilized
• Tools and frameworks implementing key features have been through several development cycles
• An increasing number of major software companies have developed semantically enabled products or are actively researching the space
As companies start to translate theory into real applications, they are confronted with a host of practical software engineering issues:
• What is the standard or recommended functional architecture of a semantic application?
• How does that architecture relate to the Semantic Web standards?
• Which of those standards are stable and which can be expected to evolve in ways that would significantly impact prior applications?
• What types of tools/frameworks exist that can be leveraged to help implement semantic applications?
• How mature are the various categories of Semantic Web tools/frameworks?
• Can API standardization be expected for certain tool/framework categories?
• What best practices exist for the design, implementation and deployment of semantic applications?
• What future trends in support for semantic application development can be expected?
This panel session gathers together semantics experts from the software industry to address these and other practical issues relating to the development of semantic applications.
Attachment: Panel – Developing Semantic Web Applications (24.68 MB)
Sun Microsystems, Inc.
Dr. Allemang specializes in innovative applications of knowledge technology and brings to TopQuadrant over 15 years of experience in research, deployment, and development of knowledge-based systems. He developed the curriculum for TopQuadrant’s successful training series for Semantic Web technologies, which he has been presenting to customers worldwide for four years. Dean has completed a master’s degree at the University of Cambridge as a Marshall scholar, a PhD at the Ohio State University as a National Science Foundation Graduate Scholar, and is a two-time winner of the Swiss Prize for Innovation in Technology. Prior to joining TopQuadrant, Dr. Allemang was the Vice-President of Customer Applications at Synquiry Technologies, where he filed two patents on the application of graph matching algorithms to the problems of semantic information interchange.
Dr. Jans Aasman
President & CEO, Franz Inc.
Dr. Jans Aasman, Franz’s President and CEO, was a longtime customer and joined Franz from TNO Telecom, based in The Netherlands. Prior to Franz, he worked at KPN Research, the research lab of a major Dutch telecommunications company. Dr. Aasman was a tenured professor in Industrial Design at the Technical University of Delft, where he held the chair title Informational Ergonomics of Telematics and Intelligent Products. He also was a visiting scientist in the Computer Science Department of Prof. Dr. Allen Newell at Carnegie Mellon University. Dr. Aasman holds a degree in experimental and cognitive psychology from the University of Groningen, with specialization in Psychophysiology and Cognitive Psychology.
Eric Miller is the President of Zepheira. Prior to founding Zepheira, Eric led the Semantic Web Initiative for the World Wide Web Consortium (W3C) at MIT where he led the architectural and technical leadership in the design and evolution of the Semantic Web. Eric is a frequent and sought after international speaker in the areas of International Web standards, knowledge management, collaboration, development, and deployment.