The vast majority of today’s enterprise applications owe their genesis to a period very different from our own. Even the most apparently innovative share a perhaps unnecessary heritage with their ancestors, one that prevents them from fully exploiting the potential of an ever more connected world.

The growing ubiquity of high-speed access to the Internet has changed the lives of millions. At the same time, plummeting costs for storage, computing and bandwidth have driven the environmental shift that has enabled Web-based companies to entice users with significant free offerings, and subsequently to monetise these in a variety of ways, from the ever-popular ‘pro’ account to the universal fall-back of advertising. The Web has become our water cooler, our photo album, our book shop, our encyclopaedia, our travel agent and our road atlas, and it also fulfils a host of functions with no commonplace offline equivalent.

Inside the enterprise, the Internet remains at a remove from the applications within which most employees spend their time. Valid concerns around security are certainly a factor here, as are the long lead times required to develop and implement new systems. More serious, though, is an apparent lack of vision. Rather than fundamentally re-engineering with what Tim O’Reilly refers to as ‘the Internet Inside,’ new applications continue to repeat the methods and mindset of the past. The capabilities of the network Cloud beyond the corporate firewall remain woefully underused, their potential unrealised.

For legitimate historical reasons, yesterday’s applications took the form of physical and logical silos. They often ran on their own hardware; they had their own (typically arcane) user interfaces; their data were governed by narrowly scoped taxonomies, whether formally recognised or not. Despite the availability of numerous interoperability standards and specifications, the majority of the application iceberg lurked beneath the surface of the enterprise, operating according to internally justified and often unique rules. These applications were optimised to the business imperatives of their hosts and performed adequately through waves of technology refresh, Y2K panics and the rest. Faced with a fundamental shift toward the network, and with increasing demands to expose the value locked up inside these monoliths, they are beginning to show the cracks.

Current lightweight Web development practices tend toward connectedness, and the technologies and approaches of the Semantic Web have much to offer in extending the connectedness we have come to expect from today’s Web of Documents into the realm of the data locked inside so many legacy applications. There is a growing expectation, encouraged by browser plugins, RSS alerts, desktop widgets and gadgets, corporate dashboards and the rest, that information should be available to those who need it without them having to turn to the data-holding application itself. Exporting data from one application for delivery via some form of alert can often be hard enough. Exposing data from across a suite of applications so that it may be meaningfully combined and mined is harder still, and requires far more rigorous adherence to the specifications we already have, as well as the development of new, more lightweight solutions.

The capabilities and data in our applications can and should be available beyond the application itself, exposed via simple Internet-flavoured interfaces (such as REST APIs) with which most Web developers can engage. The boundaries erected between applications by those creating them are rapidly becoming real barriers to the conduct of business, as the organisations using these applications find it increasingly necessary or desirable to become ever more nimble and engaged. As the edges of the institution become blurred and permeable, do our applications not need to follow suit?
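To make the point concrete, consider a single order exposed as a Web resource. The host name and response shape below are purely hypothetical, invented for illustration rather than drawn from any particular product, but the pattern is the essence of a REST-style interface: a plain URL, ordinary HTTP and a common media type are all a developer needs.

    import json
    import urllib.request

    # A hypothetical REST-style resource: each order is addressable by its own URL.
    url = "https://api.example.com/orders/1234"

    # Ask for a machine-readable representation over ordinary HTTP.
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request) as response:
        order = json.load(response)

    # No SDK, no proprietary protocol: just a URL, HTTP and JSON.
    print(order.get("status"), order.get("customer"))

Nothing here depends on the internals of the application holding the data, which is precisely what makes such interfaces approachable for the broad population of Web developers.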

Previous attempts to enable interoperability have tended to falter, in part because of the presumption that success required either widespread adoption of the same system or the implementation of a standard so huge and riddled with compromise as to be effectively useless. Whilst not without their problems, current approaches such as OpenID and the Google-backed OpenSocial offer a far more feasible means by which data and credentials can begin to flow from one point on the Web to another.

By taking a fundamentally Web-based approach to the development of applications, such as that behind the Talis Platform, we shift from bolting Web capabilities onto the silo toward a mode in which data and functionality are native to the Web: a mode in which design decisions centre on modelling the business requirements that limit how data flows from one point to another, rather than on trying to anticipate every place the data might be needed and designing those pathways into the software from the outset.

Pragmatic leveraging of Semantic Web specifications, such as SKOS and RDF, creates the linkages, rendering every nugget of data as a resource potentially directly addressable via the Web. Semantically aware applications are able to access data exposed by any conforming data repository, in line with the permissions policies laid down by the data owner. Lightweight APIs make it feasible for third parties to rapidly assemble applications that benefit from access both to the data and to various aggregations thereof that might, for example, obscure sensitive details or gather summary statistics from a large number of contributors. As with more prevalent Web specifications, such as the various flavours of RSS, data may simply be available for collection and use, without recourse to complex and protracted technical and contractual negotiation. More sensitive or valuable data may be restricted to particular types of use, or to particular users.
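As a minimal sketch of what such linkage looks like in practice, the fragment below describes one concept from an invented corporate vocabulary using SKOS. The vocabulary URIs are hypothetical and rdflib is simply one convenient library for producing RDF; the point is that each concept becomes a Web-addressable resource linked to others, rather than an entry in an application-private taxonomy.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # Hypothetical namespace for a corporate vocabulary; any HTTP URI space would do.
    EX = Namespace("http://data.example.com/vocab/")

    g = Graph()
    g.bind("skos", SKOS)
    g.bind("ex", EX)

    # Each concept is itself a resource on the Web, described with SKOS and
    # related to other concepts by link rather than by position in a silo.
    invoice = EX["Invoice"]
    g.add((invoice, RDF.type, SKOS.Concept))
    g.add((invoice, SKOS.prefLabel, Literal("Invoice", lang="en")))
    g.add((invoice, SKOS.broader, EX["FinancialDocument"]))

    print(g.serialize(format="turtle"))

Because the resulting statements use shared, dereferenceable identifiers, any conforming application can follow the links and combine this vocabulary with data published elsewhere.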

Within the broader Semantic Web community, the notion of Linked Data is gaining some traction. Tim Berners-Lee was amongst those championing the importance of this activity at the World Wide Web conference in Banff earlier this year, and the Linking Open Data community project continues to go from strength to strength as it collects freely available datasets and converts them to RDF. Where data are not understood to be in the public domain, efforts are underway to apply some of the principles extolled for creative works by Creative Commons and for software by the GPL and Apache licences to the rather different area of data. At Talis, for example, we have been funding a programme of work around the notion of an Open Data Commons licence, and we anticipate being able to announce adoption of these principles shortly.

Reliable, cheap and speedy access to data, computational resources, and aggregation and analysis services via the Internet offers opportunities to fundamentally rethink the relationship between corporations and ‘their’ data. Previously considered (to paraphrase Geoffrey Moore) ‘core,’ much of the data collected, maintained and hoarded by the world’s corporations may increasingly be seen as little more than ‘context.’ The value lies in benchmarking these data against trends and metrics derived from those of your competitors, and in both the innovative analysis you think to undertake and the business decisions you then make as a result. By releasing the corporate grip upon the merely contextual, the pool of knowledge from which core value may be derived grows, as do the opportunities for success.