It’s fair to say that a good idea has finally “arrived” when it leaves the realm of the theoretical and becomes the foundation of popular tools, services, and applications.
That is surely the case with Semantic Video.
Gone are the days when internet video could best be described as a meaningless blob of content invisible to search and impossible to annotate and reuse in meaningful ways.
The past year has seen an explosion of practical (and popular) services and applications based upon the extraction of meaningful metadata, and often linked data, from video content.
For those of us lucky enough to view it, the BBC wowed us last July with its Olympic coverage, broadcasting every event live on 24 HD streams, all accessible over the internet, with live, dynamic data and statistics on athletes. To pull off this feat, the BBC used a custom-designed Dynamic Semantic Publishing platform, which included fluid Operations’ Information Workbench to help author, curate, and publish ontology and instance data.