It’s fair to say that a good idea has finally “arrived” when it has left the realm of the theoretical and become the foundation of a host of popular tools, services, and applications.
That is surely the case with Semantic Video.
Gone are the days when internet video could best be described as a meaningless blob of content invisible to search and impossible to annotate and reuse in meaningful ways.
The past year has seen an explosion of practical (and popular) services and applications that are based upon the extraction of meaningful metadata, and often linked data, from video content.
For those of us lucky enough to view it, the BBC wowed us last July with its Olympic coverage, broadcasting every event live on 24 HD streams, all accessible over the internet, with live, dynamic data and statistics on athletes. To pull off this feat, the BBC used a custom-designed Dynamic Semantic Publishing platform, which included fluid Operations’ Information Workbench to help author, curate, and publish ontology and instance data.
Another notable achievement was the successful launch in September of last year of the most ambitious internet-based news archive in history: the TV News Search & Borrow service, a searchable collection of 400,000+ news broadcasts dating back to 2009, maintained by the Internet Archive.
The service applies named entity recognition to closed captions, enriching search of the archives with linked data. Users can browse and share 30-second segments of the archives, annotate moments, and even borrow entire programs on DVD-ROM. According to Project Director Roger MacDonald, the immense archive of news has been put to use by documentary filmmakers, educators, and news analysts, and, thanks to a $1 million grant from the Knight Foundation, has the capacity to expand to put every television program in history online.
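The caption-to-linked-data idea can be illustrated with a toy sketch. To be clear, this is not the Archive’s actual pipeline: a production system would run a trained named entity recognizer and an entity-linking service over caption streams, whereas the snippet below simply matches caption text against a tiny, hand-made table mapping entity names to illustrative linked-data URIs.

```python
# Toy stand-in for named entity recognition over closed-caption text.
# The entity table and URIs are illustrative examples, not a real
# entity-linking knowledge base.
ENTITY_URIS = {
    "Internet Archive": "http://dbpedia.org/resource/Internet_Archive",
    "Knight Foundation": "http://dbpedia.org/resource/Knight_Foundation",
}

def link_entities(caption):
    """Return (surface form, URI) pairs for entity names found in a caption line."""
    return [(name, uri) for name, uri in ENTITY_URIS.items() if name in caption]

line = "The Internet Archive received a grant from the Knight Foundation."
matches = link_entities(line)
```

Once caption lines are annotated this way, each moment of a broadcast carries machine-readable identifiers, which is what makes linked-data-enriched search over an archive of this scale possible.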
Over the past year, Social TV ventures “took off,” as companies like NextGuide, Viggle, GetGlue, IntoNow, Dijit/GoMiso, and zeebox vied to set the new standard. Second Screen “became a thing,” and experiences were added to an increasing number of primetime and cable shows. For many of these platforms, real-time semantic annotation of video provides rich linked data to improve Social TV and Second Screen experiences for viewers, enabling the discovery of related information from around the internet and providing the basis for sharper, more focused discussions around TV-related content.
Such technologies have delivered us to the doorstep of the ultimate goal of video advertising: truly smart, contextual, and even interactive ads that accompany your favorite programs and video, uniquely tailored to your personal tastes. This year saw the acquisition of Bluefin by Twitter, combining a vast fingerprinted TV library, semantic technology, and an enormous social network to tie conversations in social media directly to content in shows and in ads. ConnecTV in January launched Ad Sync, which allows brands to sync a “companion experience” with their own TV ads, letting viewers instantly buy what they see, or click to get promotional offers, enter contests, watch related product videos, and more. To make this happen, ConnecTV has indexed content on over 400 channels to enable advertisers to target keywords on second-screen devices.
Short-form internet video is becoming more discoverable and useful as a result of the decision by the world’s second largest search engine, YouTube, to apply linked data (in this case, Freebase topics) to YouTube videos by way of the new YouTube Data API, released in beta in December of 2012. Now developers can write code to pull videos that relate to specific, disambiguated topics. And soon, thankfully, we should expect YouTube’s own search algorithms to start improving.
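As a rough sketch of what that developer workflow looks like, the YouTube Data API (v3) exposes a `search` endpoint whose `topicId` parameter accepts a Freebase topic ID, so a query can ask for videos about a disambiguated topic rather than a keyword string. The topic ID and API key below are placeholders, not values from this article:

```python
# Sketch: building a YouTube Data API (v3) search request that filters
# videos by a Freebase topic ID. The topic ID "/m/02jz0l" and the API
# key are placeholder values for illustration.
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3/search"

def topic_search_url(topic_id, api_key, max_results=10):
    """Build a search URL that asks for videos tagged with a Freebase topic."""
    params = {
        "part": "snippet",       # return basic video metadata
        "type": "video",         # exclude channels and playlists
        "topicId": topic_id,     # a Freebase machine ID, e.g. "/m/02jz0l"
        "maxResults": max_results,
        "key": api_key,
    }
    return API_BASE + "?" + urlencode(params)

url = topic_search_url("/m/02jz0l", "YOUR_API_KEY")
```

Because the topic ID names an entity rather than a word, a request like this retrieves videos about the topic itself, regardless of how the title or description happens to phrase it; that disambiguation is precisely what keyword search lacked.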
YouTube’s decision to apply Freebase topics to its videos also helps burgeoning content discovery networks like Interesante, ShowYou, and Seevl, which scour the web to find content that relates to a person’s interests and likes, as well as smart new media outlets like Link TV, which know how to find the best contextually relevant media to add depth to their stories. When you consider the innumerable ways you can tailor the delivery of video according to context and the personal preferences of a viewer, it becomes clear that short-form video distribution is about to get a lot more interesting.
While most people watching videos or interacting with a TV program over the internet are not aware of the revolutionary changes going on “under the hood,” the ease of discovery, depth of context, and improved interaction with this media is the outcome of an explosion of video services and tools that use Semantic principles, concepts, and technology. It has truly been a great year for Semantic Video.
Dive deeper into this year’s Semantic Video successes by attending Beyond the Blob: Semantic Video’s Coming of Age during the 2013 Semantic Technology and Business Conference in San Francisco. Avoid on-site prices; save $200 and register before Sunday.