Posts Tagged ‘video’

Twelvefold Introduces Spectrum for Video: Real-Time URL-Level Video Ad Placements Across All Screens


San Francisco, Calif. (March 6, 2014) – Twelvefold, a big data company that targets audiences in real time without the use of cookies, today announced Spectrum for Video. Spectrum for Video delivers the most relevant video ad placements based on the influence, authority and emotional connection a piece of content creates with its reader. With more than 700 million individual videos from more than 5,000 sources, Twelvefold has culled and indexed the best of the web, including a mix of spot lengths. Read more

Video: When Will My Computer Understand Me?

Aaron Dobrow of the Texas Advanced Computing Center at the University of Texas recently wrote, “Language isn’t always straightforward, even for humans. The multiple definitions in a dictionary can make it difficult even for people to choose the correct meaning of a word. Katrin Erk, a linguistics researcher in the College of Liberal Arts, refers to this as ‘semantic muck.’ Enabled by supercomputers at the Texas Advanced Computing Center, Erk has developed a new method for visualizing the words in a high-dimensional space. Instead of hard-coding human logic or deciphering dictionaries to try to teach computers language, Erk decided to try a different tactic: feed computers a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a map of relationships.” Read more
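Erk's approach is easier to picture with a toy example: count which words appear near which other words across a corpus, treat those counts as vectors, and measure how close two words sit in that space. The tiny corpus, window size, and similarity measure in the sketch below are purely illustrative and stand in for the supercomputer-scale version described in the article.

```python
from collections import Counter, defaultdict
from math import sqrt

# A tiny stand-in corpus; the real work used vast bodies of text.
corpus = [
    "the bank approved the loan for the new house",
    "the river bank was covered in soft mud",
    "she deposited money at the bank before noon",
    "they walked along the bank of the river",
]

window = 2  # how many neighboring words count as "context"
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                vectors[word][words[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse context-count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Shared contexts pull "bank" toward both its "river" and "money" senses.
print(cosine(vectors["bank"], vectors["river"]))
print(cosine(vectors["bank"], vectors["money"]))
```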

Wibbitz Wants A World Where Every Article Can Have Its Own Video

Olivia Solon of Wired.co.uk recently drew attention to startup Wibbitz, calling it a “Summly for video.” In five to ten seconds, she reports, the technology “transforms any article into a one to two-minute video, extracting the salient points from the text, pulling in images and infographics and adding a voiceover.”

Wibbitz, Solon notes, “uses algorithms to extract text from an article, then analyses it using natural language processing and artificial intelligence to understand what the text is talking about. It then summarises the article, and hunts for relevant imagery through various image licensing sites, including Getty, Reuters and AP. It also adds in infographics based on a number of fixed templates and then converts the summarised text into a voiceover (users will soon be able to select one of four different voices).”
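As described, the pipeline has a recognizable shape: pull the text apart, keep the most important sentences, fetch imagery for them, and turn the result into a voiceover. The sketch below is a heavily simplified, hypothetical version of that shape; the frequency-based summarizer and the stubbed image-search and text-to-speech functions are placeholders, not Wibbitz's actual technology.

```python
import re
from collections import Counter

def summarize(article: str, max_sentences: int = 2) -> list[str]:
    """Naive extractive summary: keep the sentences whose words occur most often."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(re.findall(r"[a-z']+", article.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return ranked[:max_sentences]

def find_images(keywords: list[str]) -> list[str]:
    # Placeholder for a licensed image search (Getty, Reuters, AP, and so on).
    return [f"https://example.com/images?q={kw}" for kw in keywords]

def synthesize_voiceover(text: str) -> bytes:
    # Placeholder for a text-to-speech step; a real pipeline would call a TTS service.
    return text.encode("utf-8")

article = (
    "A new weather satellite launched on Tuesday. "
    "The satellite will track storms across the Atlantic. "
    "Forecasters say the satellite data should improve hurricane warnings. "
    "The launch was delayed twice because of high winds."
)
summary = summarize(article)
images = find_images([s.split()[0].lower() for s in summary])
voiceover = synthesize_voiceover(" ".join(summary))
```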

Read more

See the Trailer for Datalandia: The Small Town with Big Data Solutions

GE is having fun showing off their Big Data solutions in a very small (fictional) town called Datalandia. Katie Kaye of AdAge reports, “GE filmed a teensy town in Germany to teach everyday people about the internet of really big things. The maker of data-generating wind turbines and jet engines today will unveil the first of a series of short films that mimic summer-blockbusters to illustrate the industrial internet through scenarios involving blood-sucking vampires and extraterrestrials. ‘What if there was a little town that had the industrial internet and they were using it every day to keep the town folks happy, healthy and productive?’ asked Tommy Means, founder and creative director at Mekanism, the creative agency behind the campaign. The mini-films were shot on location at Miniatur Wunderland, a massive world of model trains and their surroundings in Hamburg that’s loaded with intricate replicas of an airport, ships, hospitals, a soccer stadium, and countless itsy-bitsy inhabitants. GE has named the realm created for the campaign ‘Datalandia’.” Read more

Semantic Video’s Banner Year

The BBC made use of semantic video annotation in its coverage of the 2012 Olympics

It’s fair to say that a good idea has finally “arrived” when it has left the realm of the theoretical and become the foundation of a lot of popular tools, services, and applications.

That is surely the case with Semantic Video.

Gone are the days when internet video could best be described as a meaningless blob of content invisible to search and impossible to annotate and reuse in meaningful ways.

The past year has seen an explosion of practical (and popular) services and applications that are based upon the extraction of meaningful metadata, and often linked data, from video content.

For those of us lucky enough to view it, the BBC wowed us last July with its Olympic coverage, broadcasting every event of the Games live on 24 HD streams, all accessible over the internet, with live, dynamic data and statistics on athletes. To pull off this feat, the BBC used a custom-designed Dynamic Semantic Publishing platform that included fluid Operations’ Information Workbench to help author, curate and publish ontology and instance data.
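For a sense of what “ontology and instance data” looks like in practice, here is a small sketch using the Python rdflib library: an athlete and an event modeled as linked data, then queried with SPARQL the way a live page might pull in its statistics. The namespace and property names are invented for the example and are not the BBC’s actual ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented namespace for illustration; not the BBC's sport ontology.
SPORT = Namespace("http://example.org/sport/")

g = Graph()
g.add((SPORT.bolt, RDF.type, SPORT.Athlete))
g.add((SPORT.bolt, SPORT.name, Literal("Usain Bolt")))
g.add((SPORT.bolt, SPORT.competesIn, SPORT.mens100m))
g.add((SPORT.mens100m, RDF.type, SPORT.Event))
g.add((SPORT.mens100m, SPORT.title, Literal("Men's 100m Final")))

# A page about the men's 100m can pull in every athlete linked to that event.
results = g.query("""
    PREFIX sport: <http://example.org/sport/>
    SELECT ?name WHERE {
        ?athlete sport:competesIn sport:mens100m ;
                 sport:name ?name .
    }
""")
for row in results:
    print(row.name)
```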

Read more

Twitter, TV, and Semantic Technology

Michael Learmonth of Ad Age reports, “Twitter is attempting to deepen its links to TV — as well as skim TV ad budgets — with a series of new media deals and technology to target ads at TV viewers. Twitter’s ad pitch has been consistent over the past year: advertising on Twitter in conjunction with TV makes TV ads more effective. ‘Our perspective is everybody in digital has it wrong; they have been going to market with an either-or proposition,’ said Twitter global head of revenue Adam Bain. ‘Twitter is a bridge to these different screens and experiences.’ Today, at an Internet Week event, the company unveiled a series of media deals and a targeting tool designed to bring TV advertisers on to Twitter.” Read more

Session Spotlight: Beyond the Blob – Semantic Video’s Coming of Age

Michael Dunn, CTO of Hearst Media, has been quoted as saying, “Video, in its native form, is a blob of content… and it’s hard to extract data from it.” Thankfully, semantic technologies are starting to take video “beyond the blob,” and that is precisely what panelists at the upcoming Semantic Technology and Business Conference in San Francisco will discuss.

The panel discussion Beyond the Blob: Semantic Video’s Coming of Age is set to begin at 2:25 on Monday, June 3 at SemTechBiz and will feature semantic web professionals with a broad range of experience regarding semantic video. The panel will be moderated by Kristen Milhollin, Project Lead & Founder of GoodSpeaks.org, an organization aimed at creating a nonprofit media and data distribution network that increases public awareness and support of the work done by nonprofit and other charitable organizations. Read more

James Hendler on the Arrival of Watson at RPI

Friend of SemanticWeb.com Dr. James Hendler recently shared his perspective on the arrival of Watson at Rensselaer Polytechnic Institute: “Every single student in the Department of Computer Science here at Rensselaer Polytechnic Institute has the potential to revolutionize computing. But with the arrival of Watson at Rensselaer, they’re even better positioned to do so. Watson has caused the researchers in my field of artificial intelligence (AI) to rethink some of our basic assumptions. Watson’s cognitive computing is a breakthrough technology, and it’s really amazing to be here at Rensselaer, where we will be the first university to get our hands on this amazing system.” Read more

Video: The Internet of Things

The BBC has posted a new video discussing the internet’s next frontier: the Internet of Things. According to the description of the video, “In its early days the internet was seen simply as a way of transferring data across large distances but it is now playing an ever increasing part in our lives. David Reid reports on what is seen as the next big frontier for the web – called the internet of things – allowing you to use your smartphone to control your home heating, pay for parking and even monitor your own fitness.” Read more

Video: ISWC’s Big Graph Data Panel

The recent International Semantic Web Conference produced a number of excellent sessions, including a very popular Big Graph Data Panel, captured on video by the folks at VideoLectures. The panel was moderated by Frank van Harmelen (Department of Computer Science, Faculty of Sciences, VU University Amsterdam), with panelists Tim Berners-Lee (W3C), John Giannandrea (Google), Mike Stonebraker (Massachusetts Institute of Technology, MIT), and Bryan Thompson (SYSTAP).

According to the description of the video, “The Semantic Web / Linked Data has grown immensely over the past years. When the Semantic Web community started working over a decade ago the main question was where to get the data from. By now the question of how to process ever increasing amount of semantic/linked data has come to people’s utmost attention. The goal of this panel is to shed light on the various approaches/options for Big Graph Data processing.” Read more
