Posts Tagged ‘video’

Tim Berners-Lee: What Kind of Internet Do We Want

Exchange Magazine recently wrote, “Sir Tim Berners-Lee invented the World Wide Web 25 years ago. So it’s worth a listen when he warns us: There’s a battle ahead. Eroding net neutrality, filter bubbles and centralizing corporate control all threaten the web’s wide-open spaces. It’s up to users to fight for the right to access and openness. The question is, What kind of Internet do we want? Tim Berners-Lee invented the World Wide Web. He leads the World Wide Web Consortium (W3C), overseeing the Web’s standards and development.” Read more

Microsoft Introduces New Deep Learning System, Project Adam

Daniela Hernandez of Wired reports, “Drawing on the work of a clever cadre of academic researchers, the biggest names in tech—including Google, Facebook, Microsoft, and Apple—are embracing a more powerful form of AI known as ‘deep learning,’ using it to improve everything from speech recognition and language translation to computer vision, the ability to identify images without human help. In this new AI order, the general assumption is that Google is out in front… But now, Microsoft’s research arm says it has achieved new records with a deep learning system it calls Adam, which will be publicly discussed for the first time during an academic summit this morning at the company’s Redmond, Washington headquarters.” Read more
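
For readers new to the term, “deep learning” means training multi-layer neural networks directly on raw data such as pixels. The sketch below shows what a small image classifier of that kind looks like in PyTorch; the architecture, data, and class count are illustrative assumptions and say nothing about Project Adam’s internals, which Microsoft has not detailed here.

```python
# Minimal sketch of a deep image classifier of the kind described above.
# The layer sizes and dummy data are illustrative only.
import torch
import torch.nn as nn

# A small convolutional network: stacked layers learn visual features
# directly from pixels, with no hand-coded rules.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),              # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),    # scores for 10 hypothetical image classes
)

# One training step on a dummy batch (random tensors stand in for images/labels).
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()  # gradients drive the "learning" the article describes
```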

Get More Value Out Of Video By Facilitating Better Search

Enterprise videos (visionary statements, product introductions, town hall meetings, training aids, and conferences) are everywhere on the Internet and on corporate intranets. But no matter how flashy the graphics or how well prepared the speaker, there’s something missing from the viewer experience: the ability to search these videos.

Ramp is one of the vendors aiming to address the issue by delivering a fully automated, data-driven user experience around finding content. The goal is the ability to look inside a video, a 45-minute keynote, for example, said Joshua Berkowitz, the company’s director of product management, at Enterprise Search & Discovery 2014. Everyone has had the experience of starting to watch such an event online, only to be distracted by a smartphone or something else a few minutes in; meanwhile, the video plays on and runs right past the part you were most interested in without your even noticing. “How to find the piece of content that interests you in the same way you could find those pieces inside a document?” he asked the audience.

More important, how can the supplier of that content facilitate that search, help the viewer interact with the elements they care about, or surface additional information such as links to product or contact details? “Time-based metadata for video can revolutionize the search experience,” Berkowitz said. Ramp supports this with its MediaCloud technology, which generates a time-coded transcript and tag set from video content.
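
To make the idea concrete, here is a rough sketch of how a time-coded transcript turns a long video into something searchable: each segment carries a start time and its text, so a keyword query can jump straight to the relevant moment. The data structure and function below are hypothetical and are not Ramp’s MediaCloud API.

```python
# Hypothetical sketch: searching a time-coded transcript to find where a
# topic is discussed inside a long video. Not Ramp's actual API.
from dataclasses import dataclass

@dataclass
class Segment:
    start_seconds: float   # offset into the video
    text: str              # transcript text for this span

def find_moments(transcript: list[Segment], query: str) -> list[float]:
    """Return start times of segments whose text mentions the query."""
    q = query.lower()
    return [seg.start_seconds for seg in transcript if q in seg.text.lower()]

# Example: a 45-minute keynote reduced to a few time-coded segments.
keynote = [
    Segment(0.0, "Welcome and agenda for today"),
    Segment(612.0, "Our roadmap for enterprise search"),
    Segment(1540.0, "Pricing and licensing changes this quarter"),
]
print(find_moments(keynote, "pricing"))  # -> [1540.0], i.e. 25:40 into the video
```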

Read more

Twelvefold Introduces Spectrum for Video: Real-Time URL-Level Video Ad Placements Across All Screens

San Francisco, Calif. (March 6, 2014) – Twelvefold, a big data company that targets audiences in real time without the use of cookies, today announced Spectrum for Video. Spectrum for Video delivers the most relevant video ad placements based on the influence, authority and emotional connection a piece of content creates with its reader. With more than 700 million individual videos from more than 5,000 sources, Twelvefold has culled and indexed the best of the web, including a mix of spot lengths. Read more

Video: When Will My Computer Understand Me?

Aaron Dobrow of the Texas Advanced Computing Center at the University of Texas recently wrote, “Language isn’t always straightforward, even for humans. The multiple definitions in a dictionary can make it difficult even for people to choose the correct meaning of a word. Katrin Erk, a linguistics researcher in the College of Liberal Arts, refers to this as ‘semantic muck.’ Enabled by supercomputers at the Texas Advanced Computing Center, Erk has developed a new method for visualizing the words in a high-dimensional space. Instead of hard-coding human logic or deciphering dictionaries to try to teach computers language, Erk decided to try a different tactic: feed computers a vast body of texts (which are a reflection of human knowledge) and use the implicit connections between the words to create a map of relationships.” Read more
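
The “map of relationships” Erk describes is the core idea behind distributional semantics: represent each word by the contexts it occurs in, and words used in similar contexts land near one another in the resulting high-dimensional space. Below is a deliberately tiny illustration of that principle using scikit-learn co-occurrence counts and cosine similarity; the four-sentence corpus is invented for the example and is orders of magnitude smaller than the text collections a real system would use.

```python
# Minimal illustration of the "map of relationships" idea: represent each
# word by the contexts it appears in, then compare words geometrically.
# The toy corpus is invented for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the striker scored a late goal in the match",
    "the keeper saved a penalty in the match",
    "the judge ruled on the case in court",
    "the lawyer argued the case before the judge",
]

vec = CountVectorizer(stop_words="english")
doc_term = vec.fit_transform(corpus)            # documents x words
word_word = (doc_term.T @ doc_term).toarray()   # word co-occurrence counts
words = list(vec.get_feature_names_out())

sim = cosine_similarity(word_word)
i, j, k = words.index("striker"), words.index("keeper"), words.index("judge")
print(sim[i, j], sim[i, k])  # "striker" sits nearer "keeper" than "judge"
```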

Wibbitz Wants A World Where Every Article Can Have Its Own Video

Olivia Solon of Wired.co.uk recently drew attention to startup Wibbitz, calling it a “Summly for video.” In five to ten seconds, she reports, the technology “transforms any article into a one to two-minute video, extracting the salient points from the text, pulling in images and infographics and adding a voiceover.”

Wibbitz, Solon notes, “uses algorithms to extract text from an article, then analyses it using natural language processing and artificial intelligence to understand what the text is talking about. It then summarises the article, and hunts for relevant imagery through various image licensing sites, including Getty, Reuters and AP. It also adds in infographics based on a number of fixed templates and then converts the summarised text into a voiceover (users will soon be able to select one of four different voices).”
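
Wibbitz has not published its algorithms, but the first stage Solon describes, boiling an article down to its most salient sentences, can be illustrated with a simple frequency-based extractive summarizer. The sketch below is only a stand-in for that step; the function name and scoring heuristic are assumptions, not Wibbitz’s method.

```python
# Hypothetical sketch of the summarization step described above: score each
# sentence by the frequency of its words in the article and keep the top few.
import re
from collections import Counter

def summarize(article: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(re.findall(r"[a-z']+", article.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Keep the highest-scoring sentences, in their original order.
    top = sorted(sorted(sentences, key=score, reverse=True)[:max_sentences],
                 key=sentences.index)
    return " ".join(top)

article = ("The city council approved a new transit plan on Tuesday. "
           "The plan adds two bus lines and extends service hours. "
           "Critics say the plan ignores cycling infrastructure. "
           "Funding for the plan comes from a regional grant.")
print(summarize(article))
```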

Read more

See the Trailer for Datalandia: The Small Town with Big Data Solutions

GE is having fun showing off its Big Data solutions in a very small (fictional) town called Datalandia. Katie Kaye of AdAge reports, “GE filmed a teensy town in Germany to teach everyday people about the internet of really big things. The maker of data-generating wind turbines and jet engines today will unveil the first of a series of short films that mimic summer-blockbusters to illustrate the industrial internet through scenarios involving blood-sucking vampires and extraterrestrials. ‘What if there was a little town that had the industrial internet and they were using it every day to keep the town folks happy, healthy and productive?’ asked Tommy Means, founder and creative director at Mekanism, the creative agency behind the campaign. The mini-films were shot on location at Miniatur Wunderland, a massive world of model trains and their surroundings in Hamburg that’s loaded with intricate replicas of an airport, ships, hospitals, a soccer stadium, and countless itsy-bitsy inhabitants. GE has named the realm created for the campaign ‘Datalandia’.” Read more

Semantic Video’s Banner Year

The BBC made use of semantic video annotation in its coverage of the 2012 Olympics

It’s fair to say that a good idea has finally “arrived” when it has left the realm of the theoretical and become the foundation of a lot of popular tools, services, and applications.

That is surely the case with Semantic Video.

Gone are the days when internet video could best be described as a meaningless blob of content invisible to search and impossible to annotate and reuse in meaningful ways.

The past year has seen an explosion of practical (and popular) services and applications that are based upon the extraction of meaningful metadata, and often linked data, from video content.

For those of us lucky enough to view it, the BBC wowed us last July with its Olympic coverage, broadcasting every event live on 24 HD streams, all accessible over the internet, with live, dynamic data and statistics on athletes. To pull off this feat, the BBC used a custom-designed Dynamic Semantic Publishing platform, which included fluid Operations’ Information Workbench to help author, curate, and publish ontology and instance data.
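
The BBC’s platform itself is proprietary, but the general pattern it relies on, attaching linked-data annotations to points in time within a video so they can be queried alongside instance data, can be sketched with RDF. In the example below the ex: vocabulary, resource URIs, and property names are invented for illustration and are not the BBC’s ontologies.

```python
# Illustrative sketch of time-based linked-data annotation for video.
# The vocabulary (ex:) and resource names are invented for this example.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/vocab/")
g = Graph()
g.bind("ex", EX)

# Annotate one moment of a video with an entity from the linked-data cloud.
segment = URIRef("http://example.org/video/mens-100m#t=35")
g.add((segment, RDF.type, EX.VideoSegment))
g.add((segment, EX.startSeconds, Literal(35, datatype=XSD.integer)))
g.add((segment, EX.depictsAthlete, URIRef("http://dbpedia.org/resource/Usain_Bolt")))
g.add((segment, EX.event, Literal("Men's 100m final")))

# A SPARQL query can now answer "where does this athlete appear?"
q = """
SELECT ?seg ?start WHERE {
  ?seg ex:depictsAthlete <http://dbpedia.org/resource/Usain_Bolt> ;
       ex:startSeconds ?start .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.seg, row.start)
```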

Read more

Twitter, TV, and Semantic Technology

Michael Learmonth of Ad Age reports, “Twitter is attempting to deepen its links to TV — as well as skim TV ad budgets — with a series of new media deals and technology to target ads at TV viewers. Twitter’s ad pitch has been consistent over the past year: advertising on Twitter in conjunction with TV makes TV ads more effective. ‘Our perspective is everybody in digital has it wrong; they have been going to market with an either-or proposition,’ said Twitter global head of revenue Adam Bain. ‘Twitter is a bridge to these different screens and experiences.’ Today, at an Internet Week event, the company unveiled a series of media deals and a targeting tool designed to bring TV advertisers on to Twitter.” Read more

Session Spotlight: Beyond the Blob – Semantic Video’s Coming of Age

Michael Dunn, CTO of Hearst Media, has been quoted as saying, “Video, in its native form, is a blob of content… and it’s hard to extract data from it.” Thankfully, semantic technologies are starting to take video “beyond the blob,” and that is precisely what panelists at the upcoming Semantic Technology and Business Conference in San Francisco will discuss.

The panel discussion Beyond the Blob: Semantic Video’s Coming of Age is set to begin at 2:25 on Monday, June 3 at SemTechBiz and will feature a panel of semantic web professionals with a broad range of experience regarding semantic video. The panel will be moderated by Kristen Milhollin, Project Lead & Founder of GoodSpeaks.org, an organization aimed at creating a nonprofit media and data distribution network that increases public awareness and support of the work done by nonprofit and other charitable organizations. Read more
