Posts Tagged ‘video search’

New Activity Recognition Algorithm Can Figure Out What’s Happening in a Video


Larry Hardesty of MIT News Office reports, “With the commodification of digital cameras, digital video has become so easy to produce that human beings can have trouble keeping up with it. Among the tools that computer scientists are developing to make the profusion of video more useful are algorithms for activity recognition — or determining what the people on camera are doing when. At the Conference on Computer Vision and Pattern Recognition in June, Hamed Pirsiavash, a postdoc at MIT, and his former thesis advisor, Deva Ramanan of the University of California at Irvine, will present a new activity-recognition algorithm that has several advantages over its predecessors.” Read more
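The article doesn't detail how Pirsiavash and Ramanan's algorithm works, but the task it solves, determining what is happening in a video and when, is easy to illustrate. Below is a minimal Python sketch, not their method: it takes toy per-frame labels (standing in for a classifier's predictions), smooths them to suppress flicker, and reports time-stamped activity segments. All names and data here are invented for illustration.

```python
from collections import Counter

# Toy per-frame activity labels standing in for a classifier's output;
# a real system would predict these from visual features.
frame_labels = (
    ["walking"] * 8 + ["sitting"] * 2 + ["walking"] * 1 +
    ["cooking"] * 10 + ["eating"] * 9
)

def smooth(labels, window=5):
    """Majority-vote filter: relabel each frame by the most common label
    in a small window around it, suppressing single-frame flicker."""
    half = window // 2
    return [
        Counter(labels[max(0, i - half):i + half + 1]).most_common(1)[0][0]
        for i in range(len(labels))
    ]

def segments(labels, fps=1.0):
    """Collapse a per-frame label sequence into (start, end, activity) runs."""
    runs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            runs.append((start / fps, i / fps, labels[start]))
            start = i
    return runs

# Print "what is happening when" for the toy clip.
for start, end, activity in segments(smooth(frame_labels)):
    print(f"{start:6.1f}s -> {end:6.1f}s  {activity}")
```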

Get More Value Out Of Video By Facilitating Better Search

Enterprise videos – visionary statements, product introductions, town hall meetings, training aids, and conferences – are everywhere on the Internet and on corporate intranets. But no matter how flashy the graphics or how well-prepared the speaker, something is missing from the viewer experience: the ability to search these videos.

Ramp is one of the vendors aiming to address the issue by delivering a fully automated, data-driven user experience around finding content. It’s about the ability to watch and look inside a video, a 45-minute keynote, for example, said Joshua Berkowitz, the company’s director of product management, at Enterprise Search & Discovery 2014. Everyone has had the experience of starting to view such an event online, only to be distracted by a smartphone or something else a few minutes in. Meanwhile, the video plays on, running right past the part you were most interested in without your even noticing. “How do you find the piece of content that interests you in the same way you could find those pieces inside a document?” he asked the audience.

More importantly, how can the supplier of that content facilitate that search, help viewers interact with the elements they are interested in, and provide additional information such as links to product or contact details? “Time-based metadata for video can revolutionize the search experience,” Berkowitz said. Ramp supports that capability with its MediaCloud technology, which generates a time-coded transcript and tag set from video content.
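To make the idea concrete, here is a minimal Python sketch of search over time-coded transcripts. The transcript shape and captions below are assumptions for illustration; Ramp's actual MediaCloud output format isn't described in the talk.

```python
import re

# A hypothetical time-coded transcript: (seconds from start, caption text).
transcript = [
    (12.0,   "Welcome to the Q3 town hall."),
    (95.5,   "First, a look at the new product roadmap."),
    (310.2,  "Our search relevance improvements shipped last month."),
    (1422.8, "Questions about the roadmap? Use the mic in the aisle."),
]

def search_transcript(transcript, query):
    """Return (timestamp, caption) cues whose text matches the query,
    so a player could seek straight to the interesting moment."""
    pattern = re.compile(re.escape(query), re.IGNORECASE)
    return [(t, text) for t, text in transcript if pattern.search(text)]

for seconds, text in search_transcript(transcript, "roadmap"):
    minutes, secs = divmod(int(seconds), 60)
    print(f"[{minutes:02d}:{secs:02d}] {text}")
```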

Read more

Alibaba Invests in Video and Mobile Search Leading Up to IPO


Jon Russell of The Next Web reports, “Alibaba is gearing up for one of the largest technology IPOs in history with more significant strategy and investment moves. The latest, this week, saw the company lead a $1.2 billion investment in Chinese online video service Youku Tudou and link up with browser-maker UCWeb to launch a mobile search joint venture. Youku Tudou, which is comparable to a Chinese version of YouTube, is taking on investment from Alibaba and Yunfeng Capital, which will hold 16.5 percent and 2 percent stakes respectively. The deal values Youku Tudou, which is listed on the NYSE and was created following a billion-dollar merger in 2012, at over $6.5 billion, but moreover it is a sign of Alibaba’s ambition to move into entertainment and mobile. Jack Ma, the iconic founder and now executive chairman of Alibaba, said that the deal would ‘accelerate our digital entertainment and video content strategy… and bring new products and services to Alibaba’s customers’.” Read more

Semantic Video: Can TV Catch Up to the Web?


Sam Vasisht of Veveo recently wrote an article for Wired in which he states, “Leading mobile, web and social companies such as Google, Apple and Facebook are driving towards paradigm shifts in redefining the user experience. Such experiences include intelligent voice-driven interfaces and predictive, personalized discovery of content, as represented in services such as Google Now, Facebook Graph Search and Apple Siri. As users experience such shifts in usability in web and mobile applications, the question television programmers should be asking themselves is whether television is keeping up with the new expectations users are likely to have as a result.” Read more

Knight Foundation Funds Internet Archive to the Tune of $1M

Roger Macdonald and Brewster Kahle of the Internet Archive recently wrote, “We are seeing more and more public benefits arising from applying digital search and analysis to news from our most pervasive and persuasive medium: television. That’s why we are thrilled to announce that the Internet Archive, one of the world’s largest public digital libraries, is expanding our television news research library to make readily available hundreds of thousands of hours of U.S. television news programs for users to search, quote and borrow. The expansion plan is being supported by $1 million in funding from the Knight Foundation. With this support, we will grow our TV News Search & Borrow service, which currently includes more than 400,000 broadcasts dating back to June 2009, by adding hundreds of thousands of new broadcasts.” Read more

Session Spotlight: Beyond the Blob – Semantic Video’s Coming of Age

Michael Dunn, CTO of Hearst Media, has been quoted as saying, “Video, in its native form, is a blob of content… and it’s hard to extract data from it.” Thankfully, semantic technologies are starting to take video “beyond the blob,” and that is precisely what panelists at the upcoming Semantic Technology and Business Conference in San Francisco will discuss.

The panel discussion Beyond the Blob: Semantic Video’s Coming of Age is set to begin at 2:25 p.m. on Monday, June 3 at SemTechBiz and will feature semantic web professionals with a broad range of experience in semantic video. The panel will be moderated by Kristen Milhollin, Project Lead & Founder of GoodSpeaks.org, an organization building a nonprofit media and data distribution network to increase public awareness and support of the work done by nonprofits and other charitable organizations. Read more

Jinni Launches Natural Language Video Discovery Engine

According to an announcement from the company, Jinni launched a “radically new NLU (natural language understanding) discovery engine to power the first voice-activated video guides that understand natural human language. Until now, the voice-activated TV experience was limited to basic commands because that was all the guide could process. The Jinni NLU engine leverages the company’s unique Entertainment Genome™ to interpret natural human speech and derive the underlying meaning, enabling rich, intuitive interaction between users and their TVs. Now users can simply tell their TV what they are in the mood to watch, and Jinni will find the most fitting content from live TV, VOD and any other available video catalog.” Read more
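Jinni's Entertainment Genome and NLU pipeline are proprietary, so the sketch below is only a drastically simplified, hypothetical illustration of the general flow: interpret a spoken request into descriptive tags, then rank catalog titles by how well their tags match. The lexicon, tags, and titles are all invented.

```python
# Hypothetical mood vocabulary mapping spoken words to descriptive tags.
MOOD_LEXICON = {
    "funny": {"comedy", "lighthearted"},
    "laugh": {"comedy", "lighthearted"},
    "scary": {"horror", "tense"},
    "thrilling": {"thriller", "tense"},
}

# Hypothetical catalog of titles with descriptive tags.
CATALOG = [
    ("Late Night Laughs", {"comedy", "lighthearted", "live-tv"}),
    ("The Hollow House",  {"horror", "tense", "vod"}),
    ("Marathon Sunday",   {"uplifting", "sports", "live-tv"}),
]

def interpret(utterance):
    """Extract intent tags from a free-form spoken request."""
    tags = set()
    for word in utterance.lower().split():
        tags |= MOOD_LEXICON.get(word.strip(".,!?"), set())
    return tags

def recommend(utterance):
    """Rank catalog titles by how many requested tags they carry."""
    wanted = interpret(utterance)
    scored = [(len(wanted & tags), title) for title, tags in CATALOG]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(recommend("I'm in the mood for something funny"))  # ['Late Night Laughs']
```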

Video: Making Video Search Semantic

A YouTube video educates viewers about “Making Video Search Semantic with Semex and Media Globe.” According to the video description, “Semantic search for multimedia is a problem every video site would like to solve. Imagine being able to search for a person and getting results for every video they appear in, without needing a text description included underneath. Or trying to locate every video posted about a specific location. Or getting the context of several videos all about baseball or your favorite movie character.” Read more
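The kind of lookup the description imagines, every video a person appears in or every video about a place, is typically served by an inverted index from entities to videos. Here is a minimal, hypothetical Python sketch; the annotations and clip identifiers are invented and do not reflect Semex or Media Globe's actual implementation.

```python
from collections import defaultdict

# Hypothetical entity annotations per video, e.g. produced by face
# recognition, geotagging, or concept detection.
video_entities = {
    "clip-001": {"person:Ada Lovelace", "topic:baseball"},
    "clip-002": {"place:Fenway Park", "topic:baseball"},
    "clip-003": {"person:Ada Lovelace", "place:London"},
}

def build_index(video_entities):
    """Invert video -> entities into entity -> videos, so searching for an
    entity returns every clip it appears in, with no caption text needed."""
    index = defaultdict(set)
    for video, entities in video_entities.items():
        for entity in entities:
            index[entity].add(video)
    return index

index = build_index(video_entities)
print(sorted(index["person:Ada Lovelace"]))  # ['clip-001', 'clip-003']
print(sorted(index["topic:baseball"]))       # ['clip-001', 'clip-002']
```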