Rapid TV News reports that Ooyala and Jinni have teamed up to deliver advanced video discovery technology. The article states, “The two personalisation leaders are targeting media companies, broadcasters and pay-TV operators with their new proposition that will integrate Ooyala’s machine-learning big-data analytics systems with Jinni’s semantic discovery to deliver what the two companies call a powerful new level of video personalisation for all screens. The two companies will work together to develop and deploy what they claim will be a new level of machine learning powered by semantic discovery that will allow TV providers to tailor programming and video viewing experiences to each individual user. This will include personalised channels, custom programming guides, mood-based browsing and search, and viewer recommendations for both live and video-on-demand (VOD) content.”
Michael Dunn, CTO of Hearst Media, has been quoted as saying, “Video, in its native form, is a blob of content… and it’s hard to extract data from it.” Thankfully, semantic technologies are starting to take video “beyond the blob,” and that is precisely what panelists at the upcoming Semantic Technology and Business Conference in San Francisco will discuss.
The panel discussion “Beyond the Blob: Semantic Video’s Coming of Age” is set to begin at 2:25 on Monday, June 3 at SemTechBiz and will feature semantic web professionals with a broad range of experience in semantic video. The panel will be moderated by Kristen Milhollin, Project Lead and Founder of GoodSpeaks.org, an organization building a nonprofit media and data distribution network to increase public awareness and support of the work done by nonprofits and other charitable organizations.
Michael Grotticelli of Broadcast Engineering reports, “Jinni, the maker of a new kind of ‘semantic’ video guide that spans television, tablets, smartphones and the Web, has signed content distributors Time Warner Cable and Vudu among seven new licensees on four continents. It replaces old grid-style video guides with a new intuitive, personalized user experience. The company’s new ‘natural language understanding’ (NLU) discovery engine supports voice-activated video guides that understand natural human language. The Jinni NLU engine leverages the company’s ‘Entertainment Genome’ to interpret natural human speech and derive the underlying meaning, enabling intuitive interaction between users and their TVs.”
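Jinni has not disclosed how its NLU engine works internally, but the general pattern of turning a spoken-style request into structured search filters can be sketched with a toy parser. The vocabulary, mappings, and function below are invented assumptions for illustration, not Jinni’s implementation.

```python
import re

# Hypothetical lexicons mapping everyday words to structured filters.
MOODS = {"funny": "comedy", "scary": "horror", "romantic": "romance"}
ERAS = {"eighties": (1980, 1989), "nineties": (1990, 1999)}

def parse_query(text):
    """Extract genre and era filters from a free-form query string."""
    words = re.findall(r"[a-z]+", text.lower())
    filters = {}
    for w in words:
        if w in MOODS:
            filters["genre"] = MOODS[w]
        if w in ERAS:
            filters["year_range"] = ERAS[w]
    return filters
```

A request like “Show me something funny from the eighties” would yield a genre filter of “comedy” and a 1980–1989 year range, which a guide could then run against its catalog. Real NLU engines use far richer semantic models than keyword lookup, but the end product, structured filters derived from natural speech, is the same.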