Yosi Glick, co-founder and CEO of semantic taste engine Jinni, recently wrote a post about the Technology and Engineering Emmy Award to be given to Amazon's Instant Video for its personalized recommendation algorithms.

The basis for the honor, he writes, lies in Amazon's early item-to-item collaborative filtering (CF) algorithms, which analyze consumer data to find statistical connections between items and then use those connections as the basis for recommendations. But, says Glick, the company may soon be heading toward a fundamentally different approach.
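
To make the contrast concrete, here is a minimal sketch of item-to-item collaborative filtering in the spirit the article describes: items are linked purely by which users consumed them together, with no reference to their content. The watch histories, titles, and function names are illustrative assumptions, not Amazon's implementation.

```python
# Minimal item-to-item CF sketch: compute item-item similarity from
# co-occurrence in user watch histories, then recommend unseen items
# similar to what a user has already watched. Purely illustrative.
from collections import defaultdict
from math import sqrt

# Hypothetical watch history: user -> set of titles watched.
watch_history = {
    "alice": {"Movie A", "Movie B", "Movie C"},
    "bob":   {"Movie B", "Movie C", "Movie D"},
    "carol": {"Movie A", "Movie C", "Movie D"},
}

def item_similarities(history):
    """Cosine similarity between items based on shared viewers."""
    item_users = defaultdict(set)
    for user, items in history.items():
        for item in items:
            item_users[item].add(user)
    sims = defaultdict(dict)
    for i in item_users:
        for j in item_users:
            if i == j:
                continue
            overlap = len(item_users[i] & item_users[j])
            if overlap:
                sims[i][j] = overlap / sqrt(len(item_users[i]) * len(item_users[j]))
    return sims

def recommend(user, history, sims, top_n=3):
    """Score unseen items by similarity to items the user has watched."""
    seen = history[user]
    scores = defaultdict(float)
    for item in seen:
        for other, sim in sims.get(item, {}).items():
            if other not in seen:
                scores[other] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

sims = item_similarities(watch_history)
print(recommend("alice", watch_history, sims))  # e.g. ['Movie D']
```

Note that nothing in this approach knows what the titles are about; the recommendation is driven entirely by "other people have also watched" statistics.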

“Amazon,” Glick explains, “is using the Emmy award to flaunt its latest Video Finder service, that seems to leave CF behind and embrace a new semantic approach to recommendation.”

Amazon is embracing semantics for its video content because it realizes that video is different from regular consumer items. TV and movies are "entertainment that is consumed based on personal tastes and our particular mood at the moment. The types of content each of us enjoy is not based on what 'other people have also watched', rather it has to do with the plots, moods, style and pace," he writes. "So content has to be described and discovered the same way we choose and experience it."

Categories in Amazon's Video Finder service include classifications that describe the mood, plot, style and pace of titles; these meaningful classifications, Glick says, are the basis for semantic discovery. You can read the entire piece here.
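
For comparison, here is a hedged sketch of what discovery driven by such classifications could look like: each title carries mood, plot, style, and pace descriptors, and matches are ranked against what the viewer asks for rather than against other viewers' histories. The titles and tags below are made up for the example and are not drawn from Video Finder.

```python
# Illustrative semantic-discovery sketch: describe titles by mood, plot,
# style, and pace, then rank them by how many requested facets they match.
catalog = {
    "Title 1": {"mood": "dark", "plot": "heist", "style": "gritty", "pace": "fast"},
    "Title 2": {"mood": "feel-good", "plot": "romance", "style": "witty", "pace": "slow"},
    "Title 3": {"mood": "dark", "plot": "revenge", "style": "stylized", "pace": "fast"},
}

def semantic_matches(request, catalog):
    """Rank titles by how many requested facets (mood, pace, ...) they satisfy."""
    def score(facets):
        return sum(1 for key, value in request.items() if facets.get(key) == value)
    return sorted(catalog, key=lambda title: score(catalog[title]), reverse=True)

# "Something dark and fast-paced" rather than "people who watched X also watched Y".
print(semantic_matches({"mood": "dark", "pace": "fast"}, catalog))
```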