Posts Tagged ‘semantic understanding’

Cost-saving Pilot Programs to Support Warfighter Autonomy

WASHINGTON, June 19, 2013 – A call from the Defense Department to government labs for autonomous technology ideas that support the warfighter has been answered with seven initiatives.

Chosen from more than 50 submissions, the selected ideas will be tested in the Autonomy Research Pilot Initiative, officials said.

The pilot research initiative’s goal is to advance technologies that will result in autonomous systems that provide more capability to warfighters, lessen the cognitive load on operators and supervisors, and lower overall operational cost, explained Jennifer Elzea, a DOD spokeswoman.

Berkeley Scientists Map How the Brain Sorts What We See

Futurity.org reports, “Scientists have found that the brain is wired to put the categories of objects and actions we see daily in order, and have created the first interactive map of how the brain organizes these groupings. The result—achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips—is what researchers call ‘a continuous semantic space.’ Some relationships between categories make sense (humans and animals share the same ‘semantic neighborhood’) while others (hallways and buckets) are less obvious. The researchers found that different people share a similar semantic layout. ‘Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized. Already, our online brain viewer appears to provide the most detailed look ever at the visual function and organization of a single human brain,’ says Alexander Huth, a doctoral student in neuroscience at the University of California, Berkeley and lead author of the study published in the journal Neuron.”
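A “continuous semantic space” can be pictured as categories embedded as vectors, with related categories lying close together. Here is a toy sketch of that idea in plain Python — the vectors below are invented for illustration and are not taken from the study’s data:

```python
import math

# Toy illustration: each category is a point in a low-dimensional
# "semantic space"; nearby points represent related meanings.
vectors = {
    "human":   [0.9, 0.8, 0.1],
    "animal":  [0.8, 0.9, 0.2],
    "hallway": [0.1, 0.2, 0.9],
    "bucket":  [0.2, 0.1, 0.8],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(name):
    """Return the other category closest to `name` in the space."""
    return max((c for c in vectors if c != name),
               key=lambda c: cosine(vectors[name], vectors[c]))

print(nearest("human"))    # animal  (an intuitive neighborhood)
print(nearest("hallway"))  # bucket  (a less obvious one)
```

Categories whose vectors point in similar directions cluster into the same “semantic neighborhood,” which is how both the intuitive pairings (humans/animals) and the surprising ones (hallways/buckets) fall out of the same geometry.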

Everpix Uses Semantic Understanding to Organize Your Photos

David Worthington of the IBM Smart Planet blog recently wrote, “Everpix, a San Francisco start-up founded just over a year ago, has introduced a new service that uses semantic understanding to highlight and rediscover photos that may be of interest. If you’re like me, the number of photos that you have taken has multiplied since you began using a smartphone that stores pictures in the cloud. Everpix’s theory is that your photo collection is getting so big that it’s too unwieldy to manage. Even casual users will have more than doubled their collections, it says. Its solution is a new ‘highlights’ view that scans each photo to obtain a sense of its scene or composition, and then it pairs those photos together into collections. It attempts to determine image quality, so that out-of-focus shots are not displayed, and the best shot is chosen to represent the collection. You can drill down to see the rest.”
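Everpix has not published how its quality test works, but a common generic heuristic for the “skip out-of-focus shots” step is to score sharpness by the strength of an edge-detecting Laplacian filter response and let the sharpest photo represent the collection. A minimal sketch under that assumption (this is not Everpix’s actual code):

```python
def sharpness(img):
    """Mean squared Laplacian response over a 2D grayscale grid.
    Strong edge responses suggest an in-focus image; a blurry image
    produces weak responses everywhere."""
    h, w = len(img), len(img[0])
    total = count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: neighbors minus 4x the center pixel.
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            total += lap * lap
            count += 1
    return total / count

def best_shot(photos):
    """photos: dict of name -> grayscale grid; pick the sharpest image
    to represent the collection."""
    return max(photos, key=lambda name: sharpness(photos[name]))

sharp_img  = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]              # hard edge
blurry_img = [[90, 100, 90], [100, 110, 100], [90, 100, 90]]  # soft gradient
print(best_shot({"sharp.jpg": sharp_img, "blurry.jpg": blurry_img}))  # sharp.jpg
```

The same score can also serve as a display threshold: photos whose sharpness falls below some cutoff are simply hidden from the highlights view.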

Smart TVs and the Semantic Web

CORDIS recently reported, “If you have bought a new television lately, the chances are it is a lot smarter than your old one. Smart TVs, also known as connected or hybrid televisions, featuring integrated internet connectivity, currently account for around a third of TV sales in Europe. They are the end point in a huge and rapidly expanding value chain driven by the intensifying convergence of television and the internet. Just as accessing the internet solely from a desktop PC is rapidly becoming a thing of the past, so too is broadcast TV in the traditional sense – along with the complaint that ‘there’s nothing on television!’ With connected TVs, channels become interactive, content can be shared, rated and commented among friends, videos can be streamed and watched at will, and a favourite programme will never be missed.”

Playing Pictionary with a Computer?

Phys.org reports that, “Researchers from Brown University and the Technical University of Berlin have developed a computer program that can recognize sketches as they’re drawn in real time. It’s the first computer application that enables ‘semantic understanding’ of abstract sketches, the researchers say. The advance could clear the way for vastly improved sketch-based interface and search applications. The research behind the program was presented last month at SIGGRAPH, the world’s premier computer graphics conference. The paper is now available online, together with a video, a library of sample sketches, and other materials.”
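The published system classifies partial drawings with local stroke descriptors and a trained classifier; a much-simplified stand-in for the same idea is to extract a crude feature vector from the strokes drawn so far and re-classify against labeled templates after every new stroke. The features and template vectors below are invented for illustration, not taken from the paper:

```python
def features(strokes):
    """strokes: list of polylines [(x, y), ...] -> crude feature vector:
    (stroke count, width/height aspect ratio, path length relative to size)."""
    pts = [p for s in strokes for p in s]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    width = (max(xs) - min(xs)) or 1
    height = (max(ys) - min(ys)) or 1
    length = sum(abs(b[0] - a[0]) + abs(b[1] - a[1])
                 for s in strokes for a, b in zip(s, s[1:]))
    return (len(strokes), width / height, length / (width + height))

def classify(strokes, templates):
    """Nearest labeled template in feature space. Calling this after each
    new stroke gives incremental, as-you-draw recognition."""
    f = features(strokes)

    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(f, t))

    return min(templates, key=lambda label: dist(templates[label]))

# Invented template vectors for two shape classes.
templates = {"line": (1, 8.0, 1.0), "square": (4, 1.0, 2.0)}
line = [[(0, 0), (10, 0), (20, 0), (30, 0)]]  # one long horizontal stroke
print(classify(line, templates))  # line
```

Re-running the classifier on every pen-up event is what makes the recognition feel interactive: the guess sharpens as the drawing accumulates detail, much like a game of Pictionary.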

Wolfram Alpha on Semantic Understanding & Democratized Data

Mastufa Ahmed recently interviewed Luc Barthelet, Executive Director of Wolfram|Alpha, to learn more about the company’s search algorithm. Asked about what semantic web technologies Wolfram uses, Barthelet responded, “Wolfram|Alpha is not searching the Semantic Web per se. It takes search queries and maps them to an exact semantic understanding of the query, which is then processed against its curated knowledge base. The main technology used is Mathematica, whose language is used to describe the semantic queries; Mathematica technology is also used to build up the natural language parser and the data curation pipeline, and to perform the data processing, computation and visualization.”
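The pipeline Barthelet describes — free text mapped to an exact structured query, then evaluated against curated data — can be sketched in miniature. The parser and knowledge base below are toys for illustration (Wolfram|Alpha’s real parser is written in Mathematica and is vastly more capable, and the figures here are rough illustrative values):

```python
import re

# Toy curated knowledge base: (property, entity) -> value.
# Rough illustrative figures only.
KNOWLEDGE = {
    ("population", "france"): 66_000_000,
    ("population", "japan"): 125_000_000,
}

def parse(query):
    """Map free text to an exact (property, entity) semantic query."""
    m = re.match(r"(?:what is the )?(\w+) of (\w+)\??", query.lower())
    if not m:
        raise ValueError("query not understood")
    return (m.group(1), m.group(2))

def answer(query):
    """Evaluate the parsed semantic query against the curated data."""
    return KNOWLEDGE[parse(query)]

print(answer("What is the population of France?"))  # 66000000
```

The key distinction in the quote survives even at this scale: the system never searches documents — it converts the question into an exact structured form and computes the answer from curated facts.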

Lexalytics Amps Up the Semantic Understanding of Salience 5.0

Bill Ives recently discussed the advancements of Salience 5.0 with Lexalytics CEO Jeff Catlin. Ives writes, “Semantic technology differs from most computing as it learns on the job. This can provide great benefits but it can also be time-consuming… [Lexalytics] came up with a clever idea to reduce the learning curve. They had their semantic engine digest Wikipedia to gain an understanding of human thought and build their Concept Matrices™. This allows it to do things that most computer technology would struggle with such as understanding that pizza is a food even though the word food was never associated with pizza in the text it was looking at.”
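The Wikipedia-digestion idea can be illustrated with distributional co-occurrence statistics: words used in similar contexts get similar vectors, so “pizza” lands near “food” even if the corpus never puts the two words in the same sentence. A toy sketch of that principle — the five-sentence “corpus” is invented, and Lexalytics’ actual Concept Matrix is proprietary:

```python
from collections import defaultdict

# Tiny invented corpus. Note "pizza" and "food" never co-occur directly.
corpus = [
    "we eat pizza for dinner",
    "we eat food for dinner",
    "food can be delicious",
    "pizza can be delicious",
    "the hallway has a door",
]

# Count context words within a +/-2 word window around each word.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for c in words[max(0, i - 2):i + 3]:
            if c != w:
                cooc[w][c] += 1

def similarity(a, b):
    """Cosine similarity of two words' context-count vectors."""
    contexts = set(cooc[a]) | set(cooc[b])
    dot = sum(cooc[a][c] * cooc[b][c] for c in contexts)
    na = sum(v * v for v in cooc[a].values()) ** 0.5
    nb = sum(v * v for v in cooc[b].values()) ** 0.5
    return dot / (na * nb)

# "pizza" and "food" share contexts (eat, dinner, delicious), so they come
# out similar despite never appearing together; "hallway" does not.
print(similarity("pizza", "food") > similarity("pizza", "hallway"))  # True
```

Scaled up from five sentences to all of Wikipedia, the same statistics yield the kind of broad world knowledge the quote describes — knowing that pizza is a food without ever being told so explicitly.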