An article has been posted about how scientists are trying to develop computers that can think and see like humans do. The article states, “Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, has completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people. The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently ‘label’ each pictured object with certain properties, whilst undergoing an fMRI brain scan.”

It continues, “The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool). After ‘training’ the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer using auditory data but testing it with orthographic data, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.”
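The train-on-one-session, test-on-another workflow described above can be sketched in code. The article does not name the algorithms the researchers used, so the following is a minimal illustration under stated assumptions: synthetic “scan” feature vectors stand in for real fMRI data, a simple nearest-centroid classifier stands in for the study's pattern-identification algorithms, and a random per-voxel shift stands in for the difference between auditory and orthographic presentation.

```python
# Hypothetical sketch of within-modality vs cross-modal decoding of two
# semantic categories (0 = animal, 1 = tool). Not the study's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 20  # toy feature dimension; real fMRI data has far more voxels

def make_scans(n_per_class, modality_shift):
    """Generate synthetic feature vectors for the two semantic categories.

    modality_shift is a fixed per-voxel distortion standing in for the
    difference between auditory and orthographic presentation.
    """
    X, y = [], []
    for label in (0, 1):
        mean = np.zeros(N_VOXELS)
        mean[label::2] = 1.0  # category-specific activation pattern
        X.append(rng.normal(mean + modality_shift, 0.8,
                            size=(n_per_class, N_VOXELS)))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

def fit_centroids(X, y):
    """'Train' by averaging the scans belonging to each category."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each scan to the category with the nearest centroid."""
    labels = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]

no_shift = np.zeros(N_VOXELS)
ortho_shift = rng.normal(0.0, 0.5, N_VOXELS)  # assumed modality difference

X_train, y_train = make_scans(60, no_shift)   # 'auditory' training scans
X_aud, y_aud = make_scans(40, no_shift)       # held-out 'auditory' scans
X_orth, y_orth = make_scans(40, ortho_shift)  # 'orthographic' scans

model = fit_centroids(X_train, y_train)
within = (predict(model, X_aud) == y_aud).mean()
cross = (predict(model, X_orth) == y_orth).mean()
print(f"within-modality accuracy: {within:.2f}")
print(f"cross-modal accuracy:     {cross:.2f}")
```

As in the study, testing on the same modality the classifier was trained on tends to score higher than testing across modalities, because the learned activation patterns are partly modality-specific.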

The article goes on, “Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can ‘think’ and ‘see’ in the same way as humans. It is only in recent years that the field of semantics has been explored through the analysis of brain scans and brain activity in response to both language-based and visual inputs. Teaching computers to read brain scans and interpret the language encoded in brain activity could have a variety of uses in medical science and artificial intelligence.”

