Posts Tagged ‘human brain’
Some people say that reading “Harry Potter and the Sorcerer’s Stone” taught them the importance of friends, or that easy decisions are seldom right. Carnegie Mellon University scientists used a chapter of that book to learn a different lesson: identifying what different regions of the brain are doing when people read. Read more
The Wharton School of Business recently wrote, “Knowledge@Wharton spoke with Brad Becker, chief design officer for IBM Watson, about current and future applications of cognitive computing and how he hopes to make computers ‘more humane.’ An edited version of the conversation follows.” Asked how his background in user experience design affects his role in the Watson project, Becker commented, “[It’s based on] the idea that technology should work for people, not the other way around. For a long time, people have worked to better understand technology. Watson is technology that works to understand us. It’s more humane, it’s helpful to humans, it speaks our language, it can deal with ambiguity, it can create hypotheses, it can learn from us. And, of course, since it’s a computer, it can scale as much as needed and has recall far beyond what humans have.” Read more
Tokyo, Nov 25, 2013 – (JCN Newswire) – Fujitsu Laboratories Ltd. and Japan’s National Institute of Informatics (NII) announced today that under NII’s “artificial brain” project, known for short as the “Todai Robot Project” (“Can a Robot Pass the University of Tokyo (Todai) Entrance Exam?”), their entry has taken a practice exam held by Yoyogi Seminar – Education Research Institute, a leading Japanese preparatory school.
Under the Todai Robot project, Fujitsu Laboratories has been conducting joint research and participating as a core member of the math team. The overall project, led by NII professor Noriko Arai, commenced in 2011 with the goal of enabling an artificial brain to score high marks by 2016 on the test administered by the National Center for University Entrance Examinations (the “Center Test”), and to cross the threshold required for admission to the University of Tokyo by 2021. Read more
After the recent news of the new Watson Ecosystem, IBM is now suggesting that computers may be able to emulate the human brain in the near future, Jon Xavier of the Silicon Valley Business Journal reports. Xavier writes, “You can date the first modern era of computing, in which massive mainframes like ENIAC were put to work on math and business problems too complex for the simple counting machines that came before, to a series of talks about computer science in the late 1940s… On Nov. 19, IBM held what it hopes will be another such watershed conference at its Almaden Research Center in San Jose — a colloquium on emerging computing technologies modeled on how the human mind works.” Read more
Lucas Laursen of IEEE Spectrum reports, “As [Henry] Markram has been telling everyone since he got the €1 billion nod to lead the Human Brain Project, the way researchers study the brain needs to change. His approach—and it’s not the only one—stands on an emerging type of computing that he and others claim will let machines learn more like humans do. They could then offer generalizations from what’s known about a handful of neural pathways and find shortcuts to understanding the rest of the brain, he argues. The concept will rely as much on predictions of neural behavior as on experimental observations.” Read more
Larry Greenemeier of Scientific American recently wrote, “As computers have matured over time, the human brain has no way of keeping up with silicon’s rapid-fire calculating abilities. But the human cognitive repertoire extends far beyond just fast calculations. For that reason, researchers are still trying to develop computers that can recognize, interpret and act upon information—like the kind pulled in by eyes, ears, nose and skin—as quickly and efficiently as good old-fashioned gray matter. Such cognitive systems are critical to transforming waves of big data collected by sensor networks into meaningful representations of, say, automobile traffic on a particular roadway or maritime weather conditions.” Read more
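The cognitive systems the article describes are far more sophisticated, but the basic step of turning raw sensor readings into a “meaningful representation” of, say, traffic can be sketched in a few lines. The road names, readings, and congestion threshold below are all invented for the example:

```python
from collections import defaultdict
from statistics import mean

# Raw (road, speed-in-mph) readings from a hypothetical sensor network.
readings = [
    ("I-280", 61), ("I-280", 58), ("US-101", 22),
    ("US-101", 18), ("I-280", 64), ("US-101", 25),
]

# Group the raw stream by road segment.
by_road = defaultdict(list)
for road, speed in readings:
    by_road[road].append(speed)

# Reduce each group to a small, meaningful summary.
summary = {road: {"avg_mph": round(mean(v), 1), "congested": mean(v) < 35}
           for road, v in by_road.items()}

print(summary["US-101"])  # {'avg_mph': 21.7, 'congested': True}
```

The raw stream is unusable as-is; the summary dictionary is the kind of compact representation a downstream system (or a person) can actually act on.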
Michael Harper of redOrbit.com recently reported, “No matter how disturbing or even frightening the thought may be, many scientists are working to unlock the mysteries of the human brain, create a model, and then place this model in a robot. Chris Eliasmith, a professor at Canada’s University of Waterloo, has been working to complete his own human model to think and perceive just as a human would. However, as explained in his recent paper, Spaun: A Perception-Cognition-Action Model Using Spiking Neurons, Eliasmith’s model is able to perceive and then act. Called Spaun (short for Semantic Pointer Architecture Unified Network), this model can take in information, such as numbers and shapes, remember the information, and then move an arm to draw out the numbers and shapes it’s seen.” Read more
MedicalXpress.com has posted an article about how scientists are trying to develop computers that can think and see like humans do. The article states, “Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people. The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently ‘label’ each pictured object with certain properties, whilst undergoing an fMRI brain scan.” Read more
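The study’s actual decoding pipeline is not described in the article, but the general idea of predicting a semantic category from a pattern of brain activity can be sketched with a toy nearest-centroid classifier over synthetic “voxel” vectors. Everything below — the voxel counts, activation levels, and noise — is invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical setup: each "scan" is a vector of voxel activations.
# In this synthetic data, animals activate the first half of the voxels
# more strongly and tools the second half, plus Gaussian noise.
N_VOXELS = 20

def make_scan(category):
    base = [0.2] * N_VOXELS
    lo, hi = (0, 10) if category == "animal" else (10, 20)
    for i in range(lo, hi):
        base[i] += 1.0
    return [v + random.gauss(0, 0.3) for v in base]

# "Train" by averaging each category's scans into a centroid.
train = [(make_scan(c), c) for c in ["animal", "tool"] * 20]
centroids = {}
for cat in ("animal", "tool"):
    scans = [s for s, c in train if c == cat]
    centroids[cat] = [sum(col) / len(scans) for col in zip(*scans)]

def predict(scan):
    """Assign a new scan to the nearest centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(scan, centroids[c]))

print(predict(make_scan("animal")))  # expected: animal
```

Real fMRI decoding works on tens of thousands of noisy voxels and uses far stronger classifiers, but the train-on-labeled-scans, predict-on-new-scans structure is the same.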
Biologically inspired intelligence. That impressive catchphrase is used by ai-one – but what does it mean?
“The idea is that our technology works as the human brain does,” says Olin Hyde, VP of business development at the vendor. “And the idea of the business is to make it easy for developers to build intelligent apps. We want them to embed AI (artificial intelligence) into every device.” By making AI easy, the hope is to empower developers to build applications that extract intelligence from content and discover meaningful patterns in it.
The issue with AI, he says, is that the way it’s traditionally been approached doesn’t correspond to how the human brain works so much as it is “based on some fancy math that was discovered in the 18th and 19th centuries.” But people were intelligent long before then, learning contextually from patterns, gaining knowledge dynamically through a series of associations, building understanding autonomically without models. Guided by that foundational idea, ai-one looks to what it calls the holo (whole) semantic (meaning) data space (the dynamic area into which information is fed).
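ai-one’s actual technology is proprietary, but the core idea — learning knowledge through associations from patterns, with no predefined model — can be sketched with simple co-occurrence counting. The corpus and function names below are our own, purely illustrative:

```python
from collections import defaultdict
from itertools import combinations

# Count which words appear together, with no ontology or model given
# in advance: the associations emerge from the data alone.
cooccur = defaultdict(lambda: defaultdict(int))

documents = [
    "dog barks at cat",
    "cat chases mouse",
    "dog chases cat",
    "mouse eats cheese",
]

for doc in documents:
    words = set(doc.split())
    for a, b in combinations(words, 2):
        cooccur[a][b] += 1
        cooccur[b][a] += 1

def associates(word, top=3):
    """Return the strongest learned associations for a word."""
    return sorted(cooccur[word], key=cooccur[word].get, reverse=True)[:top]

print(associates("cat"))
```

After seeing only four sentences, the system already associates “cat” most strongly with “dog” and “chases” — knowledge built dynamically from co-occurrence, not from a hand-built model.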
John Markoff of the New York Times reports, “Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain. There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own. Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.” Read more
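Google’s network ran across 16,000 processors and learned rich visual features, so nothing small can reproduce it; but as a loose analogy for its key property — discovering structure in unlabeled data — a toy one-dimensional k-means can find two groups in a pile of points without ever being told the groups exist. All data below is synthetic:

```python
import random

random.seed(1)

# Unlabeled data drawn from two hidden groups (around 0 and around 5).
data = [random.gauss(0, 0.5) for _ in range(50)] + \
       [random.gauss(5, 0.5) for _ in range(50)]
random.shuffle(data)

# Simple 1-D k-means with k=2, initialized at the data's extremes.
centers = [min(data), max(data)]
for _ in range(10):
    clusters = [[], []]
    for x in data:
        nearest = min((0, 1), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 1) for c in centers))  # roughly [0.0, 5.0]
```

Like the “cat neuron” that emerged from unlabeled YouTube frames, the two centers emerge from the data itself — no labels, no supervision — just at a vastly smaller scale.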