Posts Tagged ‘artificial intelligence’

Sophie Curtis of The Telegraph reports, “Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia’ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles. ‘Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent.’” Read more
Dave Altavilla of Forbes reports, “Microsoft has a big opportunity tomorrow when they unveil the next version of Windows known by the code name ‘Threshold’ and what could ultimately be dubbed Windows 9. Though its official name has yet to be confirmed, Microsoft is holding an event tomorrow in San Francisco and the unveil invitations sent out hint at ‘what’s next for Windows.’ Lately there has been a flurry of reports and leaks of what is widely known as Windows 9, though there is still some buzz that Microsoft may brand the OS by a different name upon launch. Regardless, here are a few key highlights on what I think we’ll see the Redmond team unveil with this new OS, which is expected to cure the many ills users have been complaining of with Windows 8.” Read more
Cade Metz of Wired reports, “When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence. Applying its massive cluster of computers to an emerging breed of AI algorithm known as ‘deep learning,’ the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.” Read more
Derrick Harris of GigaOM recently wrote, “Jeff Hawkins is best known for bringing us the Palm Pilot, but he’s working on something that could be much, much bigger. For the past several years, Hawkins has been studying how the human brain functions with the hope of replicating it in software. In 2004, he published a book about his findings. In 2012, Numenta, the company he founded to commercialize his work, finally showed itself to the world after roughly seven years operating in stealth mode. I recently spoke with Hawkins to get his take on why his approach to artificial intelligence will ultimately overtake other approaches, including the white-hot field of deep learning. We also discussed how Numenta has survived some early business hiccups and how he plans to keep the lights on and the money flowing in.” Read more
Greg MacSweeney of Wall Street and Tech recently wrote, “It’s relatively easy to find information on public companies. Bloomberg, Thomson Reuters, and Dun & Bradstreet, for example, all have in-depth information that is accessible to anyone with a subscription. But where do investment bankers, venture capitalists, and other investors find reliable information about private companies? If you talk to investment bankers, or other investors who are looking for information on non-public companies, it quickly becomes apparent there is no easy answer. Investment bankers rely mostly on Google searches and a combination of information gathered from Hoovers, S&P Capital IQ, Dun & Bradstreet, and others. But it is a laborious manual process to do due diligence on private companies.” Read more
We are seeing the beginning of the new artificial intelligence economy. This has many parallels to the infrastructure-as-a-service wave led by Amazon Web Services (AWS), which provided the world with access to highly scalable compute capacity. AI technologies are being exposed as core infrastructure via the cloud, enabling companies to build smarter applications and services.
If you think you aren’t already a part of the AI economy, think again. Most of us are already participating through our interaction with popular applications and services. For example, Google Maps uses AI technology to better understand Street View images to give more accurate directions; and both Siri and Google Now use a combination of speech recognition, language understanding, and predictive modeling to act as digital personal assistants.
So the big question is: why now? Historically, AI technologies have been limited by a lack of data, insufficient compute capability, and poor algorithms. We’re now witnessing the convergence of three major forces: ready access to massive data, highly scalable on-demand compute capability, and a number of core algorithmic breakthroughs that enable us to better train robust AI systems. This is a perfect storm that has resulted in significant advances in computers’ ability to understand text, images, video, and speech. Read more
A recent announcement on EurekAlert! states: “Researchers from North Carolina State University have developed artificial intelligence (AI) software that is significantly better than any previous technology at predicting what goal a player is trying to achieve in a video game. The advance holds promise for helping game developers design new ways of improving the gameplay experience for players. ‘We developed this software for use in educational gaming, but it has applications for all video game developers,’ says Dr. James Lester, a professor of computer science at NC State and senior author of a paper on the work. ‘This is a key step in developing player-adaptive games that can respond to player actions to improve the gaming experience, either for entertainment or – in our case – for education.’ The researchers used ‘deep learning’ to develop the AI software. Deep learning describes a family of machine learning techniques that can extrapolate patterns from large collections of data and make predictions. Deep learning has been actively investigated in various research domains such as computer vision and natural language processing in both academia and industry.”
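To make the idea concrete, here is a minimal sketch of the kind of model the announcement describes: a small feed-forward network trained to map a fixed-length encoding of a player's recent actions to one of several candidate goals. The feature encoding, layer sizes, and training data below are made up for illustration; they are not the NC State researchers' actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8   # hypothetical fixed-length encoding of recent player actions
N_GOALS = 3      # hypothetical number of candidate goals
HIDDEN = 16

# Toy data: each goal leaves a strong signature on one feature.
y = rng.integers(0, N_GOALS, size=90)
X = rng.normal(0, 0.3, (90, N_FEATURES))
X[np.arange(90), y] += 2.0

W1 = rng.normal(0, 0.1, (N_FEATURES, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_GOALS)); b2 = np.zeros(N_GOALS)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)                  # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)        # softmax over goals

for _ in range(300):                                   # plain gradient descent
    h, p = forward(X)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0                     # dLoss/dlogits for cross-entropy
    g /= len(y)
    dh = g @ W2.T
    dh[h <= 0] = 0.0                                   # ReLU gradient
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(0)

_, probs = forward(X)
accuracy = (probs.argmax(axis=1) == y).mean()
print("training accuracy:", accuracy)
```

A production system would of course use deeper networks, richer gameplay features, and held-out evaluation; the point here is only the shape of the approach, i.e. learning a mapping from observed actions to a distribution over goals.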
MonkeyLearn Unveils its Powerful and Affordable Artificial Intelligence Technology Platform for Developers
A recent release from MonkeyLearn states, “Developers, startups, and small and medium-sized enterprises (SMEs) now have access to a powerful, customizable, and affordable artificial intelligence (AI) technology platform for text mining, MonkeyLearn. As one of the first companies to meet the demands of a new, sophisticated era of AI, MonkeyLearn will be unveiled in beta today at TechCrunch Disrupt San Francisco. MonkeyLearn’s patent-pending algorithm creation engine allows developers in any industry to quickly and easily create and incorporate text mining capabilities into their own platforms, applications and websites, regardless of their experience with AI technologies. Artificial intelligence technologies for text mining have become a priority for Internet and technology companies, as they allow them to understand users’ interests and provide similar or relevant recommendations.”
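For readers unfamiliar with text mining, a minimal bag-of-words classifier illustrates the kind of capability a platform like MonkeyLearn exposes as a service. The naive Bayes approach, categories, and training sentences below are made up for the example and are not MonkeyLearn's actual algorithm or API.

```python
from collections import Counter, defaultdict
import math

# Illustrative training data: (text, category) pairs.
train = [
    ("the screen is bright and the battery lasts", "electronics"),
    ("great battery life and fast processor", "electronics"),
    ("the plot was dull and the acting wooden", "movies"),
    ("a thrilling film with a great cast", "movies"),
]

word_counts = defaultdict(Counter)   # category -> word frequencies
class_counts = Counter()             # category -> document count
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def classify(text):
    """Naive Bayes with add-one smoothing: pick the most probable category."""
    words = text.split()
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(classify("amazing battery"))
```

A hosted platform wraps this kind of pipeline behind an API, so developers supply labeled examples and query the trained model without implementing the statistics themselves.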
IPsoft needs an R&D Engineer. The job description states, “Amelia is the next generation human-computer dialog system that acts as your personal student, instructor, assistant, or friend. Amelia is based on the latest state-of-the-art technologies in natural language processing, information retrieval, machine learning, and more. What distinguishes Amelia from previous generation human-computer dialog systems is its learning ability. Amelia is capable of understanding the syntax and semantics of natural language and automatically builds its own neural ontology from them. If you want to teach Amelia about a certain object, you simply describe the object in natural language, and Amelia builds a neural ontology for the object automatically. Once the neural ontology is built, Amelia can explain or answer questions about the object by traversing the ontology. Objects do not have to be specified upfront; you can talk about random topics and expect Amelia to build neural ontologies for objects that are newly introduced during your conversation with Amelia. When you ask questions about things that Amelia does not have neural ontologies for, Amelia tries to find the most appropriate answer from the World Wide Web. These include questions about the weather, current events, historical/geopolitical facts, etc.”
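The describe-then-traverse loop in the job description can be sketched in miniature: parse simple “X is a Y” / “X has a Z” statements into a graph, then answer questions about an object by walking its facts, including those inherited along “is” links. Amelia's actual system is far more sophisticated; every name and pattern below is illustrative only.

```python
import re
from collections import defaultdict

ontology = defaultdict(list)   # subject -> list of (relation, object) facts

def teach(sentence):
    """Extract a single (subject, relation, object) fact from a toy sentence."""
    m = re.match(r"(?:an? )?(\w+) (is|has) (?:an? )?(\w+)", sentence.lower())
    if m:
        subj, rel, obj = m.groups()
        ontology[subj].append((rel, obj))

def ask(subject):
    """Collect direct facts plus facts inherited through 'is' links."""
    facts, seen, stack = [], set(), [subject.lower()]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        for rel, obj in ontology[node]:
            facts.append((node, rel, obj))
            if rel == "is":
                stack.append(obj)   # follow the is-a chain upward
    return facts

teach("a penguin is a bird")
teach("a bird has feathers")
print(ask("penguin"))
```

Asking about “penguin” surfaces both its own fact and the feathers fact inherited from “bird”, which is the traversal behavior the description attributes to Amelia's neural ontologies.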
Daniela Hernandez of Wired recently wrote, “If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his ‘brain’ was on the small side. Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.” Read more