A recent announcement on EurekAlert! states: “Researchers from North Carolina State University have developed artificial intelligence (AI) software that is significantly better than any previous technology at predicting what goal a player is trying to achieve in a video game. The advance holds promise for helping game developers design new ways of improving the gameplay experience for players. ‘We developed this software for use in educational gaming, but it has applications for all video game developers,’ says Dr. James Lester, a professor of computer science at NC State and senior author of a paper on the work. ‘This is a key step in developing player-adaptive games that can respond to player actions to improve the gaming experience, either for entertainment or – in our case – for education.’ The researchers used ‘deep learning’ to develop the AI software. Deep learning describes a family of machine learning techniques that can extrapolate patterns from large collections of data and make predictions. Deep learning has been actively investigated in research domains such as computer vision and natural language processing, in both academia and industry.”
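To make the idea of goal prediction concrete, here is a toy sketch: given past play traces labeled with the goal the player eventually achieved, score a new (possibly partial) action sequence against each goal. This is a simple count-based illustration with invented action and goal names, not the deep learning model from the NC State paper.

```python
from collections import Counter, defaultdict

class GoalPredictor:
    """Toy goal predictor over labeled action traces (illustrative only)."""

    def __init__(self):
        # counts[action][goal] = how often `action` appeared in traces
        # that ended with the player achieving `goal`
        self.counts = defaultdict(Counter)

    def train(self, traces):
        for actions, goal in traces:
            for action in actions:
                self.counts[action][goal] += 1

    def predict(self, actions):
        # Sum, per goal, how strongly each observed action is associated with it
        scores = Counter()
        for action in actions:
            scores.update(self.counts.get(action, Counter()))
        return scores.most_common(1)[0][0] if scores else None

traces = [
    (["open_door", "read_note", "talk_npc"], "solve_mystery"),
    (["pick_sword", "enter_cave", "fight"], "defeat_boss"),
    (["read_note", "talk_npc", "search_lab"], "solve_mystery"),
]
model = GoalPredictor()
model.train(traces)
print(model.predict(["talk_npc", "read_note"]))  # → solve_mystery
```

A real player-adaptive game would run a far richer sequence model over telemetry, but the interface is the same: observe actions, emit a goal estimate.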
MonkeyLearn Unveils its Powerful and Affordable Artificial Intelligence Technology Platform for Developers
A recent release from MonkeyLearn states, “Developers, startups, and small and medium-sized enterprises (SMEs) now have access to a powerful, customizable, and affordable artificial intelligence (AI) technology platform for text mining, MonkeyLearn. As one of the first companies to meet the demands of a new, sophisticated era of AI, MonkeyLearn will be unveiled in beta today at TechCrunch Disrupt San Francisco. MonkeyLearn’s patent-pending algorithm creation engine allows developers in any industry to quickly and easily create and incorporate text mining capabilities into their own platforms, applications and websites, regardless of their experience with AI technologies. Artificial intelligence technologies for text mining have become a priority for Internet and technology companies, as they allow them to understand users’ interests and provide similar or relevant recommendations.”
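The core text-mining capability such platforms expose is text classification. Below is a minimal naive Bayes classifier over a toy sentiment dataset; it illustrates the technique generically and is not MonkeyLearn's actual API or algorithm.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Bag-of-words naive Bayes with add-one smoothing (toy sketch)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()

    def train(self, samples):
        for text, label in samples:
            self.label_counts[label] += 1
            self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + smoothed log likelihood of each word under the label
            score = math.log(self.label_counts[label] / total)
            vocab = sum(self.word_counts[label].values())
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / (vocab + 1000))
            if score > best_score:
                best, best_score = label, score
        return best

clf = NaiveBayesClassifier()
clf.train([
    ("great product love it", "positive"),
    ("terrible support very slow", "negative"),
    ("love the new features", "positive"),
    ("slow and buggy release", "negative"),
])
print(clf.classify("love this release"))  # → positive
```

The appeal of a hosted platform is precisely that developers get this capability through an API call instead of building and tuning the model themselves.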
IPsoft needs an R&D Engineer. The job description states, “Amelia is the next generation human-computer dialog system that acts as your personal student, instructor, assistant, or friend. Amelia is based on the latest state-of-the-art technologies in natural language processing, information retrieval, machine learning, and more. What distinguishes Amelia from previous generation human-computer dialog systems is its learning ability. Amelia is capable of understanding the syntax and semantics of natural language and automatically builds its own neural ontology from them. If you want to teach Amelia about a certain object, you simply describe the object in natural language, and Amelia builds a neural ontology for the object automatically. Once the neural ontology is built, Amelia can explain or answer questions about the object by traversing the ontology. Objects do not have to be specified upfront; you can talk about random things and expect Amelia to build neural ontologies for objects that are newly introduced during your conversation with Amelia. When you ask questions about things that Amelia does not have neural ontologies for, Amelia tries to find the most appropriate answer on the World Wide Web. These include questions about the weather, current events, historical/geopolitical facts, etc.”
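The idea of deriving a structured representation from a natural language description can be sketched very crudely. The pattern-matching below stands in for a real NLP pipeline; the relation names (`is_a`, `has`, `can`) are invented for illustration and have nothing to do with Amelia's actual internals.

```python
import re

def build_ontology(description):
    """Extract a tiny structured record from a sentence via hand-written
    patterns. A real system would use full parsing, not regexes."""
    onto = {"is_a": None, "has": [], "can": []}
    m = re.search(r"is an? (\w+)", description)
    if m:
        onto["is_a"] = m.group(1)            # taxonomic parent
    onto["has"] = re.findall(r"has (?:an? )?(\w+)", description)  # attributes
    onto["can"] = re.findall(r"can (\w+)", description)           # abilities
    return onto

facts = build_ontology("A cat is an animal that has fur and can purr")
print(facts)  # → {'is_a': 'animal', 'has': ['fur'], 'can': ['purr']}
```

Answering a question then becomes a lookup (or graph traversal) over these extracted relations rather than a search over raw text.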
Daniela Hernandez of Wired recently wrote, “If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his ‘brain’ was on the small side. Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.” Read more
Sugandh Dhawan of iamwire.com reports, “New Delhi based SaaS startup, Contify, has launched an enterprise grade competitive intelligence (CI) platform to cater to large organisations dealing with the job of identifying, sourcing, curating, and disseminating critical business information across several functions. Founded in 2009 as a content syndication business, Contify is a product company focused on machine learning, artificial intelligence, and natural language processing. It offers an intelligence platform to enable businesses to monitor their competitors, customers, and industries, along with critical market variables that impact one’s business.” Read more
Lars Hard of Beta News recently wrote, “Artificial intelligence (AI) has become a bit of a buzzword among technology professionals (and even within the mainstream public) but truthfully, most people do not know how it works or how it is already being integrated within leading enterprise businesses. AI for businesses today mostly consists of machine learning, wherein algorithms teach systems to learn from data in order to automate and optimize processes, predict outcomes, and gain insights. This simplifies, scales, and even introduces important new processes and solutions for complex business problems, as machine learning applications learn and improve over time. From medical diagnostics systems, search and recommendation engines, robotics, and risk management systems to security systems, in the future nearly everything connected to the internet will use some form of machine learning algorithm in order to bring value.” Read more
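“Learning from data to predict outcomes” can be illustrated with one of the simplest possible models: a nearest-neighbour classifier over historical examples. The feature names and numbers below are invented for illustration; real risk-management systems use far richer models and pipelines.

```python
import math

def nearest_neighbor(train, point):
    """Predict the label of `point` by copying the label of the closest
    training example (1-NN). `train` is a list of ((features...), label)."""
    return min(train, key=lambda item: math.dist(item[0], point))[1]

# Toy history: (transaction_amount, account_age_years) -> risk label
history = [
    ((9500.0, 0.1), "high_risk"),
    ((120.0, 4.0), "low_risk"),
    ((8700.0, 0.3), "high_risk"),
    ((60.0, 7.5), "low_risk"),
]
print(nearest_neighbor(history, (9000.0, 0.2)))  # → high_risk
```

The point is the pattern, not the model: past outcomes become training data, and new cases are scored against what the system has already seen.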
Steve Ranger of Tech Republic reports, “Qwerty [the standard keyboard layout] was a compromise from the start. And as such you’d expect it to be swept away as the technology changed. And yet this odd layout became the standard, used since on billions of devices from typewriters to tablets and PCs. Even as the cold steel of the typewriter was replaced by the cool glass of a touchscreen smartphone, Qwerty has continued to dominate. That is, until now. A number of companies are rethinking the keyboard for the digital age, led by a small UK startup called SwiftKey, so that a mere 150 years after it was first created, the keyboard could finally be made to behave just how the user wants it to.” Read more
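Predictive text of the kind SwiftKey popularized ultimately comes down to modelling which word is likely to follow the one just typed. Here is a bare-bones bigram model over a toy corpus; production keyboards use far more sophisticated language models plus per-user personalization.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Suggest the most frequent next word seen after a given word."""

    def __init__(self):
        self.next_words = defaultdict(Counter)

    def train(self, corpus):
        for sentence in corpus:
            words = sentence.lower().split()
            for a, b in zip(words, words[1:]):
                self.next_words[a][b] += 1

    def suggest(self, word):
        candidates = self.next_words.get(word.lower())
        return candidates.most_common(1)[0][0] if candidates else None

model = BigramPredictor()
model.train([
    "see you later today",
    "see you soon",
    "see you later tonight",
])
print(model.suggest("you"))  # → later
```

A keyboard that learns from what you actually type is, in effect, continuously retraining a model like this on your own messages.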
Katherine Noyes of Tech News World reports, “The inventors of Apple’s Siri personal assistant have launched an independent effort that could make their first offspring look kind of dumb. Billed by its creators as ‘the global brain,’ Viv aims to radically simplify the world by providing an intelligent interface to everything. ‘They are trying to abstract Siri’s [natural-language processing] interface so you could apply it into other applications and domains,’ Raj Singh, CEO and founder of Tempo AI, told TechNewsWorld. ‘For example, what if I wanted to integrate a Siri-like interface into the Yelp app or the Expedia app?’ Currently, ‘there isn’t a good facility to do this,’ he said.” Read more
Catherine Havasi, CEO of Luminoso recently wrote for Tech Crunch, “Everyone knows that ‘water is wet,’ and ‘people want to be happy,’ and we assume everyone we meet shares this knowledge. It forms the basis of how we interact and allows us to communicate quickly, efficiently, and with deep meaning. As advanced as technology is today, its main shortcoming as it becomes a large part of daily life in society is that it does not share these assumptions. We find ourselves talking more and more to our devices — to our mobile phones and even our televisions. But when we talk to Siri, we often find that the rules that underlie her can’t comprehend exactly what we want if we stray far from simple commands. For this vision to be fulfilled, we’ll need computers to understand us as we talk to each other in a natural environment. For that, we’ll need to continue to develop the field of common-sense reasoning — without it, we’re never going to be able to have an intelligent conversation with Siri, Google Glass or our Xbox.” Read more
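Common-sense knowledge is often stored as relational triples (ConceptNet, which grew out of work Havasi was involved in, takes roughly this shape). The toy store below answers a query by chaining a single inference hop; the facts, relation names, and inference rule are invented for illustration.

```python
# A tiny set of (subject, relation, object) common-sense triples.
FACTS = {
    ("water", "HasProperty", "wet"),
    ("person", "Desires", "happiness"),
    ("rain", "MadeOf", "water"),
}

def has_property(thing, prop, facts=FACTS):
    """Check a property directly, then try one hop of inference:
    if X is made of Y and Y has the property, assume X does too."""
    if (thing, "HasProperty", prop) in facts:
        return True
    for (a, rel, b) in facts:
        if a == thing and rel == "MadeOf" and (b, "HasProperty", prop) in facts:
            return True
    return False

print(has_property("rain", "wet"))   # → True (rain is made of water, water is wet)
print(has_property("rain", "happy")) # → False
```

The hard part of common-sense reasoning is not the lookup but acquiring millions of such facts and knowing which inferences are safe to chain.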
Will a robot take your job in the future? Given robots’ increasing sophistication, it’s not surprising that the topic is of growing concern to many people. The Semantic Web Blog has reported, for example, on robots that are learning to do tasks in response to humans’ natural language, and on a talking robot on a space journey, covering the gamut from personal assistant to astronaut.
The Pew Research Center released a report last week entitled AI, Robotics and the Future of Jobs. It raises the question of whether advances in robotics and artificial intelligence will displace more jobs than they create by 2025, but the experts whose opinions the report draws upon haven’t reached a consensus on that point yet. Forty-eight percent believe both blue- and white-collar jobs are at risk, and that the future will see greater income inequality, more permanent unemployment, and greater social disruption as a result. The other 52 percent also expect that many jobs currently done by real people will be taken over by robots or digital agents – but with the happier prospect that humans will figure out new jobs and industries to replace the livelihoods they can no longer earn with their own brains or hands.