Northwestern University reports, “Someday we might be able to build software for our computers simply by talking to them. Ken Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science, and his team have developed a program that allows them to teach computers as you would teach a small child—through natural language and sketching. Called Companion Cognitive Systems, the architecture is capable of human-like learning. ‘We think software needs to evolve to be more like people,’ Forbus says. ‘We’re trying to understand human minds by trying to build them.’ Forbus has spent his career working in the area of artificial intelligence, creating computer programs and simulations in an attempt to understand how the human mind works. At the heart of the Companions project is the claim that much of human learning is based on analogy. When presented with a new problem or situation, the mind identifies similar past experiences to decide on an action. This allows us to build upon our own knowledge with the ability to continually adapt and learn.” Read more
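The analogy-based learning the Companions project describes can be pictured as case retrieval: compare a new situation against stored experiences and reuse the action from the closest match. The sketch below is a deliberately simple illustration of that idea, assuming set-based features and Jaccard similarity; it is not Forbus's actual Structure-Mapping Engine, which matches relational structure, not flat features.

```python
# Toy sketch of analogy-style retrieval: represent past cases as
# feature sets, find the most similar prior case, and reuse its action.
# Illustration only -- not the Companions architecture itself.

def jaccard(a, b):
    """Overlap between two feature sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

def retrieve_analog(cases, situation):
    """Return the stored (features, action) case most similar to the new situation."""
    return max(cases, key=lambda case: jaccard(case[0], situation))

cases = [
    ({"round", "rolls", "rubber"}, "bounce it"),
    ({"flat", "paper", "light"}, "fold it"),
]

features, action = retrieve_analog(cases, {"round", "rubber", "small"})
# The new object shares "round" and "rubber" with the first case,
# so the analogous action "bounce it" is suggested.
```

A real analogical matcher would align relations between entities rather than count shared features, but the retrieve-and-reuse loop is the same.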
Natural Language Processing
Reno, NV (PRWEB) April 15, 2014 — Developers at the NASA Space Apps Challenge won the Plexi Natural Language Processing challenge by demonstrating an app that allows astronauts to use voice commands for mission-critical tasks by speaking to a wearable device. The International Space Apps Challenge is an international mass collaboration focused on space exploration that takes place over 48 hours in cities on six continents.
Next door at the Microsoft-sponsored Reno Hackathon, a team of University of Nevada students claimed that event’s prize for best use of Natural Language Processing. This was the second Reno Hackathon, a competition in which programmers race to build a new product in a very short period of time; it was held April 12-13, 2014, at the University of Nevada DeLaMare University Library. Read more
Hong Kong, Hong Kong (PRWEB) April 03, 2014 — Ipselex, until now a secretive Hong Kong artificial intelligence company, today announced the launch of its web platform. The platform offers API-like access to a brain in the cloud that has taught itself to understand and make predictions about patents and patent applications.
Combining state-of-the-art natural language processing with neural network technology designed to simulate a human brain, the AI at the core of Ipselex has learned what makes a good patent through a mix of self-study and guidance from an experienced patent attorney. It can, for example, analyze products for infringement and, in certain industry sectors, estimate the likelihood that a given patent application will be granted. Read more
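Estimating the likelihood that an application will be granted can be framed as a text-scoring problem. The sketch below illustrates that framing with a toy lexicon of hypothetical term weights pushed through a logistic function; Ipselex's actual system uses neural networks trained with attorney guidance, and none of these terms or weights come from it.

```python
# Hedged sketch: grant likelihood as weighted term matching.
# The WEIGHTS table is entirely hypothetical, standing in for what a
# trained model might learn about terms that correlate with grants.

import math

WEIGHTS = {
    "novel": 1.2,          # positive signal in this toy model
    "method": 0.4,
    "abstract idea": -1.5, # negative signal (unpatentable subject matter)
    "well-known": -0.8,
}

def grant_likelihood(claim_text):
    """Squash the summed term weights into a (0, 1) likelihood."""
    text = claim_text.lower()
    score = sum(w for term, w in WEIGHTS.items() if term in text)
    return 1 / (1 + math.exp(-score))

p_strong = grant_likelihood("A novel method for compressing sensor data")
p_weak = grant_likelihood("An abstract idea that is well-known")
# p_strong lands above 0.5, p_weak below it.
```

A production model would replace the hand-picked lexicon with learned features over the full claim text, but the score-then-calibrate shape is typical of this kind of predictor.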
Alex Philp recently wrote for IBM Data Magazine, “The Watson cognitive computing engine is rapidly evolving to recognize geometry and geography, enabling it to understand complex spatial relationships. As Watson combines spatial with temporal analytics and natural language processing, it is expected to derive associations and correlations among streaming data sets evolving from the Internet of Things. Once these associations and correlations occur, Watson can combine the where, what, and when cognitive dimensions into causal explanations about why. Putting the where into Watson represents a key evolutionary milestone into understanding cause-effect relationships and extracting meaningful insights about natural and synthetic global phenomena.” Read more
As The Semantic Web Blog discussed yesterday here, the Virtual Personal Assistant is getting more personal. Microsoft officially unveiled Cortana as part of the Windows Phone 8.1 smartphone software at its Build event yesterday, and the service effectively replaces the search function on Windows smartphones, both for the Internet and locally.
This statement from corporate vice president Joe Belfiore served as the theme: “Cortana is the first truly personal digital assistant who learns about me and about the things that matter to me most and the people that matter to me most, that understands the Internet and is great at helping me get things done.”
The Bing-powered Cortana is launching in beta mode and was still subject to a few hiccups during the presentation. For example, when Belfiore asked Cortana for the weather in Las Vegas, it reported the temperature in degrees Fahrenheit and was able to respond to his request for the same information in Celsius, but he couldn’t get her to make the conversion to Kelvin. Still, he promised attendees, “Try it yourself because she is smart enough to tell you the answer in Kelvin.”
Gopal Sathe of NDTV Gadgets recently wrote, “We caught up with Sunny Rao, the MD of Nuance Communications India and South East Asia, and chatted about the developments in speech recognition, frustrations with using speech-to-text software and how the way we interact with our devices is about to change forever. Rao speaks like a person who has been talking to machines for a long time – his speech is clear, and there’s a small space around each word for maximum clarity. Over tea, we’re able to discuss how voice recognition is being used around the world, and how he sees the future of the technology shaping up. And naturally, we talked about the movie Her.” Read more
Cognitum’s year got off to a good start, with an investment from the Giza Polish Ventures Fund, and it plans to apply some of that funding to building its sales and development teams, demonstrating the approaches to and benefits of semantic knowledge engineering, and focusing on big implementations for recognizable customers. The company’s products include Fluent Editor 2 for editing and manipulating complex ontologies via controlled natural language (CNL) tools, and its NLP-fronted Ontorion Distributed Knowledge Management System for managing large ontologies in a distributed fashion (both systems are discussed in more detail in our story here). “The idea here is to open up semantic technologies more widely,” says CEO Pawel Zarzycki.
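The core idea behind a controlled-natural-language front end like Fluent Editor is that a restricted subset of English maps deterministically onto ontology axioms. The toy parser below sketches that mapping for one sentence pattern; it is an assumption-laden illustration of the general technique, not Cognitum's actual CNL grammar, which covers far more of OWL.

```python
# Illustration of CNL -> ontology mapping: a restricted English
# sentence pattern is parsed into a subclass axiom (as a triple).
# This single-pattern parser is a sketch, not Fluent Editor's grammar.

import re

CNL_PATTERN = re.compile(r"Every (\w+) is an? (\w+)\.")

def cnl_to_axiom(sentence):
    """Parse 'Every X is a Y.' into (X, 'subClassOf', Y)."""
    m = CNL_PATTERN.fullmatch(sentence)
    if m is None:
        raise ValueError("sentence is outside the supported CNL fragment")
    sub, sup = m.groups()
    return (sub.capitalize(), "subClassOf", sup.capitalize())

axiom = cnl_to_axiom("Every cat is a mammal.")
# => ("Cat", "subClassOf", "Mammal")
```

Because the grammar is controlled, every accepted sentence has exactly one formal reading, which is what lets non-logicians author and review ontology content in something close to plain English.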
To whom? Zarzycki says the company currently has pilot projects underway in the banking sector, where clients see opportunities to leverage ontologies and semantic management frameworks as a more natural way to share and reuse knowledge and to express business rules for purposes such as lead generation and market intelligence. In the telco sector, another pilot project is underway to support asset management and impact assessment efforts, and in the legal arena, the Poland-based company is working with the Polish branch of international legal company Eversheds on applying semantics to legal self-assessment issues. Having a semantic knowledge base can make it possible to automate the tasks behind assessing a legal issue, he says, and so it opens the door to outsourcing this job directly to the entity pursuing the case, with the lawyer stepping in mostly at the review stage. That saves a lot of time and money.
Ikuya Yamada, co-founder and CTO of Studio Ousia, the company behind Linkify – the technology to automatically extract certain keywords and add intelligent hyperlinks to them to accelerate mobile search – recently sat down with The Semantic Web Blog to discuss the company’s work, including its vision of Semantic AR (augmented reality).
The Semantic Web Blog: You spoke at last year’s SEEDS Conference on the subject of linking things and information and the vision of Semantic AR, which includes the idea of delivering additional information to users before they even launch a search for it. Explain your technology’s relation to that vision of finding and delivering the information users need while they are consuming content – even just looking at a word.
Yamada: The main focus of our technology is extracting accurately only a small amount of interesting keywords from text [around people, places, or things]. …We also develop a content matching system that matches those keywords with other content on the web – like a singer [keyword] with a song or a location [keyword] with a map. By combining keyword extraction and the content matching engine, we can augment text using information on the web.
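The two-stage pipeline Yamada describes can be sketched simply: first extract only the "interesting" keywords from text, then match each one to related content. The code below assumes a toy entity lexicon with hand-set interestingness scores; Linkify's actual extraction and matching models are far more sophisticated, and nothing here is drawn from them.

```python
# Hedged sketch of the two-stage Linkify idea:
#   1) extract a small set of interesting keywords from text,
#   2) match each keyword to a type of web content.
# The lexicon and scores are invented for illustration.

ENTITIES = {
    # token: (content type to link to, interestingness score)
    "tokyo": ("map", 0.9),
    "beatles": ("songs", 0.95),
    "the": (None, 0.0),  # common words score near zero and are dropped
}

def extract_keywords(text, threshold=0.5):
    """Keep only tokens that are known entities above the threshold."""
    hits = []
    for token in text.lower().split():
        if token in ENTITIES and ENTITIES[token][1] >= threshold:
            hits.append(token)
    return hits

def link_content(keywords):
    """Match each extracted keyword to its content type (e.g., a map)."""
    return {kw: ENTITIES[kw][0] for kw in keywords}

kws = extract_keywords("The Beatles played in Tokyo")
links = link_content(kws)
# kws   -> ["beatles", "tokyo"]
# links -> {"beatles": "songs", "tokyo": "map"}
```

The threshold is what keeps the output to "only a small amount of interesting keywords": common words score low and never get linked, so the augmentation stays unobtrusive.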
Derrick Harris of GigaOM reports, “DataRPM, a Fairfax, Va.-based startup that has built a business intelligence product based on search engine technology, has raised a $5.1 million Series A round of venture capital. InterWest Partners led the round, which follows up on a $250,000 seed round the company raised in April. As we explained in a July post about an ‘askathon’ (essentially, a business intelligence hackathon) the company was hosting, DataRPM is based on search technologies and tries to deliver a search-like experience. Data is indexed rather than modeled, and users perform queries in a search bar using natural language.” Read more
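The "indexed rather than modeled" approach can be illustrated with a small inverted index: instead of fitting data into a predefined analytic model, every record is tokenized and indexed, and a query typed into a search bar is answered by ranking records on token overlap. This is a generic search-engine sketch under those assumptions, not DataRPM's engine.

```python
# Sketch of indexed (not modeled) BI search: build an inverted index
# over records, then rank records by how many query tokens they match.
# Illustrative only -- real engines add stemming, synonyms, and ranking.

from collections import defaultdict

records = [
    {"id": 1, "text": "total sales by region for 2013"},
    {"id": 2, "text": "employee headcount by department"},
    {"id": 3, "text": "sales pipeline forecast 2014"},
]

# Inverted index: token -> set of record ids containing it.
index = defaultdict(set)
for rec in records:
    for token in rec["text"].split():
        index[token].add(rec["id"])

def search(query):
    """Rank record ids by the number of query tokens they contain."""
    scores = defaultdict(int)
    for token in query.lower().split():
        for rid in index.get(token, ()):
            scores[rid] += 1
    return sorted(scores, key=scores.get, reverse=True)

results = search("sales by region")
# Record 1 matches all three tokens, so it ranks first.
```

Because nothing is modeled up front, adding a new record only means indexing its tokens; the natural-language query path stays the same.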