Posts Tagged ‘natural language processing’

IBM’s Watson Group Invests In Fluid And Its Cognitive Assistant For Online Shoppers

Earlier this year The Semantic Web Blog covered the launch of the IBM Watson Group, a new business unit created to build an ecosystem around Watson cloud-delivered cognitive apps and services. One of the partners announced at that time was Fluid Inc., which is developing a personal shopper for e-commerce that leverages Watson. Today, the Watson Group is pushing that partnership forward: drawing from the $100 million that IBM has earmarked for direct investments in cognitive apps, it is investing in Fluid to help deliver what it says will be “the first-ever cognitive assistant for online shoppers into the marketplace.”

At the previous event in January, Fluid CEO Kent Deverell discussed and demonstrated the Expert Personal Shopper, now known as the Fluid Expert Shopper (XPS). Still in development, it takes advantage of Watson’s ability to understand the context of consumers’ questions in natural language, to learn from its interactions with users, and to match that against insights uncovered from huge amounts of data around a product or category (including a brand’s product information, user reviews and online expert publications) to deliver a personalized e-commerce shopping experience on desktops, tablets and smartphones.

The first Fluid XPS prototype, which Deverell showcased at the January event, is being developed for outdoor apparel and equipment retailer The North Face.

At the IBM event in January, Deverell contrasted the experience consumers have with a great salesperson against traditional e-commerce. Good salespeople, he said, “are personal, proactive, conversational,” whereas e-commerce is data-driven. Fluid, he told the audience, wants to combine the best of both worlds. “A great sales associate makes you feel good about your purchase,” he said, and he envisions Fluid XPS doing the same through natural conversation and the ability to learn about users’ needs, “to go as deep as you need to and resurface and provide relevant recommendations.”

Read more

Computers Learning Like Humans Through NLP

Northwestern University reports, “Someday we might be able to build software for our computers simply by talking to them. Ken Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science, and his team have developed a program that allows them to teach computers as you would teach a small child—through natural language and sketching. Called Companion Cognitive Systems, the architecture has the capacity for human-like learning. ‘We think software needs to evolve to be more like people,’ Forbus says. ‘We’re trying to understand human minds by trying to build them.’ Forbus has spent his career working in the area of artificial intelligence, creating computer programs and simulations in an attempt to understand how the human mind works. At the heart of the Companions project is the claim that much of human learning is based on analogy. When presented with a new problem or situation, the mind identifies similar past experiences to form an action. This allows us to build upon our own knowledge with the ability to continually adapt and learn.” Read more
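The analogy-based learning described above can be illustrated with a toy case-retrieval sketch. To be clear, this is not the Companions architecture or Forbus’s structure-mapping machinery; the flat feature sets and Jaccard overlap below are simple stand-ins for its much richer structural matching, used only to show the idea of reusing the most analogous past experience:

```python
# Toy sketch of analogy-style learning: store past cases and, for a new
# situation, retrieve the most similar prior case and reuse its action.

def jaccard(a, b):
    """Overlap-based similarity between two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

class CaseLibrary:
    def __init__(self):
        self.cases = []  # list of (feature set, action) pairs

    def learn(self, features, action):
        """Remember an experience: what was observed and what was done."""
        self.cases.append((set(features), action))

    def solve(self, features):
        """Pick the action of the most analogous past case."""
        best = max(self.cases, key=lambda c: jaccard(c[0], features))
        return best[1]

lib = CaseLibrary()
lib.learn({"round", "red", "edible"}, "eat it")
lib.learn({"square", "heavy", "wooden"}, "sit on it")
print(lib.solve({"round", "red", "shiny"}))  # closest analogue -> eat it
```

Each new situation is answered by retrieval rather than by a hand-coded rule, which is the sense in which the system “builds upon its own knowledge” as more cases accumulate.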

Winners at NASA Space Apps Challenge Demonstrate Star Trek-Like Technology


Reno, NV (PRWEB) April 15, 2014 — Developers at the NASA Space Apps Challenge won the Plexi Natural Language Processing challenge by demonstrating an app that allows astronauts to use voice commands for mission-critical tasks by speaking to a wearable device. The International Space Apps Challenge is an international mass collaboration focused on space exploration that takes place over 48 hours in cities on six continents.

 

Next door at the Microsoft-sponsored Reno Hackathon, a team of University of Nevada students claimed that event’s prize for best use of natural language processing. This was the second Reno Hackathon, a competition among programmers to build a new product in a very short period of time, held April 12-13, 2014, at the University of Nevada’s DeLaMare Library. Read more

Semantic Technology Jobs: MIT


MIT is looking for a Research Software Engineer, NLP in Cambridge, MA. The post states, “The lab will develop technologies and methods to enable new modes of social engagement in news, politics, and government. Will help the lab build a high-performance data pipeline designed to ingest and analyze large-scale streams of heterogeneous media data. Responsibilities include creating language models from petabytes of text data using Hadoop, working closely with researchers to implement algorithms that power research experiments, and measuring and continually optimizing the performance of NLP (natural language processing) algorithms. Will report to Professor Deb Roy.” Read more

Semantic Technology Jobs: Apple


Apple is looking for a Software Engineer, Natural Language Processing in Santa Clara, CA. The post states, “The Natural Languages Group is looking for an engineer to develop and apply algorithms and data in these areas. This involves application of bleeding edge machine learning and statistical pattern recognition on large text corpora. The position will involve all aspects of the use of natural-language processing in software, including functionality, algorithms, correctness, user experience, and performance, on both iOS and OS X.”

 

Qualifications for the position include: “Knowledge of natural-language processing techniques. MSc or PhD in Computer Science.” Read more

Watson and the Future of Cognitive Computing


Alex Philp recently wrote for IBM Data Magazine, “The Watson cognitive computing engine is rapidly evolving to recognize geometry and geography, enabling it to understand complex spatial relationships. As Watson combines spatial with temporal analytics and natural language processing, it is expected to derive associations and correlations among streaming data sets evolving from the Internet of Things. Once these associations and correlations occur, Watson can combine the where, what, and when cognitive dimensions into causal explanations about why. Putting the where into Watson represents a key evolutionary milestone into understanding cause-effect relationships and extracting meaningful insights about natural and synthetic global phenomena.” Read more

Nuance on the Future of Natural Language Processing


Gopal Sathe of NDTV Gadgets recently wrote, “We caught up with Sunny Rao, the MD of Nuance Communications India and South East Asia, and chatted about the developments in speech recognition, frustrations with using speech-to-text software and how the way we interact with our devices is about to change forever. Rao speaks like a person who has been talking to machines for a long time – his speech is clear, and there’s a small space around each word for maximum clarity. Over tea, we’re able to discuss how voice recognition is being used around the world, and how he sees the future of the technology shaping up. And naturally, we talked about the movie Her.” Read more

Cognitum Points To Use Cases For Semantic Knowledge Engineering

Cognitum’s year got off to a good start with an investment from the Giza Polish Ventures fund, and it plans to apply some of that funding to building its sales and development teams, demonstrating the approaches to and benefits of semantic knowledge engineering, and focusing on big implementations for recognizable customers. The company’s products include Fluent Editor 2, for editing and manipulating complex ontologies via controlled natural language (CNL) tools, and its NLP-fronted Ontorion Distributed Knowledge Management System, for managing large ontologies in a distributed fashion (both systems are discussed in more detail in our story here). “The idea here is to open up semantic technologies more widely,” says CEO Pawel Zarzycki.

To whom? Zarzycki says the company currently has pilot projects underway in the banking sector, where customers see opportunities to leverage ontologies and semantic management frameworks that provide a more natural way of sharing and reusing knowledge and expressing business rules for purposes such as lead generation and market intelligence. In the telco sector, another pilot project is underway to support asset management and impact assessment efforts. And in the legal arena, the Poland-based company is working with the Polish branch of international legal firm Eversheds on applying semantics to legal self-assessment. Having a semantic knowledge base can make it possible to automate the tasks behind assessing a legal issue, Zarzycki says, opening the door to outsourcing that job directly to the entity pursuing the case, with the lawyer stepping in mostly at the review stage. That saves a lot of time and money.

Read more

SNAP To It: Dell Proposes Way Of Seeing Returns On Social Media Investments

Are you seeing a return on your investment in social media? When Shree Dandekar, Dell Software’s chief strategist and senior director of BI and analytics, put that question to the audience at the recent Sentiment Analytics Symposium, only a few hands went up. But Dandekar explained that realizing such returns is more feasible than many people may believe.

Dell’s Social Net Advocacy Pulse (SNAP) tool and program is designed to help drive those returns. “Social ROI is not a myth but a reality,” Dandekar said. “It starts with a social media strategy, and text analytics is a crucial underpinning of that journey,” which takes a company from listening to and monitoring social media, to capturing and aggregating that data, to engaging on and deriving insights from social media, to bringing that information into context with enterprise data for better lead- and opportunity-tracking. Dell is an in-house user of SNAP, bringing in 25,000 to 30,000 conversations a day about Dell, he noted, and it runs a Social Media Command Center to facilitate listening to those conversations. (It also helps customers implement their own Command Centers.)
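Dell has not published SNAP’s actual methodology, but the general shape of a “net advocacy” metric over social conversations can be sketched in a few lines: classify each mention, then report promoters minus detractors as a share of all mentions. The keyword lists and scoring rule below are illustrative placeholders, not SNAP’s real text analytics:

```python
# Hypothetical net-advocacy sketch: score each conversation as positive,
# negative, or neutral, then compute (promoters - detractors) / total.
import re

POSITIVE = {"love", "great", "awesome", "recommend"}
NEGATIVE = {"hate", "broken", "terrible", "refund"}

def classify(text):
    """Crude keyword-lookup sentiment: 1, -1, or 0 for a mention."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & POSITIVE:
        return 1
    if words & NEGATIVE:
        return -1
    return 0

def net_advocacy(conversations):
    """Promoters minus detractors, as a percentage of all mentions."""
    scores = [classify(c) for c in conversations]
    pos, neg = scores.count(1), scores.count(-1)
    return 100.0 * (pos - neg) / len(scores)

mentions = [
    "I love my new laptop, would recommend",
    "support was terrible, want a refund",
    "shipping update received",
    "great battery life",
]
print(net_advocacy(mentions))  # -> 25.0
```

At the scale Dandekar cites (25,000 to 30,000 conversations a day), the interesting engineering lives in the ingestion pipeline and in a far better classifier than this keyword lookup; the aggregation step itself stays this simple.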

Read more

Studio Ousia Envisions A World Of Semantic Augmented Reality

Image courtesy: Flickr/by Filter Forge


Ikuya Yamada, co-founder and CTO of Studio Ousia, the company behind Linkify, a technology that automatically extracts certain keywords from text and adds intelligent hyperlinks to them to accelerate mobile search, recently sat down with The Semantic Web Blog to discuss the company’s work, including its vision of Semantic AR (augmented reality).

The Semantic Web Blog: You spoke at last year’s SEEDS Conference on the subject of linking things and information and the vision of Semantic AR, which includes the idea of delivering additional information to users before they even launch a search for it. Explain your technology’s relation to that vision of finding and delivering the information users need while they are consuming content – even just looking at a word.

Yamada: The main focus of our technology is accurately extracting only a small number of interesting keywords from text [around people, places, or things]. …We also develop a content matching system that matches those keywords with other content on the web – like a singer [keyword] with a song or a location [keyword] with a map. By combining keyword extraction and the content matching engine, we can augment text using information on the web.
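The two-stage pipeline Yamada describes, keyword extraction followed by content matching, can be sketched minimally. Linkify’s real components are learned models; the gazetteer lookup and hand-made content table here (including the example.com URLs) are purely hypothetical stand-ins used to show how the stages compose:

```python
# Illustrative two-stage sketch: (1) extract known keywords from text,
# (2) match each keyword to related web content.

GAZETTEER = {"madonna": "person", "tokyo": "place", "everest": "place"}

CONTENT_INDEX = {  # hypothetical keyword -> content mapping
    "madonna": "https://example.com/music/madonna",
    "tokyo": "https://example.com/maps/tokyo",
}

def extract_keywords(text):
    """Stage 1: keep only tokens the gazetteer recognizes."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    return [t for t in tokens if t in GAZETTEER]

def link_content(keywords):
    """Stage 2: attach related content to each extracted keyword."""
    return {k: CONTENT_INDEX[k] for k in keywords if k in CONTENT_INDEX}

text = "Madonna performed in Tokyo last night."
kws = extract_keywords(text)   # ['madonna', 'tokyo']
print(link_content(kws))
```

The “Semantic AR” vision amounts to running stage 1 continuously over whatever the user is reading, so stage 2 can surface the linked content before the user ever issues a search.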

Read more
