Will Oremus of Stuff.co.nz writes, “Google just bought a fearsome fleet of robots. The company confirmed a New York Times report that it has acquired Boston Dynamics, the Massachusetts-based maker of such noted mechanical beasts as BigDog, Atlas, Petman, Cheetah and Wildcat. The company’s robots are among the world’s most advanced two- and four-legged machines. Some are humanoid, while others resemble predatory animals. Most have been developed under contract with military agencies, including the Defense Advanced Research Projects Agency, or DARPA. What might Google want with an army of military robots? At first blush, the answer might seem to be, ‘conquering the world’. But that doesn’t seem to be the goal – at least, not in a military sense.” Read more
Natural Language Processing
CHICAGO, Dec. 4, 2013 /PRNewswire/ – TMS, the international leader in entertainment navigation, and Jinni, the first and only taste- and mood-based semantic discovery engine for video, have announced that Jinni is using TMS’ world-class On®Entertainment metadata to provide availability information for linear TV and OTT content on its new ‘My TV & Movie Guide’ iPad app and web service. As a result, once consumers find something great to watch using Jinni’s taste- and mood-based recommendation engine, they can easily find when and where to watch it. Jinni’s superior recommendations are seamlessly integrated into the viewing experience and deliver the most personalized and relevant TV and movie options to end users. Read more
In their recent Technology Quarterly issue, The Economist discussed how predictive intelligence will lead to better virtual assistant applications that will leave Siri in the dust: “The next generation of assistant software aims to go one step further by pursuing an approach known as ‘predictive intelligence’. It exploits the fact that smartphones now have access to fast internet links and location data, and can draw upon personal information, address books, e-mail and calendars. The aim of these new assistants is to anticipate what information users need, based on context and past behaviour, and to provide it before they have even asked for it. Such an assistant might, for example, spontaneously suggest that you leave early for a meeting, because it has spotted heavy traffic en route; present directions to your hotel when you arrive in a foreign country; offer to book a taxi or hotel based on analysis of an incoming e-mail or text message; or offer personalised suggestions for dinner in the evening.” Read more
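To make one of those examples concrete, here is a minimal sketch of a leave-early rule of the kind The Economist describes. Everything below – the function, thresholds, and times – is our own hypothetical illustration, not any real assistant’s API:

```python
# Minimal sketch of one "predictive intelligence" rule from the examples
# above: suggest leaving early when live traffic makes the usual travel
# time too tight. All names and numbers are hypothetical illustrations.
from datetime import datetime, timedelta

def departure_suggestion(meeting_start, usual_travel_min,
                         live_travel_min, buffer_min=10):
    """Return a nudge only when live traffic adds a meaningful delay."""
    delay = live_travel_min - usual_travel_min
    if delay <= 0:
        return None  # nothing to anticipate; the assistant stays quiet
    leave_at = meeting_start - timedelta(minutes=live_travel_min + buffer_min)
    return f"Heavy traffic adds {delay} min en route; leave by {leave_at:%H:%M}."

# A 2 p.m. meeting, normally a 25-minute drive, currently 45 minutes:
print(departure_suggestion(datetime(2013, 12, 9, 14, 0), 25, 45))
```

The hard engineering, of course, is in the inputs: the assistant has to infer the meeting from a calendar and the travel times from location data before a rule this simple can fire.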
Last week, Emil Protalinski of The Next Web reported on Google’s latest search update: the arrival of the Chrome voice search “hotword” extension. The extension brings Google’s “OK Google” feature to the desktop. Protalinski writes, “You can download the new tool, currently in beta, now directly from the Chrome Web Store. Android users with version 4.4 KitKat will recognize the feature: it lets you talk to Google without first clicking or typing. It’s completely hands-free, provided you’re already on Google.com: just say ‘OK Google’ and then ask your question.” Read more
In case you missed it, last week Jim Benedetto, CTO of Gravity, shared an interesting idea on GigaOM for how to push the semantic web forward. He writes, “Everyone is always asking me how big our ontology is. How many nodes are in your ontology? How many edges do you have? Or the most common — how many terabytes of data do you have in your ontology? We live in a world where over a decade of attempted human curation of a semantic web has borne very little fruit. It should be quite clear to everyone at this point that this is a job only machines can handle. Yet we are still asking the wrong questions and building the wrong datasets.” Read more
Tokyo, Nov 25, 2013 – (JCN Newswire) – Fujitsu Laboratories Ltd. and Japan’s National Institute of Informatics (NII) announced today that their entry in NII’s “artificial brain” project, formally titled “Can a Robot Pass the University of Tokyo (Todai) Entrance Exam?” and known as the Todai Robot Project for short, has taken a practice exam held by Yoyogi Seminar – Education Research Institute, a leading Japanese preparatory school.
Under the Todai Robot Project, Fujitsu Laboratories has been conducting joint research and participating as a core member of the math team. The overall project, led by NII professor Noriko Arai, commenced in 2011 with the goal of enabling an artificial brain to score high marks by 2016 on the test administered by the National Center for University Entrance Examinations (the “Center Test”), and to cross the threshold required for admission to the University of Tokyo by 2021. Read more
The health care industry – and the American citizenry at large – has been focused of late on the problems surrounding the implementation of the Affordable Care Act, the federal website’s issues foremost among them. But believe it or not, there are other things the healthcare industry needs to prepare for, among them the October 1, 2014 deadline for replacing the ICD-9 code sets of the World Health Organization’s International Statistical Classification of Diseases and Related Health Problems, used to report medical diagnoses and inpatient procedures, with ICD-10 code sets. ICD-9 contains 14,000 diagnosis codes; ICD-10, a HIPAA (Health Insurance Portability and Accountability Act) code set requirement, increases that number to 68,000.
Natural language processing has played the primary role in many solutions aimed at transforming large volumes of unstructured clinical data into information that healthcare IT application vendors and their hospital customers can leverage. But there’s an argument being made that understanding the unstructured text of clinical notes, which contains a huge stash of information, and then mapping it to fine-grained ICD-10 coding schemes requires a combination of NLP, advanced linguistics, machine learning, and semantic web technologies. Amit Sheth, professor of computer science and engineering at Wright State University and director of the Kno.e.sis Center, is making that argument. (See our story yesterday for a look at how the NLP market is evolving overall, including in healthcare.)
“ICD-10 has thousands of codes with millions of possible permutations and combinations. A rule-based approach is not effective to cover the huge number of ICD-10 codes,” Sheth says. Extracting the correct concepts, identifying the relationships between those concepts, and mapping them to the correct code is a major challenge: codes are often formed from information spread across various sections of a clinical document, which is itself subject to individual physicians’ styles of recording information, among other factors.
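To make the mapping challenge concrete, here is a toy sketch of the kind of dictionary-style lookup that Sheth argues cannot scale. The lexicon and helper functions are our own illustration, not Sheth’s system; I10 and E11.9 are real ICD-10 codes:

```python
# Toy illustration of concept extraction plus ICD-10 lookup. This
# lexicon-based approach is a deliberately simplified stand-in for the
# combined NLP / machine learning / semantic web pipeline described
# above; a rule-based lookup like this cannot cover 68,000 codes.
CONCEPT_LEXICON = {
    "high blood pressure": "hypertension",
    "hypertension": "hypertension",
    "type 2 diabetes": "type_2_diabetes",
    "t2dm": "type_2_diabetes",
}

ICD10_BY_CONCEPT = {
    "hypertension": ("I10", "Essential (primary) hypertension"),
    "type_2_diabetes": ("E11.9", "Type 2 diabetes mellitus without complications"),
}

def extract_concepts(note):
    """Naive surface matching; a real system needs entity recognition,
    negation detection, and awareness of which document section it is
    reading."""
    text = note.lower()
    return {concept for phrase, concept in CONCEPT_LEXICON.items() if phrase in text}

def map_to_icd10(note):
    return [ICD10_BY_CONCEPT[c] for c in sorted(extract_concepts(note))]

note = "Patient with long-standing high blood pressure and T2DM, well controlled."
for code, label in map_to_icd10(note):
    print(code, label)
```

Even this toy version hints at the problems Sheth raises: “well controlled” qualifies the diabetes finding, and nothing in a surface lookup captures that.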
The natural language processing (NLP) market is moving ahead at a steady clip. According to the recently released report, Natural Language Processing Market – Worldwide Market Forecast and Analysis (2013 – 2018), the sector is estimated to grow from $3,787.3 million in 2013 to $9,858.4 million in 2018. That’s an estimated 21 percent CAGR.
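That growth rate is easy to verify with a back-of-the-envelope check (ours, not the report’s) of the compound annual growth rate implied by the two market sizes:

```python
# Back-of-the-envelope check of the growth rate implied by the report.
start, end = 3787.3, 9858.4  # market size in $ millions, 2013 and 2018
years = 5                    # 2013 -> 2018

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 21.1%, matching the report
```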
The report considers the market to span multiple technologies: recognition technologies such as interactive voice response, optical character recognition, and pattern and image recognition; operational technologies such as auto-coding and classification and categorization; text analytics and speech analytics; and machine translation, information extraction, and question-answer report generation.
Driving the uptake, the report notes, is the need to enhance customer experiences, especially in an age when the smartphone rules and Big Data predominates. The big industry adopters it cites are healthcare, banking and financial services, and e-commerce, where strong growth in real-time and unstructured customer and transaction data can be taken in hand by NLP technology to analyze customer needs and then optimize responses to them, removing some of the human labor costs of doing so.
LONDON, UNITED KINGDOM–(Marketwired – Nov 19, 2013) – TheySay Ltd., a text and sentiment analytics company, has introduced an advanced natural language parser. According to TheySay co-founder Professor Stephen Pulman, ‘parsing’ is the process of discovering the grammatical structure of sentences, e.g., the subject, object, main verb, and modifiers.
“It’s paramount to be able to distinguish the structural and semantic difference between sentences like ‘bacteria kill many people’ and ‘this product kills bacteria’ so that the first can be interpreted computationally as largely negative and the second as more positive even though both involve the negative concepts of ‘killing’ and ‘bacteria,’” said Pulman. Read more
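TheySay has not published its parser’s internals, but the structural distinction Pulman describes is easy to demonstrate with an off-the-shelf dependency parser. Here is a minimal sketch using the open-source spaCy library, our stand-in for illustration rather than TheySay’s technology:

```python
# Dependency parsing exposes the subject/object roles that make
# "bacteria kill many people" and "this product kills bacteria"
# read differently despite sharing the same negative words.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

for sentence in ("bacteria kill many people", "this product kills bacteria"):
    doc = nlp(sentence)
    roles = {tok.dep_: tok.text for tok in doc if tok.dep_ in ("nsubj", "dobj")}
    print(f"{sentence!r}: subject={roles.get('nsubj')}, object={roles.get('dobj')}")
```

A sentiment system can then score “bacteria” very differently depending on whether it is the thing doing the killing or the thing being killed.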
ANDOVER, MA–(Marketwired – Nov 18, 2013) - Veveo, a leading provider of semantic technologies that bridge the usability gap in connected devices and applications with intelligent search, discovery, and personalization solutions, announced today that it has been awarded a seminal patent by the United States Patent and Trademark Office (USPTO) in the area of speech-based interfaces for devices that allow natural conversations similar to the way people speak with each other. Read more