Paul Mathai of Manufacturing.net recently wrote, “Over the last few years, augmented reality (AR) technology and its application have been progressing in leaps and bounds. A couple of years ago, the AR application patterns were broadly along the lines of: A pop-up virtual object on a 2D marker… What’s inside the box… A virtual fitting room…” etc. Mathai goes on, “The first wave was mostly exploratory in nature, and looking back, quite simple compared to the current applications trends. The early adoption was oriented towards wowing the customers in product marketing or familiarizing consumers on the product features or user training in field services… Technologically, AR evolved from simple 2D-marker-based, to geo-tagged and then to natural-marker-based platforms. And from the device perspective, it has evolved from mobile handhelds to eye-wearables like the Google Glass.” Read more
Thinknum is a startup on a mission to disrupt financial analysis.
In his work as a quantitative strategist at Goldman Sachs, Thinknum co-founder Gregory Ugwi saw firsthand the trials and tribulations financial analysts went through to digest companies’ financial reports and then build their own research reports, projecting future performance from past numbers. The U.S. SEC’s mandate that companies disclose their financial data using XBRL (eXtensible Business Reporting Language) was supposed to help those analysts, as well as investors of all stripes and sizes who want to better understand what’s going on at the companies they’re interested in.
“The SEC has mandated that all companies have to release their numbers in a machine-readable format, and that’s XBRL (eXtensible Business Reporting Language),” says Ugwi. The positive side is that anyone can now get the stats on companies from Google to Wal-Mart; the downside is that, by and large, they can’t do it in a user-friendly way.
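To make the “machine-readable” point concrete, here is a minimal sketch of pulling one fact out of an XBRL filing using Python’s standard library. The file name is hypothetical (you would first download a filing’s XBRL instance document, for example from the SEC’s EDGAR system), and the us-gaap namespace URI varies with the taxonomy year of the filing:

```python
# Minimal sketch: extract one fact from an XBRL instance document.
# The file name is hypothetical; the namespace year varies by filing.
import xml.etree.ElementTree as ET

US_GAAP = "http://fasb.org/us-gaap/2013-01-31"  # taxonomy namespace (year-specific)

tree = ET.parse("example-10k-instance.xml")  # hypothetical downloaded filing
root = tree.getroot()

# Each XBRL "fact" is an element tagged with a taxonomy concept such as
# us-gaap:Revenues, carrying a contextRef that ties it to a reporting period.
for fact in root.iter(f"{{{US_GAAP}}}Revenues"):
    print(fact.get("contextRef"), fact.text)
```

Even this toy illustrates the gap Ugwi describes: the data is all there, but turning it into something readable takes programming work that most investors won’t do.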
Northwestern University reports, “Someday we might be able to build software for our computers simply by talking to them. Ken Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science, and his team have developed a program that allows them to teach computers as you would teach a small child—through natural language and sketching. Called Companion Cognitive Systems, the architecture has the ability for human-like learning. ‘We think software needs to evolve to be more like people,’ Forbus says. ‘We’re trying to understand human minds by trying to build them.’ Forbus has spent his career working in the area of artificial intelligence, creating computer programs and simulations in an attempt to understand how the human mind works. At the heart of the Companions project is the claim that much of human learning is based on analogy. When presented with a new problem or situation, the mind identifies similar past experiences to form an action. This allows us to build upon our own knowledge with the ability to continually adapt and learn.” Read more
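The Companions architecture itself rests on structure-mapping over rich relational descriptions, which is far beyond a few lines of code, but the core “retrieve a similar past case and reuse what worked” idea can be sketched with a toy similarity measure. Everything below (the feature sets, the stored cases) is illustrative only:

```python
# Toy sketch of analogy-as-retrieval: given a new situation, find the most
# similar past experience by feature overlap and reuse its action. The real
# Companions system matches relational structure, not flat feature sets.

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two sets of descriptive features."""
    return len(a & b) / len(a | b)

# Hypothetical past experiences: features of a situation -> action that worked.
memory = [
    ({"round", "rolls", "thrown"}, "catch it"),
    ({"hot", "bright", "on stove"}, "don't touch"),
]

def act(new_situation: set) -> str:
    """Pick the action from the most analogous remembered case."""
    features, action = max(memory, key=lambda case: similarity(case[0], new_situation))
    return action

print(act({"round", "bounces", "thrown"}))  # -> "catch it"
```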
Marketing and Communications, April 10, 2014 — The Texas A&M University Libraries is preparing to launch VIVO, a web-based community of research profiles designed to enhance faculty collaboration. With standard research profiles for all university faculty and graduate students, researchers can discover and contact individuals with similar interests, whether they are across campus or at another VIVO institution. Data entry and standardization will continue through the summer, with the VIVO debut planned for Open Access Week in October 2014. Read more
NEW YORK, NEW YORK — (Marketwired) — 04/10/14 — ADmantX, the next-generation contextual analysis and semantic data provider, today announced that the U.S. Patent and Trademark Office has awarded patent US 8543578 B2 for its flagship semantic technology. The ADmantX technology automatically comprehends content in both advertising messages and page content to ensure brand-safe, effective ad delivery.
Alex Philp recently wrote for IBM Data Magazine, “The Watson cognitive computing engine is rapidly evolving to recognize geometry and geography, enabling it to understand complex spatial relationships. As Watson combines spatial with temporal analytics and natural language processing, it is expected to derive associations and correlations among streaming data sets evolving from the Internet of Things. Once these associations and correlations occur, Watson can combine the where, what, and when cognitive dimensions into causal explanations about why. Putting the where into Watson represents a key evolutionary milestone into understanding cause-effect relationships and extracting meaningful insights about natural and synthetic global phenomena.” Read more
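Watson’s internals aren’t described in the article, but the “where plus when” idea can be illustrated with a toy pass over event data: flag pairs of events that are close in both space and time as candidate associations. The events, thresholds, and pairing logic below are all illustrative assumptions, not Watson’s methods:

```python
# Toy sketch of spatio-temporal association: pair up events that are close
# in both space (haversine distance) and time. All data here is hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# (name, lat, lon, hour of occurrence) -- hypothetical sensor events
events = [
    ("power outage", 40.71, -74.00, 14),
    ("traffic jam", 40.72, -74.01, 15),
    ("water main alert", 51.50, -0.12, 15),
]

for i, e1 in enumerate(events):
    for e2 in events[i + 1:]:
        near = haversine_km(e1[1], e1[2], e2[1], e2[2]) < 5  # within 5 km
        soon = abs(e1[3] - e2[3]) <= 1                       # within 1 hour
        if near and soon:
            print(f"possible association: {e1[0]} <-> {e2[0]}")
```

Finding the co-occurrence is the easy part; the cognitive leap the article describes is moving from such correlations to causal explanations of why they occur.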
Gadjo Cardenas Sevilla of the Calgary Herald recently wrote, “Apple’s Siri intelligent personal assistant has been around for nearly four years and standard on iOS devices for three years. The peppy and often humorous artificial intelligence has evolved in terms of features and the number of services it can access. Siri is also getting some stiff competition from Google Now, which, along with answering user-initiated queries like Siri, also passively delivers information to the user by way of visual flash cards… Named after a character in Microsoft’s popular Halo video game franchise, the Cortana personal assistant is expected to come to Windows Phone devices, Xbox and possibly tablets in April… A recent leak with details and screenshots of BlackBerry’s upcoming BB 10.3 operating system reveals that the company formerly known as RIM has been working on an Intelligent Assistant feature to rival Siri and Google Now.” Read more
Rancho Cordova, CA (PRWEB) April 01, 2014 — The pharmaceutical community, health care organizations, and software providers are coming together at the OASIS open standards consortium to define a machine-readable content classification standard for the interoperable exchange of clinical trial data via content management systems. The work of the new OASIS Electronic Trial Master File (eTMF) Standard Technical Committee will promote interoperability across diverse computing platforms and cloud networks within the clinical trials community. Read more
A new article out of Information Daily reports, “Milton Keynes may see driverless cars on its roads in 12-18 months, says Geoff Snelson, Strategy Director of MK Smart, the innovation programme being run in the city. The driverless two-person pods are one of the outputs of the MK Smart programme, which is a collaboration between a number of organisations including the Open University (which is located in Milton Keynes) and BT. Central to the project is the creation of the ‘MK Data Hub’, which will support the acquisition and management of vast amounts of data relevant to city systems from a variety of data sources. As well as transport data, these will include data about energy and water consumption, data acquired through satellite technology, social and economic datasets, and crowd-sourced data from social media or specialised apps. Building on the capability provided by the MK Data Hub, the project will innovate in the areas of transport, energy and water management, tackling key demand issues.” Read more
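As a rough illustration of what a hub like that has to do, the sketch below normalizes readings from very different city systems into one common record shape. The field names and sources are hypothetical; the MK Data Hub’s actual schema is not described in the article:

```python
# Illustrative sketch of a city data hub's common record shape, pooling
# readings from heterogeneous sources. Fields and sources are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    source: str        # e.g. "transport", "energy", "water", "satellite", "social"
    sensor_id: str
    timestamp: datetime
    value: float
    unit: str

hub = []

def ingest(reading):
    """Store a normalized reading; a real hub would validate, index, and persist."""
    hub.append(reading)

ingest(Reading("energy", "meter-042", datetime(2014, 4, 10, 9, 0), 3.2, "kWh"))
ingest(Reading("transport", "pod-007", datetime(2014, 4, 10, 9, 1), 18.0, "km/h"))
print(len(hub), "readings ingested")
```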
Nick Stockton of Quartz reports, “Computers stole your job; now they know your pain. Using a combination of facial recognition software and machine learning algorithms, researchers have trained computers to be dramatically better than humans at reading pained facial expressions. And they’re working on new programs to help clue you into what your friend, coworker, or client is feeling. In a study released Friday (paywall) in the journal Current Biology, researchers asked 170 subjects whether the expressions of pain shown on faces in a series of videos were real or faked. They found that the humans’ collective empathetic ability was about the same as a coin flip—they read the expressions correctly only 50% of the time. Even after researchers trained the subjects to read the subtle, involuntary muscle triggers that experts use to tell when an emotion is being faked, they were only right 55% of the time.” Read more
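The article doesn’t detail the researchers’ model, but the shape of the experiment is standard supervised learning: extract numeric facial-motion features from video clips, label each clip genuine or faked, and train a classifier. The sketch below uses scikit-learn on synthetic stand-in data, so its accuracy number is meaningless; it only shows the train-and-evaluate loop such a study would run:

```python
# Toy sketch of the classification task: real systems extract facial action
# unit dynamics from video; here the features are synthetic random noise,
# so the score carries no information about real pain-detection accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))    # 200 clips x 10 facial-motion features (synthetic)
y = rng.integers(0, 2, size=200)  # 1 = genuine pain, 0 = faked (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```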