Posts Tagged ‘deep learning’

Baidu Research Achieves Speech Recognition Breakthrough With “Deep Speech”

SUNNYVALE, CA, Dec 18, 2014 (Marketwired via COMTEX) — Baidu Research, a division of Baidu, Inc., today announced initial results from its Deep Speech speech recognition system.

Deep Speech is a new speech recognition system, built with the goal of improving accuracy in noisy environments (for example, restaurants, cars, and public transportation) as well as in other challenging conditions (highly reverberant and far-field situations).

Key to the Deep Speech approach is a well-optimized recurrent neural net (RNN) training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allowed Baidu researchers to efficiently obtain a large amount of varied data for training. Read more
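Baidu has not released Deep Speech's code, but one of the data synthesis techniques the announcement refers to, superimposing recorded noise on clean utterances to mimic restaurants or cars, can be sketched in a few lines of NumPy. The function and variable names below are illustrative, not Baidu's implementation:

```python
import numpy as np

def synthesize_noisy_speech(clean, noise, snr_db):
    """Overlay a noise clip on clean speech at a target signal-to-noise ratio.

    Both inputs are 1-D float arrays of audio samples at the same rate.
    """
    # Tile or trim the noise to match the length of the clean utterance.
    noise = np.resize(noise, clean.shape)
    # Scale the noise so the mixture hits the requested SNR (in dB).
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# One clean recording yields many varied training examples by sweeping
# the noise source and the SNR:
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)   # stand-in for 1 s of 16 kHz speech
noise = rng.standard_normal(48000)   # stand-in for a restaurant recording
noisy = synthesize_noisy_speech(clean, noise, snr_db=10)
```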

4 Open Source Machine Learning Projects

Serdar Yegulalp of InfoWorld recently wrote, “Over the last year, as part of the new enterprise services that IBM has been pushing on its reinvention, Watson has become less of a ‘Jeopardy’-winning gimmick and more of a tool. It also remains IBM’s proprietary creation. What are the chances, then, of creating a natural-language machine learning system on the order of Watson, albeit with open source components? To some degree, this has already happened — in part because Watson itself was built on top of existing open source work, and others have been developing similar systems in parallel to Watson. Here’s a look at four such projects.” Read more

Deep Learning Startup MetaMind Raises $8M

Jordan Novet of VentureBeat recently wrote, “Richard Socher never set out to place himself on the bleeding edge of artificial intelligence. He merely wanted to blend language and math — two subjects he’d always liked. But one thing led to another, and he ended up developing an impressive technology called recursive neural networks, and now the startup he established after leaving university, MetaMind, is launching with financial backing from some serious names. Socher and his team at the four-month-old startup want to demonstrate MetaMind’s ability to process images and text better than any other available deep learning technology. Toward that end, in addition to announcing an $8 million initial funding round from Khosla Ventures and Salesforce.com chief executive Marc Benioff, MetaMind today is introducing multiple demonstrations of its technical capabilities on its website.” Read more
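Socher's recursive neural networks build a vector for a phrase by repeatedly merging the vectors of its sub-phrases with one shared weight matrix, applied bottom-up over a parse tree. A minimal NumPy sketch of that composition step follows; the dimensions and initialization are illustrative, not MetaMind's implementation:

```python
import numpy as np

d = 4                                        # embedding dimension (toy size)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, 2 * d)) * 0.1    # shared composition weights
b = np.zeros(d)

def compose(left, right):
    """Merge two child phrase vectors into one parent vector."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# For the phrase "(very good) movie": compose the inner pair first,
# then merge the result with the remaining word, mirroring the parse tree.
very, good, movie = (rng.standard_normal(d) for _ in range(3))
phrase = compose(compose(very, good), movie)
```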

Lexalytics Draws On Deep Learning To Enhance Salience 6 Text Analytics Engine

There’s a new version of Lexalytics’ Salience Text Analytics Engine: Some of its key new capabilities in Version 6 are enabled by underlying Syntax Matrix technology that the vendor has been working on for the last 12 to 18 months.

Syntax Matrix, explains VP of product and marketing Seth Redmore, takes on the job of efficient chunk parsing, so that customers who may be dealing with hundreds of millions of documents a day can maintain that scale without sacrificing accuracy or performance. “What [chunk parsing] means is that we can tear apart a sentence to understand quickly how all the phrases in the sentence relate to each other,” he says, much as Salience’s existing Concept Matrix technology leverages Wikipedia to help it tell which entities are related to each other and how closely.

Deep learning infuses the Syntax Matrix, which is trained on billions of words to support its rich approach to extracting phrases, each with some 200 different features associated with it. “With deep learning we extract so many different features and understand all the interrelationships between them,” he says, providing users with the chunks of the sentence that are most interesting to them, and what they mean so they can take action. “Sentiment Matrix lets us tear apart these sentences in a grammatically meaningful fashion and do it in such a way that you can build other stuff on top of it,” he says.
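Syntax Matrix itself is proprietary, but the idea of chunk parsing, grouping part-of-speech-tagged words into shallow phrases rather than building a full parse tree, can be illustrated with NLTK's regex chunker. The grammar and sentence here are toy examples, not Lexalytics' technology:

```python
import nltk

# One-time setup: nltk.download("punkt") and
# nltk.download("averaged_perceptron_tagger")
sentence = "The new parsing engine handles hundreds of millions of documents."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# Chunk noun phrases: optional determiner, any adjectives, one or more nouns.
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")
print(chunker.parse(tagged))   # prints a shallow tree with NP chunks marked
```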

Read more

Google Partners with Oxford on NLP and Image Recognition Research

Ben Woods of The Next Web reports, “Google has joined forces with the University of Oxford in the UK in order to better study the potential of artificial intelligence (AI) in the areas of image recognition and natural language processing. The hope is that by joining forces with an esteemed academic institution, the research for its DeepMind project will progress more rapidly than it would alone. In total, Google has hired seven individuals (who also happen to be world experts in deep learning for natural language understanding), three of whom will remain as professors holding joint appointments at Oxford University.” Read more

How Andrew Ng is Monetizing Deep Learning at Baidu

Cade Metz of Wired recently wrote, “Deep learning can do many things. Tapping the power of hundreds or even thousands of computers, this new breed of artificial intelligence can help Facebook recognize people, words, and objects that appear in digital photos. It can help Google understand what you’re saying when you bark commands into an Android phone. And it can help Baidu boost the bottom line. The Chinese web giant now uses deep learning to target ads on its online services, and according to Andrew Ng—who helped launch the deep learning operation at Google and now oversees research and development at Baidu—the company has seen a notable increase in revenue as a result. ‘It’s used very successfully in advertising,’ he says, sitting inside the company’s U.S. R&D center in Sunnyvale, California. ‘We have not released revenue numbers on the specific impact, but it is significant’.” Read more

AlchemyAPI’s New Face Detection And Recognition API Boosts Entity Information Courtesy Of Its Knowledge Graph

AlchemyAPI has released its AlchemyVision Face Detection/Recognition API, which, in response to an image file or URI, returns the position, age, and gender of the people in the photo and, in the case of celebrities, their identities, along with connections to their web sites, DBpedia links, and more.

According to founder and CEO Elliot Turner, AlchemyAPI is taking a different direction than Google and Baidu with its visual recognition technology. Those two vendors, he says in an email response to questions from The Semantic Web Blog, “use their visual recognition technology internally for their own competitive advantage. We are democratizing these technologies by providing them as an API and sharing them with the world’s software developers.”

The business case for developers to leverage the Face Detection/Recognition API includes demographic profiling: companies can use facial recognition to understand the age and gender characteristics of their audience based on profile images and sharing activity, Turner says.
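AlchemyAPI's documentation has the authoritative details, but a call to an HTTP API of this shape might look roughly like the sketch below. The endpoint path, parameter names, and response fields are illustrative assumptions, not the vendor's verbatim documented interface:

```python
import requests

API_KEY = "YOUR_API_KEY"
# Hypothetical endpoint following AlchemyAPI's URL-based calling convention.
ENDPOINT = "http://access.alchemyapi.com/calls/url/URLGetRankedImageFaceTags"

resp = requests.get(ENDPOINT, params={
    "apikey": API_KEY,
    "url": "http://example.com/group-photo.jpg",
    "outputMode": "json",
})
resp.raise_for_status()

# Assumed response shape: a list of detected faces with position, size,
# and any estimated attributes (age, gender, celebrity identity).
for face in resp.json().get("imageFaces", []):
    print(face.get("positionX"), face.get("positionY"),
          face.get("width"), face.get("height"),
          face.get("age"), face.get("gender"), face.get("identity"))
```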

Read more

Harnessing the Power of Deep Learning

Cade Metz of Wired reports, “When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence. Applying its massive cluster of computers to an emerging breed of AI algorithm known as ‘deep learning,’ the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.” Read more

Introducing RoboBrain: An Online Brain for Robots

Daniela Hernandez of Wired recently wrote, “If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his ‘brain’ was on the small side. Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.” Read more

Enlitic is Teaching Computers to Detect Cancer

Caleb Garling of the MIT Technology Review reports, “Machines are doing more and more of the work typically completed by humans, and detecting diseases may be next: a new company called Enlitic takes aim at the examination room by employing computers to make diagnoses based on images. Enlitic cofounder and CEO Jeremy Howard—formerly the president and lead scientist at data-crunching startup Kaggle—says the idea is to teach computers how to recognize various injuries, diseases, and disorders by showing them hundreds of x-rays, MRIs, CT scans, and other films. Howard believes that with enough experience, a computer can start to spot trouble and flag the images immediately for a physician to investigate. That could save physicians from having to comb through stacks of films.” Read more
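Enlitic has not published its models, but the approach Howard describes is, at its core, supervised image classification: show the network labeled scans, let it learn which ones to flag. A minimal, hypothetical PyTorch sketch of that training loop, with random tensors standing in for real labeled scans:

```python
import torch
import torch.nn as nn

# A small convolutional classifier over 64x64 grayscale images.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),      # two classes: normal vs. flag-for-review
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)   # stand-in batch of grayscale scans
labels = torch.randint(0, 2, (8,))   # stand-in radiologist labels

for _ in range(3):                   # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```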
