Posts Tagged ‘neural networks’
Derrick Harris of GigaOM reports, “Nervana Systems, a San Diego-based startup building a specialized system for deep learning applications, has raised a $3.3 million series A round of venture capital. Draper Fisher Jurvetson led the round, which also included Allen & Co., AME Ventures and Fuel Capital. Nervana launched in April with a $600,000 seed round. The idea behind the company is that deep learning — the advanced type of machine learning that is presently revolutionizing fields such as computer vision and text analysis — could really benefit from hardware designed specifically for the types of neural networks on which it’s based and the amount of data they often need to crunch.” Read more
Jorge Garcia of Wired recently wrote, “IBM’s recent announcements of three new services based in Watson technology make it clear that there is pressure in the enterprise software space to incorporate new technologies, both in hardware and software, in order to keep pace with modern business. It seems we are approaching another turning point in technology where many concepts that were previously limited to academic research or very narrow industry niches are now being considered for mainstream enterprise software applications. Machine learning, along with many other disciplines within the field of artificial intelligence and cognitive systems, is gaining popularity, and it may in the not so distant future have a colossal impact on the software industry. This first part of my series on machine learning explores some basic concepts of the discipline and its potential for transforming the business intelligence and analytics space.” Read more
Derrick Harris of GigaOM reports, “Researchers from the University of California, Irvine, have published a paper demonstrating the effectiveness of deep learning in helping discover exotic particles such as Higgs bosons and supersymmetric particles. The research, which was published in Nature Communications, found that modern approaches to deep neural networks might be significantly more accurate than the types of machine learning scientists traditionally use for particle discovery and might also save scientists a lot of work. To get a sense of how challenging particle discovery is, consider that a collider can produce 100 billion collisions per hour and only about 300 will produce a Higgs boson. Because the particles decay almost immediately, scientists can’t expressly identify them, but instead must analyze (and sometimes infer) the products of their decay.” Read more
Jordan Novet of VentureBeat recently wrote, “A startup called Ersatz Labs wants to help lots of companies intelligently answer lots of questions after reviewing lots of data, just as big tech companies like Google and Netflix do. Toward that end, today Ersatz is launching a cloud service for deep learning, as well as a hardware-software package to run inside companies’ existing facilities. While deep learning services are often geared toward specific uses, like text processing and image recognition, Ersatz makes deep learning available for any type of use.” Read more
New Startup Skymind Offers Support for Open Source Deep Learning
Derrick Harris of GigaOM reports, “A San Francisco-based startup called Skymind launched on Monday to offer support and services for deeplearning4j, an open source deep learning project it has created. It’s early to tell how much traction deep learning will gain among mainstream companies or even web companies, but the technology does hold a lot of promise. The existence of open source libraries backed by professional services could certainly help spur adoption – especially for a field of data analysis previously relegated to top universities and research labs at companies such as Google, Microsoft, Facebook and Baidu.” Read more
Derrick Harris of GigaOM reports, “Denver-based startup AlchemyAPI is keeping proactive in the world of artificial intelligence, launching on Monday night a new service that lets users perform computer vision tasks such as image-tagging and photo search via API. The product, called AlchemyVision, is the company’s first foray outside the natural-language processing space where it has focused since 2011. It also probably foreshadows a spate of computer vision services yet to come. AlchemyAPI first demonstrated its object recognition service in September, but CEO Elliot Turner said the company has done a lot of work in the meantime to get it ready for commercial use. Among the big differences is the sheer scale of the new system, which is running unsupervised across millions of online images and using context from the pages they’re housed on in order to determine what they are.” Read more
Will deep learning take us where we want to go? It’s one of the questions that Oxford University professor of Computational Linguistics Stephen Pulman will be delving into at this week’s Sentiment Analysis Symposium. There, he’ll be participating in a workshop session today on compositional sentiment analysis and giving a presentation tomorrow on bleeding-edge natural language processing.
“There is a lot of hype about deep learning, but it’s not a magic solution,” says Pulman. “I worry whenever there is hype about some technologies like this that it raises expectations to the point where people are bound to be disappointed.”
That’s not to imply, however, that important progress isn’t being made in deep learning, which uses machine learning methods based on learning representations of data, with applications ranging from NLP to computer vision and speech recognition.
Naomi Eterman of the McGill Daily recently discussed a technology developed in 2012 by scientists at the University of Waterloo: “Spaun, short for Semantic Pointer Architecture Unified Network, is the largest computer simulation of a functioning brain to date. It is the brainchild of Chris Eliasmith, a professor in philosophy and systems design engineering at the University of Waterloo, who developed the system as a proof-of-principle supplement to his recent book: How to Build a Brain. The model is composed of 2.5 million simulated neurons and four different neurotransmitters that allow it to ‘think’ using the same kind of neural connections as the mammalian brain.” Read more
Zach Walton of Web Pro News recently wrote, “Image search is a cornerstone of any search engine. That’s why both Google and Bing are doing everything they can to improve image search to bring up the most relevant images for any search imaginable. While some may argue that recent changes made to Google image search make it worse, Bing is moving ahead with a new strategy that involves deep learning. So, what is deep learning? In short, it’s a type of machine learning that uses artificial neural networks to learn about and understand multiple concepts, including the abstract. In the past, computer systems had to be manually ‘trained’ to recognize patterns or specific images. With machine learning, these systems can now learn to recognize these patterns on their own. When it comes to image search quality, Bing found that integrating deep learning into its systems greatly increased the quality.” Read more
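The shift Walton describes — from manually programming rules to letting a network learn patterns from examples — can be illustrated with the simplest possible ingredient of a neural network. The sketch below is a single artificial neuron (a perceptron) learning the logical AND pattern purely from labeled examples; it is a toy for illustration only (the data, learning rate, and epoch count are arbitrary choices), and deep learning systems like Bing’s stack many layers of such units trained on vastly more data.

```python
# A single artificial neuron learning the logical AND pattern
# from examples, instead of being hand-programmed with a rule.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # logical AND of the two inputs

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias (threshold)

def predict(x):
    # The neuron "fires" (outputs 1) when the weighted sum
    # of its inputs exceeds the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each mistake.
for _ in range(10):
    for x, t in zip(inputs, targets):
        error = t - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x in inputs])  # prints [0, 0, 0, 1]
```

After a few passes over the data the neuron classifies all four cases correctly — no one ever wrote an AND rule into the code; the pattern was recovered from the examples alone, which is the core idea that deep networks scale up.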