Cade Metz of Wired recently wrote, “Deep learning can do many things. Tapping the power of hundreds or even thousands of computers, this new breed of artificial intelligence can help Facebook recognize people, words, and objects that appear in digital photos. It can help Google understand what you’re saying when you bark commands into an Android phone. And it can help Baidu boost the bottom line. The Chinese web giant now uses deep learning to target ads on its online services, and according to Andrew Ng—who helped launch the deep learning operation at Google and now oversees research and development at Baidu—the company has seen a notable increase in revenue as a result. ‘It’s used very successfully in advertising,’ he says, sitting inside the company’s U.S. R&D center in Sunnyvale, California. ‘We have not released revenue numbers on the specific impact, but it is significant.’” Read more
Posts Tagged ‘deep learning’
AlchemyAPI’s New Face Detection And Recognition API Boosts Entity Information Courtesy Of Its Knowledge Graph
AlchemyAPI has released its AlchemyVision Face Detection/Recognition API, which, in response to an image file or URI, returns the position, age, and gender of the people in the photo and, in the case of celebrities, their identities along with links to their websites, DBpedia entries, and more.
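As a rough illustration of how such an API might be called, the sketch below builds a query for an image URL and parses the JSON response. The endpoint path, parameter names, and response field names here are assumptions for illustration only; consult AlchemyAPI’s documentation for the actual interface.

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint path -- not confirmed against AlchemyAPI's docs.
API_ENDPOINT = "http://access.alchemyapi.com/calls/url/URLGetRankedImageFaceTags"

def build_face_query(api_key, image_url):
    """Assemble the query string for a face-detection call (parameter names assumed)."""
    return urllib.parse.urlencode({
        "apikey": api_key,
        "url": image_url,
        "outputMode": "json",  # request a JSON response
    })

def detect_faces(api_key, image_url):
    """Fetch and parse the list of detected faces for an image URL."""
    with urllib.request.urlopen(API_ENDPOINT + "?" + build_face_query(api_key, image_url)) as resp:
        body = json.load(resp)
    # "imageFaces" and its per-face fields (position, age, gender, identity)
    # mirror the article's description; the exact key names are assumptions.
    return body.get("imageFaces", [])
```

Keeping the query-building step separate from the network call makes the request easy to inspect before sending.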
According to founder and CEO Elliot Turner, it’s taking a different direction than Google and Baidu with its visual recognition technology. Those two vendors, he says in an email response to questions from The Semantic Web Blog, “use their visual recognition technology internally for their own competitive advantage. We are democratizing these technologies by providing them as an API and sharing them with the world’s software developers.”
The business case for developers to leverage the Face Detection/Recognition API includes demographic profiling: companies can use facial recognition to understand the age and gender characteristics of their audience based on profile images and sharing activity, Turner says.
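The demographic-profiling use case Turner describes could be sketched as a simple aggregation over face-detection results. The per-face record shape below (a dict with "gender" and "ageRange" keys) is an assumed structure for illustration, not the API’s actual schema.

```python
from collections import Counter

def profile_audience(faces):
    """Tally gender and age-range occurrences across detected faces."""
    genders = Counter(f["gender"] for f in faces if "gender" in f)
    ages = Counter(f["ageRange"] for f in faces if "ageRange" in f)
    return genders, ages

# Hypothetical face-detection results from a batch of audience photos.
sample = [
    {"gender": "FEMALE", "ageRange": "25-34"},
    {"gender": "MALE", "ageRange": "25-34"},
    {"gender": "FEMALE", "ageRange": "35-44"},
]
genders, ages = profile_audience(sample)
print(genders.most_common(1))  # → [('FEMALE', 2)]
```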
Cade Metz of Wired reports, “When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence. Applying its massive cluster of computers to an emerging breed of AI algorithm known as ‘deep learning,’ the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.” Read more
Daniela Hernandez of Wired recently wrote, “If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his ‘brain’ was on the small side. Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.” Read more
Caleb Garling of the MIT Technology Review reports, “Machines are doing more and more of the work typically completed by humans, and detecting diseases may be next: a new company called Enlitic takes aim at the examination room by employing computers to make diagnoses based on images. Enlitic cofounder and CEO Jeremy Howard—formerly the president and lead scientist at data-crunching startup Kaggle—says the idea is to teach computers how to recognize various injuries, diseases, and disorders by showing them hundreds of x-rays, MRIs, CT scans, and other films. Howard believes that with enough experience, a computer can start to spot trouble and flag the images immediately for a physician to investigate. That could save physicians from having to comb through stacks of films.” Read more
Christian de Looper of TechTimes recently wrote, “Google is buying Jetpac Inc., a business that makes city guides using publicly available Instagram photos. Using that data, Jetpac determines things like the happiest city… Jetpac essentially algorithmically scans users’ Instagram photos to generate lists like ‘10 Scenic Hikes,’ which can be very handy for those travelling in a city they’ve never been to before. Jetpac has created a total of around 6,000 city guides. Not only that, but the app also puts users’ knowledge of cities to the test in a number of quizzes.” Read more
Derrick Harris of GigaOM reports, “Nervana Systems, a San Diego-based startup building a specialized system for deep learning applications, has raised a $3.3 million series A round of venture capital. Draper Fisher Jurvetson led the round, which also included Allen & Co., AME Ventures and Fuel Capital. Nervana launched in April with a $600,000 seed round. The idea behind the company is that deep learning — the advanced type of machine learning that is presently revolutionizing fields such as computer vision and text analysis — could really benefit from hardware designed specifically for the types of neural networks on which it’s based and the amount of data they often need to crunch.” Read more
Derrick Harris of GigaOM reports, “When it comes to the future of web content… Yahoo might just have the inside track on innovation. I spoke recently with Ron Brachman, the head of Yahoo Labs, who’s now managing a team of 250 (and growing) researchers around the world. They’re experts in fields such as computational advertising, personalization and human-computer interaction, and they’re all focused on the company’s driving mission of putting the right content in front of the right people at the right time. However, Yahoo Labs’ biggest focus appears to be on machine learning, a discipline that can easily touch nearly every part of a data-driven company like Yahoo. Labs now has a dedicated machine learning group based in New York; some are working on what Brachman calls ‘hardcore science and some theory,’ while others are building a platform that will open up machine learning capabilities across Yahoo’s employee base.” Read more
Derrick Harris of GigaOM reports, “Twitter has acquired a stealthy computer vision startup called Madbits, which was founded by former New York University researchers Clément Farabet and Louis-Alexandre Etezad-Heydari. Farabet is a protégé of Facebook AI Lab director and New York University professor Yann LeCun, while Etezad-Heydari was advised by Larry Maloney and Eero Simoncelli.” Read more
Daniela Hernandez of Wired reports, “Drawing on the work of a clever cadre of academic researchers, the biggest names in tech—including Google, Facebook, Microsoft, and Apple—are embracing a more powerful form of AI known as ‘deep learning,’ using it to improve everything from speech recognition and language translation to computer vision, the ability to identify images without human help. In this new AI order, the general assumption is that Google is out in front… But now, Microsoft’s research arm says it has achieved new records with a deep learning system it calls Adam, which will be publicly discussed for the first time during an academic summit this morning at the company’s Redmond, Washington headquarters.” Read more