Posts Tagged ‘deep learning’

AlchemyAPI’s New Face Detection And Recognition API Boosts Entity Information Courtesy Of Its Knowledge Graph

AlchemyAPI has released its AlchemyVision Face Detection/Recognition API, which, in response to an image file or URI, returns the position, age, and gender of the people in the photo and, in the case of celebrities, their identities along with connections to their websites, DBpedia links, and more.

According to founder and CEO Elliot Turner, it’s taking a different direction than Google and Baidu with its visual recognition technology. Those two vendors, he says in an email response to questions from The Semantic Web Blog, “use their visual recognition technology internally for their own competitive advantage.  We are democratizing these technologies by providing them as an API and sharing them with the world’s software developers.”

The business case for those developers to leverage the Face Detection/Recognition API includes demographic profiling: companies can use facial recognition to understand the age and gender characteristics of their audience based on profile images and sharing activity, Turner says.
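As a sketch of how a client might consume such an API, the snippet below parses a face-detection response of the kind described above. Note that the field names and response shape here are illustrative assumptions, not AlchemyVision's documented schema:

```python
import json

# Hypothetical JSON response from a face detection/recognition endpoint.
# The field names are assumptions for illustration, not the actual
# AlchemyVision response schema.
sample_response = """
{
  "faces": [
    {
      "position": {"x": 120, "y": 45, "width": 80, "height": 80},
      "age": {"range": "25-34", "score": 0.82},
      "gender": {"gender": "female", "score": 0.97},
      "identity": {
        "name": "Example Celebrity",
        "dbpedia": "http://dbpedia.org/resource/Example_Celebrity"
      }
    }
  ]
}
"""

def summarize_faces(raw_json):
    """Extract a compact summary (age, gender, identity) for each detected face."""
    data = json.loads(raw_json)
    summaries = []
    for face in data.get("faces", []):
        summaries.append({
            "age": face["age"]["range"],
            "gender": face["gender"]["gender"],
            # Identity is only present for recognized celebrities.
            "name": face.get("identity", {}).get("name"),
        })
    return summaries

print(summarize_faces(sample_response))
```

The demographic-profiling use case Turner describes amounts to aggregating exactly these per-face age and gender fields across a set of profile images.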

Read more

Harnessing the Power of Deep Learning

Cade Metz of Wired reports, “When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence. Applying its massive cluster of computers to an emerging breed of AI algorithm known as ‘deep learning,’ the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.” Read more

Introducing RoboBrain: An Online Brain for Robots

Daniela Hernandez of Wired recently wrote, “If you walk into the computer science building at Stanford University, Mobi is standing in the lobby, encased in glass. He looks a bit like a garbage can, with a rod for a neck and a camera for eyes. He was one of several robots developed at Stanford in the 1980s to study how machines might learn to navigate their environment—a stepping stone toward intelligent robots that could live and work alongside humans. He worked, but not especially well. The best he could do was follow a path along a wall. Like so many other robots, his ‘brain’ was on the small side. Now, just down the hall from Mobi, scientists led by roboticist Ashutosh Saxena are taking this mission several steps further. They’re working to build machines that can see, hear, comprehend natural language (both written and spoken), and develop an understanding of the world around them, in much the same way that people do.” Read more

Enlitic is Teaching Computers to Detect Cancer

Caleb Garling of the MIT Technology Review reports, “Machines are doing more and more of the work typically completed by humans, and detecting diseases may be next: a new company called Enlitic takes aim at the examination room by employing computers to make diagnoses based on images. Enlitic cofounder and CEO Jeremy Howard—formerly the president and lead scientist at data-crunching startup Kaggle—says the idea is to teach computers how to recognize various injuries, diseases, and disorders by showing them hundreds of x-rays, MRIs, CT scans, and other films. Howard believes that with enough experience, a computer can start to spot trouble and flag the images immediately for a physician to investigate. That could save physicians from having to comb through stacks of films.” Read more

Google, Jetpac, and Deep Learning

Christian de Looper of TechTimes recently wrote, “Google is buying Jetpac Inc., a business that makes city guides using publicly available Instagram photos. Using that data, Jetpac determines things like the happiest city… Jetpac essentially algorithmically scans users’ Instagram photos to generate lists like ‘10 Scenic Hikes,’ which can be very handy for those travelling in a city they’ve never been to before. Jetpac has created a total of around 6,000 city guides. Not only that, but the app also puts users’ knowledge of cities to the test in a number of quizzes.” Read more

Nervana Systems Raises $3.3M for Deep Learning

Derrick Harris of GigaOM reports, “Nervana Systems, a San Diego-based startup building a specialized system for deep learning applications, has raised a $3.3 million series A round of venture capital. Draper Fisher Jurvetson led the round, which also included Allen & Co., AME Ventures and Fuel Capital. Nervana launched in April with a $600,000 seed round. The idea behind the company is that deep learning — the advanced type of machine learning that is presently revolutionizing fields such as computer vision and text analysis — could really benefit from hardware designed specifically for the types of neural networks on which it’s based and the amount of data they often need to crunch.” Read more

Yahoo Labs Hopes to Change the Future of Content Consumption

Derrick Harris of GigaOM reports, “When it comes to the future of web content… Yahoo might just have the inside track on innovation. I spoke recently with Ron Brachman, the head of Yahoo Labs, who’s now managing a team of 250 (and growing) researchers around the world. They’re experts in fields such as computational advertising, personalization and human-computer interaction, and they’re all focused on the company’s driving mission of putting the right content in front of the right people at the right time. However, Yahoo Labs’ biggest focus appears to be on machine learning, a discipline that can easily touch nearly every part of a data-driven company like Yahoo. Labs now has a dedicated machine learning group based in New York; some are working on what Brachman calls ‘hardcore science and some theory,’ while others are building a platform that will open up machine learning capabilities across Yahoo’s employee base.” Read more

Deep Learning Startup Madbits Acquired by Twitter

Derrick Harris of GigaOM reports, “Twitter has acquired a stealthy computer vision startup called Madbits, which was founded by former New York University researchers Clément Farabet and Louis-Alexandre Etezad-Heydari. Farabet is a protégé of Facebook AI Lab director and New York University professor Yann LeCun, while Etezad-Heydari was advised by Larry Maloney and Eero Simoncelli.” Read more

Microsoft Introduces New Deep Learning System, Project Adam

Daniela Hernandez of Wired reports, “Drawing on the work of a clever cadre of academic researchers, the biggest names in tech—including Google, Facebook, Microsoft, and Apple—are embracing a more powerful form of AI known as ‘deep learning,’ using it to improve everything from speech recognition and language translation to computer vision, the ability to identify images without human help. In this new AI order, the general assumption is that Google is out in front… But now, Microsoft’s research arm says it has achieved new records with a deep learning system it calls Adam, which will be publicly discussed for the first time during an academic summit this morning at the company’s Redmond, Washington headquarters.” Read more

Deep Neural Networks Could Help Discover Unique Particles Like Higgs Boson

Derrick Harris of GigaOM reports, “Researchers from the University of California, Irvine, have published a paper demonstrating the effectiveness of deep learning in helping discover exotic particles such as Higgs bosons and supersymmetric particles. The research, which was published in Nature Communications, found that modern approaches to deep neural networks might be significantly more accurate than the types of machine learning scientists traditionally use for particle discovery and might also save scientists a lot of work. To get a sense of how challenging particle discovery is, consider that a collider can produce 100 billion collisions per hour and only about 300 will produce a Higgs boson. Because the particles decay almost immediately, scientists can’t expressly identify them, but instead must analyze (and sometimes infer) the products of their decay.” Read more
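The rarity figures quoted above can be turned into a quick back-of-the-envelope calculation (this is an illustration of the numbers in the excerpt, not a computation from the paper itself):

```python
# Signal rate implied by the figures above: ~100 billion collisions per hour,
# of which only ~300 produce a Higgs boson.
collisions_per_hour = 100e9
higgs_per_hour = 300

signal_fraction = higgs_per_hour / collisions_per_hour
print(signal_fraction)  # 3e-09: roughly 3 Higgs events per billion collisions
```

A signal this sparse, combined with the fact that only decay products are observable, is why the accuracy gains the researchers report for deep neural networks over traditional machine learning methods matter so much for particle discovery.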
