Posts Tagged ‘neural network’

Google Researchers Use End-to-End Neural Network To Caption Pictures

Google researchers have announced the development of a machine-learning system that can automatically produce captions to accurately describe images in properly formed sentences the first time it sees them.

“This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images,” write research scientists Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan in a blog post about how they’re building a neural image caption generator.

Getting there, the researchers say, involved merging recent computer vision and language models into a single jointly trained system that can directly produce a human-readable sequence of words to describe a given image. The task is no easy one, they point out: unlike image classification or object recognition on its own, their work has to capture not only the objects contained in an image, but also their attributes, the activities they are involved in, and how the objects relate to each other.

The approach leverages an end-to-end neural network that can automatically view an image and generate a plain-English description of it.
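For readers who want a concrete picture of that encoder-decoder idea, here is a minimal sketch in Python: a convolutional network encodes the image and an LSTM language model decodes a word sequence from that encoding. The backbone, layer sizes, and toy inputs are illustrative assumptions, not the researchers’ actual configuration.

```python
# Minimal sketch of the encoder-decoder captioning idea: a CNN encodes the
# image, and an LSTM language model decodes a caption from that encoding.
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18()  # in practice, a CNN pretrained on image classification
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # drop the classifier head
        self.img_proj = nn.Linear(512, embed_dim)  # map image features into word-embedding space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.vocab_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)        # (B, 512)
        img_token = self.img_proj(feats).unsqueeze(1)  # image fed as the first "word"
        inputs = torch.cat([img_token, self.embed(captions)], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.vocab_out(hidden)                  # per-step vocabulary logits

model = CaptionSketch(vocab_size=10000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 15)))
print(logits.shape)  # torch.Size([2, 16, 10000])
```

At inference time, a caption would be generated one word at a time by feeding each predicted word back into the decoder, a detail omitted here for brevity.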

Read more

Apple’s Siri to Incorporate Neural Network?

Wired’s Robert McMillan recently wrote, “…neural network algorithms are hitting the mainstream, making computers smarter in new and exciting ways. Google has used them to beef up Android’s voice recognition. IBM uses them. And, most remarkably, Microsoft uses neural networks as part of the Star-Trek-like Skype Translate, which translates what you say into another language almost instantly. People “were very skeptical at first,” Hinton says, “but our approach has now taken over.” One big-name company, however, hasn’t made the jump: Apple, whose Siri software is due for an upgrade. Though Apple is famously secretive about its internal operations–and did not provide comment for this article–it seems that the company previously licensed voice recognition technology from Nuance—perhaps the best known speech recognition vendor. But those in the tight-knit community of artificial intelligence researchers believe this is about to change. It’s clear, they say, that Apple has formed its own speech recognition team and that a neural-net-boosted Siri is on the way.”

Read more

Artificial Intelligence to Make Lawyers Redundant: Ipselex Launches an API for Law


Hong Kong, Hong Kong (PRWEB) April 03, 2014 – Ipselex, until now a secretive Hong Kong artificial intelligence company, today announced the launch of its web platform. The platform offers API-like access to a brain in the cloud that has taught itself to understand and make predictions about patents and patent applications.

Combining state-of-the-art natural language processing with neural network technology designed to simulate a human brain, the AI at the core of Ipselex has learned what makes a good patent through a mix of self-study and guidance from an experienced patent attorney. It can, for example, analyze products for infringement and, in certain industry sectors, estimate the likelihood that a given patent application will be granted.

Read more
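Ipselex has not published how its system works, but the grant-likelihood estimate described above can be pictured with a small, hypothetical sketch: patent-application text is turned into features, and a modest neural network trained on historical outcomes estimates the probability that a new application will be granted. The data, features, and model choices below are invented for illustration only.

```python
# Hypothetical sketch only: Ipselex's pipeline is not public. Patent text
# becomes features, and a small neural network trained on historical grant
# outcomes estimates the likelihood for a new application.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

applications = [
    "A method for wireless charging of a mobile device using resonant coils.",
    "A system and method for conducting business transactions over a network.",
    "An apparatus for aligning optical fibers using a micro-machined groove.",
    "A method of displaying advertisements to users of a website.",
]
granted = [1, 0, 1, 0]  # toy historical outcomes

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(applications, granted)

new_application = "A resonant-coil arrangement for charging wearable devices."
print(model.predict_proba([new_application])[0][1])  # estimated grant likelihood
```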

Nara Delivers Neural Networking For All

Nara is officially on its way from being solely a consumer-lifestyle brand – with its neural networking technology helping users find dining and hotel experiences that match their tastes – to also being the power behind other companies’ recommendation and curation offerings. This summer it made a deal with Singapore Telecommunications’ Singtel Digital Life Division to use its technology to help their users home in on personalized eating options, and today that online food and dining guide, HungryGoWhereMalaysia, goes live.

But Singtel won’t be the only outside party to plug into Nara’s backbone, as the company today also is announcing that it is licensing its capabilities to other parties interested in leveraging them. “An enterprise can plug into our neural network in the cloud through our API,” says CEO Tom Copeman, accessing its smarts for analyzing and then personalizing tons of data from anywhere on the web, tailored to the type of service they’d like to offer.

HungryGoWhereMalaysia, for example, is much like Nara’s personalized restaurant discovery service here in the States, except culturally branded to its market; local consumers will get tailored lists of dining recommendations from over 35,000 restaurants throughout the country, and as the service gets to know them better, suggestions will be more finely honed to match their Digital DNA profiles. “We believe we’re the first in computer science to receive third-party data from outside sources through our API into our neural network, to make the calculations and comparisons, and send back down more organized, personalized and targeted selections based on individual preferences.”
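Nara has not published its API specification, but the exchange Copeman describes (third-party catalog data and a user profile going up, personalized selections coming back) might look roughly like the hypothetical sketch below. The endpoint URL, field names, and payload shape are all invented for illustration.

```python
# Hypothetical request/response shape only: the endpoint, fields, and payload
# structure are invented. A partner posts its venue catalog plus a user's
# preference profile and receives a ranked, personalized selection back.
import requests

payload = {
    "user_profile": {"cuisines": ["thai", "seafood"], "price_band": "mid"},
    "catalog": [
        {"id": "r1", "name": "Harbour Thai", "tags": ["thai", "waterfront"]},
        {"id": "r2", "name": "Steak Loft", "tags": ["steak", "upscale"]},
    ],
    "max_results": 10,
}

response = requests.post(
    "https://api.example-partner-endpoint.com/v1/recommendations",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <api-key>"},  # placeholder credential
    timeout=10,
)
for item in response.json().get("results", []):
    print(item["id"], item.get("score"))
```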

Read more

Next Steps For Semantic Services About Where To Eat And What You’re Eating

What’s on the menu for semantic technology this week? Two vendors in the foodie field are offering up some new treats.

From Nara, whose neural networking technology is behind a service that helps users personalize and curate their restaurant dining experiences (see how in our story here), comes a new feature that should make picking a restaurant for a group dinner an easier affair. It combines users’ “digital DNA” – the sum of what the service has learned about what each one likes and doesn’t like in dining venues – to serve up restaurant choices that should appeal to the entire group across its range of preferences.

“It’s a really fun way to start getting [the service] into social,” says Nara founder and CEO Tom Copeman.
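Nara’s “digital DNA” model is proprietary, but one way to picture a group feature like this is sketched below: score each venue against every member’s preference profile, then rank venues by a blend that rewards broad appeal while protecting the least-happy member. The profiles, venues, and weighting are invented examples.

```python
# Invented example of combining individual preference profiles for a group:
# score each venue for every member, then rank by a blend of average appeal
# and the least-happy member's score (a "least misery" rule).
from statistics import mean

profiles = {
    "ana":  {"thai": 0.9, "sushi": 0.6, "steak": 0.2},
    "ben":  {"thai": 0.4, "sushi": 0.8, "steak": 0.7},
    "cleo": {"thai": 0.7, "sushi": 0.5, "steak": 0.1},
}
venues = {"Thai Basil": "thai", "Sushi Go": "sushi", "Char House": "steak"}

def group_score(cuisine):
    per_member = [profile.get(cuisine, 0.0) for profile in profiles.values()]
    return 0.5 * mean(per_member) + 0.5 * min(per_member)

ranked = sorted(venues, key=lambda name: group_score(venues[name]), reverse=True)
print(ranked)  # ['Sushi Go', 'Thai Basil', 'Char House']
```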

Read more

Google Glass Powers Ahead, Though Privacy Battle May Be On The Horizon

The NY Times reports today that Google acknowledged it had violated people’s privacy during its Street View mapping project. Thirty-eight states had brought a case against Google on the grounds that the project resulted in people’s passwords and other personal information being unknowingly recorded by the search giant. Google has agreed to settle by paying a $7 million fine, by becoming more aggressive in ensuring that its employees’ efforts don’t violate privacy, and by informing the public about how to avoid having their privacy compromised.

In its discussion of the settlement, the article notes that the way is now paved for another privacy battle, this time over Google Glass. The concern is that Google Glass eyewear can also be used to record photos, video and audio of the wearer’s surroundings without the permission of the individuals in those surroundings. With Google Glass, users can use their voice to command the device to take a picture or make a video, as well as to take steps less likely to compromise privacy, such as searching for facts about landmarks or events.

How that privacy question plays out remains to be seen. But the concerns aren’t stopping the project – which was demonstrated at last week’s SXSW conference – from moving ahead. Google yesterday announced that the glasses will accommodate frames and lenses that match users’ eye prescriptions, for example.

Getting Google Glass to respond to voice commands and searches appears to leverage capabilities it has developed for its Voice Search App for Android, as well as its semantically-driven Knowledge Graph database of hundreds of millions of entities and billions of facts, and their relationships to each other.

Read more

Nara Neural Networking Dining Personalization Service Goes Mobile, Adds Cities, And Targets New Categories With Partners

Early in the summer, The Semantic Web Blog introduced readers to Nara, an advanced neural networking service to automate, personalize and curate web dining experiences for users. (See that story here.)

The service is moving ahead with the launch today of its mobile version, as well as in other respects. “We’re now doing a full-on consumer launch of a polished product on both the web and mobile [platforms],” says CTO Nathan Wilson. “People really are clamoring for the mobile component, especially for this [dining] use case.” Versions for both the iPhone’s iOS and Android operating systems are available.

Read more

Google Working on Simulating the Human Brain

John Markoff of the New York Times reports, “Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain. There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own. Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: looked for cats.” Read more
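The actual experiment ran across 16,000 processors and far more data than any blog-sized example can hold, but the unsupervised idea at its core can be hinted at on a single machine: an autoencoder trained only to reconstruct unlabeled images, whose hidden units end up encoding recurring visual patterns without ever seeing a label. Everything below, from the image size to the training loop, is a toy illustration, not Google’s system.

```python
# Toy, single-machine illustration of unsupervised feature learning: an
# autoencoder is trained only to reconstruct unlabeled frames, so its hidden
# units come to encode recurring visual patterns without any labels.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(32 * 32, 256), nn.ReLU(),     # encoder: compress the frame
    nn.Linear(256, 32 * 32), nn.Sigmoid(),  # decoder: reconstruct it
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

unlabeled_frames = torch.rand(512, 32 * 32)  # stand-in for grayscale video frames

for epoch in range(5):
    reconstruction = autoencoder(unlabeled_frames)
    loss = loss_fn(reconstruction, unlabeled_frames)  # no labels involved
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(epoch, round(loss.item(), 4))

learned_features = autoencoder[0](unlabeled_frames)  # hidden-layer activations
```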

Where To Eat? Let Neural Network Computing Help You Decide

Dollars to donuts most folks haven’t ever found a place to eat courtesy of neural networking technology before. Generally, Internet searches for spots to have a bite come courtesy of friends’ Facebook recommendations, services like Yelp, and even some semantically-powered offerings such as BooRah, now an Intuit company.

But the collection of neuroscientists, computer scientists, astrophysicists, and creative artists behind Nara, which launches into public beta today, has taken the advanced neural networking route to automate, personalize and curate web dining experiences for users – though there’s more to come on the future menu. President and CEO Tom Copeman says the company, which in April secured $3.6 million of a $4.5 million equity offering, is creating a whole new category with its cutting-edge neural network and its proprietary, patented algorithms and processes for analyzing tons of web data and personalizing it, including by considering user feedback on the suggestions it offers.

That is the pure-play digital lifestyle brand that “creates an emotional connection between us and the Web. We’re trying to change how people think about the web, in the sense of what it means to me, what makes sense to me, and how personal it is to me.”

Read more

Atigeo: Interview with Chief Scientist Dr. Oliver “Olly” Downs

— TONY SHAW, OLIVER (OLLY) DOWNS

Tony Shaw: Hi Olly, so to get us started, could you give me a high-level overview of what Atigeo does?

Olly Downs: Absolutely. Atigeo’s platform, xPatterns, enables enterprises to derive insights from large, disparate sources of unstructured data. In doing so, we’ve taken two approaches toward how our platform is productized. The first approach is around aspects of our core technology, which allows us to build simple ontologies for domains of unstructured data and then act upon the understanding of the data – in response to queries or profiles of entities, for example. The second aspect of our product enables enterprises to provide customers the ability to access and manage their profile or persona.
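xPatterns itself is not open for inspection, but the “build a simple ontology, then act on it” idea Downs describes can be pictured with a generic sketch: relationships extracted from unstructured records are stored as triples, and a query walks them to assemble an entity’s profile or persona. The entities, relations, and helper function below are hypothetical, not xPatterns code.

```python
# Generic illustration, not xPatterns code: relationships pulled from
# unstructured records are stored as (subject, relation, object) triples,
# and a query walks them to assemble an entity's profile or persona.
from collections import defaultdict

triples = [
    ("customer_42", "interested_in", "home automation"),
    ("customer_42", "owns", "smart thermostat"),
    ("smart thermostat", "is_a", "home automation device"),
]

ontology = defaultdict(list)
for subject, relation, obj in triples:
    ontology[subject].append((relation, obj))

def profile(entity):
    """Return everything the ontology records about an entity."""
    return {relation: obj for relation, obj in ontology[entity]}

print(profile("customer_42"))
# {'interested_in': 'home automation', 'owns': 'smart thermostat'}
```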

Read more