Jordan Novet of VentureBeat reports, “Many people know Google first and foremost as a search engine company. But really it’s a machine-learning company, using data to make predictions that get incorporated into applications like search and advertising without people even realizing it. Today Google is announcing in a blog post that people can now choose to apply its machine-learning savvy to Google Sheets, the company’s spreadsheet app, to make educated guesses and fill in blank cells. This applied use of machine learning follows Microsoft’s recent announcement of a cloud-based service for that purpose, Azure Machine Learning.”
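The idea of predicting blank cells from the filled-in ones can be illustrated with a much simpler stand-in than Google's models: fit a least-squares line from a known column to the target column, then use it to guess the missing values. This is only a minimal sketch of the concept, not Google's actual method.

```python
# Minimal sketch of "fill in blank cells by prediction": fit ordinary
# least squares y = a*x + b on the rows where y is known, then predict
# the blanks. Illustrative only; Google's feature uses far more
# sophisticated machine-learning models.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fill_blanks(rows):
    """rows: list of [x, y] where y may be None; returns rows with y filled."""
    known = [(x, y) for x, y in rows if y is not None]
    a, b = fit_line([x for x, _ in known], [y for _, y in known])
    return [[x, y if y is not None else round(a * x + b, 2)] for x, y in rows]

sheet = [[1, 10.0], [2, 20.0], [3, None], [4, 40.0]]
print(fill_blanks(sheet))  # the blank at x=3 is predicted as 30.0
```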
AlchemyAPI’s New Face Detection And Recognition API Boosts Entity Information Courtesy Of Its Knowledge Graph
AlchemyAPI has released its AlchemyVision Face Detection/Recognition API, which, in response to an image file or URL, returns the position, age, and gender of the people in a photo and, in the case of celebrities, their identities, along with links to their websites, DBpedia entries, and more.
According to founder and CEO Elliot Turner, it’s taking a different direction than Google and Baidu with its visual recognition technology. Those two vendors, he says in an email response to questions from The Semantic Web Blog, “use their visual recognition technology internally for their own competitive advantage. We are democratizing these technologies by providing them as an API and sharing them with the world’s software developers.”
The business case for developers to leverage the Face Detection/Recognition API includes demographic profiling: companies can use facial recognition to understand the age and gender characteristics of their audience based on profile images and sharing activity, Turner says.
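A call to an image-recognition API of this kind typically amounts to a single HTTP request carrying the image URL and an API key. The sketch below only constructs such a request; the base URL, endpoint path, and parameter names are assumptions for illustration, not AlchemyAPI's documented interface, so consult the vendor's API reference before use.

```python
import urllib.parse

# Build (but do not send) a GET request for face detection on a remote
# image. The endpoint and parameter names below are hypothetical
# placeholders, not AlchemyAPI's documented interface.
BASE = "https://example-vision-api.com/url/URLGetRankedImageFaceTags"

def build_face_request(image_url, api_key):
    """Return the GET request URL for face detection on a remote image."""
    params = {
        "url": image_url,       # image to analyze
        "apikey": api_key,      # account credential
        "outputMode": "json",   # ask for a JSON response
    }
    return BASE + "?" + urllib.parse.urlencode(params)

req = build_face_request("http://example.com/photo.jpg", "MY_KEY")
print(req)
# Sending this request (e.g. with urllib.request.urlopen) would, per the
# article, return JSON with face position, estimated age, gender, and,
# for celebrities, an identity plus website/DBpedia links.
```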
Cade Metz of Wired reports, “When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence. Applying its massive cluster of computers to an emerging breed of AI algorithm known as ‘deep learning,’ the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.”
Have you wanted to get involved in the schema.org project? Your contribution to the collaborative effort driven by Bing, Google, Yahoo and Yandex for a shared markup vocabulary for web pages is more than welcome. As Dan Brickley, who is developer advocate at Google, noted during his presentation about schema.org’s progress to date at this summer’s Semantic Technology & Business Conference, the “pattern of collaboration with the project [is] we’re trying to push work off on people who are better qualified to do it, and then we mush it all together.”
In other words, the project is so broad, covering such a huge range of topics, that the input of experts – whether from the library, media, sports or any other of the multitude of communities whose vocabularies are or aim to be represented – is incredibly valuable, and very much encouraged. In an overview of the 2013-2014 releases, which included TV/radio, civic services, and bibliographic additions, as well as accessibility properties, among others, Brickley related that during the year, “We listened a lot. We listened to people who knew better than us about accessibility, about how broadcast TV and radio are described, about describing social services, about libraries, journals, and ecommerce, and then integrated their suggestions into a unified set of schemas.”
Josh Ong of The Next Web reports, “Google today revealed details behind a new search feature called Structured Snippets that displays information pulled from data tables on webpages. The feature actually began rolling out last month, but the company’s research team explained the technology in a post today. The search engine has been progressively adding new information through its Knowledge Graph database. This latest feature adds more data below the snippets of text in a search query.”
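The core idea behind mining facts from web data tables can be sketched in a few lines: walk an HTML table and collect its (header, value) pairs. This is only an illustration of the extraction step; Google's actual pipeline layers machine-learned quality and relevance scoring on top of anything like this.

```python
from html.parser import HTMLParser

# Toy extractor for simple two-column fact tables: gather the text of
# every <th>/<td> cell, then pair headers with values.
class TableFactParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag in ("td", "th"):
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

def table_facts(html):
    """Return (header, value) pairs from a simple two-column fact table."""
    parser = TableFactParser()
    parser.feed(html)
    return list(zip(parser.cells[0::2], parser.cells[1::2]))

html = """<table>
<tr><th>Superhero</th><td>Superman</td></tr>
<tr><th>First appearance</th><td>Action Comics #1 (1938)</td></tr>
</table>"""
print(table_facts(html))
# [('Superhero', 'Superman'), ('First appearance', 'Action Comics #1 (1938)')]
```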
- In a sample of over 12 billion web pages, 21 percent (about 2.5 billion pages) carry schema.org markup, amounting to more than 15 billion entities and more than 65 billion triples;
- In that same sample, this works out to six entities and 26 facts per page with schema.org;
- Just about every major site in every major category, from news to e-commerce (with the exception of Amazon.com), uses it;
- Its ontology counts some 800 properties and 600 classes.
A lot of it has to do with the focus its proponents have had since the beginning on making it very easy for webmasters and developers to adopt and leverage the collection of shared vocabularies for page markup. At this August’s 10th annual Semantic Technology & Business Conference in San Jose, Google Fellow Ramanathan V. Guha, one of the founders of schema.org, shared the progress of the initiative to develop one vocabulary that would be understood by all search engines and how it got to where it is today.
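The per-page averages cited above (six entities, 26 facts) refer to markup like the JSON-LD block below. The `@type` and property names are real schema.org vocabulary, but the values are invented examples, and the small counting helper is just a sketch of how entities and facts are tallied, not the methodology behind the published figures.

```python
import json

# A minimal schema.org JSON-LD block of the kind webmasters embed in a
# page. Types and property names are real schema.org vocabulary; the
# values are invented for illustration.
JSONLD = """
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Schema.org Adoption Keeps Growing",
  "datePublished": "2014-09-01",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "publisher": { "@type": "Organization", "name": "Example Media" }
}
"""

def count_entities_and_facts(node):
    """Count typed entities and non-@ properties (facts) in a JSON-LD tree."""
    entities = 1 if "@type" in node else 0
    facts = 0
    for key, value in node.items():
        if key.startswith("@"):
            continue
        if isinstance(value, dict):
            e, f = count_entities_and_facts(value)
            entities += e
            facts += f + 1   # the link to the nested entity is itself a fact
        else:
            facts += 1
    return entities, facts

print(count_entities_and_facts(json.loads(JSONLD)))  # (3, 6)
```

Here the single markup block yields three entities (the Article, its author, and its publisher) and six facts, which is roughly the per-page density Guha described.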
Daniel Sparks of The Motley Fool reported, “‘Chinese companies are starting to dream,’ said Jixun Foo, an early investor in Baidu (NASDAQ: BIDU) and managing partner at GGV Capital. Foo’s proclamation was made in an in-depth article by MIT Technology Review, which examined the Chinese search giant’s new effort to change the world with artificial intelligence. The company’s new AI lab does, indeed, accompany some lofty aspirations — ones big enough to hopefully help Baidu become a global Internet powerhouse and to compete with the likes of Google in increasingly important emerging markets where the default search engine hasn’t yet taken the throne. But what are the implications for investors? Fortunately, Baidu’s growing infatuation with AI looks like it could give birth to winning strategies that could build sustainable value over the long haul.”
Barbara Starr of Search Engine Land recently observed, “Search is changing – and it’s changing faster than ever. Increasingly, we are seeing organic elements in search results being displaced by displays coming from the Knowledge Graph. Yet the shift from search over documents (e.g. web pages) to search over data (e.g. Knowledge Graph) is still in its infancy. Remember Google’s mission statement: Google’s mission is to organize the world’s information to make it universally accessible and useful. The Knowledge Graph was built to help with that mission. It contains information about entities and their relationships to one another – meaning that Google is increasingly able to recognize a search query as a distinct entity rather than just a string of keywords. As we shift further away from keyword-based search and more towards entity-based search, internal data quality is becoming more imperative.”
Christian de Looper of TechTimes recently wrote, “Google is buying Jetpac Inc., a business that makes city guides using publicly available Instagram photos. Using that data, Jetpac determines things like the happiest city… Jetpac essentially algorithmically scans users’ Instagram photos to generate lists like ‘10 Scenic Hikes,’ which can be very handy for those travelling in a city they’ve never been to before. Jetpac has created a total of around 6,000 city guides. Not only that, but the app also puts users’ knowledge of cities to the test in a number of quizzes.”
David Hirsch, co-founder of Metamorphic Ventures, recently wrote for TechCrunch, “There has been a lot of talk in the venture capital industry about automating the home and leveraging Internet-enabled devices for various functions. The first wave of this was the use of the smartphone as a remote control to manage, for instance, a thermostat. The thermostat then begins to recognize user habits and adapt to them, helping consumers save money. A lot of people took notice of this first-generation automation capability when Google bought Nest for a whopping $3.2 billion. But this purchase was never about Nest; rather, it was Google’s foray into the next phase of the Internet of Things.”