Google, in the midst of its I/O conference (see our story here), has also teamed up with NASA to form the Quantum Artificial Intelligence Lab at the agency’s Ames Research Center.

According to a post on Google’s Research Blog, the lab will house a D-Wave Systems quantum computer. The goal is to study how quantum computing can solve some of the most challenging computer science problems, with a focus on advancing machine learning. Machine learning, as Director of Engineering Hartmut Neven writes, “is all about building better models of the world to make more accurate predictions,” but it’s hard work to build a really good model. Among the real-world applications he discusses is a more useful search engine, one that better understands spoken questions and the content of the web in order to return the best answer.

Neven notes that Google already has some quantum machine learning algorithms: one handles highly polluted training data, for example, and another produces very compact, efficient recognizers, which it says is useful when you’re short on power, as on a mobile device. (Google, by the way, was recently granted a patent on mobile machine learning, covering phones that teach themselves how to work better; see this article for a good explanation.) Along the way in its quantum machine learning work, Google has picked up some useful lessons, among them that the best results come not from pure quantum computing but from mixing quantum and classical computing.
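To make that hybrid idea concrete, here is a minimal, illustrative sketch, not Google’s actual algorithm: the classical side phrases a learning task as a binary optimization problem (a QUBO, the kind of problem a D-Wave machine is built to anneal), loosely in the spirit of the QBoost-style compact-recognizer training Neven has worked on, while a brute-force classical solver stands in for the quantum annealer so the example runs anywhere. The toy data, variable names, and penalty value are assumptions made for illustration.

```python
# Hybrid quantum/classical sketch: classically build a QUBO that selects a small
# committee of weak classifiers, "solve" it with a stand-in for the annealer,
# then post-process the bit string classically. Illustrative only.
from itertools import product
import numpy as np

rng = np.random.default_rng(0)

# Toy data: labels in {-1, +1} and the (scaled) outputs of a few weak classifiers.
n_samples, n_weak = 40, 6
y = rng.choice([-1, 1], size=n_samples)
# Each weak classifier agrees with the true label a bit more often than chance.
H = np.array(
    [np.where(rng.random(n_samples) < 0.65, y, -y) for _ in range(n_weak)]
).T / n_weak  # scale so a full committee's vote stays in [-1, 1]

lam = 0.05  # sparsity penalty encouraging a compact committee (made-up value)

# QUBO: minimize  sum_i (sum_k w_k*h_ik - y_i)^2 + lam * sum_k w_k  over w_k in {0, 1}.
# Expanding and folding linear terms onto the diagonal (w_k^2 == w_k for binary w):
Q = H.T @ H + np.diag(lam - 2.0 * (H.T @ y))  # the constant sum_i y_i^2 is dropped

def qubo_energy(w, Q):
    return w @ Q @ w

# Classical stand-in for the quantum annealer: brute force over all 2^6 bit strings.
best_w, best_e = None, np.inf
for bits in product([0, 1], repeat=n_weak):
    w = np.array(bits)
    e = qubo_energy(w, Q)
    if e < best_e:
        best_w, best_e = w, e

# Classical post-processing of the (here simulated) quantum result.
committee_vote = np.where(H @ best_w >= 0, 1, -1)
print("selected classifiers:", best_w)
print("training accuracy:", np.mean(committee_vote == y))
```

On real hardware, the exhaustive search would be replaced by submitting the matrix Q to the annealer; building the problem and interpreting the returned bit string stay classical, which is the quantum-plus-classical mix Neven describes.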

Now, Neven says, the question is whether ideas can move “from theory to practice, building real solutions on quantum hardware? Answering this question is what the Quantum Artificial Intelligence Lab is for.”

The Universities Space Research Association (USRA) will invite researchers from around the world to share time on the computer, with the hope of building more efficient and more accurate machine learning models for everything from speech recognition to web search to protein folding, Google says.

Machine learning is clearly higher on the search giant’s radar these days. Google has recently acquired companies with machine-learning pedigrees, including Adrian Aoun’s news aggregation service Wavii and DNNResearch, Professor Geoffrey Hinton’s startup out of the University of Toronto, which focused on neural networks as very powerful learning devices for tasks like speech and object recognition.

Google has also recently published papers like this one, about using machine learning to train a face detector without having to label images as containing a face or not. It further boosted machine learning’s street cred at I/O yesterday when it discussed new Google+ photo features that use machine-learning algorithms to pick the best pictures to highlight from users’ albums, looking for positive emotions and handsome settings while ditching blurs and bad takes.
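Google hasn’t published the internals of that highlight-picking feature, so the snippet below is purely an illustration of one low-level signal such a system could combine with face, smile, and scene classifiers: a sharpness score based on the variance of the Laplacian, used to ditch blurry shots and rank the rest. The folder name, threshold, and use of OpenCV are assumptions, not anything Google has described.

```python
# Illustrative sketch: score photos for sharpness and keep the crispest few.
# Not Google's pipeline; just one plausible "ditch the blurs" signal.
import cv2  # assumes OpenCV is installed (pip install opencv-python)

def sharpness_score(path: str) -> float:
    """Higher means sharper; blurry images have low Laplacian variance."""
    image = cv2.imread(path)
    if image is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_highlights(paths, keep=5, blur_threshold=100.0):
    """Drop obviously blurry photos, then keep the sharpest `keep` of the rest."""
    scored = [(sharpness_score(p), p) for p in paths]
    crisp = [(s, p) for s, p in scored if s >= blur_threshold]
    crisp.sort(reverse=True)
    return [p for _, p in crisp[:keep]]

if __name__ == "__main__":
    from pathlib import Path
    album = sorted(str(p) for p in Path("album").glob("*.jpg"))  # hypothetical folder
    print(pick_highlights(album))
```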