The NY Times reports today that Google acknowledged it had violated people’s privacy during its StreetView mapping project. Thirty-eight states had brought a case against Google on the grounds that the project resulted in people’s passwords and other personal information being unknowingly recorded by the search giant. Google has agreed to settle it by paying a $7 million fine as well as by becoming more aggressive in ensuring that its employees’ efforts don’t violate privacy and informing the public about how to avoid having their privacy compromised.
In its discussion of the settlement, the article notes that the way is now paved for another privacy battle, this time over Google Glass. The concern is that the Google Glass eyewear can also be used to record photos, video and audio of the wearer’s surroundings, without the permission of the individuals featured in those surroundings. With Google Glass, users can issue voice commands to take a picture or record a video, as well as to perform actions less likely to compromise privacy, such as searching for facts about landmarks or events.
How that privacy question plays out remains to be seen. But concerns aren’t stopping the project – which was demonstrated at last week’s SXSW conference – from moving ahead. Just yesterday, for example, Google announced that the glasses will accommodate frames and lenses that match users’ eye prescriptions.
Getting Google Glass to respond to voice commands and searches appears to leverage capabilities Google has developed for its Voice Search App for Android, as well as its semantically driven Knowledge Graph database of hundreds of millions of entities, billions of facts, and the relationships among them.
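To make the entity-and-relationship idea concrete, here is a minimal sketch of how a knowledge graph can store facts as subject–predicate–object triples and answer simple lookups. This is an illustrative toy, not Google's actual Knowledge Graph API; the class and example facts are invented.

```python
# Toy triple store illustrating the entities/facts/relationships model
# described above. Not Google's API; all names here are hypothetical.
from collections import defaultdict


class TinyKnowledgeGraph:
    """Stores facts as (subject, predicate, object) triples."""

    def __init__(self):
        # subject -> list of (predicate, object) pairs
        self.facts = defaultdict(list)

    def add_fact(self, subject, predicate, obj):
        self.facts[subject].append((predicate, obj))

    def query(self, subject, predicate=None):
        """Return objects related to `subject`, optionally filtered by predicate."""
        return [o for p, o in self.facts[subject]
                if predicate is None or p == predicate]


kg = TinyKnowledgeGraph()
kg.add_fact("Eiffel Tower", "located_in", "Paris")
kg.add_fact("Eiffel Tower", "height_m", 330)

print(kg.query("Eiffel Tower", "located_in"))  # ['Paris']
print(kg.query("Eiffel Tower"))                # ['Paris', 330]
```

A real knowledge graph adds schemas, entity disambiguation, and ranking on top of this basic triple structure, but the underlying "entity related to entity" shape is the same.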
The Voice Search App also supports Google’s Voice Actions technology for things like calling contacts, sending emails, getting directions, and more. Users on mobile devices can say their questions out loud, and the Google search app will reach out to Google’s Knowledge Graph to deliver the answers. As far as voice commands on Google Glass go — to take videos or pictures, for instance — users must first say “OK Glass” to initiate the action. Google Now is also in the Google Glass picture, running translations or providing flight updates without your having to dig for the information.
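The “OK Glass” flow described above is a wake-phrase pattern: speech is ignored until a trigger phrase is heard, and the words that follow are dispatched to a command handler. The sketch below illustrates that pattern under stated assumptions; the command names and handler functions are invented for illustration and are not Glass's actual command set.

```python
# Hypothetical sketch of wake-phrase command dispatch, loosely modeled on
# the "OK Glass" flow. Commands and handlers are invented examples.

WAKE_PHRASE = "ok glass"


def take_picture():
    return "picture taken"


def record_video():
    return "recording video"


# Map recognized utterances (after the wake phrase) to actions.
COMMANDS = {
    "take a picture": take_picture,
    "record a video": record_video,
}


def handle_utterance(utterance):
    """Ignore speech unless it starts with the wake phrase; then dispatch."""
    text = utterance.strip().lower()
    if not text.startswith(WAKE_PHRASE):
        return None  # not addressed to the device
    command = text[len(WAKE_PHRASE):].strip(" ,")
    handler = COMMANDS.get(command)
    return handler() if handler else "unrecognized command"


print(handle_utterance("OK Glass, take a picture"))  # picture taken
print(handle_utterance("what time is it"))           # None
```

The design point is that the wake phrase acts as a gate: everything before it is discarded, which is one way systems like this limit accidental activation.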
There may be more in store for promoting voice-related search, command and other advances for Google technologies, perhaps Google Glass among them. This week comes word that Google acquired DNNresearch, a neural network startup out of the Department of Computer Science at the University of Toronto that, according to the Google announcement, recently improved the state of the art in object recognition. University Professor Geoffrey Hinton and two of his graduate students, Alex Krizhevsky and Ilya Sutskever, incorporated DNNresearch Inc. in 2012, the announcement notes, adding that, “Hinton is world-renowned for his work with neural nets, and this research has profound implications for areas such as speech recognition, computer vision and language understanding.”