Posts Tagged ‘Paul Allen’

Teaching Artificial Intelligence: An Ever-Changing Challenge


Russell Brandom of The Verge recently wrote, “Microsoft co-founder Paul Allen has been pondering artificial intelligence since he was a kid. In the late ’60s, eerily intelligent computers were everywhere, whether it was 2001’s HAL or Star Trek’s omnipresent Enterprise computer. As Allen recalls in his memoir, ‘machines that behaved like people, even people gone mad, were all the rage back then.’ He would tag along to his father’s job at the library, overwhelmed by the information, and daydream about ‘the sci-fi theme of a dying or threatened civilization that saves itself by finding a trove of knowledge.’ What if you could collect all the world’s information in a single computer mind, one capable of intelligent thought, and be able to communicate in simple human language?” Read more

Oren Etzioni Tapped by Microsoft Co-Founder to Lead New AI Institute


John Cook of GeekWire recently reported, “Perhaps no one has been more synonymous with the startup ethos at the University of Washington than computer science professor Oren Etzioni, a mainstay on campus for more than two decades and an inspiration for budding entrepreneurs in academia. An expert in search, data mining and machine learning, Etzioni’s technologies have formed the basis of startup companies such as Netbot (acquired by Excite), Farecast (acquired by Microsoft) and (backed by Madrona, Maveron and others). Now, Etzioni, who earned his PhD in computer science from Carnegie Mellon University, is moving on from academia after nearly 30 years.” Read more

Easing The Job Of Working With Linked Data

Working with Linked Data could be a little easier than it is, and a collaborative project between MediaEvent Services and the Freie Universität Berlin aims to make it so.

The open source Linked Data Integration Framework (LDIF), which will be a topic of discussion at the upcoming Semantic Technology & Business Conference in Berlin, seeks to address the pain that can occur when working with distinct data sets, perhaps each one spanning several gigabytes, that are loaded into one triple store. The LDIF approach is to perform transformations that unify the data outside of the triple store, which improves performance and scalability and makes it easier to stay current with the various Linked Data sets. A Hadoop version of the LDIF framework just launched, for processing virtually unlimited amounts of data on a cluster.
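The core of that unification step is mapping the different vocabularies and entity URIs used by separate data sets onto a single target schema before anything reaches the triple store. The sketch below illustrates the idea in plain Python over triples represented as tuples; all predicate names, URIs, and sample values are invented for illustration, and LDIF itself is configured declaratively rather than through an API like this.

```python
# Illustrative sketch of Linked Data integration: unify vocabularies and
# identities across two data sets before loading into a triple store.
# All names and data below are hypothetical examples, not LDIF's actual API.

# Triples as (subject, predicate, object) tuples.
DATASET_A = [
    ("ex:berlin", "foaf:name", "Berlin"),
    ("ex:berlin", "dbpedia:populationTotal", "3500000"),
]
DATASET_B = [
    ("geo:berlin_de", "rdfs:label", "Berlin"),
    ("geo:berlin_de", "geo:population", "3501872"),
]

# Step 1: vocabulary mapping -- rewrite source predicates to one target schema.
PREDICATE_MAP = {
    "foaf:name": "schema:name",
    "rdfs:label": "schema:name",
    "dbpedia:populationTotal": "schema:population",
    "geo:population": "schema:population",
}

def map_vocabulary(triples):
    """Replace each predicate with its target-schema equivalent, if mapped."""
    return [(s, PREDICATE_MAP.get(p, p), o) for s, p, o in triples]

# Step 2: identity resolution -- collapse equivalent URIs to one canonical URI.
SAME_AS = {"geo:berlin_de": "ex:berlin"}

def resolve_identities(triples):
    """Rewrite subjects and objects to their canonical URIs."""
    return [(SAME_AS.get(s, s), p, SAME_AS.get(o, o)) for s, p, o in triples]

def integrate(*datasets):
    """Transform each data set, then merge, deduplicating identical triples."""
    merged = set()
    for ds in datasets:
        merged.update(resolve_identities(map_vocabulary(ds)))
    return sorted(merged)

unified = integrate(DATASET_A, DATASET_B)
for triple in unified:
    print(triple)
```

Doing this transformation as a batch step outside the store, as LDIF does, means the triple store only ever sees already-unified data, which is also what makes the work easy to parallelize across a Hadoop cluster.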

Read more