[Editor's Note: This guest article comes to us from Dr. Nathan Wilson, CTO of Nara.]
There once was a time when the busiest and greatest minds – the Jeffersons, Hemingways, and Darwins – made time in their day for long walks, communion with nature, and leisurely handwritten correspondence. Today we awaken each day to an immediate cacophony of emails, tweets, websites, and apps too numerous to navigate with full consciousness. Swimming in wires, pixels, data bits, and windows with endless tabs is toxic to you and to me, and the problem continues to escalate.
How do you connect to this teeming network without electrocuting your brain? “Filtering” is a simple, but ultimately blinding, approach that shields us from important swaths of knowledge. “Forgetting faster” is potentially a valid solution, but also underserves our mindfulness.
A History of Attempted Solutions: How have we tried to solve information glut so far, and why has each solution been inadequate?
Phase 1 – The Web as a Linnaean Taxonomy (1994-2000)
The first method to deal with our information explosion came in “Web 1.0” when portals like Yahoo! arose to elegantly categorize information that you could explore at your leisure. For instance, one could find information on the New England Patriots by following a trail of breadcrumbs from “Sports” to “Football” to “AFC East” and finally “New England Patriots” where you were presented with a list of topical websites.
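This directory model is, at heart, a tree lookup: each breadcrumb narrows the taxonomy until you reach a list of sites. A minimal sketch of the idea (the category names and site URLs here are illustrative, not Yahoo!'s actual directory):

```python
# Sketch of a Web 1.0-style directory: a nested dict as a taxonomy.
# Categories and sites are illustrative, not Yahoo!'s real hierarchy.
directory = {
    "Sports": {
        "Football": {
            "AFC East": {
                "New England Patriots": ["patriots.com", "fan-site.example"],
            }
        }
    }
}

def browse(taxonomy, breadcrumbs):
    """Follow a trail of category breadcrumbs down to a list of sites."""
    node = taxonomy
    for crumb in breadcrumbs:
        node = node[crumb]  # raises KeyError if the category is missing
    return node

sites = browse(directory, ["Sports", "Football", "AFC East", "New England Patriots"])
```

The weakness is visible in the code itself: the taxonomy is hand-curated, and anything not filed under the right branch is simply unreachable.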
Phase 2 – The Majority Rules (1999-2010)
Web 1.0 was quickly overrun by the exploding young internet. We developed a “Web 2.0” that was characterized by smarter portals like Yelp and TripAdvisor, which relied on communities of people to “surface” the best items in different categories, and filter for different attributes of interest. This was complemented by the second generation of search sites such as Google, which similarly surfaced the “best” match to different queries based on overall community searches for the same queries, and impressive indices on words within pages.
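Stripped to its essence, this "majority rules" surfacing is a ranking by aggregate community signal. A toy sketch, with invented items and vote counts, shows both the mechanism and its limitation:

```python
from collections import Counter

# Toy sketch of Web 2.0-style "surfacing": rank items purely by
# aggregate community votes. Restaurants and counts are invented.
votes = Counter({"Diner A": 412, "Bistro B": 980, "Cafe C": 975})

def surface_best(community_votes, top_n=2):
    """Return the top-N items by popular consensus alone."""
    return [item for item, _ in community_votes.most_common(top_n)]

# Every user sees the same ranking, regardless of individual taste.
print(surface_best(votes))  # prints ['Bistro B', 'Cafe C']
```

Note that no user-specific signal enters the function at all: the whole community is served one consensus list, which is exactly the shortcoming discussed next.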
The problem with Web 2.0 is, and continues to be, that you are at the whim of popular consensus. Worse yet, it has become a marketing machine: page content is optimized for high rankings, and paid placements have displaced real results. In this system, the rich get richer, and individuality is lost.
Phase 3 – Personalization Arises (2010-present day)
There is a new internet wave that is now being heralded as a solution. It is called “personalization” – truly unique one-to-one matching of users to experiences and products that are selected, or even tailored, just for them. We are on the cusp of “Web 3.0”, but the promise remains unfulfilled.
Anxious not to miss out, the great Internet companies of the 1990s and 2000s are joining the personalization movement in droves, but so far just by paying lip service to it. Apple’s Siri launched to great fanfare, showcasing the promise and hunger for personalization, but it quickly became a laughingstock. “Google Now” hints at what is possible in presenting predictive tiles, but it remains restricted to banal applications like weather forecasts and commute times. Pandora claims it is working hard to help users discover new bands, but it is working even harder to show users an advertisement based on those bands. And nearly 20 years after the brilliant Firefly Network launched and disappeared in the night, a problem as simple as recommending movies to the masses at Netflix remains surprisingly unsolved.
We’ve all received emails from machines that try too hard to be smart for their own good – like when I got a “recommendation” in my email for stylized red stiletto heels, and my vegetarian friend got a Groupon for a new steakhouse that had just opened. These offend our sensibilities and waste our precious time and human bandwidth. That said, the avenue has now been opened, and it is only a matter of time before companies achieve the confluence of inventory and user signals needed to make meaningful matches across a broad array of categories. This will involve the semantic web, which provides the structured attributes that distinguish items, experiences, and users. In essence, “understanding meaning” is necessary for personalization.
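The "structured attributes" idea can be sketched as simple attribute matching between a user profile and candidate items. Everything below – the tags, the scoring rule, the hard-exclusion filter – is an illustrative assumption, not any company's actual algorithm, but it shows how structure prevents the vegetarian-gets-a-steakhouse failure:

```python
# Illustrative sketch of attribute-based matching: score items by tag
# overlap with a user's stated likes, and hard-filter on exclusions.
# All tags, items, and the scoring rule are invented for illustration.
items = [
    {"name": "New Steakhouse", "tags": {"dinner", "steak", "meat"}},
    {"name": "Veggie Bistro",  "tags": {"dinner", "vegetarian", "local"}},
]

user = {"likes": {"dinner", "local", "vegetarian"}, "excludes": {"meat"}}

def match(user, items):
    """Rank items by tag overlap, dropping anything the user excludes."""
    eligible = [i for i in items if not (i["tags"] & user["excludes"])]
    return sorted(eligible,
                  key=lambda i: len(i["tags"] & user["likes"]),
                  reverse=True)

best = match(user, items)  # the steakhouse never reaches the vegetarian
```

The point is that a structured “excludes” attribute makes the bad recommendation impossible by construction, rather than merely unlikely.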
The fear many people have with this model is that over time, humans in a sense become subservient to the algorithms and, through a vicious cycle, grow increasingly accustomed to doing what the algorithms ask of us – you’ll walk through life with algorithms choosing your soundtrack, recommending your next meal, or suggesting your reading material. In this scenario, we will become even more passive and less volitional than in our television days.
A Solution – True “Two-Way” AI-Driven Personalization:
In an increasingly algorithmic future, the key issue is whether opaque third-party algorithms alone will decide our behavior, or whether we will interact with the algorithms through open dialogue and full transparency to determine how the world around us is intelligently retrieved, filtered, and engaged with.
There is opportunity for a “tunable interplay” that connects machine power to human intention. Restoring the balance meaningfully requires a sophisticated intermediary – an artificial intelligence avatar – that understands both sides. It is conversant with the machine side, able to connect to a vast array of information and organize it. Yet it is also open to a dialogue with us, encouraging us to ask hard questions of ourselves and to identify who we are. Most importantly, it is an agent of the individual, under our direct volitional control. If we are able to trust it and work with it, it in turn will work with us.
About The Author
Nara’s chief technology officer, Dr. Nathan Wilson is a scientist who has dedicated his career to understanding new models of computation based on the architecture of the brain. In 2011, he co-founded Nara, a company that solves the problem of web search by crafting a more personalized and liberating Web with a next-generation personal internet platform. With a doctorate and postdoctoral work in brain and cognitive sciences at MIT, and a master’s degree in computer science and artificial intelligence from Cornell, Nathan conceived and built the Nara Neural Network. This complex algorithm closely mirrors a human brain in terms of information synthesis and decision-making and is the underlying framework responsible for driving the technology inside of the Nara platform.
Nathan has numerous patents spanning computer science, medical devices, neurobiology and artificial intelligence, and has authored many academic research papers in both neuroscience and computer science that have been published in journals such as Nature, Neuron, and the Journal of Computational Neuroscience. He holds 15 years of applied experience as both architect and manager consulting for Internet, artificial intelligence, and software companies.
Photo courtesy flickr / dougww