Serdar Yegulalp of InfoWorld recently wrote, “After spending decades in the shadows as a specialty discipline, machine learning is suddenly front and center as a business tool. The hard part, though, is making it useful, especially to the developers and budding data scientists who are being tasked with the job. To that end, we rounded up some of the most common and useful open source machine learning tools we’ve spotted in the wild.” Read more
Bruce Rogers of Forbes recently wrote, “There is a wave of digital disruption coming at CMOs from all fronts. The world has shifted over the past five years, mostly because of the emergence of the ‘internet of things’ – a world where nearly everyone and everything is interconnected in a web-enabled network. But according to Alex Dayon, co-founder of Business Objects and now president of Salesforce.com’s applications and platform products, ‘we could call ourselves the ‘internet of customers’ because we’ve always connected devices and apps. It means there’s a customer behind it. By 2020 there will be 50 billion connected devices. And behind every device, whether it’s a smartphone, a car, a toothbrush, or a light bulb, there is a customer’.” Read more
Terence Tse, Mark Esposito, and Olaf Groth of the Harvard Business Review write, “While we are surrounded by a wave of new disruptive technologies and apps, HR still hasn’t improved how it evaluates the prospective workforce. Traditional hiring processes that revolve around CVs are no longer sufficient – they don’t pinpoint the right qualities demanded of leaders today, and their dated criteria obscure many talented individuals from even hitting the radar. There is nothing inherently wrong with resumes – they highlight applicants’ past achievements and experience. But while CVs are good at showcasing formal skills, they’re not very useful for identifying values and behavior.” Read more
Kevin Casey of Information Week recently wrote, “Old-school organizations will fuel the next swell of data-driven initiatives in IT. So what’s in store for the early movers and, specifically, their big-data professionals? How will the data scientist and similar roles evolve? ‘The role is becoming bigger,’ said Olly Downs, chief scientist at big-data analytics firm Globys, in a recent interview. By bigger, he means in every way — what was once a niche is now, at least in some companies, a driving force.” Read more
[Editor's Note: This guest article comes to us from Dr. Nathan Wilson, CTO of Nara.]
There once was a time when the busiest and greatest minds – the Jeffersons, Hemingways, and Darwins – made room in their day for long walks, communion with nature, and leisurely handwritten correspondence. Today we awaken to an immediate cacophony of emails, tweets, websites, and apps too numerous to navigate with full consciousness. Swimming in wires, pixels, data bits, and windows with endless tabs is toxic to you and to me, and the problem continues to escalate.
How do you connect to this teeming network without electrocuting your brain? “Filtering” is a simple, but ultimately blinding, approach that shields us from important swaths of knowledge. “Forgetting faster” is potentially a valid solution, but also underserves our mindfulness.
A History of Attempted Solutions
How have we tried to solve the information glut so far, and why is each solution inadequate?
Phase 1 – The Web as a Linnaean Taxonomy (1994-2000)
The first method of dealing with our information explosion came in “Web 1.0,” when portals like Yahoo! arose to elegantly categorize information that you could explore at your leisure. For instance, one could find information on the New England Patriots by following a trail of breadcrumbs from “Sports” to “Football” to “AFC East” and finally to “New England Patriots,” where a list of topical websites awaited.
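To make the directory model concrete, here is a minimal Python sketch of a breadcrumb lookup over a nested-dictionary taxonomy; the category names and URL are illustrative placeholders, not Yahoo!'s actual structure.

    # A Web 1.0-style directory as a nested dict (hypothetical categories).
    taxonomy = {
        "Sports": {
            "Football": {
                "AFC East": {
                    "New England Patriots": ["http://www.patriots.com"],
                },
            },
        },
    }

    def follow_breadcrumbs(tree, *path):
        """Walk the category path and return the leaf list of sites."""
        node = tree
        for crumb in path:
            node = node[crumb]  # raises KeyError for an unknown category
        return node

    print(follow_breadcrumbs(taxonomy, "Sports", "Football",
                             "AFC East", "New England Patriots"))

Every topic lives at exactly one fixed path, which is what made directories elegant to browse and, ultimately, difficult to scale.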
According to Mike Kavis of Forbes, “Companies are jumping on the Internet of Things (IoT) bandwagon, and for good reason. McKinsey Global Institute reports that the IoT business will deliver $6.2 trillion of revenue by 2025. Many people wonder whether companies are ready for this explosion of data generated by the IoT. As with any new technology, security is always the first point of resistance. I agree that the IoT brings a wave of new security concerns, but the bigger concern is how woefully prepared most data centers are for the massive amount of data coming from all of the “things” in the near future.”
Kavis went on to write that, “Some companies are still hanging on to the belief that they can manage their own data centers better than the various cloud providers out there. This state of denial should all but go away when the influx of petabyte scale data becomes a reality for enterprises. Enterprises are going to have to ask themselves, “Do we want to be in the infrastructure business?” because that is what it will take to provide the appropriate amount of bandwidth, disk storage, and compute power to keep up with the demand for data ingestion, storage, and real-time analytics that will serve the business needs. If there ever was a use case for the cloud, the IoT and Big Data is it. Processing all of the data from the IoT is an exercise in big data that boils down to three major steps: data ingestion (harvesting data), data storage, and analytics.”
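As a rough illustration of those three steps, here is a minimal, self-contained Python sketch; the thermostat readings, the in-memory “datastore,” and the alert threshold are hypothetical stand-ins for a real ingestion, storage, and analytics stack.

    from statistics import mean

    def ingest():
        """Step 1 - data ingestion: harvest readings from devices (simulated)."""
        return [{"device": "thermostat-1", "temp_c": t} for t in (21.5, 22.0, 35.4)]

    def store(readings, datastore):
        """Step 2 - data storage: persist raw readings (an in-memory list here)."""
        datastore.extend(readings)

    def analyze(datastore):
        """Step 3 - analytics: derive an aggregate and flag anomalous readings."""
        temps = [r["temp_c"] for r in datastore]
        return {"avg_temp_c": mean(temps),
                "alerts": [r for r in datastore if r["temp_c"] > 30.0]}

    datastore = []
    store(ingest(), datastore)
    print(analyze(datastore))

Each stage could be swapped for real infrastructure (a message queue for ingestion, a distributed store, a streaming analytics engine) without changing the shape of the pipeline.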
To read a different perspective on these challenges and how Semantic Web technologies play a role in them, read Irene Polikoff’s recent guest post, “RDF is Critical to a Successful Internet of Things.”
Previously, it was reported on SemanticWeb.com that Google had acquired Nest Labs. Steve Lohr of The New York Times recently opined: “Google did not pay $3.2 billion for Nest Labs this year just because it designed a smart thermostat that has redefined that humble household device. No, Google also bought into the vision of Nest’s founders, Tony Fadell and Matt Rogers, a pair of prominent Apple alumni, that the Nest thermostat is one step toward what they call the conscious home. That means a home brimming with artificial intelligence, whose devices learn about and adapt to its human occupants, for greater energy savings, convenience and security. Last Friday, Nest moved to broaden its reach in the home, buying a fast-growing maker of Internet-connected video cameras, Dropcam, for $555 million. And on Tuesday, Nest is expected to announce a software strategy backed by manufacturing partners and a venture fund from Google Ventures and Kleiner Perkins Caufield & Byers.”
The author added: “Nest’s is the third high-profile announcement this month about software to link devices in the home in a network known as the consumer Internet of Things. At its Worldwide Developers Conference this month, Apple introduced HomeKit, its technology for linking and controlling smart home devices. HomeKit uses the iOS operating system, the software engine of iPhones and iPads. Quirky, a start-up that manufactures and sells products based on crowdsourced ideas, on Monday announced the creation of a separate software company, Wink. Its initiative has attracted the backing of a major retailer, Home Depot, and manufacturers like General Electric, Honeywell and Philips.”
Read more here.
Photo courtesy: flickr/jbritton
Signe Brewster of Gigaom recently wrote, “In 2012, Google hired Ray Kurzweil to build a computer capable of thinking as powerfully as a human. It would require at least one hundred trillion calculations per second — a feat already accomplished by the fastest supercomputers in existence. The more difficult challenge is creating a computer that has a hierarchy similar to the human brain. At the Google I/O conference Wednesday, Kurzweil described how the brain is made up of a series of increasingly abstract parts. The most abstract — which allows us to judge if something is good or bad, intelligent or unintelligent — is an area that has been difficult to replicate with a computer. A computer can calculate 10 x 20 or tell the difference between a person and a table, but it can’t judge if a person is kind or mean. To get there, humans will need to build computers that can build abstract consciousness from a more concrete level. Humans will program them to recognize patterns, and then from those patterns they will need to be smart enough to learn to understand more.”
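As a loose illustration of that layered idea, here is a toy Python sketch in which concrete word-pattern detectors feed a more abstract “kind or mean” judgment; the word lists and scoring are invented for illustration and bear no resemblance to the brain-scale hierarchy Kurzweil describes.

    # Lower level: concrete word patterns (hypothetical vocabularies).
    CONCRETE_PATTERNS = {
        "kind": {"helped", "shared", "thanked"},
        "mean": {"mocked", "shoved", "ignored"},
    }

    def detect_patterns(words):
        """Count how many words match each concrete pattern."""
        return {label: sum(w in vocab for w in words)
                for label, vocab in CONCRETE_PATTERNS.items()}

    def abstract_judgment(counts):
        """Higher level: form the more abstract 'kind or mean' judgment."""
        if counts["kind"] == counts["mean"]:
            return "undecided"
        return max(counts, key=counts.get)

    sentence = "she helped and shared then thanked everyone".split()
    print(abstract_judgment(detect_patterns(sentence)))  # -> kind

A real system would have to learn such patterns from data rather than hard-code them, which is precisely the gap Kurzweil says remains.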
James Kobielus of InfoWorld recently shared his thoughts on the best definition for machine learning. He writes, “Increasingly, the term ‘machine learning’ is… beginning to acquire a catch-all status. Or, at the very least, machine learning has become a convenient handle that today’s data scientists use to refer to the wide range of leading-edge techniques for automating knowledge and pattern discovery from fresh data, much of it unstructured. People’s working definitions of machine learning seem to be creeping into broader, vaguer territory. That’s my impression from reading the recent article “Learning and Teaching Machine Learning: A Personal Journey.” In it, author Joseph R. Barr of San Diego State University and True Bearing Analytics discusses both the history of machine learning and his own education in the topic. He states that ‘it’s safe to regard machine learning, data mining, predictive analysis, and advanced analytics as more or less synonymous’.” Read more
Nancy Gohring of Computerworld recently wrote, “The market for connected devices like fitness wearables, smart watches and smart glasses, not to mention remote sensing devices that track the health of equipment, is expected to soar in the coming years. By 2020, Gartner expects, 26 billion units will make up the Internet of Things, and that excludes PCs, tablets and smartphones. With so many sensors collecting data about equipment status, environmental conditions and human activities, companies are growing rich with information. The question becomes: What to do with it all? How to process it most effectively and use it in the smartest way possible?” Read more