Bruce Rogers of Forbes recently wrote, “There is a wave of digital disruption coming at CMOs from all fronts. The world has shifted over the past five years, mostly because of the emergence of the ‘internet of things’–a world where nearly everyone and everything is interconnected in a web enabled network. But according to Alex Dayon, former co-founder of Business Objects and now president of Salesforce.com’s applications and platform products, ‘we could call ourselves the ‘internet of customers’ because we’ve always connected devices and apps. It means there’s a customer behind it. By 2020 there will be 50 billion connected devices. And behind every device, whether it’s a smartphone, a car, a toothbrush, or a light bulb, there is a customer’.” Read more
Terence Tse, Mark Esposito, and Olaf Groth of the Harvard Business Review write, “While we are surrounded by a wave of new disruptive technologies and apps, HR still hasn’t improved how it evaluates the prospective workforce. Traditional hiring processes that revolve around CVs are no longer sufficient – they don’t pinpoint the right qualities demanded of leaders today, and their dated criteria obscures many talented individuals from even hitting the radar. There is nothing inherently wrong with resumes – they highlight applicants’ past achievements and experience. But while CVs are good at showcasing formal skills, they’re not very useful for identifying values and behavior.” Read more
Kevin Casey of Information Week recently wrote, “Old-school organizations will fuel the next swell of data-driven initiatives in IT. So what’s in store for the early movers and, specifically, their big-data professionals? How will the data scientist and similar roles evolve? ‘The role is becoming bigger,’ said Olly Downs, chief scientist at big-data analytics firm Globys, in a recent interview. By bigger, he means in every way — what was once a niche is now, at least in some companies, a driving force.” Read more
[Editor's Note: This guest article comes to us from Dr. Nathan Wilson, CTO of Nara.]
There once was a time when the busiest and greatest minds – the Jeffersons, Hemingways, and Darwins – would have time in their day for long walks, communion with nature, and leisurely handwritten correspondence. Today we awaken each day to an immediate cacophony of emails, tweets, websites and apps that are too numerous to navigate with full consciousness. Swimming in wires, pixels, data bits, and windows with endless tabs is toxic to you and to me, and the problem continues to escalate.
How do you connect to this teeming network without electrocuting your brain? “Filtering” is a simple, but ultimately blinding, approach that shields us from important swaths of knowledge. “Forgetting faster” is potentially a valid solution, but also underserves our mindfulness.
A History of Attempted Solutions: How have we tried to solve information glut so far, and why is each solution inadequate?
Phase 1 – The Web as a Linnaean Taxonomy (1994-2000)
The first method to deal with our information explosion came in “Web 1.0” when portals like Yahoo! arose to elegantly categorize information that you could explore at your leisure. For instance, one could find information on the New England Patriots by following a trail of breadcrumbs from “Sports” to “Football” to “AFC East” and finally “New England Patriots” where you were presented with a list of topical websites.
According to Mike Kavis of Forbes, “Companies are jumping on the Internet of Things (IoT) bandwagon and for good reasons. McKinsey Global Institute reports that the IoT business will deliver $6.2 trillion of revenue by 2025. Many people wonder if companies are ready for this explosion of data generated for IoT? As with any new technology, security is always the first point of resistance. I agree that IoT brings a wave of new security concerns but the bigger concern is how woefully prepared most data centers are for the massive amount of data coming from all of the “things” in the near future.”
Kavis went on to write that, “Some companies are still hanging on to the belief that they can manage their own data centers better than the various cloud providers out there. This state of denial should all but go away when the influx of petabyte scale data becomes a reality for enterprises. Enterprises are going to have to ask themselves, “Do we want to be in the infrastructure business?” because that is what it will take to provide the appropriate amount of bandwidth, disk storage, and compute power to keep up with the demand for data ingestion, storage, and real-time analytics that will serve the business needs. If there ever was a use case for the cloud, the IoT and Big Data is it. Processing all of the data from the IoT is an exercise in big data that boils down to three major steps: data ingestion (harvesting data), data storage, and analytics.”
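The three major steps Kavis names – data ingestion, data storage, and analytics – can be sketched as a toy pipeline. The class and method names below are purely illustrative (my own, not from any specific IoT platform), with an in-memory list standing in for real distributed storage:

```python
from statistics import mean

class SensorPipeline:
    """Toy sketch of the three big-data steps Kavis describes:
    ingestion (harvesting data), storage, and analytics."""

    def __init__(self):
        # Stand-in for durable storage (in practice: a distributed
        # file system or cloud object store at petabyte scale).
        self.store = []

    def ingest(self, reading):
        """Step 1: harvest a raw reading from a device."""
        self.store.append(reading)  # Step 2: persist it

    def analyze(self):
        """Step 3: compute a simple metric over the stored readings."""
        values = [r["kwh"] for r in self.store]
        return {"count": len(values), "avg_kwh": round(mean(values), 2)}

pipeline = SensorPipeline()
for kwh in (1.2, 0.9, 1.5):
    pipeline.ingest({"device": "meter-42", "kwh": kwh})
print(pipeline.analyze())
```

The point of the sketch is the shape, not the code: each stage scales independently, which is why Kavis argues the cloud – rather than a self-managed data center – is the natural home for IoT workloads.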
To read a different perspective on these challenges and how Semantic Web technologies play a role in them, read Irene Polikoff’s recent guest post, “RDF is Critical to a Successful Internet of Things.”
Previously, it was reported on SemanticWeb.com that Google had acquired Nest Labs. Steve Lohr of The New York Times recently opined that: “Google did not pay $3.2 billion for Nest Labs this year just because it designed a smart thermostat that has redefined that humble household device. No, Google also bought into the vision of Nest’s founders, Tony Fadell and Matt Rogers, a pair of prominent Apple alumni, that the Nest thermostat is one step toward what they call the conscious home. That means a home brimming with artificial intelligence, whose devices learn about and adapt to its human occupants, for greater energy savings, convenience and security. Last Friday, Nest moved to broaden its reach in the home, buying a fast-growing maker of Internet-connected video cameras, DropCam, for $555 million. And on Tuesday, Nest is expected to announce a software strategy backed by manufacturing partners and a venture fund from Google Ventures and Kleiner Perkins Caufield & Byers.”
The author added: “Nest’s is the third high-profile announcement this month about software to link devices in the home in a network known as the consumer Internet of Things. At its Worldwide Developers Conference this month, Apple introduced HomeKit, its technology for linking and controlling smart home devices. HomeKit uses the iOS operating system, the software engine of iPhones and iPads. Quirky, a start-up that manufactures and sells products based on crowdsourced ideas, on Monday announced the creation of a separate software company, Wink. Its initiative has attracted the backing of a major retailer, Home Depot, and manufacturers like General Electric, Honeywell and Philips.”
Read more here.
Photo courtesy: flickr/jbritton
Signe Brewster of Gigaom recently wrote, “In 2012, Google hired Ray Kurzweil to build a computer capable of thinking as powerfully as a human. It would require at least one hundred trillion calculations per second — a feat already accomplished by the fastest supercomputers in existence. The more difficult challenge is creating a computer that has a hierarchy similar to the human brain. At the Google I/O conference Wednesday, Kurzweil described how the brain is made up of a series of increasingly more abstract parts. The most abstract — which allows us to judge if something is good or bad, intelligent or unintelligent — is an area that has been difficult to replicate with a computer. A computer can calculate 10 x 20 or tell the difference between a person and a table, but it can’t judge if a person is kind or mean. To get there, humans will need to build computers that can build abstract consciousness from a more concrete level. Humans will program them to recognize patterns, and then from those patterns they will need to be smart enough to learn to understand more.”
James Kobielus of Info World recently shared his thoughts on the best definition for machine learning. He writes, “Increasingly, the term ‘machine learning’ is… beginning to acquire a catch-all status. Or, at the very least, machine learning has become a convenient handle that today’s data scientists use to refer to the wide range of leading-edge techniques for automating knowledge and pattern discovery from fresh data, much of it unstructured. People’s working definitions of machine learning seem to be creeping into broader, vaguer territory. That’s my impression from reading the recent article “Learning and Teaching Machine Learning: A Personal Journey.” In it, author Joseph R. Barr of San Diego State University and True Bearing Analytics discusses both the history of machine learning and his own education in the topic. He states that ‘it’s safe to regard machine learning, data mining, predictive analysis, and advanced analytics as more or less synonymous’.” Read more
Nancy Gohring of Computerworld recently wrote, “The market for connected devices like fitness wearables, smart watches and smart glasses, not to mention remote sensing devices that track the health of equipment, is expected to soar in the coming years. By 2020, Gartner expects, 26 billion units will make up the Internet of Things, and that excludes PCs, tablets and smartphones. With so many sensors collecting data about equipment status, environmental conditions and human activities, companies are growing rich with information. The question becomes: What to do with it all? How to process it most effectively and use it in the smartest way possible?” Read more
Do you still remember a time when a utility company worker came to your house to check your electric meter? For many of us, this is already in the past. Smart meters send information directly to the utility company and, as a result, it knows our up-to-the-minute power usage patterns. And, while we don’t yet talk to our ovens or refrigerators through the Internet, many people routinely control thermostats from their smart phones. The emerging Internet of Things is real and we interact with it on a daily basis.
The term Internet of Things refers to devices we wouldn’t traditionally expect to be smart or connected, such as a smoke detector or other home appliance. They are being made ‘smart’ by enabling them to send data to an application. From smart meters to sensors used to track goods in a supply chain, the one thing these devices have in common is that they send data – data that can then be used to create more value by doing things better, faster, cheaper, and more conveniently.
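What “sending data to an application” looks like in practice is usually a small structured payload. As a minimal sketch (the field names and device identifier here are hypothetical, chosen only to illustrate the idea):

```python
import json
from datetime import datetime, timezone

def meter_reading(meter_id, kwh):
    """Build the kind of small JSON payload a smart meter might send
    to a utility's application. The field names are illustrative."""
    return json.dumps({
        "meter_id": meter_id,
        "kwh": kwh,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

payload = meter_reading("meter-42", 1.37)
# A real device would transmit this over the network; here we just
# parse it back to show the structure the application receives.
print(json.loads(payload)["kwh"])
```

However the bytes travel, the common thread is the same: a device that was never “smart” now emits data an application can aggregate and act on.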
The physical infrastructure needed for these devices to work is largely in place or being put in place quickly. We get immediate first order benefits simply by installing new equipment. For example, having a smart meter provides cost savings because there is no need for a person to come to our houses. Similarly, the ability to change settings on a thermostat remotely can lower our heating costs. However, far vaster changes and benefits are projected or are already beginning to be delivered from inter-connecting the data sent by smart devices:
- Health: Connecting vital measurements from wearable devices to the vast body of medical information will help to improve our health, fitness and, ultimately, save lives.
- Communities: Connecting information from embedded devices and sensors will enable more efficient transportation. When a sprinkler system meter understands weather data, it will use water more efficiently. Once utilities start connecting and correlating data from smart meters, they might deliver electricity more efficiently and be more proactive in handling infrastructure problems.
- Environment: Connecting readings from fields, forests, oceans, and cities about pollution levels, soil moisture, and resource extraction will allow for closer monitoring of problems.
- Goods and services: Connecting data from sensors and readers installed throughout factories and supply chains will more precisely track materials and speed up and smooth out the manufacture and distribution of goods.