Leonard Kleinrock of Wired reports, “[On Friday] in Hong Kong 24 new inductees were welcomed into the Internet Hall of Fame, which was launched by the Internet Society in 2012 to recognize individuals who have pushed the boundaries of technological and social innovation through the design and advancement of the global Internet. Because I was a member of the original inductee class, the Hall of Fame asked me to interview some of this year’s inductees about their visions for the future of the Internet, and what obstacles might stand in the way of these ideals. Hailing from Africa, Europe, Asia and Latin America, these inductees provided interesting insights into how the Internet is likely to evolve over the next decade in their corners of the globe, and what we as a global society need to do to prepare for the coming challenges of this evolution.” Read more
Scott Brinker, whom we have covered many times in the past because of his insights into semantic technology and marketing, has written a new short book about modern marketing trends. The author presents “seven transformative meta-trends in modern marketing.” In the foreword, he identifies these trends as “…wield[ing] tremendous influence on the current evolution of marketing strategy and management.” The trends Brinker identifies are:
- From traditional to digital
- From media silos to converged media
- From outbound to inbound
- From communications to experiences
- From art and copy to code and data
- From rigid plans to agile iterations
- From agencies to in-house marketing
While he does not mention semantics explicitly in the book, knowing Scott as we do, we were curious about his thoughts on the subject. We caught up with him to ask, “So, how does this fit in with Semantic Web Technologies?”
Brinker responded, “Semantic web technologies are a great example of how technology is continuously changing what’s possible in marketing and business. But in the absence of ‘marketing technologists’ — these hybrid professionals who can translate technology capabilities to marketing opportunities, and vice versa — much of that potential remains untapped.”
“Structured and linked data can have such a tremendous impact on shaping customer experiences in a digital world. While not every marketer needs to understand the technical layer of how to make that happen, they need to have a sense of what’s possible — and they need to be able to work with more technical talent, as part of the modern marketing team, to make it happen.”
Brinker, who coined the term “Chief Marketing Technologist,” is offering the 40-pager as a free download on his website.
Last week the world learned that the hacks at Target hit more customers than originally thought – somewhere in the 100 million vicinity – and that Neiman Marcus also saw customer credit card information spirited away by data thieves. They’re not the first big-name outfits to suffer a security setback, but could they be the last?
No one can ever say never, of course. But it’s possible that new tools that leverage machine learning predictive analytics could put a serious dent in the black hats’ handiwork, while also improving IT’s hand at application performance management.
A big problem in both the APM and security spaces today is the sheer volume of data coming at the IT pros dealing with those issues. Much of it simply describes the normal state of affairs, and no one has time to review it all. What IT staffers want to know about are problems. That leads to a lot of rules-writing to identify thresholds that could point to issues; to a lot of rewriting of those rules, because things change fast in today’s world of system complexity; and to a lot of misses, because keeping up is impossible. Sixty percent of problems are still reported by users rather than by IT’s tools, says Kevin Conklin, VP of marketing at Prelert, whose machine learning predictive analytics technology is used in CA’s Application Behavior Analytics and available as Anomaly Detective for the Splunk IT apps ecosystem.
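The contrast between hand-written thresholds and learned baselines can be sketched in a few lines. This is a toy illustration, not Prelert’s actual technology: learn each metric’s normal behavior from its history, then flag only the readings that deviate sharply, so staff review anomalies instead of every data point.

```python
import statistics

def learn_baseline(history):
    """Learn a simple per-metric baseline (mean and spread) from past readings."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the
    learned mean -- no hand-written rule to maintain as behavior shifts."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Typical response times (ms) observed during normal operation
history = [102, 98, 101, 99, 103, 100, 97, 104, 100, 96]
baseline = learn_baseline(history)

print(is_anomaly(101, baseline))  # normal reading -> False
print(is_anomaly(250, baseline))  # sudden spike worth investigating -> True
```

The point of the sketch is that the baseline is recomputed from data rather than encoded in a rule, which is what lets this style of tool keep pace with changing systems.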
Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.
Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side, I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.
Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.
Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:
Phil Archer, Data Activity Lead, W3C:
For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.
I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use Geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of Geospatial Information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!
[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
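The appeal Archer describes is easy to see in a snippet. Here is a minimal sketch (the identifiers are made up for illustration) using the public schema.org vocabulary: an ordinary-looking JSON object becomes RDF simply by adding an `@context` that maps its keys to vocabulary URIs.

```python
import json

# An ordinary JSON object; the "@context" is what makes it JSON-LD.
# It maps each plain key to a term in the schema.org vocabulary, so
# RDF tooling can read the same document as triples, while Web
# developers can keep treating it as plain JSON.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice",
}

# To a Web developer it is still just JSON:
print(json.loads(json.dumps(doc))["name"])  # -> Alice
```

A developer who has never heard of RDF can ignore the `@context` entirely; a Linked Data consumer can expand it into triples. That dual readability is the "everyone should be happy" part.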
Miguel Paz of NiemanLab.org recently wrote, “Call me stupid, but I think journalism is an exciting way to change the world. But in order to do that these days, we need to favor change and promote disruptive innovation within the news and information ecosystem — and start thinking way outside the box. This is something I’ve been working towards as hard as possible for several years: as a Knight Fellow at the International Center for Journalists; through Poderopedia.org, a website that reveals the links among Chilean business and political elites; and Hacks/Hackers Chile and Poderomedia Foundation, an organization that promotes the open web and the use of technologies to rethink journalism, teach new skills to journalists, and foster cultural change in newsrooms in Latin America.” Read more
Cadell Last of The Huffington Post recently wrote, “The Internet is a ubiquitous phenomenon that seemingly emerged out of nowhere. In my opinion, this emergence represents the most powerful example of exponential computational improvements. You don’t need me to tell you that the effects of networked computers have been overwhelmingly pervasive, but I find that too few people realize that the Internet is still in its infancy, and it’s still got a lot of evolving to do.” Read more
Jeni Tennison recently wrote a clever article for the Open Data Institute on the five stages of data grief. She writes, “As organisations come to recognise how important and useful data could be, they start to think about using the data that they have been collecting in new ways. Often data has been collected over many years as a matter of routine, to drive specific processes or sometimes just for the sake of it. Suddenly that data is repurposed. It is probed, analysed and visualised in ways that haven’t been tried before. Data analysts have a maxim: ‘If you don’t think you have a quality problem with your data, you haven’t looked at it yet.’ …In our last ODI Board meeting, Sir Tim Berners-Lee suggested that the data curators need to go through something like the five stages of grief described by the Kübler-Ross model. So here is an outline of what that looks like.” Read more
In case you missed it, last week Jim Benedetto, CTO of Gravity, shared an interesting idea on GigaOM for how to push the semantic web forward. He writes, “Everyone is always asking me how big our ontology is. How many nodes are in your ontology? How many edges do you have? Or the most common — how many terabytes of data do you have in your ontology? We live in a world where over a decade of attempted human curation of a semantic web has borne very little fruit. It should be quite clear to everyone at this point that this is a job only machines can handle. Yet we are still asking the wrong questions and building the wrong datasets.” Read more
The Mike2.0 Governance Association has shared an article on Smart Data Collective about semantic business vocabularies and rules. The article states, “Classically, business management establishes policies which are sent to an Information Technology department for incorporation into new and existing applications. It is then the job of systems analysts to stare at these goals and translate them into coding specifications for development and testing. Agile and other methodologies help speed this process internally to the IT department; however, until the fundamental dynamic between management and IT changes, this cycle remains slow, costly and mistake-prone.” Read more