Nova Spivack, CEO of Bottlenose, recently opined in TechCrunch, “Cards are fast becoming the hot new design paradigm for mobile apps, but their importance goes far beyond mobile. Cards are modular, bite-sized content containers designed for easy consumption and interaction on small screens, but they are also a new metaphor for user-interaction that is spreading across all manner of other apps and content. The concept of cards emerged from the stream — the short content notifications layer of the Internet — which has been evolving since the early days of RSS, Atom and social media.”
Posts Tagged ‘Nova Spivack’
[EDITOR’S NOTE: This guest post comes to us from John Breslin. Thanks to John for allowing us to re-post this tribute, which originally appeared in his LinkedIn stream. This post was modified at 5:24 ET]
It is with deep regret that I learned this week of the passing of my good friend, colleague and StreamGlider co-founder, Bill McDaniel. Bill was, among many other things, a Semantic Web innovator and serial entrepreneur who co-founded a multitude of companies, shipped more than 70 products, and co-authored seven books and many more publications during his career.
I first met Bill McDaniel at the International Semantic Web Conference (ISWC), held in Galway in 2005. Bill was working as a senior research scientist at Adobe at that time. By chance, I happened to be seated across from Bill and another colleague of his from Adobe at the conference dinner, and I mentioned to them both that there were some job opportunities for researchers at DERI in NUI Galway that could be of interest. It turned out to be an opportune time for him to pursue a new challenge, as Bill joined DERI soon afterwards as a project executive in the eLearning Research Cluster.
Those who knew Bill through his Semantic Web work may be unaware of his long and varied career in information technology, with CEO, CTO and CRO roles in diverse areas such as electronic printing, wireless demand chain management, wireless retail loyalty, advanced 2D bar coding, AI-based military logistics, and of course semantically-powered mobile applications.
His career in IT stretched back nearly 40 years to 1975, when he worked as an operations programmer and manager with NCH Corp (at the time, saving the company $1M a year in order processing costs). He then joined Image Sciences in the 1980s as an R&D director, responsible for their $20M flagship product DocuMerge. From then into the 1990s he was CTO and co-owner of GenText, sold to Xenos for $12M in 1998.
Serial entrepreneur and thought leader Nova Spivack recently wrote for Gigaom, “When we talk about the future of artificial intelligence (AI), the discussion often focuses on the advancements and capabilities of the technology, or even the risks and opportunities inherent in the potential cultural implications. What we frequently overlook, however, is the future of AI as a business. IBM Watson’s recent acquisition and deployment of Cognea signals an important shift in the AI and intelligent virtual assistant (IVA) market, and offers an indication of both of the potentials of AI as a business and the areas where the market still needs development. The AI business is about to be transformed by consolidation. Consolidation carries real risks, but it is generally a sign of technological maturation. And it’s about time, as AI is no longer simply a side project, or an R&D euphemism. AI is finally center stage.”
Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.
Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.
Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.
Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:
Phil Archer, Data Activity Lead, W3C:
For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.
I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use Geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of Geospatial Information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!
[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
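Archer's point — that JSON-LD is simultaneously ordinary JSON and RDF — can be illustrated with a small sketch (identifiers and URLs are hypothetical) using nothing but Python's standard json module. The `@context` is what turns the plain keys into RDF properties:

```python
import json

# A minimal JSON-LD document. To a Web developer it is plain JSON;
# to a Linked Data consumer, @context maps each key onto an IRI.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"}
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice"
}

text = json.dumps(doc)      # round-trips like any other JSON
parsed = json.loads(text)
print(parsed["name"])       # ordinary JSON access still works
```

A developer who ignores `@context` loses nothing; a consumer who reads it gets RDF triples for free. That dual readability is why JSON-LD has a plausible path to being adopted by developers who would never knowingly publish RDF.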
As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.
Phil Archer, Data Activity Lead, W3C:
The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.
I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.
Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier and allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.
Christine Connors, founder and information strategist, TriviumRLG:
What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.
There’s also been a huge increase in the attentions of the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field and via schema.org.
The enterprise version of Bottlenose has formally launched. Now dubbed Nerve Center, the service, which The Semantic Web Blog previewed here, provides real-time trend intelligence for brands and businesses. It includes a dashboard featuring live visualization of all trending topics, hashtags and people; top positive and negative influences and sentiment trends; trending images, videos, links and popular messages; the ability to view trending messages by type (complaints vs. endorsements, for example); and real-time KPIs. As with its original service, Nerve Center leverages the company’s Sonar technology to automatically detect new topics and trends that matter to the enterprise.
“Broadly speaking, every large enterprise has to be doing social listening and social analytics,” CEO Nova Spivack told The Semantic Web Blog in an earlier interview, “including in realtime, which is one thing we specialize in. I don’t think any other product out there shows change as it happens as we do.” It’s important, he said, to understand that Bottlenose focuses on the discovery of trends, not just finding what users explicitly search for or track. Part of the release, he added, “will be some pretty powerful alerting to tell you when there is something to look at.”
Bottlenose earlier this month raised $3.6 million in Series A funding to help with its launch of Bottlenose Enterprise, the upcoming tool aimed at helping large companies discover and visualize trends from among a host of data sources, measuring and comparing them for those with the most “trendfluence.” Users will get a realtime dynamic view of change as it happens and a host of analytics for automating insights, the company says.
The Enterprise edition will be a big departure from the current Bottlenose Lite version for individual professionals. That difference starts with the amount of data it can handle. “The free, Lite version looks only at public API data like Twitter’s. The enterprise version uses the firehose,” says CEO Nova Spivack. Another big difference is that the enterprise version adds a lot more views and analytics, in comparison to the personal-use edition, where its Sonar technology provides the chief service of real-time detection of talk around topics personalized to users’ interests so they can visualize and track those topics over time.
Spivack calls what Enterprise does “enterprise-scale trend detection in the cloud,” leveraging a massive Hadoop infrastructure and technologies including Cassandra, MongoDB, and the Storm distributed realtime computation system to process data for deep dives. The cloud handles the computation, and results are shared at the edge, where certain kinds of analytics and visualizations occur locally in the browser for a realtime experience with no latency. Drawing on sources such as social streams, stock information, and even a company’s proprietary data, the Enterprise version helps brands discover important trends — keywords to bid on, viral content to share, who their influencers and detractors are, and what sentiment and demographic movements are taking shape — and lets them create correlations across data points, too.
Here are some final thoughts from our panel of semantic web experts on what to expect to see as the New Year rings in:
Broader deployment of the schema.org terms is likely. In the study by Mühleisen and Bizer in July this year, we saw Open Graph Protocol, DC, FOAF, RSS, SIOC and Creative Commons still topping the ranks of top semantic vocabularies being used. In 2013 and beyond, I expect to see schema.org jump to the top of that list.
Christine Connors, Chief Ontologist, Knowledgent:
I think we will see an uptick in the job market for semantic technologists in the enterprise; primarily in the Fortune 2000. I expect to see some M&A activity as well from systems providers and integrators who recognize the desire to have a semantic component in their product suite. (No, I have no direct knowledge; it is my hunch!)
We will see increased competition from data analytics vendors who try to add RDF, OWL or graphstores to their existing platforms. I anticipate that, by the end of 2013, many of these immature deployments will have left some project teams disappointed. The mature vendors will need to put resources into sales and business development, with the right partners for consulting and systems integration, to be ready to respond to calls for proposals and assistance.
Yesterday we began our look back at the year in semantic technology here. Today we continue with more expert commentary on the year in review:
Ivan Herman, W3C Semantic Web Activity Lead:
I would mention two things (among many, of course).
- Schema.org had an important effect on semantic technologies. Of course, it is controversial (role of one major vocabulary and its relations to others, the community discussions on the syntax, etc.), but I would rather concentrate on the positive aspects. A few years ago the topic of discussion was whether having ‘structured data’, as it is referred to (I would simply say having RDF in some syntax or other), as part of a Web page makes sense or not. There were fairly passionate discussions about this and many were convinced that doing that would not make any sense, there is no use case for it, authors would not use it and could not deal with it, etc. Well, this discussion is over. Structured data in Web sites is here to stay, it is important, and has become part of the Web landscape. Schema.org’s contribution in this respect is very important; the discussions and disagreements I referred to are minor and transient compared to the success. And 2012 was the year when this issue was finally closed.
- On a very different aspect (and motivated by my own personal interest) I see exciting moves in the library and the digital publishing world. Many libraries recognize the power of linked data, the value of standard cataloging techniques well adapted to linked data, and the role of metadata, in the form of linked data, adopted by journals and soon by electronic books… All of this will have a profound influence, bringing a huge amount of very valuable data onto the Web of Data, linking to sources of accumulated human knowledge. I have witnessed different aspects of this evolution coming to the fore in 2012, and I think this will become very important in the years to come.