Ron Miller of TechCrunch reports, “IBM today announced a new product called Watson Analytics, one they claim will bring sophisticated big data analysis to the average business user. Watson Analytics is a cloud application that does all of the heavy lifting related to big data processing by retrieving the data, analyzing it, cleaning it, building sophisticated visualizations and offering an environment for communicating and collaborating around the data. And lest you think that IBM is just slapping on the Watson label because it’s a well-known brand (as I did), Eric Sall, vp of worldwide marketing for business analytics at IBM, says that’s not the case. The technology underlying the product, including the ability to process natural language queries, is built on Watson technology.” Read more
Apigee wants the development community to be able to seamlessly take advantage of predictive analytics in their applications.
“One of the biggest things we want to ensure is that the development community gets comfortable with powering their apps with data and insights,” says Anant Jhingran, Apigee’s VP of products and formerly CTO for IBM’s information management and data division. “That is the next wave that we see.”
Apigee wants to help them ride that wave, enabling their business to better deal with customer issues, from engagement to churn, in more personal and contextual ways. “We are in the business of helping customers take their digital assets and expose them through clean APIs so that these APIs can power the next generation of applications,” says Jhingran. But in thinking through that core business, “we realized the interactions happening through the APIs represent very powerful signals. Those signals, when combined with other contextual data that may be in the enterprise, enable some very deep insights into what is really happening in these channels.”
With today’s announcement of a new version of its Apigee Insights big data platform, all those signals generated – through API and web channels, call centers, and more – can come together in the service of predictive analytics for developers to leverage.
What best practices should inform your company’s text analytics initiatives? Executive Lessons on Modern Text Analytics, a new white paper prepared by Geoff Whiting, principal at GWhiting.com, and Alesia Siuchykava, project director at Data Driven Business, provides some insight. Contributors to the lessons shared in the report include Ramkumar Ravichandran, Director, Analytics, at Visa, and Matthew P.T. Ruttley, Manager of Data Science at Mozilla Corp.
One of the interesting points made in the paper is that text analytics can be applied to many use cases: customer satisfaction and management effectiveness, product design insights, and enhancing predictive data modeling as well as other data processes. But at the same time, a takeaway is that it is better for text analytics teams to follow a narrow path than to try to accommodate a wide-ranging deployment. “All big data initiatives, and especially initial text analytics, need a specific strategy,” the writers note, preferably focusing on “low-hanging fruit through simple business problems and use cases where text analytics can provide a small but fast ROI.”
Customer experience management vendor Clarabridge wants to bring the first-person narrative from call center interactions to life for marketing analysts, customer care managers, call center leaders and other customer-focused enterprise execs. With its just-released Clarabridge Speech, the company now offers a cloud solution that integrates Voci Technologies’ speech recognition smarts with its own capabilities for using NLP to analyze and categorize text, sentiment and emotion in surveys, social media, chat sessions, emails and call center agents’ own notes.
Agent notes certainly are helpful when it comes to assessing whether customers are having negative experiences and whether their loyalty is at stake, among other concerns. But, points out Clarabridge CEO Sid Banerjee, “an agent almost never types word for word what the customer says,” nor will they necessarily characterize callers’ tones as angry, confused, and so on. With the ability now to take the recorded conversation and turn it into a transcript, the specific emotion and sentiment words are there along with the entire content of the call to be run through Clarabridge’s text and sentiment algorithms.
“You get a better sense of the true voice of the customer and the experience of that interaction – not just the agent perspective but the customer perspective,” Banerjee says.
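The idea of running a full transcript through text and sentiment algorithms can be sketched in miniature. The word lists and scoring rule below are illustrative assumptions for a toy lexicon-based approach, not Clarabridge’s actual NLP models:

```python
# Minimal sketch of lexicon-based sentiment scoring over a call transcript.
# The POSITIVE/NEGATIVE word lists and the scoring rule are illustrative
# assumptions, not Clarabridge's production algorithms.

POSITIVE = {"great", "thanks", "resolved", "happy", "helpful"}
NEGATIVE = {"angry", "confused", "frustrated", "cancel", "unacceptable"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]; negative values suggest a poor experience."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I am angry and frustrated, I want to cancel"))   # -1.0
print(sentiment_score("Thanks, the agent was helpful, issue resolved"))  # 1.0
```

Because the customer’s own words reach the scorer, emotion-bearing terms the agent would never have typed into a notes field now contribute directly to the score.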
Lars Hard of Beta News recently wrote, “Artificial intelligence (AI) has become a bit of a buzzword among technology professionals (and even within the mainstream public) but truthfully, most people do not know how it works or how it is already being integrated within leading enterprise businesses. AI for businesses is today mostly made up of machine learning, wherein algorithms are applied in order to teach systems to learn from data to automate and optimize processes and predict outcomes and gain insights. This simplifies, scales and even introduces new important processes and solutions for complex business problems as machine learning applications learn and improve over time. From medical diagnostics systems, search and recommendation engines, robotics, risk management systems, to security systems, in the future nearly everything connected to the internet will use a form of a machine learning algorithm in order to bring value.” Read more
Chloe Green of Information Age recently wrote, “Handling immense data sets requires a combination of scientific and technological skills to determine how data is stored, searched and accessed. In science, the importance of data scientists in ensuring that data is handled correctly from the outset is not underestimated; other industries can learn from the scientific approach. Text-mining tools and the use of relevant taxonomies are essential. If we think about big data as a huge number of data points in some multi-dimensional space, the problem is one of analysis, i.e. frequently finding very similar or very dissimilar points which cannot be compared. In life sciences, taxonomies assign data points a class, thus comparison of two points is as easy as looking up other data points in the same class.” Read more
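The taxonomy idea Green describes — assign each data point a class so that finding comparable points becomes a lookup rather than a search of a multi-dimensional space — can be sketched with a plain dictionary. The gene names and class labels below are invented for illustration:

```python
# Sketch of taxonomy-based lookup: each data point carries a class label,
# so "find comparable points" is a dictionary lookup, not a scan of the
# whole space. The taxonomy contents are invented for illustration.

from collections import defaultdict

taxonomy = {
    "BRCA1": "tumor-suppressor gene",
    "TP53": "tumor-suppressor gene",
    "KRAS": "oncogene",
}

# Index points by class once, up front.
by_class = defaultdict(list)
for point, cls in taxonomy.items():
    by_class[cls].append(point)

def comparable(point: str) -> list:
    """Other data points in the same taxonomy class."""
    return [p for p in by_class[taxonomy[point]] if p != point]

print(comparable("BRCA1"))  # ['TP53']
```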
Seth Grimes, president and principal consultant of Alta Plana Corp. and founding chair of the Sentiment Analysis Symposium, has put together a thorough new report, Text Analytics 2014: User Perspectives on Solutions and Providers. Among the interesting findings of the report is that “growth in text analytics, as a vendor market category, has slackened, even while adoption of text analytics, as a technique, has continued to expand rapidly.”
Grimes explains that in a fragmented market, consisting of everything from text analytics services to solution-embedded technologies, the opportunities for users to practice text analytics are strong, but that increasingly text analytics is not the main focal point of the solutions being leveraged.
Reflecting the diversity of options, respondents listed among their providers a number of open-source offerings such as Apache OpenNLP and GATE, API services such as AlchemyAPI and Semantria, and enterprise software solution and business suite providers like SAP. The word cloud above was generated by Alta Plana at Wordle.net to show how users responded to the question of companies they know provide text/content analytics functionality. Nearly 50 percent of users are likely to recommend their most important provider.
OpenText yesterday made its secure file sharing and synchronization product, Tempo Box, available for free to customers using its OpenText Content Suite enterprise information management tool.
“A lot of our customers have major concerns about employees sharing documents with cloud tools like Dropbox,” says Lubor Ptacek, vp of strategic marketing. They want them to be available, synched and sharable across all their devices, but using such services can create security and compliance problems. By deploying Tempo Box on top of their existing infrastructure, at no charge to all internal employees and any external parties they may need to share content with, companies get a seamless and cost-effective way to share files in the cloud without compromising security, records management requirements and storage optimization, he says – “the things that enterprise customers care about, especially those operating in regulated environments.”
Among those capabilities is applying automatic content classification, which is usually required for records management reasons – for example, helping companies determine if a document is an employee record they must keep for five years or a tax record they have to hold for seven years. That under-the-hood classification engine is an outgrowth of OpenText’s acquisition a few years back of text mining, analytics and search company Nstein. Since the acquisition, says Ptacek, the company has been looking at ways to apply the technology to specific business problems and make it part of its applications.
Data scientists can add another tool to their toolset today: GraphLab has launched GraphLab Create 1.0, which bundles up everything starting from tools for data cleaning and engineering through to state-of-the-art machine learning and predictive analytics capabilities.
Think of it, company execs say, as the single platform that data scientists or engineers can leverage to unleash their creativity in building new data products, enabling them to write code at scale on their own laptops. The driving concept behind the solution, they say, is to make large-scale machine learning and predictive analytics easy enough that companies won’t have to hire huge teams of data scientists and engineers and build the big hardware infrastructures that lie behind many of today’s Big Data-intensive products. And, the data scientists and engineers that do use it won’t need to be experts at machine-learning algorithms – just experienced enough to write Python code.
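The end-to-end pipeline described above — raw data in, cleaning, a fitted predictive model, scores out — can be illustrated with a toy stdlib sketch. The records and the one-feature least-squares model are assumptions for illustration, not GraphLab Create’s actual API:

```python
# Illustrative sketch of the pipeline GraphLab Create packages up:
# clean raw records, fit a simple predictive model, score new data.
# Data and model here are toy assumptions, not GraphLab Create calls.

raw = [
    {"visits": 1, "spend": 10.0},
    {"visits": 3, "spend": 30.0},
    {"visits": None, "spend": 25.0},  # dirty record
    {"visits": 5, "spend": 50.0},
]

# Data cleaning: drop records with missing features.
clean = [r for r in raw if r["visits"] is not None]

# Fit spend ~= slope * visits by least squares (closed form, no intercept).
num = sum(r["visits"] * r["spend"] for r in clean)
den = sum(r["visits"] ** 2 for r in clean)
slope = num / den

def predict(visits):
    """Predicted spend for a given visit count."""
    return slope * visits

print(predict(4))  # 40.0
```

A platform like the one described swaps each toy step for a scalable equivalent while keeping the workflow a few lines of Python.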
Versium Leverages Microsoft Azure Machine Learning For New Predictive GivingScore Solution To Improve Fundraising
Versium, which earlier this year launched its Predictive FraudScore solution (covered here), today releases its Predictive GivingScore solution, designed to help charitable institutions and political organizations better predict who is likely to donate, be a repeat donor, or make a larger contribution. Predictive GivingScore is the latest of the company’s predictive Score products, which also include churn, social influencer and shopper scoring – and it’s by no means the last.
It was built with Microsoft Azure Machine Learning, a managed cloud service for building predictive analytics solutions publicly unveiled just a short time ago. CEO Chris Matty says that platform helps Versium rapidly build its new score solutions. (Just shy of ten Versium scoring products are currently in use or in development.) Azure ML, Matty notes, contains dozens of machine learning algorithms and mathematical computation models, which Versium leverages to easily and effectively experiment with, create and tune models for the highest accuracy in its predictive scoring solutions.
“Once we have a score built it just takes little tuning. But when we are building a new score we need to look at some different models and see what works better,” he says. “We want to move quickly by evaluating the different models, and we can visualize very easily the process of building the predictive model.”
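The evaluation loop Matty describes — try several candidate models against held-out labeled data and keep the most accurate — looks roughly like this. The candidate rules and the toy donor dataset are illustrative assumptions, not Azure ML calls:

```python
# Sketch of a model-selection loop: score candidate models on held-out
# labeled data and keep the most accurate. Dataset and candidate rules
# are toy assumptions, not Versium's or Azure ML's actual models.

holdout = [  # (past_gifts, will_donate)
    (0, False), (1, False), (2, True), (3, True), (5, True),
]

candidates = {
    "threshold>=1": lambda gifts: gifts >= 1,
    "threshold>=2": lambda gifts: gifts >= 2,
    "always_yes":   lambda gifts: True,
}

def accuracy(model):
    """Fraction of holdout examples the model predicts correctly."""
    return sum(model(x) == y for x, y in holdout) / len(holdout)

best = max(candidates, key=lambda name: accuracy(candidates[name]))
print(best, accuracy(candidates[best]))  # threshold>=2 1.0
```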