Stephen Shankland of CNet recently reported, “Google updated its Hangouts app on Wednesday so the multipurpose communication tool can detect when people are trying to find each other and make it easier to connect. The updated app, available for Google’s Android mobile operating system first but submitted to Apple for approval on iOS, also lets people express themselves with stickers and video filter effects, Bradley Horowitz, vice president of product at Google, said at the LeWeb conference here.”
CAMBRIDGE, Mass. (PRWEB) December 10, 2014 — Cambridge Semantics, the leading provider of smart data solutions driven by Semantic Web technology, and SPARQL City, provider of a high-performance, scalable Hadoop-based graph database infrastructure, today announced a collaboration to jointly offer an integrated graph analytics solution with semantic understanding, helping enterprises derive greater understanding and value from big data.
Cambridge Semantics offers the award-winning Anzo Smart Data Platform (Anzo SDP), leveraged by customers and partners for building interactive Smart Data solutions that help enterprises rapidly discover, understand, combine, analyze, link and manage data from diverse sources, both from within and across organizational boundaries. SPARQL City’s Hadoop-based graph analytics engine provides a simple and powerful way for people to query semi-structured and structured data and find more nuanced relationships within and across these datasets in easy-to-grasp graphical representations. Read more
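The Anzo/SPARQL City stack described above is proprietary, so as a rough illustration of the kind of graph-pattern querying a SPARQL engine performs over linked enterprise data, here is a toy in-memory triple store in plain Python. All entity and predicate names are made up; a real deployment would run SPARQL over a scalable Hadoop-backed store rather than a Python set.

```python
# Toy sketch only: a minimal in-memory triple store (all names hypothetical)
# illustrating the graph-pattern matching a SPARQL engine performs at scale.
triples = {
    ("acme", "type", "Customer"),
    ("acme", "boughtFrom", "supplierA"),
    ("supplierA", "locatedIn", "Boston"),
}

def match(pattern, triples):
    """Return variable bindings for one triple pattern.

    Pattern terms starting with '?' are variables; others must match exactly.
    """
    s, p, o = pattern
    results = []
    for ts, tp, to in triples:
        binding, ok = {}, True
        for term, value in ((s, ts), (p, tp), (o, to)):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Which suppliers does each customer buy from?
print(match(("?c", "boughtFrom", "?s"), triples))
# → [{'?c': 'acme', '?s': 'supplierA'}]
```

A production SPARQL engine evaluates many such patterns at once and joins their bindings, which is where the scalability challenge (and the value of a Hadoop-based backend) comes in.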
Want to keep your high-risk customers from heading out the door? Well, Framed Data wants to give you a hand.
The company, which today announced that it has raised $2 million from Google Ventures, Innovation Works, Jotter, and NYU Innovation Fund, as well as a number of angels, applies machine learning to reducing subscription churn. It focuses especially on B2B SaaS companies, uncovering the less-than-obvious behavioral traits of churned users and applying that knowledge going forward, says director of marketing Tim Wu.
“Churn is the biggest pain we can tackle,” he says, providing the company a way to distinguish itself in an increasingly commoditized data analytics space. And in the B2B SaaS market, “the lifetime value of customers is really clear – they know that if customers leave, they lose x dollars.”
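Framed Data's models are not public, so the following is only a toy sketch of the general shape of churn prediction: score each account from behavioral signals and flag the riskiest. The feature names and weights are invented; a real system would learn them from historical churn data rather than hard-code them.

```python
# Illustrative sketch only: hypothetical feature weights for a logistic
# churn scorer. Higher score means the account looks more likely to churn.
import math

WEIGHTS = {"logins_per_week": -0.6, "support_tickets": 0.4, "seats_dropped": 0.9}
BIAS = 0.5

def churn_probability(user):
    """Logistic function over a weighted sum of behavioral features."""
    z = BIAS + sum(WEIGHTS[k] * user.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

active = {"logins_per_week": 5, "support_tickets": 0, "seats_dropped": 0}
at_risk = {"logins_per_week": 0, "support_tickets": 3, "seats_dropped": 2}

print(churn_probability(active) < churn_probability(at_risk))  # → True
```

This is also where the "x dollars" framing above becomes actionable: multiplying each account's churn probability by its lifetime value gives a ranked list of where retention effort pays off most.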
Amit Chowdhry of Forbes reports, “This week, Facebook is adding the ability to search for keywords contained within posts, photos, articles and videos that were shared with you. Facebook’s semantic Graph Search features, like the ability to search for the phrases ‘My friends that work at Facebook’ and ‘Photos of my friends,’ have not changed. Facebook launched Graph Search in beta last year. Facebook decided to add the post search feature after receiving feedback from users… ‘With a quick search, you can get back to a fun video from your graduation, a news article you’ve been meaning to read, or photos from your friend’s wedding last summer,’ said Tom Stocky.” Read more
There’s a new version of Lexalytics’ Salience Text Analytics Engine: some of its key new capabilities in Version 6 are enabled by underlying Syntax Matrix technology that the vendor has been working on for the past 12 to 18 months.
Syntax Matrix, explains vp of product and marketing Seth Redmore, takes on the job of efficient chunk parsing, so that customers who may be processing hundreds of millions of documents a day can maintain that scale without sacrificing accuracy or performance. “What [chunk parsing] means is that we can tear apart a sentence to understand quickly how all the phrases in the sentence relate to each other,” he says, much as Salience’s existing Concept Matrix technology leverages Wikipedia to help it tell which entities are related to each other and how closely.
Deep learning infuses the Syntax Matrix, which is trained on billions of words to support its rich approach to extracting phrases, each with some 200 different features associated with it. “With deep learning we extract so many different features and understand all the interrelationships between them,” he says, providing users with the chunks of the sentence that are most interesting to them, and what they mean so they can take action. “Syntax Matrix lets us tear apart these sentences in a grammatically meaningful fashion and do it in such a way that you can build other stuff on top of it,” he says.
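To make "chunk parsing" concrete: the idea is to group a sentence's words into short, grammatically coherent phrases rather than build a full parse tree. This is not Lexalytics' Syntax Matrix (which uses deep learning over hundreds of features); it is a minimal rule-based stand-in that groups part-of-speech-tagged tokens into noun-phrase chunks.

```python
# Minimal sketch, not the Syntax Matrix: a toy chunker that groups runs of
# determiners, adjectives, and nouns (by Penn Treebank POS tag) into
# noun-phrase "chunks" — the kind of unit chunk parsing recovers.
def chunk_noun_phrases(tagged):
    """Return noun-phrase chunks from (word, POS-tag) pairs."""
    np_tags = {"DT", "JJ", "NN", "NNS"}
    chunks, current = [], []
    for word, tag in tagged:
        if tag in np_tags:
            current.append(word)      # extend the current chunk
        elif current:
            chunks.append(" ".join(current))  # close it at a non-NP tag
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks

sentence = [("The", "DT"), ("new", "JJ"), ("engine", "NN"),
            ("parses", "VBZ"), ("long", "JJ"), ("documents", "NNS")]
print(chunk_noun_phrases(sentence))  # → ['The new engine', 'long documents']
```

The appeal for high-volume pipelines is exactly what Redmore describes: chunking is far cheaper than full parsing, yet still tells you how the phrases of a sentence relate to one another.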
Anand Srinivasan of Smart Data Collective recently wrote, “There has been talk about the arrival of Web 3.0 — the semantic web — for quite some time now. There is already a lot of progress here with the implementation of things like the RDF schema and the semantic web stacks. The approach so far has been top-down, with standardization tools introduced and webmasters asked to implement them on their websites. However, a truly semantic web may not be possible unless there is a desperate need for businesses to implement them on their websites. And that, in my opinion, is going to come from the evolution of the enterprise content management (ECM) industry. ECM, for the uninitiated, comprises the technologies, methods and tools that are used by businesses to organize and store their organization’s documents.” Read more
Make it as easy to add and connect new data sources to the enterprise analytics infrastructure as it is to add a new web site to the modern web. That’s the goal of next-gen data curation company Tamr, a startup born from an MIT research project to bring together large numbers of tabular data sources in a scalable and repeatable way.
Just like Google does all the work to find and connect web sites hosting the information that users want, “we want to do the same with tabular data sources inside the enterprise,” says Tamr co-founder and CEO Andy Palmer. “Tamr provides systems of reference. If you are looking for attributes to add to an analysis or want data to support something, you have this reference place to go in the enterprise with a catalogue of all the data that exists across the company.”
So often businesses want to use analytics to address hard questions, but they can’t do so successfully unless they integrate many disparate data sources and create a referential catalog. With Tamr, Palmer says, they can ingest data sources very quickly into a semantic triple store, make them available in real time, and connect them using machine learning to map attributes and match records, providing a unified view of a given entity that can then be consumed by various business intelligence and analytics tools. To be usable, he points out, data has to be “very, very thoroughly connected into everything else for there to be context and reference for how it can be consumed and whether it is reliable.”
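Tamr's matching models are proprietary, so as a hedged illustration of what "mapping attributes" across source schemas involves, here is a sketch using Python's standard-library difflib: score the name similarity between columns from two hypothetical sources and keep only confident pairs. Real systems also use value distributions, expert feedback, and learned models, not just string similarity.

```python
# Hedged sketch: map columns of one hypothetical source schema onto another
# by string similarity, keeping only pairs above a confidence threshold.
from difflib import SequenceMatcher

source_a = ["cust_name", "phone_no", "zip"]
source_b = ["customer_name", "phone_number", "postal_code"]

def map_attributes(cols_a, cols_b, threshold=0.6):
    """Pair each column in cols_a with its best-scoring match in cols_b."""
    mapping = {}
    for a in cols_a:
        best = max(cols_b, key=lambda b: SequenceMatcher(None, a, b).ratio())
        if SequenceMatcher(None, a, best).ratio() >= threshold:
            mapping[a] = best  # confident match; low scores are left unmapped
    return mapping

print(map_attributes(source_a, source_b))
```

Note that "zip" finds no confident partner here ("postal_code" shares almost no characters with it), which is exactly the kind of gap where a production system would fall back on data values or a human expert.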
Kurt Cagle, a Principal Evangelist for Semantic Technologies at Avalon Consulting, recently wrote, “I’m not a recruiter. I have from time to time submitted resumés for jobs to Monster, LinkedIn, or individual company sites as a developer or architect, but even there I’ve discovered what millions of job hunters already know: submitting online resumés is a pain. Consider the process. You create a profile, identifying yourself to job submission system X. This site may or may not have a way of uploading a text resumé, but one thing you find in the data management space is that structure matters, and the farther you deviate from the structure, the harder it is for some OCR artificial intelligence to actually make sense of what you’ve written.” Read more
Ian Harris of Search Engine Journal recently wrote, “Semantic search gives the industry a chance to go back to basics and provide information rather than force it. Let’s take a look at how to embrace semantics.” First off, Harris suggests thinking like a user: “Simply put, if you’re going to optimize for the user, you need to think like the user. In the world of semantics, keywords just don’t cut it… Take the above example. You can see that semantics for a generic term already highlights a wealth of information that a search engine has matched to the keyword. Imagine you are building up your semantic relevance for your delivery service. Optimization on your website should be geared towards information surrounding that service, not only to gain a ranking within relevant SERPs, but to provide answers relevant to your expertise. This could mean you’re providing information about your delivery service, the logistics of your business, and what’s happening in the industry. If the site only focuses on keywords, opportunities will be missed.” Read more
PALO ALTO, CA–(Marketwired – Nov. 17, 2014) – Expert System (EXSY), the leader in semantic technology, announced today the availability of the Cogito API. As a fully configured API series, Cogito API transforms a business’ ability to organize, link, find and understand unstructured information within corporate intelligence, CRM, human resources, data management, knowledge management and social media monitoring applications.
Using Cogito API, the topics, concepts, entities, relationships and sentiment expressed in any massive collection of text can be analyzed and understood; the data output is returned in UTF-8 JSON or XML format and made instantly ready to use in enterprise solutions ranging from customer care and sentiment analysis to advanced business intelligence. Read more
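The announcement does not publish Cogito API's actual endpoints or response schema, so the payload below is invented for illustration only. It simply shows why a UTF-8 JSON response of the described kind (entities, topics, sentiment) is "instantly ready to use": it decodes straight into native data structures an enterprise application can filter and route.

```python
# Hypothetical sketch: a made-up JSON response of the general kind the
# announcement describes. Field names and values are NOT the real Cogito
# API schema — they are placeholders for illustration.
import json

raw = """{
  "entities": [{"text": "Expert System", "type": "ORG"}],
  "topics": ["semantic technology"],
  "sentiment": {"score": 0.42}
}"""

doc = json.loads(raw)  # UTF-8 JSON decodes directly into Python structures
orgs = [e["text"] for e in doc["entities"] if e["type"] == "ORG"]
print(orgs, doc["sentiment"]["score"])
```

From here, a customer-care or business-intelligence application would typically route documents on fields like these, e.g. escalating items whose sentiment score falls below some threshold.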