Colin Jeavons of Search Engine Journal recently wrote, “Millions of people are already using semantic search and they don’t know it. Some of the world’s most popular search engines and social networking sites are using the technology to make it easier to make connections, learn, and explore interests. It’s quietly become a part of our lives, and innovative companies are pushing the technology and industry toward new horizons. So, what changed? Actually, it was users. Last year, 20 percent of Google searches were new, due to the fact that people started typing sentences and paragraphs into search engines, expecting keyword searches to operate like natural language. Today, user demands for answers to their questions are satiated with innovations like Google’s Hummingbird. People are now searching for ‘cheap flights to Miami on January 7th’ rather than just ‘cheap flights.’ This change in consumer behavior is a significant milestone.” Read more
Posts Tagged ‘semantic technology’
Expert System Releases Complete Solution for Taxonomy Creation, Deployment and Document Categorization
CHICAGO, ILLINOIS–(Marketwired – Nov 5, 2013) - Expert System, the semantic technology company, today announces the launch of its Cogito Categorization Platform at the Taxonomy Boot Camp conference held in Washington, D.C. The platform leverages the Cogito semantic technology for an end-to-end taxonomy creation, deployment and document categorization solution.
Top-down or theoretical approaches in taxonomy development often result in unnecessary complexity and include nodes that are not representative of the content being categorized. Cogito Categorization Platform uses intuitive semantic analysis to ensure creation of a taxonomy driven by the most relevant concepts and topics in the actual content. Read more
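To make the content-driven idea concrete, here is a minimal sketch that seeds taxonomy candidates from the terms that actually dominate a document collection, rather than from a predefined node list. Cogito's semantic analysis is proprietary; plain TF-IDF scoring and the sample documents below are stand-ins for illustration only.

```python
# Hypothetical sketch: rank corpus terms by TF-IDF and treat the top
# scorers as candidate taxonomy nodes grounded in the actual content.
import math
from collections import Counter

def candidate_taxonomy_terms(documents, top_n=3):
    """Rank terms by TF-IDF across the corpus and return the top candidates."""
    doc_tokens = [doc.lower().split() for doc in documents]
    n_docs = len(doc_tokens)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for tokens in doc_tokens for term in set(tokens))
    scores = Counter()
    for tokens in doc_tokens:
        tf = Counter(tokens)
        for term, count in tf.items():
            # Terms that appear in every document get an IDF of zero,
            # so generic vocabulary drops out of the candidate list.
            scores[term] += (count / len(tokens)) * math.log(n_docs / df[term])
    return [term for term, _ in scores.most_common(top_n)]

docs = [
    "semantic search relies on entity extraction",
    "taxonomy nodes should reflect document content",
    "semantic categorization maps documents to taxonomy nodes",
]
print(candidate_taxonomy_terms(docs))
```

A real deployment would tokenize and disambiguate far more carefully, but the design point survives: the node list falls out of the content, not a theory about it.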
Lucas Laursen of IEEE Spectrum reports, “As [Henry] Markram has been telling everyone since he got the €1 billion nod to lead the Human Brain Project, the way researchers study the brain needs to change. His approach—and it’s not the only one—stands on an emerging type of computing that he and others claim will let machines learn more like humans do. They could then offer generalizations from what’s known about a handful of neural pathways and find shortcuts to understanding the rest of the brain, he argues. The concept will rely as much on predictions of neural behavior as on experimental observations.” Read more
Symphony Teleca Receives Frost & Sullivan Technology Leadership Award For Vehicle Relationship Management
MOUNTAIN VIEW, Calif.–(BUSINESS WIRE)–Based on its recent analysis of the vehicle relationship management market, Frost & Sullivan recognizes Symphony Teleca Corporation (STC), a global innovation and development services company, with the 2013 North American Frost & Sullivan Award for Enabling Technology Leadership. The company’s Vehicle Relationship Management (VRM) solution, InSight Connect™ VRM, is a sought-after solution that empowers automotive OEMs and reduces warranty and recall spending. Read more
Recent updates to YarcData’s software for its Urika analytics appliance reflect a growing enterprise understanding of the impact semantic technology can have on turning Big Data into actual insights.
The latest update includes integration with more enterprise data discovery tools, including the visualization and business intelligence tools Centrifuge Visual Network Analytics and TIBCO Spotfire, as well as those based on SPARQL and RDF, JDBC, JSON, and Apache Jena. The goal is to streamline the process of getting data in and then being able to provide connectivity to the tools analysts use every day.
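Since the update centers on connecting SPARQL- and JSON-based tools to analysts' everyday front ends, here is a hypothetical sketch of that plumbing: flattening the standard SPARQL 1.1 JSON results format into plain rows a visualization tool could consume. The query, endpoint behavior, and sample response are illustrative assumptions, not YarcData's actual interface.

```python
# Hypothetical sketch: turn a SPARQL 1.1 JSON results document (the format
# returned by SPARQL endpoints) into a flat list of row dicts.
import json

QUERY = """
SELECT ?customer ?product
WHERE { ?customer <http://example.org/purchased> ?product . }
"""

def bindings_to_rows(results_json):
    """Flatten a SPARQL 1.1 JSON results document into a list of dicts."""
    doc = json.loads(results_json)
    vars_ = doc["head"]["vars"]
    return [
        {v: b[v]["value"] for v in vars_ if v in b}
        for b in doc["results"]["bindings"]
    ]

# A sample response, shaped the way an endpoint would return it:
sample = json.dumps({
    "head": {"vars": ["customer", "product"]},
    "results": {"bindings": [
        {"customer": {"type": "uri", "value": "http://example.org/cust/1"},
         "product": {"type": "literal", "value": "broadband"}},
    ]},
})
print(bindings_to_rows(sample))
# → [{'customer': 'http://example.org/cust/1', 'product': 'broadband'}]
```

Once results look like rows, handing them to a BI tool via JDBC or JSON is straightforward, which is exactly the "don't leave your interface" goal described above.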
As customers see the value of using the appliance to gain business insight, they want to be able to more tightly integrate this technology into wider enterprise workflows and infrastructures, says Ramesh Menon, YarcData vice president, solutions. “Not only do you want data from all different enterprise sources to flow into the appliance easily, but the value of results is enhanced tremendously if the insights and the ability to use those insights are more broadly distributed inside the enterprise,” he says. “Instead of having one analyst write queries on the appliance, 200 analysts can use the appliance without necessarily knowing a lot about the underlying, or semantic, technology. They are able to use the front end or discovery tools they use on a daily basis, not have to leave that interface, and still get the benefit of the Urika appliance.”
There has been a slate of chief data officer appointments of late. It’s been particularly noticeable in the marketing space. Marketing communications firm Ogilvy & Mather, for example, in August made Todd Cullen its global chief data officer to push data-driven marketing to the next level, and a short while later marketing services provider Mindshare put Bob Ivins in its first CDO slot, reporting directly to its CEO, “to harness and act on consumer insights in real-time.”
But the trend extends beyond that arena. According to The Big Data Executive Survey 2013: The State of Big Data in the Large Corporate World, released this month by NewVantage Partners, it’s becoming commonplace for large corporations to define or consider new roles, such as establishing a chief data officer. Forty-eight percent of respondents to its survey said they have established or are considering such a role, and are implementing new processes and organizational structures to ensure successful business adoption.
A global media organization that provides fixed-line internet IP TV to some 10 million customers had a new business initiative that required it to gain insight into its client base. After some 15 years in business, though, it’s not surprising that that information exists – and re-exists in many different forms – across many legacy applications, and trying to map those customers’ old purchase relationships to a new product catalog as part of a new payment and sales platform could have been just the thing to slow the company down.
Does that situation sound familiar? If your company’s been in business for any length of time, the answer is probably a resounding yes. Like this media business, you may well be in a market with plenty of competitive threats, meaning that unless you constantly innovate, your bread and butter is threatened. And so you, too, are probably always turning to your IT infrastructure team with new requirements.
“And it can be hard for them to build what they need to deliver,” says Carl Bray, product manager at Ontology Systems.
It’s getting to be that time again – yup, school days are getting into full swing. Education, of course, is going through a lot of change these days. The Common Core State Standards initiative is changing what students must learn and what teachers must teach in the early grades, while school-specific online courses now are joined by massive open online courses (MOOCs) that are bringing new learning experiences on a large scale to everyone from high school and college students to adults who haven’t taken courses inside a live classroom for decades.
It’s under these circumstances that startup Cognii is hoping to make its mark by applying natural language processing and semantic technology to automate assessments for online learning for students and to grade essays for educators. Its initial focus is on the online education sector, though founder and CTO Dharmendra Kanejiya – whose background involves developing algorithms to improve speech recognition at Vlingo, which were applied to Nuance Communications’ solutions when it acquired the company – says it also can have applicability in the real-world classroom.
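To give a feel for the assessment problem Cognii is tackling, here is a deliberately naive sketch that scores a student response against a reference answer by cosine similarity of word counts. Cognii's actual NLP is far more sophisticated; this bag-of-words comparison, and the sample answers, are assumptions used only to illustrate the shape of the task.

```python
# Naive sketch of automated answer assessment: a response that reuses the
# reference answer's vocabulary scores high; an off-topic one scores near zero.
import math
from collections import Counter

def similarity_score(reference, response):
    """Cosine similarity between bag-of-words vectors, in [0, 1]."""
    ref = Counter(reference.lower().split())
    res = Counter(response.lower().split())
    dot = sum(ref[w] * res[w] for w in ref)
    norm = (math.sqrt(sum(c * c for c in ref.values()))
            * math.sqrt(sum(c * c for c in res.values())))
    return dot / norm if norm else 0.0

reference = "photosynthesis converts light energy into chemical energy"
print(similarity_score(reference, "photosynthesis turns light energy into chemical energy"))
print(similarity_score(reference, "mitochondria are the powerhouse of the cell"))
```

Word overlap alone rewards paraphrase-by-copying and misses genuine understanding expressed in different words, which is precisely why semantic technology, rather than keyword matching, is the interesting part of Cognii's approach.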
Andy Flint of CloudTech recently wrote, “Analytics depends on data — the more, the merrier. If we’re trying to model, say, the behaviour of customers responding to marketing offers or clicking through a website, we can build a far stronger model with 10,000 samples than with 100. You would think, then, that the rise of Big Data and its seemingly inexhaustible supply of data would be every analyst’s dream. But Big Data poses its own challenges for modeling. Much of Big Data isn’t what we have historically thought of as ‘data’ at all. In fact, 80% of Big Data is raw, unstructured information, such as text, and doesn’t neatly fit into the columns and rows that feed most modeling programs.” Read more
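Flint's point that 10,000 samples make a far stronger model than 100 has a simple statistical basis: the uncertainty of an estimate shrinks with the square root of the sample size. The sketch below, with an assumed click-through rate chosen purely for illustration, shows the 95% confidence interval tightening by a factor of 10 as samples grow 100-fold.

```python
# Illustrative arithmetic (not from the article): standard error of an
# estimated proportion shrinks as 1/sqrt(n), so 10,000 samples pin the
# estimate down 10x more tightly than 100 samples do.
import math

def standard_error(p, n):
    """Standard error of an estimated proportion p from n samples."""
    return math.sqrt(p * (1 - p) / n)

p = 0.05  # assumed true click-through rate, for illustration only
for n in (100, 10_000):
    half_width = 1.96 * standard_error(p, n)
    print(f"n={n:>6}: estimate ±{half_width:.4f} (95% CI half-width)")
```

The catch Flint raises is that this only works once the raw, unstructured 80% of Big Data has been turned into the rows and columns such estimates are computed from.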