As the Internet of Things expands its reach, more and more of the everyday objects that we surround ourselves with are becoming “smart.” It’s already happening with our phones, our microwaves, our refrigerators, our lighting systems — entire houses are gaining an element of artificial intelligence, allowing us to communicate with objects via language. In most cases, those communications are as basic as possible, e.g. “Dim lights” or “Call Susan.” But with technologists and linguists putting their mental muscle to work, we will soon find ourselves immersed in an array of objects, tools, and machine-based services that we will be able to converse with using natural language.

One of the experts pushing person-to-machine communication forward is Kathleen Dahlgren, a PhD linguist, adjunct professor at UCLA, and President of Cognition. The company comprises “a team of pioneering technologists applying computational linguistics, formal semantics and machine learning to bring you intelligent dialogue with machines.” At the recent Semantic Technology and Business Conference in San Francisco, Kathleen discussed the innovative work she is doing at Cognition using the example of a smart television.

Throughout her discussion of the progress being made in linguistic semantics, Kathleen underscored a singular point: “Statistics alone doesn’t work,” she told us. Machine learning systems must use formal rules as well as lexical semantics — the study of what the words of a given language mean. Only through the combination of machine learning and deep natural language processing (NLP) can devices achieve any semblance of true communication.

While current dialogue systems are making excellent progress, they suffer from a number of limitations, Kathleen said. They get stuck and become unresponsive when they can’t understand what a person is saying. They’re built to follow structured formats, but when a speaker goes off topic or strays from the standard conventions that the system understands, communication breaks down.

Additionally, current systems are unable to follow the thread of a conversation — they can’t understand references back to previous topics, references to past or future events, subtlety, negation, or turns of phrase. Systems that use only machine learning rely heavily on popular ways of speaking and scripting. “Even the most vanilla person in the world won’t always be understood by such a system,” Kathleen commented, because these systems can’t adapt to any change from what is expected.

These are all major problems, but Kathleen and her team at Cognition are working on semantic solutions to them by adding deep NLP to the mix. By combining linguistics and statistics, Kathleen noted, Cognition is creating a dialogue system that allows fine-grained interpretation of user input and free text databases. Such a system could also adapt to changes from the expected, follow the central ideas of a conversation, and communicate in a more natural way.

Cognition is pushing NLP forward from a variety of angles. First, they’re tackling the issue of tokenization, or the process of breaking a stream of utterances into words. If the Cognition system thinks it hears someone say, “Show me an ant elope,” it filters the input against metrics of syntactic and semantic plausibility, letting the system figure out that the user probably meant, “Show me an antelope.”
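The idea of filtering candidate segmentations by plausibility can be sketched in a few lines. The lexicon, frequencies, and candidates below are invented for illustration — this is not Cognition’s actual method, just a minimal word-level plausibility filter that prefers the segmentation whose words are collectively more probable.

```python
import math

# Toy word-frequency lexicon (counts are invented for illustration).
LEXICON = {"show": 100, "me": 100, "an": 100,
           "antelope": 5, "ant": 20, "elope": 2}
TOTAL = sum(LEXICON.values())

def log_prob(tokens):
    """Sum of log-probabilities of each word; unseen words get a
    tiny pseudo-count, so segmentations full of rare or unknown
    words score poorly."""
    score = 0.0
    for t in tokens:
        freq = LEXICON.get(t, 0.01)
        score += math.log(freq / TOTAL)
    return score

def best_segmentation(candidates):
    """Pick the candidate token sequence with the highest score."""
    return max(candidates, key=log_prob)

candidates = [
    ["show", "me", "an", "ant", "elope"],   # "Show me an ant elope"
    ["show", "me", "an", "antelope"],       # "Show me an antelope"
]
print(best_segmentation(candidates))
# → ['show', 'me', 'an', 'antelope']
```

Because log-probabilities are summed, a segmentation that splits one common word into two rarer ones pays a penalty for each extra word — which is why “antelope” beats “ant elope” here.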

Second, Cognition uses linguistic semantics to create entity and phrase recognition, allowing their system to decipher that “Silver Linings Playbook” is the name of a movie, not a sports guide, and “I’m all ears” is a turn of phrase, not an unfortunate medical condition. In the case of a smart television, the TV needs data on all types of events, time periods, people, and… well, just about everything. This is so that when a user asks the television, “What was the name of that flick about the musketeer who had to wear a mask in prison for all those years?” the system will be able to respond, “That movie is called The Man in the Iron Mask. Would you like me to play it now?”
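One simple way to ground this kind of phrase recognition is a longest-match scan against a gazetteer of known titles and idioms. The entries and tags below are invented for illustration, not Cognition’s actual resources — a real system would back this with large curated lexicons.

```python
# Toy gazetteer mapping multiword phrases to (type, canonical form).
GAZETTEER = {
    ("silver", "linings", "playbook"): ("MOVIE", "Silver Linings Playbook"),
    ("i'm", "all", "ears"): ("IDIOM", "I'm listening"),
    ("the", "man", "in", "the", "iron", "mask"): ("MOVIE", "The Man in the Iron Mask"),
}
MAX_LEN = max(len(key) for key in GAZETTEER)

def recognize(tokens):
    """Greedy longest-match scan: known phrases become tagged
    entities; everything else passes through as plain words."""
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            key = tuple(tokens[i:i + n])
            if key in GAZETTEER:
                out.append(GAZETTEER[key])
                i += n
                break
        else:
            out.append(("WORD", tokens[i]))
            i += 1
    return out

print(recognize("play silver linings playbook".split()))
# → [('WORD', 'play'), ('MOVIE', 'Silver Linings Playbook')]
```

Matching longest phrases first is what keeps “playbook” from being read as a standalone sports term when it is part of a film title.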

Cognition uses ontologies to break down the sense of words and decipher what a person is talking about in any given context. With formal semantic tools, Cognition’s system is learning how to both understand and engage in unstructured dialogue, following the thread of a conversation, interpreting all of the information that the user is sharing, and following referential expressions like pronouns. A semantic system can also handle negation (e.g. “Not that movie, but another one with Robert DeNiro”), the existence and non-existence of entities and events (e.g. “Is there a baseball game on tonight?”), and interpretation of the past, present, and future.
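The ontology-driven side of this can be sketched as well: pick the sense of an ambiguous word whose ontological category best matches the surrounding context. The mini-ontology, sense inventory, and category labels below are invented for illustration, not Cognition’s actual ontology.

```python
# Toy sense inventory: each sense carries an ontology category.
SENSES = {
    "playbook": [
        {"gloss": "book of sports plays", "category": "SPORTS"},
        {"gloss": "part of a film title", "category": "FILM"},
    ],
}

# Context words mapped to the ontology categories they evoke.
CONTEXT_CATEGORIES = {
    "watch": "FILM", "movie": "FILM", "actor": "FILM",
    "coach": "SPORTS", "team": "SPORTS",
}

def disambiguate(word, context):
    """Choose the sense whose category gets the most votes from
    the context words; unrelated words vote for nothing."""
    votes = [CONTEXT_CATEGORIES.get(w) for w in context]
    best = max(SENSES[word], key=lambda s: votes.count(s["category"]))
    return best["gloss"]

print(disambiguate("playbook", ["watch", "that", "movie"]))
# → part of a film title
```

Here the context words “watch” and “movie” both vote for the FILM category, so the film-title sense wins over the sports sense.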

With formal semantics, a smart television could follow not just the words that are being spoken, but the connections between them and the larger goal of the conversation. Take the following conversation:
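As an illustrative sketch of such an exchange — every turn, name, and method here is invented for illustration, not taken from Cognition’s system — a toy dialogue state can track the entity currently in focus, so that pronouns (“it”) and negation (“not that one”) resolve against earlier turns:

```python
class DialogueState:
    """Minimal discourse tracker: remembers the entity under
    discussion and the entities ruled out by negation."""

    def __init__(self):
        self.focus = None        # entity currently under discussion
        self.excluded = set()    # entities ruled out by negation

    def mention(self, entity):
        """A new entity enters the conversation and takes focus."""
        self.focus = entity

    def resolve(self, phrase):
        """Resolve 'it' / 'that one' to the entity in focus."""
        if phrase in ("it", "that one") and self.focus:
            return self.focus
        return phrase

    def negate(self, phrase):
        """'Not that one' rules out the resolved entity and
        clears it from focus."""
        entity = self.resolve(phrase)
        self.excluded.add(entity)
        if entity == self.focus:
            self.focus = None
        return entity

state = DialogueState()
state.mention("The Man in the Iron Mask")  # "Find the musketeer movie"
print(state.resolve("it"))                 # "Play it" → the film in focus
state.negate("that one")                   # "No, not that one"
print(state.excluded)
```

Even this toy version captures the key point of the talk: without some memory of what the conversation is about, “it” and “not that one” are meaningless strings.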

This interaction represents the as yet unachieved goal, the ideal level of communication between an object and its user. No one is quite there yet, but Cognition is leading the pack thanks to its innovative combination of machine learning principles and semantic tools. Neither approach can achieve real communication on its own, but combined, machine learning and semantic NLP could take artificial intelligence to the next level.