Posts Tagged ‘machine intelligence’

IBM Unveils First Watson Machine-Learning APIs

Serdar Yegulalp of InfoWorld reports, “Those who have been chomping at the bit to use IBM’s Watson machine-intelligence service with their apps need gnaw no longer. Watson APIs are now available for public use, albeit only through IBM’s Bluemix cloud services platform. IBM’s Watson Developer Cloud now offers eight services for building what IBM describes as cognitive apps, with more services promised later on.” Read more

Jeff Hawkins on the Future of Artificial Intelligence

Derrick Harris of GigaOM recently wrote, “Jeff Hawkins is best known for bringing us the Palm Pilot, but he’s working on something that could be much, much bigger. For the past several years, Hawkins has been studying how the human brain functions with the hope of replicating it in software. In 2004, he published a book about his findings. In 2012, Numenta, the company he founded to commercialize his work, finally showed itself to the world after roughly seven years operating in stealth mode. I recently spoke with Hawkins to get his take on why his approach to artificial intelligence will ultimately overtake other approaches, including the white-hot field of deep learning. We also discussed how Numenta has survived some early business hiccups and how he plans to keep the lights on and the money flowing in.” Read more

Semantic Technology Job: Analytical Linguist, Machine Intelligence

Google is looking for an analytical linguist. The job description states: “Understanding natural language is at the core of Google’s technologies. The Natural Language Understanding (NLU) team in Google Research guides, builds, and innovates methodologies around semantic analysis and representation, syntactic parsing and realization, morphology and lexicon development. Our work directly impacts Conversational Search in Google Now, the Knowledge Graph, and Google Translate, as well as other Machine Intelligence research. As an Analytical Linguist, you will collaborate with Researchers and Engineers in NLU/Machine Intelligence to achieve high quality data that improves our ability to understand and generate natural language systems. To this end, you will also be managing a team of junior linguists and vendors to derive linguistic databases as well as propose direction for approaches to language specific problems. Target languages: Brazilian Portuguese.”

Read more

Semantic Search and the Data Center

Haim Koshchitzky of Sys-Con Media recently wrote, “Enterprise applications can ‘live’ in many places and their logs might be scattered and unstandardized. First generation log analysis tools made some of the log data searchable, but the onus was on the developer to know what to look for. That process could take many hours, potentially leading to unacceptable downtime for critical applications. Proprietary log formats also confuse and confound conventional keyword search. That’s why semantic search can be so helpful. It uses machine intelligence to understand the context of words, so it becomes possible for a Google user to type ‘cheap flights to Tel Aviv on February 10th’ rather than just ‘cheap flights’ and receive a listing of actual flights rather than links to airline discounters. Bing, Facebook, Google and some vertical search engines include semantic technology to better understand natural language. It saves time and creates a better experience.” Read more
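The contrast Koshchitzky draws between keyword matching and semantic understanding can be illustrated with a toy sketch. This is not how any of the engines mentioned above actually work; the function name, slot names, and regex patterns are all hypothetical, showing only the general idea of mapping a free-text query to typed slots instead of bare tokens:

```python
import re

def parse_flight_query(query: str) -> dict:
    """Toy semantic parse: pull structured intent out of a free-text query.

    A keyword engine would match only the literal tokens; a semantic
    layer maps phrases to typed slots (intent, destination, date).
    """
    result = {"intent": None, "destination": None, "date": None}
    if "flight" in query.lower():
        result["intent"] = "book_flight"
    # Capture a capitalized place name after "to", stopping at " on " or end.
    dest = re.search(r"\bto ([A-Z][\w ]*?)(?: on |$)", query)
    if dest:
        result["destination"] = dest.group(1).strip()
    # Capture a month-and-day phrase after "on".
    date = re.search(r"\bon ([A-Z]\w+ \d{1,2}(?:st|nd|rd|th)?)", query)
    if date:
        result["date"] = date.group(1)
    return result

print(parse_flight_query("cheap flights to Tel Aviv on February 10th"))
# → {'intent': 'book_flight', 'destination': 'Tel Aviv', 'date': 'February 10th'}
```

A keyword index could only tell you that the documents containing "cheap" and "flights" exist; the structured slots are what let a system return actual flights, or actual log events, rather than pages that merely mention the words.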

Semantic Tech Turns Up Biomarkers And Phenotypes, Avoids Dead Ends And Higher Costs


Dr. Carlo Trugenberger, co-founder and Chief Scientific Officer at InfoCodex Semantic Technologies AG, has co-authored a report reflecting the topic he discussed at last fall’s London SemTech event: An approach to drug research that relies on identifying relevant biochemical information using the company’s autonomous self-organizing semantic engines to text mine large repositories of biomedical research papers.

The model, says Trugenberger, is a departure from many other semantically-engineered approaches to streamlining drug research, which are based on natural language processing (NLP). That’s good for extracting information from documents, he says, but not as adept at discovering knowledge. “That’s what our InfoCodex software is designed for, to find new facts and hidden correlations” in repositories of unstructured information.
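The distinction Trugenberger draws between extracting stated facts and discovering hidden correlations can be sketched in a few lines. This is emphatically not InfoCodex's algorithm (which the post doesn't detail); it is only a crude co-occurrence count over documents, with made-up example abstracts and terms, to show the flavor of surfacing term pairs that repeatedly appear together:

```python
from collections import Counter
from itertools import combinations

def cooccurring_terms(documents, vocabulary, min_count=2):
    """Count how often pairs of vocabulary terms appear in the same
    document -- a crude stand-in for mining 'hidden correlations'
    from a repository of unstructured text.
    """
    pair_counts = Counter()
    for doc in documents:
        words = set(doc.lower().split())
        present = sorted(t for t in vocabulary if t in words)
        for pair in combinations(present, 2):
            pair_counts[pair] += 1
    # Keep only pairs that recur often enough to suggest a real link.
    return {pair: n for pair, n in pair_counts.items() if n >= min_count}

abstracts = [
    "elevated ferritin observed alongside hepcidin expression",
    "hepcidin regulation linked to ferritin levels in liver tissue",
    "unrelated study of cardiac output",
]
vocab = {"ferritin", "hepcidin", "cardiac"}
print(cooccurring_terms(abstracts, vocab))
# → {('ferritin', 'hepcidin'): 2}
```

No single abstract states that the two terms are correlated across the corpus; the pairing only emerges from reading the repository as a whole, which is the kind of knowledge discovery, as opposed to information extraction, that the post describes.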

Read more

Discover The Mobile App You Really Want

The semantic technology platform behind restaurant dish discovery service Dishtip (which The Semantic Web Blog discussed here) has made its way to a new domain: mobile apps. The company last week unveiled AppCrawlr, which uses its TipSense content discovery and knowledge extraction technology to cut through the noise to help users find the app that’s right for them in a world of hundreds of thousands of options for iPhone, iPad and Android devices.

“With traditional search models there’s no easy way for guided discovery to narrow down from all the apps out there to what you want,” says Dave Schorr, who with Joel Fisher is a co-founder of TipSense LLC. Keyword searches aren’t going to help you find apps that help when you are having a bad day, for instance, or understand that someone looking for a dating app (as in relationships) is looking for something different than someone looking for a date app (as in scheduling and productivity). But searches on AppCrawlr can suss those out, taking data from all across the web – blogs, tweets, reviews, and so on – and surfacing and organizing the concepts and topics buried in all that unstructured data.
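Schorr's dating-app example boils down to matching on concepts rather than raw keywords. The sketch below is a hypothetical illustration, not AppCrawlr's technology: the catalog entries and concept tags are invented, and a real system would derive such concepts automatically from blogs, tweets, and reviews rather than hand-label them:

```python
def concept_search(apps, concept):
    """Toy concept-level lookup: match on tagged concepts rather than
    raw keywords, so 'dating' (relationships) and 'date planner'
    (scheduling) stay distinct."""
    return [a["name"] for a in apps if concept in a["concepts"]]

# Hand-labeled for illustration only.
apps = [
    {"name": "HeartMatch", "concepts": {"dating", "relationships"}},
    {"name": "DayPlanner", "concepts": {"scheduling", "productivity", "dates"}},
]

print(concept_search(apps, "dating"))      # → ['HeartMatch']
print(concept_search(apps, "scheduling"))  # → ['DayPlanner']
```

A naive keyword match on the substring "dat" would return both apps; the concept layer is what keeps the two senses apart.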

“It’s a new paradigm to manage a large data set,” says Schorr. “We’re using concepts to come up with a much better experience for discovery.”

Read more

Tagging the Visual Web: Visual Media Doesn’t Have To Be Dumb Anymore

Instagram. Tumblr. Pinterest. The web in 2012 is a tremendously visual place, and yet, “visual media is still as dumb today as it was 20 years ago,” says Todd Carter, founder and CEO of Tagasauris.

It doesn’t have to be that way, and Tagasauris has put its money on changing the state of things.

Why is dumb visual media a problem, especially at the enterprise level? Visual media, in its highly unoptimized state, hasn’t been considered the way companies think about other forms of data, where making the data more meaningful and machine-reasonable can directly impact business processes. A computer’s ability to assess image color, pattern and texture isn’t highly useful in the marketplace, and as a result visual media has “just been outside the realm of normal publishing processes, normal workflow processes,” Carter says. Therefore, what so many organizations – big media companies, photo agencies, and so on – would rightly acknowledge to be treasure troves of images don’t yield anywhere near the economic value that they could.

Read more