Posts Tagged ‘Google’
David Hirsch, co-founder of Metamorphic Ventures, recently wrote for TechCrunch, “There has been a lot of talk in the venture capital industry about automating the home and leveraging Internet-enabled devices for various functions. The first wave of this was the use of the smartphone as a remote control to manage, for instance, a thermostat. The thermostat then begins to recognize user habits and adapt to them, helping consumers save money. A lot of people took notice of this first-generation automation capability when Google bought Nest for a whopping $3.2 billion. But this purchase was never about Nest; rather, it was Google’s foray into the next phase of the Internet of Things.” Read more
Dan Gillick and Dave Orr recently wrote, “Language understanding systems are largely trained on freely available data, such as the Penn Treebank, perhaps the most widely used linguistic resource ever created. We have previously released lots of linguistic data ourselves, to contribute to the language understanding community as well as encourage further research into these areas. Now, we’re releasing a new dataset, based on another great resource: the New York Times Annotated Corpus, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages use of the metadata for all kinds of things, and has set up a forum to discuss related research.”
The blog continues with, “We recently used this corpus to study a topic called “entity salience”. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people — we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article. One way to approach the problem is to look for words that appear more often than their ordinary rates.”
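The frequency-versus-background idea described above can be sketched as a toy scorer. The terms and background rates below are invented for illustration, and Google's actual salience model is more sophisticated than this; the post describes the approach only at this high level:

```python
from collections import Counter

# Hypothetical background rates: the fraction of text in which each term
# ordinarily appears, corpus-wide (these numbers are illustrative only).
BACKGROUND_RATE = {
    "new york": 0.05,
    "mayor": 0.01,
    "budget": 0.02,
    "weather": 0.03,
}

def salience_scores(doc_terms, background=BACKGROUND_RATE, floor=1e-4):
    """Score each term by how much more often it appears in this
    document than its ordinary (background) rate would predict."""
    counts = Counter(doc_terms)
    total = sum(counts.values())
    scores = {}
    for term, n in counts.items():
        observed = n / total                    # rate in this document
        expected = background.get(term, floor)  # ordinary rate
        scores[term] = observed / expected      # lift over background
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

doc = ["mayor", "budget", "mayor", "new york", "weather"]
ranked = salience_scores(doc)
print(ranked[0][0])  # prints "mayor"
```

Here "mayor" wins because it appears twice in a short document while being rare in the background corpus, which is the intuition behind looking for words that appear more often than their ordinary rates.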
Photo credit: Eric Franzon
Mark Albertson of the Examiner recently wrote, “It was an unusual sight to be sure. Standing on a convention center stage together were computer engineers from the four largest search providers in the world (Google, Yahoo, Microsoft Bing, and Yandex). Normally, this group couldn’t even agree on where to go for dinner, but this week in San Jose, California they were united by a common cause: the Semantic Web… At the Semantic Technology and Business Conference in San Jose this week, researchers from around the world gathered to discuss how far they have come and the mountain of work still ahead of them.” Read more
That’s how Blab characterizes the work it’s doing to add structure to the chaotic world of online conversation, normalizing and patterning the world’s discussions across 50,000 social network, news outlet, blog, video and other channels, regardless of language – to the tune of some hundred million posts per day and 1 million predictions per minute. Near real-time predictions, says CEO Randy Browning, of what a target audience will be interested in within a 72-hour forward-looking window, based on what they’re talking about now, so that customers can tailor their buying strategies for AdWords or search terms as well as create or deploy content that’s relevant to those interests.
“We predict what will be important to people so they can buy search terms or AdWords at a great price before the market or Google sees it,” he says. That’s the main reason customers turn to Blab today, with optimizing their own content taking second place. Crisis management is the third deployment rationale. “If a brand has multiple issues, we can tell them which will be significant or which will be a blip and then fade away, so they can get a predictive understanding of where to focus their resources to mitigate issues coming down the pike.”
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business Conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME) that the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
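As a rough illustration of the MARC-to-RDF shift, the toy sketch below emits a couple of BIBFRAME-style triples in Turtle. The resource names are invented and the properties are a simplified take on the BIBFRAME vocabulary, not an exact rendering of the LOC model:

```python
# BIBFRAME distinguishes the abstract Work from its concrete Instances;
# the subjects and title below are made up for illustration.
PREFIX = "@prefix bf: <http://id.loc.gov/ontologies/bibframe/> ."

triples = [
    ("<#work1>", "a", "bf:Work"),
    ("<#work1>", "bf:title", '"Moby-Dick"'),
    ("<#instance1>", "a", "bf:Instance"),
    ("<#instance1>", "bf:instanceOf", "<#work1>"),
]

def to_turtle(prefix, triples):
    """Serialize subject-predicate-object triples as simple Turtle."""
    lines = [prefix]
    lines += [f"{s} {p} {o} ." for s, p, o in triples]
    return "\n".join(lines)

print(to_turtle(PREFIX, triples))
```

The key contrast with a flat MARC record is that each statement is an independent, linkable triple, so the instance can point at its work with an ordinary RDF property rather than a positional field.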
Mark Langshaw of Digital Spy reports, “Google has acquired the company behind Emu, a messaging app that doubles as a personal assistant. The existing application, which was for iPhone only, will be pulled from the App Store later this month. ‘As of August 25, 2014, we’ll be shutting down the Emu app. It will no longer be available in the App Store, and existing users won’t be able to send, receive, or download messages. We know it’s an inconvenience, and we regret that,’ said the firm in a statement. Emu uses intelligent learning and natural language processing to present the user with relevant information in real-time, and can be integrated with other services.” Read more
Among the mainstream content management systems, you could make the case that Drupal was the first open source semantic CMS out there. At next week’s Semantic Technology and Business Conference, software engineer Stéphane Corlosquet of Acquia, which provides enterprise-level services around Drupal, and Bock & Co. principal Geoffrey Bock will discuss, in this session, Drupal’s role as a semantic CMS and how it can help organizations and institutions yearning to enrich their data with more semantics – for search engine optimization, yes, but also for more advanced use cases.
“It’s very easy to embed semantics in Drupal,” says Bock, who analyzes and consults on digital strategies for content and collaboration. At its core, Drupal has the capability to manage semantic entities, and the upcoming version 8 takes things to a new level by including schema.org as a foundational data type. “It will become increasingly easier for developers to build and deliver semantically enriched environments,” he says, which can drive a better experience both for clients and stakeholders.
Corlosquet, who has taken a leadership role in building semantic web capabilities into Drupal’s core and maintains the RDF module in Drupal 7 and 8, explains that the closer embrace of schema.org in Drupal is of course a help when it comes to SEO and user engagement, for starters. Google uses content marked up using schema.org to power products like Rich Snippets and Google Now, too.
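As a sketch of the kind of structured markup involved, the snippet below builds a schema.org Article description as JSON-LD, one common serialization of schema.org data that search engines consume. The headline, author, and date are invented, and what Drupal actually emits depends on its configured mappings:

```python
import json

# A minimal schema.org description of an article; all values here are
# placeholders for illustration.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Drupal as a Semantic CMS",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2014-08-01",
}

# Embedded in a page (e.g. in a <script type="application/ld+json"> tag),
# markup like this is how structured data reaches search engines for
# features such as Rich Snippets.
markup = json.dumps(article, indent=2)
print(markup)
```

Drupal can also express the same statements inline as RDFa attributes on the rendered HTML, which is the approach its RDF module has historically taken.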
Google is looking for a Software Engineer, Crisis Response in New York, NY. The post states, “The Crisis Response team works to help users affected by disasters get the information they need, before, during, and after crisis events. Some areas the team is already working on and experimenting with include: Crowdsourcing: When a disaster happens, how can we enable connected users to contribute meaningful information that can help other users, first responders, and aid agencies alike? Public alerting: What information do end users need to take action and keep themselves and their loved ones safe? How can we collect, verify and disseminate that information globally in a scalable way? Mobile application development: How can we utilize the features of mobile-connected devices to keep users safer and better informed at scale?” Read more
Context is king – at least when it comes to enterprise search. “Organizations are no longer satisfied with a list of search results — they want the single best result,” wrote Gartner in its latest Magic Quadrant for Enterprise Search report, released in mid-July. The report also estimates that the enterprise search market will reach $2.6 billion in 2017.
The leaders list this time around includes Google with its Search Appliance, which Google touts as benefiting from Google.com’s continually evolving technology, thanks to machine learning from billions of search queries. Also on that part of the quadrant is HP Autonomy, which Gartner says is “exceptionally good at handling searches driven by queries that include surmised or contextual information;” and Coveo and Perceptive Software, both of which are described as offering “considerable flexibility for the design of conversational search capabilities, to reduce the ambiguity of results.”
Benjamin Spiegel of Marketing Land reports, “Historically, consumers have used Google for research in every step of the purchasing process, all the way up the sales funnel… In recent years, however, we have observed some interesting changes in customer behavior — one of the main ones being that consumers are starting to favor native search over Google search for lower-funnel terms. This is something retailers can take advantage of during the upcoming holiday shopping season — and, indeed, year-round. So what exactly is native search? Native search is the search functionality inside the different platforms or websites. Simply put, it’s the search box on e-retail sites like Amazon, Walmart and CVS, or in category sites like Edmunds and Newegg.” Read more