Frederick Vallaeys of Search Engine Land recently wrote, “By late August, Product Listing campaigns for AdWords will be retired in favor of Shopping campaigns, so if you haven’t started migrating, don’t delay much longer. You can run both campaign types simultaneously, so start tweaking Shopping campaigns now so that they’ll be performing great by the time PLAs go away later this summer… Unlike with Search ads, which are entirely managed in AdWords, a lot of the settings for Shopping ads are handled outside of the AdWords interface. They get their titles, images, descriptions and promotions from feeds in the Google Merchant Center. While you can use the AdWords interface to set bids, structure campaigns and set up product groups, you will need to work with your product feed if you want to have ads appear for different keywords. How to manipulate the feed depends on its size and how it is generated.” Read more
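Since Shopping ads pull their titles, images and descriptions from Merchant Center feeds, a small script can generate the tab-delimited feed file that gets uploaded. A minimal sketch follows; the attribute names track Google’s product data specification, while the product values themselves are hypothetical:

```python
import csv
import io

# Hypothetical catalog entries. The column names (id, title, description,
# link, image_link, price, availability) follow the Merchant Center
# product data specification; everything else is invented for illustration.
products = [
    {
        "id": "SKU-001",
        "title": "Acme Running Shoe - Blue, Size 10",
        "description": "Lightweight running shoe with a breathable mesh upper.",
        "link": "https://example.com/products/sku-001",
        "image_link": "https://example.com/images/sku-001.jpg",
        "price": "59.99 USD",
        "availability": "in stock",
    },
]

# Merchant Center accepts tab-delimited text feeds with a header row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(products[0]), delimiter="\t")
writer.writeheader()
writer.writerows(products)
feed = buf.getvalue()
print(feed)
```

Because titles and descriptions in this file drive which queries an ad can match, feed generation like this is where the “keyword” work for Shopping campaigns actually happens.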
Dave Lloyd of ClickZ recently wrote, “2014 has been heralded as the year of content marketing. At the same time, we’re optimizing our search marketing practices for the semantic search environment. Together, there’s a need to merge the two different objectives into a unified strategy. From a search marketing perspective, it makes sense to integrate content marketing and semantic search optimization practices. The introduction of Hummingbird has taught us to deploy search optimization strategies that contextualize queries. Digital marketing with content, on the other hand, is deployed to drive traffic and engage prospects. You can see where the two might combine to form a natural single-track strategy, right?” Read more
Derrick Harris of GigaOM reports, “Researchers from the University of California, Irvine, have published a paper demonstrating the effectiveness of deep learning in helping discover exotic particles such as Higgs bosons and supersymmetric particles. The research, which was published in Nature Communications, found that modern approaches to deep neural networks might be significantly more accurate than the types of machine learning scientists traditionally use for particle discovery and might also save scientists a lot of work. To get a sense of how challenging particle discovery is, consider that a collider can produce 100 billion collisions per hour and only about 300 will produce a Higgs boson. Because the particles decay almost immediately, scientists can’t expressly identify them, but instead must analyze (and sometimes infer) the products of their decay.” Read more
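The needle-in-a-haystack scale of the problem can be checked with quick arithmetic on the figures quoted above:

```python
# Figures from the passage: a collider can produce ~100 billion
# collisions per hour, of which only about 300 yield a Higgs boson.
collisions_per_hour = 100_000_000_000
higgs_per_hour = 300

# The signal fraction a classifier must pick out of the background.
signal_fraction = higgs_per_hour / collisions_per_hour
print(signal_fraction)  # 3e-09, i.e. three events in every billion
```

At a signal rate of three in a billion, even a tiny improvement in classification accuracy translates into a large reduction in the background events scientists must sift through, which is why more accurate deep networks matter here.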
Marty Loughlin of Wall Street & Technology recently noted that in this era of “massive business and IT transformation,” organizations in the financial industry “will need to change how they track, manage, and consume data. For many organizations, this data is not easily accessible — it is distributed across the organization, often trapped in local business units, applications, data warehouses, spreadsheets, and documents. Traditional technologies are struggling to address this challenge and many believe a new approach is required. Some of the new big-data solutions do help. They are good at liberating and colocating data. However, they often struggle to make it usable. Creating a ‘data lake’ where rigid structure is not required can result in yet another silo of unusable data where context, meaning, and sources are lost. Many organizations are turning to semantic technology for the answer.” Read more
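The core idea behind the semantic approach Loughlin describes is to store data as subject-predicate-object triples, so that context, meaning, and provenance travel with the data instead of being lost in a silo. A toy sketch in plain Python follows; the account and predicate names are hypothetical, and a real deployment would use an RDF triple store rather than in-memory tuples:

```python
# Each fact is a (subject, predicate, object) triple; the predicate
# carries the meaning that a bare spreadsheet column would lose.
triples = [
    ("ex:account-42", "ex:heldBy", "ex:customer-7"),
    ("ex:account-42", "ex:balance", "1500.00"),
    ("ex:account-42", "ex:sourceSystem", "ex:retail-banking-db"),
    ("ex:customer-7", "ex:name", "Jane Doe"),
]

def objects(subject, predicate):
    """Return every object linked to `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Provenance ("sourceSystem") stays attached to the data itself,
# so colocated data remains interpretable.
print(objects("ex:account-42", "ex:sourceSystem"))
```

The point of the sketch: because every fact names its own relationship, data pulled from different business units can be merged without losing what each value means or where it came from.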
July 7, 2014 – Wolters Kluwer Health, a leading global provider of information for healthcare professionals and students, announced today that Holy Name Medical Center (HNMC) has selected Health Language® to improve problem and diagnosis searches within its electronic health record (EHR). HNMC will use the Health Language Workflow-Enhancing Search solution to support encoding its problem lists in SNOMED-CT® for Stage 2 Meaningful Use and the transition to ICD-10. Read more
As July 4 approaches, the subject of open government data can’t help but be on many U.S. citizens’ minds. That includes the citizens who are responsible for opening up that data to their fellow Americans. They might want to take a look at NuCivic Data Enterprise, the recently unveiled cloud-based, open source, open data platform for government from NuCivic, in partnership with Acquia and Carahsoft. It’s providing agencies an OpenSaaS approach to meeting open data mandates to publish and share datasets online, based on the Drupal open source content management system.
NuCivic’s open source DKAN Drupal distribution provides the core data management components for the NuCivic Data platform; it was recognized last week as a grand prize winner for Amazon Web Services’ Global City on a Cloud Innovation Challenge in the Partner in Innovation category. Projects in this category had to demonstrate that the application solves a particular challenge faced by local government entities. As part of the award, the NuCivic team gets $25,000 in AWS services to further support its open data efforts.
The role that cognitive computing can play in healthcare was explored last week in this story published at The Semantic Web Blog’s sister site Dataversity.net. That article looked at how Modernizing Medicine is leveraging IBM Watson for its new schEMA tablet app that helps doctors use the wealth of published medical research from highly reputable sources, such as the Journal of the American Medical Association (JAMA) and New England Journal of Medicine, to answer their questions.
Today, we’re complementing that article to further explore aspects of the healthcare and cognitive computing connection based on an email conversation with IBM Watson Group CTO Robert High. “IBM Watson is transforming the patient experience and healthcare delivery system by helping physicians make sense of the enormous amount of data generated by an increasingly connected healthcare environment,” High writes.
“Content curation is a critical part of the solution delivery process. Without reputable and reliable sources of medical literature, therapy choices offered by Watson may not have the supporting evidence needed to inform clinicians in the use of those treatments. We work with the top clinicians at our partners to collect their feedback on supporting evidence and cull inappropriate information from their sources.” IBM, along with its solutions partners, works with a variety of content providers based on the relevance of their materials to treatment options, he adds.
According to Mike Kavis of Forbes, “Companies are jumping on the Internet of Things (IoT) bandwagon and for good reasons. McKinsey Global Institute reports that the IoT business will deliver $6.2 trillion of revenue by 2025. Many people wonder whether companies are ready for this explosion of data generated by the IoT. As with any new technology, security is always the first point of resistance. I agree that IoT brings a wave of new security concerns but the bigger concern is how woefully prepared most data centers are for the massive amount of data coming from all of the “things” in the near future.”
Kavis went on to write that, “Some companies are still hanging on to the belief that they can manage their own data centers better than the various cloud providers out there. This state of denial should all but go away when the influx of petabyte scale data becomes a reality for enterprises. Enterprises are going to have to ask themselves, ‘Do we want to be in the infrastructure business?’ because that is what it will take to provide the appropriate amount of bandwidth, disk storage, and compute power to keep up with the demand for data ingestion, storage, and real-time analytics that will serve the business needs. If there ever was a use case for the cloud, the IoT and Big Data is it. Processing all of the data from the IoT is an exercise in big data that boils down to three major steps: data ingestion (harvesting data), data storage, and analytics.”
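The three steps Kavis lists can be sketched as a toy in-memory pipeline. The sensor names and readings below are invented for illustration; at IoT scale each stage would be a distributed service rather than a function call:

```python
from statistics import mean

# 1. Ingestion: harvest raw readings from "things".
def ingest():
    return [
        {"sensor": "thermostat-1", "temp_c": 21.5},
        {"sensor": "thermostat-2", "temp_c": 23.0},
        {"sensor": "thermostat-1", "temp_c": 22.0},
    ]

# 2. Storage: persist readings, here simply grouped in memory by sensor.
def store(readings):
    by_sensor = {}
    for r in readings:
        by_sensor.setdefault(r["sensor"], []).append(r["temp_c"])
    return by_sensor

# 3. Analytics: derive a business-facing summary from the stored data.
def analyze(by_sensor):
    return {sensor: mean(temps) for sensor, temps in by_sensor.items()}

summary = analyze(store(ingest()))
print(summary)  # {'thermostat-1': 21.75, 'thermostat-2': 23.0}
```

Kavis’s argument is that each of these stages, trivial at three readings, demands cloud-scale bandwidth, storage, and compute once billions of devices are reporting.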
To read a different perspective on these challenges and how Semantic Web technologies play a role in them, read Irene Polikoff’s recent guest post, “RDF is Critical to a Successful Internet of Things.”
Apple Insider recently reported, “According to Apple’s ‘Jobs at Apple’ website, the company is seeking ‘Siri Language Engineers’ fluent in Arabic, Brazilian Portuguese, Danish, Dutch, Norwegian, Swedish, Thai, Turkish and Russian, all of which are currently unsupported by the voice-recognizing digital assistant. The job postings were first uncovered by MacRumors. Along with the nine new languages, Apple is looking to enhance Siri’s existing lexicon with hires fluent in Australian and British English, Cantonese and Japanese. All listings ask not only for fluency, but for native speakers to handle colloquialisms locals may use when speaking to Siri. Apple also strives to make Siri’s own speech as natural as possible, meaning the potential hires will likely be working on responses to user queries.” Read more
John Biggs of TechCrunch reports, “A new research project by a computer science team at Cornell University is using human volunteers to train robots to perform tasks. How is it unique? They’re showing robots how to infer actions based on very complex, human comments. Instead of having to say ‘move arm left 5 inches’ they are hoping that, one day, robots will respond to ‘Make me some ramen’ or ‘Clean up my mess.’ The commands are quite rudimentary right now and focus mostly around loose requests like ‘boil the ramen for a few minutes’ which, with enough processing, can be turned into a step-by-step set of commands. For example, in the video above a subject asks for an affogato, basically coffee with ice cream. The robot has learned the basic recipe and so uses what is at hand — a barrel of ice cream, a bowl, and a coffee dispenser — to produce a tasty treat for its human customer.” Read more
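The kind of inference described, turning a loose request into a step-by-step command sequence, can be caricatured with a lookup over a small recipe knowledge base. The recipes and step vocabulary below are hypothetical stand-ins, not Cornell’s actual planner:

```python
# Hypothetical knowledge base: loose natural-language request -> ordered
# low-level steps the robot can execute.
RECIPES = {
    "make me an affogato": [
        "grasp(bowl)",
        "scoop(ice_cream, bowl)",
        "dispense(coffee, bowl)",
    ],
    "boil the ramen for a few minutes": [
        "fill(pot, water)",
        "heat(pot)",
        "add(ramen, pot)",
        "wait(pot)",
    ],
}

def plan(request):
    """Map a vague request to a concrete, ordered step sequence."""
    steps = RECIPES.get(request.lower().strip())
    if steps is None:
        raise ValueError(f"no recipe known for: {request!r}")
    return steps

print(plan("Make me an affogato"))
```

The hard part the Cornell work tackles is precisely what this sketch dodges: generalizing from training demonstrations to unseen phrasings and to whatever objects happen to be at hand, rather than matching a fixed string.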