Jennifer Zaino

Semantic Interoperability of Electronic Healthcare Info On The Agenda At U.S. Veterans Health Administration

The Yosemite Project, unveiled at this August’s Semantic Technology & Business Conference during the second annual RDF as a Universal Healthcare Exchange Language panel, lays out a roadmap for leveraging RDF to make all structured healthcare information semantically interoperable. (The Semantic Web Blog’s sister publication, Dataversity.net, has an article on its site explaining the details of that roadmap.)

The Yosemite Project grew out of the Yosemite Manifesto that was announced at the 2013 SemTechBiz conference (see our story here). David Booth, senior software architect at Hawaii Resource Group, who led the RDF Healthcare panels at both the 2013 and 2014 conferences, has now mapped the Manifesto’s goals into the Project’s guidelines for the journey to semantic interoperability. The approach taken by the Yosemite Project matches that of others in the healthcare sector who want to see semantic interoperability of electronic healthcare information.

Among them are Booth’s fellow panelists at this year’s event, including Rafael Richards, a physician informaticist at the U.S. Veterans Health Administration, which counts 1,200 care sites in its portfolio. Richards comments on that alignment as it relates to the Linked Vitals project he leads, which integrates the VA’s VistA electronic health records system with data types conforming to the Fast Healthcare Interoperability Resources, or FHIR, standard for data exchange, and with information types supporting the Logical Observation Identifiers Names and Codes, or LOINC, database that facilitates the exchange and pooling of results for clinical care, outcomes management, and research.
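
Linked Vitals, at its simplest, aims to express a VistA observation in RDF using FHIR-style resource typing and LOINC codes. A minimal sketch of what such triples might look like, using Python’s rdflib library, appears below; the namespaces, the example observation, and the property names are illustrative assumptions, not the VA project’s actual data model.

```python
# Illustrative only: a vital-sign observation expressed as RDF with
# FHIR-style typing and a LOINC code. Not the VA's actual data model.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

FHIR = Namespace("http://hl7.org/fhir/")      # FHIR RDF namespace
LOINC = Namespace("http://loinc.org/rdf/")    # LOINC concepts (illustrative URIs)
EX = Namespace("http://example.org/vista/")   # hypothetical VistA export namespace

g = Graph()
obs = EX["observation/12345"]

g.add((obs, RDF.type, FHIR.Observation))                 # typed as a FHIR Observation
g.add((obs, FHIR["Observation.code"], LOINC["8867-4"]))  # LOINC 8867-4: heart rate
g.add((obs, FHIR["Observation.value"], Literal(72, datatype=XSD.integer)))
g.add((obs, FHIR["Observation.subject"], EX["patient/678"]))

print(g.serialize(format="turtle"))
```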

Read more

You Can Help Make Linked Data Core To The Future of Identity, Payment On The Web Platform

At the end of September, the World Wide Web Consortium (W3C) may approve the world’s first Web Payments Steering Group, to explore issues such as navigating around obstacles to seamless payments on the web and ways to better facilitate global transactions while respecting local laws. Identity and digital signatures have a role here, even as they go beyond the realm of payments into privacy and other arenas. At the end of October, there also will be a W3C technical plenary to discuss identity, graph normalization, digital signatures and payments technologies.

Expect Linked Data to come up in the context of both events, Manu Sporny told attendees at this August’s 10th annual Semantic Technology & Business conference in San Jose during his keynote address, entitled Building Linked Data Into the Core of the Web. “It is the foundational data model to build all this technology off of,” said Sporny, who is the founder and CEO of Digital Bazaar, which develops technology and services to make it easier to buy and sell digital content over the Internet. (See our stories about the company and its technology here.) He also is founder and chair of the W3C Web Payments Community Group, chair of its RDFa Working Group, and founder, chair, and lead editor of the JSON-LD Community Group.
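
JSON-LD, which Sporny helped create, is one concrete way Linked Data already shows up in identity and payments work. Below is a hedged sketch of a small JSON-LD document expanded with the pyld library; the context, terms, and values are placeholders chosen for illustration rather than anything the W3C groups have standardized.

```python
# Illustrative only: a small JSON-LD document expanded with pyld.
# The context terms and values are placeholders, not a real payments vocabulary.
import json
from pyld import jsonld

doc = {
    "@context": {
        "schema": "http://schema.org/",
        "name": "schema:name",
        "price": "schema:price",
        "currency": "schema:priceCurrency",
    },
    "@id": "http://example.org/offers/42",
    "name": "Digital album download",
    "price": "9.99",
    "currency": "USD",
}

# Expansion replaces the short terms with full IRIs, which is what makes the
# document unambiguous Linked Data rather than application-specific JSON.
expanded = jsonld.expand(doc)
print(json.dumps(expanded, indent=2))
```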

Read more

Schema.Org: The Fire’s Been Lit

Why has schema.org made the following strides since its debut in 2011?

  • In a sample of over 12 billion web pages, 21 percent, or 2.5 billion pages, use it to mark up HTML pages, to the tune of more than 15 billion entities and more than 65 billion triples;
  • In that same sample, this works out to six entities and 26 facts per page with schema.org;
  • Just about every major site in every major category, from news to e-commerce (with the exception of Amazon.com), uses it;
  • Its ontology counts some 800 properties and 600 classes.

A lot of it has to do with the focus its proponents have had since the beginning on making it very easy for webmasters and developers to adopt and leverage the collection of shared vocabularies for page markup. At this August’s 10th annual Semantic Technology & Business conference in San Jose, Google Fellow Ramanathan V. Guha, one of the founders of schema.org, shared the progress of the initiative to develop one vocabulary that would be understood by all search engines and how it got to where it is today.
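
Part of that ease of adoption is that a webmaster can describe a page’s main entity with a small block of schema.org markup. Here is a minimal Python sketch of generating such a block as JSON-LD, one of the syntaxes schema.org supports alongside microdata and RDFa; the property values are placeholders.

```python
# Minimal sketch: build a schema.org description of an article and wrap it in
# the <script> tag a webmaster would embed in the page. Values are placeholders.
import json

article = {
    "@context": "http://schema.org",
    "@type": "Article",
    "headline": "Schema.Org: The Fire's Been Lit",
    "author": {"@type": "Person", "name": "Jennifer Zaino"},
    "publisher": {"@type": "Organization", "name": "SemanticWeb.com"},
}

markup = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(article, indent=2)
print(markup)
```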

Read more

Big Data Review: What The Surveys Say

Big Data has been getting its fair share of commentary over the last couple of months. Surveys from multiple sources have commented on trends and expectations. The Semantic Web Blog provides some highlights here:

  • From Accenture Analytics’ new Big Success With Big Data report: There remain some gaps in what constitutes Big Data for respondents to its survey: Just 43 percent, for instance, classify unstructured data as part of the package. That option included open text, video and voice. Those are gaps that could be filled leveraging technologies such as machine learning, speech recognition and natural language understanding, but they won’t be unless executives make these sources a focus of Big Data initiatives to start with.
  • From Teradata’s new survey on Big Data Analytics in the UK, France and Germany: Close to 50 percent of respondents in the latter two countries are using three or more data types (from sources ranging from social media, to video, to web blogs, to call center notes, to audio files and the Internet of Things) in their efforts, compared to just 20 percent in the UK. A much higher percentage of UK businesses (51 percent) are currently using just a single type of new data, such as video data, compared with France and Germany, where only 21 percent are limiting themselves to one type of new data, it notes. Forty-four percent of execs in Germany and 35 percent in France point to social media as the source of the new data. About one-third of respondents in each of those countries are investigating video, as well.

Read more

Word To Developers: Power Your Apps With Data And Insights

Apigee wants the development community to be able to seamlessly take advantage of predictive analytics in their applications.

“One of the biggest things we want to ensure is that the development community gets comfortable with powering their apps with data and insights,” says Anant Jhingran, Apigee’s VP of products and formerly CTO for IBM’s information management and data division. “That is the next wave that we see.”

Apigee wants to help them ride that wave, enabling their business to better deal with customer issues, from engagement to churn, in more personal and contextual ways. “We are in the business of helping customers take their digital assets and expose them through clean APIs so that these APIs can power the next generation of applications,” says Jhingran. But in thinking through that core business, “we realized the interactions happening through the APIs represent very powerful signals. Those signals, when combined with other contextual data that may be in the enterprise, enable some very deep insights into what is really happening in these channels.”

With today’s announcement of a new version of its Apigee Insights big data platform, all those signals generated – through API and web channels, call centers, and more – can come together in the service of predictive analytics for developers to leverage.
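
As a generic illustration of the idea, and emphatically not Apigee Insights itself or its API, the sketch below combines API-interaction signals with other contextual data to score churn risk using an off-the-shelf classifier; every feature name and value is invented.

```python
# Hypothetical illustration of combining API-interaction signals with other
# contextual customer data to predict churn. Generic sketch, not Apigee Insights.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [api_calls_last_30d, avg_latency_ms, support_tickets, months_as_customer]
X = np.array([
    [320, 110, 0, 24],
    [ 12, 450, 3,  2],
    [210,  95, 1, 18],
    [  5, 600, 4,  1],
    [150, 130, 0, 12],
    [  8, 520, 2,  3],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = churned

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new customer whose API usage has dropped off sharply.
new_customer = np.array([[20, 400, 2, 6]])
print("churn probability:", model.predict_proba(new_customer)[0, 1])
```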

Read more

Roadmap Your Text Analytics Initiative

What best practices should inform your company’s text analytics initiatives? Executive Lessons on Modern Text Analytics, a new white paper prepared by Geoff Whiting, principal at GWhiting.com, and Alesia Siuchykava, project director at Data Driven Business, provides some insight. Contributors to the lessons shared in the report include Ramkumar Ravichandran, Director, Analytics, at Visa, and Matthew P.T. Ruttley, Manager of Data Science at Mozilla Corp.

One of the interesting points made in the paper is that text analytics can be applied to many use cases: customer satisfaction and management effectiveness, product design insights, and enhancing predictive data modeling as well as other data processes. But at the same time, a takeaway is that it is better for text analytics teams to follow a narrow path than to try to accommodate a wide-ranging deployment. “All big data initiatives, and especially initial text analytics, need a specific strategy,” the writers note, preferably focusing on “low-hanging fruit through simple business problems and use cases where text analytics can provide a small but fast ROI.”
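
One way to act on that narrow-path advice is to start with a single, well-bounded classification task. The sketch below, which is not drawn from the white paper, flags likely-dissatisfied customer comments with a small scikit-learn pipeline; the training comments and labels are toy data.

```python
# Minimal sketch of a narrowly scoped text analytics task: flag likely
# dissatisfied customer comments. The training data is toy/illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = [
    "The checkout process was fast and easy",
    "I waited 40 minutes and no one helped me",
    "Great support, my issue was fixed the same day",
    "Still no refund after three weeks, very frustrating",
]
labels = [0, 1, 0, 1]  # 1 = dissatisfied

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(comments, labels)

print(clf.predict(["My order arrived broken and support ignored my emails"]))
```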

Read more

Clarabridge Goes Straight To The Customers’ Mouth To Analyze Call Center Interactions

Customer experience management vendor Clarabridge wants to bring the first-person narrative from call center interactions to life for marketing analysts, customer care managers, call center leaders and other customer-focused enterprise execs. With its just-released Clarabridge Speech, it now offers a cloud-based solution that integrates Voci Technologies’ speech recognition smarts with its own capabilities for using NLP to analyze and categorize text, sentiment and emotion in surveys, social media, chat sessions, emails and call center agents’ own notes.

Agent notes certainly are helpful when it comes to assessing whether customers are having negative experiences and whether their loyalty is at stake, among other concerns. But, points out Clarabridge CEO Sid Banerjee, “an agent almost never types word for word what the customer says,” nor will they necessarily characterize callers’ tones as angry, confused, and so on. With the ability now to take the recorded conversation and turn it into a transcript, the specific emotion and sentiment words are there along with the entire content of the call to be run through Clarabridge’s text and sentiment algorithms.

“You get a better sense of the true voice of the customer and the experience of that interaction – not just the agent perspective but the customer perspective,” Banerjee says.
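
The general pipeline Banerjee describes, a recorded call turned into a transcript and then scored for emotion and sentiment, can be sketched with open-source tools. The example below runs NLTK’s VADER scorer on an invented transcript snippet; it stands in for, and is not, the Voci-plus-Clarabridge stack.

```python
# Generic sketch of the transcript-to-sentiment step; the transcript text is
# invented, and VADER stands in for Clarabridge's proprietary algorithms.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

transcript = (
    "I have called three times about this bill and I am honestly furious. "
    "Nobody can tell me why I was charged twice."
)

scores = SentimentIntensityAnalyzer().polarity_scores(transcript)
print(scores)  # a strongly negative 'compound' score flags an angry caller
```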

Read more

eXframe Platform Demos Power Of The Semantic Web For Biology

A “Drupal++” platform for semantic web biomedical data – that’s how Sudeshna Das describes eXframe, a reusable framework for creating online repositories of genomics experiments. Das – who among other titles is affiliate faculty of the Harvard Stem Cell Institute – is one of the developers of eXframe, which leverages Stéphane Corlosquet’s RDF module for Drupal to produce, index (into an RDF store powered by the ARC2 PHP library) and publish semantic web data in the second-generation version of the platform.

“We used the RDF modules to turn eXframe into a semantic web platform,” says Das. “That was key for us because it hid all the complexities of semantic technology.”

One instance of the platform today can be found in the repository for stem cell data that is part of the Stem Cell Commons, the Harvard Stem Cell Institute’s community for stem cell bioinformatics. But Das notes that the importance of the software platform’s reusability is that new genomics repositories, which automatically produce Linked Data as well as a SPARQL endpoint, can be built with much less effort. Working off Drupal as its base, eXframe has been customized to support biomedical data and to integrate biomedical ontologies and knowledge bases.
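
Because each repository instance also exposes a SPARQL endpoint, its experiment data can be queried programmatically. A hedged sketch with the SPARQLWrapper library follows; the endpoint URL, class, and predicate are placeholders rather than the Stem Cell Commons’ actual schema.

```python
# Illustrative SPARQL query against a hypothetical eXframe endpoint.
# The URL, class, and predicates are placeholders, not the real schema.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/exframe/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX dc: <http://purl.org/dc/terms/>
    SELECT ?experiment ?title
    WHERE {
        ?experiment a <http://example.org/exframe/GenomicsExperiment> ;
                    dc:title ?title .
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["experiment"]["value"], "-", row["title"]["value"])
```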

Read more

Making Progress On MOOCs


Image courtesy palphy/Flickr

As the school year gets into full swing, folks might be starting to think about how MOOCs (massive open online courses) can help them on their own educational journeys – whether towards a degree or simply for growing their own knowledge for personal or career reasons. After a meteoric rise, MOOCs such as those offered by Coursera, EdX and Udacity have taken a few hits. Early results from a study last year by the University of Pennsylvania, for instance, said that MOOC course completion rates average just 4 percent across all courses, and range from 2 to 14 percent depending on the course and measurement of completion. The New York Times reported on some other setbacks here – but also noted that while MOOCs may be reshaped, they’re unlikely to disappear.

Some of that reshaping is underway. Among the efforts is a project announced this summer to take place at Carnegie Mellon University, in a multi-year program funded through a Google Focused Research Award. The announcement says the project will approach the problem from multiple directions, including a data-driven effort that will use machine-learning techniques to personalize the MOOC learning experience.

Read more

XSB and SemanticWeb.Com Partner In App Developer Challenge To Help Build The Industrial Semantic Web

An invitation was issued to developers at last week’s Semantic Technology and Business Conference: XSB and SemanticWeb.com have joined to sponsor the Semantic Web Developer Challenge, which asks participants to build sourcing and product life cycle management applications leveraging XSB’s PartLink Data Model.

XSB is developing PartLink as a project for the Department of Defense Rapid Innovation Fund. It uses semantic web technology to create a coherent Linked Data model for all part information in the Department of Defense’s supply chain – some 40 million parts strong.

“XSB recognized the opportunity to standardize and link together information about the parts, manufacturers, suppliers, materials, [and] technical characteristics using semantic technologies. The parts ontology is deep and detailed with 10,000 parts categories and 1,000 standard attributes defined,” says Alberto Cassola, VP of sales and marketing at XSB, a leading provider of master data management solutions to large commercial and government entities. PartLink’s Linked Data model, he says, “will serve as the foundation for building the industrial semantic web.”
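
To make the idea concrete, the sketch below links a single part to a manufacturer, a material, and one technical characteristic as RDF triples with Python’s rdflib; the namespace, class, and property names are invented for illustration and are not PartLink’s actual ontology.

```python
# Invented-for-illustration triples linking a part to its manufacturer, material,
# and a technical characteristic. Not PartLink's actual ontology.
from rdflib import Graph, Literal, Namespace, RDF

PART = Namespace("http://example.org/partlink/")   # hypothetical namespace

g = Graph()
bolt = PART["part/12345"]                           # hypothetical part identifier

g.add((bolt, RDF.type, PART.Fastener))
g.add((bolt, PART.manufacturedBy, PART["org/acme-aerospace"]))
g.add((bolt, PART.material, Literal("titanium alloy")))
g.add((bolt, PART.threadDiameterMm, Literal(6.0)))

print(g.serialize(format="turtle"))
```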

Read more
