Andrew Osborne, CTO of GS1 UK, recently shared an overview of how the non-profit is leveraging the Semantic Web to improve customer experiences. He writes: “For those of you unfamiliar with what GS1 actually does, we are a not-for-profit standards development organisation. Put simply, our role is to define data structures and how these are used to identify things, a role we have been performing since the 1970s. We provide a series of ‘keys’ for industry which identify various types of entity (products, locations, assets and so on) and which have highly developed allocation rules. We have also defined product attributes for bar coding (the application identifier standards), have over 1,000 product attributes defined for synchronisation in the Global Data Synchronisation Network, and maintain an extensive Global Product Classification that is used to categorise products. For visibility systems we have a standard ‘Core Business Vocabulary.’”
Supply chain and product standards organization GS1 – which this week joined the World Wide Web Consortium (W3C) to contribute to work on improving global commerce and logistics – has also now released the GTIN (Global Trade Item Number) Validation Guide. In the United States the GTIN, the GS1-developed numbering sequence within bar codes for identifying products at point of sale, is known as the Universal Product Code (UPC).
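The arithmetic behind GTIN validation is straightforward and publicly documented by GS1: every GTIN (GTIN-8, -12, -13, or -14) ends in a mod-10 check digit, where the data digits are weighted alternately 3 and 1 working from the right, and the check digit brings the weighted sum up to a multiple of 10. A minimal sketch in Python (the function names are ours, not GS1's):

```python
def gtin_check_digit(body: str) -> int:
    """Compute the GS1 mod-10 check digit for the body of a GTIN
    (all digits except the final check digit)."""
    total = 0
    # Weight digits 3, 1, 3, 1, ... starting from the rightmost body digit.
    for i, ch in enumerate(reversed(body)):
        weight = 3 if i % 2 == 0 else 1
        total += int(ch) * weight
    return (10 - total % 10) % 10

def is_valid_gtin(gtin: str) -> bool:
    """Validate a GTIN-8/12/13/14 by recomputing its check digit."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    return gtin_check_digit(gtin[:-1]) == int(gtin[-1])
```

For example, the UPC-A 036000291452 and the EAN-13 4006381333931 both pass, while changing any single digit breaks the check.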
The guide is part of the organization’s effort to drive awareness about “the business importance of having accurate product information on the web,” says Bernie Hogan, Senior Vice President, Emerging Capabilities and Industries. The guide has the endorsement of players including Google, eBay and Walmart, which are among the retailers that require the use of GTINs by onboarding suppliers, and support GTIN’s extension further into the online space to help ensure more accurate and consistent product descriptions that link to images and promotions, and help customers better find, compare and buy products.
“This is an effort to help clean up the data and get it more accurate,” he says. “That’s so foundational to any kind of commerce, because if it’s not the right number, you can have the best product data and images and the consumer still won’t find it.” The search hook, indeed, is the link between the work that GS1 is doing to encourage using GS1 standards online for improved product identification data with semantic web efforts such as schema.org, which The Semantic Web discussed with Hogan here.
GS1, the standards organization responsible for barcodes and the Global Data Synchronization Network (GDSN), among other things, is working to extend the standards used for the identification of goods in the brick and mortar retail world into the web realm. As part of an overall conversation with its retail industry members about focusing more broadly on the digital space, it’s exploring how GS1 systems and standards fit into the semantic web.
What we call the UPC code in North America – and the GTIN (Global Trade Item Number) elsewhere – is a key part of the discussion. “The interesting thing is that the schema.org folks did some work to show how the GS1 system could be represented in their schemas,” says Bernie Hogan, Senior Vice President, Emerging Capabilities and Industries, who is spearheading GS1 US’s work in the online space. The schema.org/Product properties include GTIN codes. “We started looking at that and started asking how we can build upon it.” (Barbara Starr’s recent SearchEngineLand column provides insight into the benefits today of using GS1 identifiers and structured data, including semantic markup on websites, for e-commerce.)
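To make "representing the GS1 system in their schemas" concrete: schema.org's Product type includes a gtin13 property that a retailer or brand owner can publish as JSON-LD alongside a product page. The product details below are invented for illustration; only the schema.org vocabulary terms are real:

```python
import json

# Illustrative only: the product name and price are invented;
# "gtin13" is a real schema.org/Product property.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Ballpoint Pen",
    "gtin13": "4006381333931",  # a GTIN-13 with a valid check digit
    "offers": {
        "@type": "Offer",
        "price": "1.99",
        "priceCurrency": "USD",
    },
}

# Serialized JSON-LD, ready to embed in a
# <script type="application/ld+json"> tag on the product page.
markup = json.dumps(product, indent=2)
print(markup)
```

Because the GTIN appears as structured data rather than free text, a search engine crawling the page can tie the listing to the exact same trade item identified at the point of sale.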
GS1 US’s B2C Alliance is now working with its community to test some of the concepts around embedding the GS1 system in the web, and how that may positively or negatively impact how retailers’ and brand owners’ products are seen by search engines, says Hogan. “Everything with a unique identifier on the web is merging with Linked Open Data, and that gets pretty interesting, so we are working on a strategy to learn how we can fit into this whole thing,” he says, with the help of the GS1 Auto ID Labs research arm. “We ultimately want to make some standards recommendations, but first we are going through the process of testing and getting consensus and doing some research on how that might be done. But it is all about improving search and relevance for identifying products and finding related information.”
The World Wide Web Consortium has headline news today: the Semantic Web and eGovernment Activities are being merged into and superseded by the Data Activity, where Phil Archer serves as Lead. Two new working groups have also been chartered: CSV on the Web and Data on the Web Best Practices.
What’s driving this? First, Archer explains, the Semantic Web technology stack is now mature, and it’s time to allow those updated standards to be used. With RDF 1.1, the Linked Data Platform, SPARQL 1.1, RDB To RDF Mapping Language (R2RML), OWL 2, and Provenance all done or very close to it, it’s the right time “to take that very successful technology stack and try to implement it in the wider environment,” Archer says, rather than continue tinkering with the standards.
The second reason, he notes, is that a large community exists “that sees Linked Data, let alone the full Semantic Web, as an unnecessarily complicated technology. To many developers, data means JSON – anything else is a problem. During the Open Data on the Web workshop held in London in April, Open Knowledge Foundation co-founder and director Rufus Pollock said that if he suggested to the developers that they learn SPARQL he’d be laughed at – and he’s not alone,” Archer says. “We need to end the religious wars, where they exist, and try to make it easier to work with data in the format that people like to work in.”
The new CSV on the Web Working Group is an important step in that direction, following on the heels of efforts such as R2RML. It’s about providing metadata about CSV files, such as column headings, data types, and annotations, and, with it, making it easy to convert CSV into RDF (or other formats), easing data integration. “The working group will define a metadata vocabulary and then a protocol for how to link data to metadata (presumably using HTTP Link headers) or embed the metadata directly. Since the links between data and metadata can work in either direction, the data can come from an API that returns tabular data just as easily as it can a static file,” says Archer. “It doesn’t take much imagination to string together a tool chain that allows you to run SPARQL queries against ‘5 Star Data’ that’s actually published as a CSV exported from a spreadsheet.”
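The working group's deliverables do not exist yet at the time of writing, so the sketch below only illustrates the general idea: a small metadata description (a hypothetical stand-in for the vocabulary the group will define, with invented example.org URIs) is enough to lift plain CSV rows into RDF-style triples:

```python
import csv
import io

# A tiny CSV, as might be exported from a spreadsheet (invented data).
csv_text = """gtin,name,price
4006381333931,Example Pen,1.99
"""

# Stand-in for a CSVW-style metadata document: a URI template for the
# row's subject, and a property URI for each column. All URIs invented.
metadata = {
    "aboutUrlTemplate": "http://example.org/product/{gtin}",
    "columns": {
        "gtin": "http://example.org/vocab/gtin",
        "name": "http://example.org/vocab/name",
        "price": "http://example.org/vocab/price",
    },
}

def csv_to_triples(text, meta):
    """Turn each CSV row into (subject, predicate, object) triples,
    using the metadata to mint the subject URI and map columns."""
    triples = []
    for row in csv.DictReader(io.StringIO(text)):
        subject = meta["aboutUrlTemplate"].format(**row)
        for col, predicate in meta["columns"].items():
            triples.append((subject, predicate, row[col]))
    return triples
```

Once the rows are triples, they can be loaded into any RDF store and queried with SPARQL, which is exactly the tool chain Archer describes for spreadsheet-born "5 Star Data."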