The official schema.org blog yesterday announced the release of version 1.92 of the vocabulary. The post, by Dan Brickley, states, “With this update we ‘soft launch’ a substantial collection of improvements that will form the basis for a schema.org version 2.0 release in early 2015. There remain a number of site-wide improvements, bugfixes and clarifications that we’d like to make before we feel ready to use the name ‘v2.0’. However the core vocabulary improvements are stable and available for use from today. As usual see the release notes page for details.”
Retailers are pushing holiday shopping deals earlier and earlier each year, but for many consumers the Thanksgiving weekend still signals the official start of the gift-buying season. With that in mind, we present some thoughts on how the use of semantic technology may impact your holiday shopping this year.
- Pinterest has gained a reputation as the go-to social network for online retailers that want to drive traffic and sales. Shoppers benefit, too, as more e-tailers deploy Rich Pins, a feature made generally available late last year, for their products, using either schema.org or Open Graph markup. Product Rich Pins, updated daily, now include extra information such as real-time pricing, availability and where to buy, right on the Pin itself. And anyone who has pinned a product of interest gets a notification when its price drops. Overstock, Target, and Shopify shops are just some of the sites taking advantage of the feature. Given that 75 percent of Pinterest’s traffic comes from mobile devices, it helps that a recent update to its iPhone app – with Android and iPad versions on the way – also makes Pin information and images bigger on small screens.
- Best Buy was one of the earliest retailers to apply semantic web technologies to help shoppers (and its business), adding meaning to product data via RDFa and leveraging ontologies such as GoodRelations, FOAF and GEO. Today, the company’s web properties use microdata and schema.org, steadily deepening shopper engagement with added data elements, such as in-stock status and store location for products in search results, as Jay Myers, Best Buy’s Emerging Digital Platforms Product Manager, showed in a presentation at Search Marketing Expo this summer.
- Retailers such as Urban Decay, Crate&Barrel, Golfsmith and Kate Somerville are using Edgecase’s Adaptive Experience platform, generating user-friendly taxonomies from the data they already have to drive a better customer navigation and discovery experience. The system relies on both machine learning and human curation to let online buyers shop on their terms, using the natural language they want to employ (see our story here for more details).
- Walmart, through its Walmart Labs unit, has been steadily driving semantic technology deeper into the customer shopping experience. Last year, for example, Walmart Labs senior director Abhishek Gattani discussed at the Semantic Technology and Business conference capabilities such as semantic algorithms for color detection – so that apparel can be ranked by the color a shopper is looking for, showing items in shades close to red when red itself is unavailable – and query categorization that directs people to the department really most relevant to them. This year, Walmart Labs added talent from Adchemy, acquiring the company to bring further expertise in semantic search and data analytics to its team, as well as Luvocracy, an online community that enables the social shopping experience, from discovery of products recommended by people a user trusts through to commerce itself. Search and product discovery are also at the heart of new features it’s rolling out to drive the in-store experience, via mobile apps such as Search My Store, which shows shoppers exactly where the items on their lists are located in a given store.
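Several of the features above rest on the same foundation: structured product data embedded in the page, which crawlers and pinning tools can read. As a minimal sketch, here is the kind of schema.org Product/Offer markup, in JSON-LD form, that a retailer might publish; the product name, URL and price are made up for illustration:

```python
import json

# Hypothetical product data. Field names follow schema.org's Product
# and Offer types, one of the vocabularies tools like Rich Pins accept.
product = {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "Example Throw Pillow",           # assumed product name
    "url": "http://shop.example.com/pillow",  # assumed URL
    "offers": {
        "@type": "Offer",
        "price": "24.99",
        "priceCurrency": "USD",
        "availability": "http://schema.org/InStock",
    },
}

# A page would embed this as:
#   <script type="application/ld+json"> ... </script>
markup = json.dumps(product, indent=2)
print(markup)
```

Microdata or RDFa attributes in the HTML body are equally valid ways to express the same facts; JSON-LD just keeps them in one block.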
What’s your favorite semantically-enhanced shopping experience? Share it with our readers below to streamline their holiday shopping!
The W3C’s Web Components model is positioned to solve many of the problems that beset web developers today. “Developers are longing for the ability to have reusable, declarative, expressive components,” says Brian Sletten, a specialist in semantic web and next-generation technologies, software architecture, API design, software development and security, and data science, and president of software consultancy Bosatsu Consulting, Inc.
Web Components should fulfill that longing: with the Templates, Custom Elements, Shadow DOM, and HTML Imports draft specifications (drafts, and thus still subject to change), developers get a standard way to build their web applications and elements as sets of reusable components. While most browsers don’t yet support these specifications, projects like Polymer let developers who want to take advantage of these capabilities right away build web objects and applications atop the specs today.
That in itself is exciting, Sletten says, but even more so is the connection he made that semantic markup can be added to any web component.
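To make that connection concrete, here is a sketch of a component template carrying schema.org microdata. The `product-card` element name and its fields are hypothetical, and the string rendering in Python stands in for browser-side templating purely to show the markup a component could contain:

```python
# A component's template can carry schema.org microdata attributes
# (itemscope/itemtype/itemprop) like any other HTML. The element name
# and fields below are invented for illustration.
TEMPLATE = """
<template id="product-card">
  <div itemscope itemtype="http://schema.org/Product">
    <span itemprop="name">{name}</span>
    <span itemprop="offers" itemscope itemtype="http://schema.org/Offer">
      <span itemprop="price">{price}</span>
    </span>
  </div>
</template>
"""

def render(name, price):
    # Stand-in for the templating a browser-side component would do.
    return TEMPLATE.format(name=name, price=price)

print(render("Example Gadget", "19.99"))
```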
Nova Spivack, CEO of Bottlenose, recently opined in TechCrunch, “Cards are fast becoming the hot new design paradigm for mobile apps, but their importance goes far beyond mobile. Cards are modular, bite-sized content containers designed for easy consumption and interaction on small screens, but they are also a new metaphor for user-interaction that is spreading across all manner of other apps and content. The concept of cards emerged from the stream — the short content notifications layer of the Internet — which has been evolving since the early days of RSS, Atom and social media.”
Have you wanted to get involved in the schema.org project? Your contribution to the collaborative effort driven by Bing, Google, Yahoo and Yandex for a shared markup vocabulary for web pages is more than welcome. As Dan Brickley, a developer advocate at Google, noted during his presentation about schema.org’s progress to date at this summer’s Semantic Technology & Business Conference, the “pattern of collaboration with the project [is] we’re trying to push work off on people who are better qualified to do it, and then we mush it all together.”
What is meant by that is that the project is so broad, covering such a huge amount of topics, that the input of experts – whether from the library, media, sports or any other of the multitude of communities whose vocabularies are or aim to be represented – is incredibly valuable, and very much encouraged. In an overview of the 2013-2014 releases, which included TV/radio, civic services, and bibliographic additions, as well as accessibility properties, among others, Brickley related that during the year, “We listened a lot. We listened to people who knew better than us about accessibility, about how broadcast TV and radio are described, about describing social services, about libraries, journals, and ecommerce, and then integrated their suggestions into a unified set of schemas.”
- In a sample of over 12 billion web pages, 21 percent, or 2.5 billion pages, use schema.org to mark up their HTML, to the tune of more than 15 billion entities and more than 65 billion triples;
- In that same sample, this works out to six entities and 26 facts per page with schema.org;
- Just about every major site in every major category, from news to e-commerce (with the exception of Amazon.com), uses it;
- Its ontology counts some 800 properties and 600 classes.
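The per-page averages in the list above follow directly from the sample totals; a quick arithmetic check:

```python
# Figures cited from the 12-billion-page sample above.
pages_with_schema = 2.5e9   # 21 percent of roughly 12 billion pages
entities = 15e9             # "more than 15 billion entities"
triples = 65e9              # "more than 65 billion triples"

entities_per_page = entities / pages_with_schema  # 6 entities per page
facts_per_page = triples / pages_with_schema      # 26 facts per page
print(entities_per_page, facts_per_page)
```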
Much of that adoption has to do with the focus its proponents have had since the beginning on making the collection of shared vocabularies for page markup very easy for webmasters and developers to adopt and leverage. At this August’s 10th annual Semantic Technology & Business conference in San Jose, Google Fellow Ramanathan V. Guha, one of the founders of schema.org, shared the progress of the initiative to develop one vocabulary understood by all search engines, and how it got to where it is today.
Barbara Starr of Search Engine Land recently observed that, “Search is changing – and it’s changing faster than ever. Increasingly, we are seeing organic elements in search results being displaced by displays coming from the Knowledge Graph. Yet the shift from search over documents (e.g. web pages) to search over data (e.g. Knowledge Graph) is still in its infancy. Remember Google’s mission statement: Google’s mission is to organize the world’s information to make it universally accessible and useful. The Knowledge Graph was built to help with that mission. It contains information about entities and their relationships to one another – meaning that Google is increasingly able to recognize a search query as a distinct entity rather than just a string of keywords. As we shift further away from keyword-based search and more towards entity-based search, internal data quality is becoming more imperative.”
Mark Albertson of the Examiner recently wrote, “It was an unusual sight to be sure. Standing on a convention center stage together were computer engineers from the four largest search providers in the world (Google, Yahoo, Microsoft Bing, and Yandex). Normally, this group couldn’t even agree on where to go for dinner, but this week in San Jose, California they were united by a common cause: the Semantic Web… At the Semantic Technology and Business Conference in San Jose this week, researchers from around the world gathered to discuss how far they have come and the mountain of work still ahead of them.”
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME) that the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
Among the mainstream content management systems, you could make the case that Drupal was the first open source semantic CMS. At next week’s Semantic Technology and Business Conference, software engineer Stéphane Corlosquet of Acquia, which provides enterprise-level services around Drupal, and Bock & Co. principal Geoffrey Bock will discuss Drupal’s role as a semantic CMS and how it can help organizations and institutions yearning to enrich their data with more semantics – for search engine optimization, yes, but also for more advanced use cases.
“It’s very easy to embed semantics in Drupal,” says Bock, who analyzes and consults on digital strategies for content and collaboration. At its core, Drupal can manage semantic entities, and the upcoming version 8 takes things to a new level by including schema.org as a foundational data type. “It will become increasingly easier for developers to build and deliver semantically enriched environments,” he says, which can drive a better experience for clients and stakeholders alike.
Corlosquet, who has taken a leadership role in building semantic web capabilities into Drupal’s core and maintains the RDF module in Drupal 7 and 8, explains that the closer embrace of schema.org in Drupal is of course a help when it comes to SEO and user engagement, for starters. Google uses content marked up using schema.org to power products like Rich Snippets and Google Now, too.
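Conceptually, Drupal’s RDF support boils down to mapping content fields to vocabulary terms and emitting those terms as RDFa attributes in the rendered page. The sketch below models that idea in miniature; the mapping table and function are illustrative only, not Drupal’s actual API:

```python
# Hypothetical mapping from CMS content fields to schema.org terms,
# emitted as RDFa Lite attributes. This is a simplified model of what
# Drupal's RDF module does, not its real API.
MAPPING = {"title": "name", "body": "articleBody", "author": "author"}

def to_rdfa(node_type, fields):
    parts = [f'<article vocab="http://schema.org/" typeof="{node_type}">']
    for field, term in MAPPING.items():
        if field in fields:  # only mapped, populated fields are emitted
            parts.append(f'  <div property="{term}">{fields[field]}</div>')
    parts.append("</article>")
    return "\n".join(parts)

html = to_rdfa("Article", {"title": "Hello", "author": "Jane Doe"})
print(html)
```

The payoff described above comes for free: once the mapping exists, every rendered node carries machine-readable markup that crawlers can consume.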