Mark Albertson of the Examiner recently wrote, “It was an unusual sight to be sure. Standing on a convention center stage together were computer engineers from the four largest search providers in the world (Google, Yahoo, Microsoft Bing, and Yandex). Normally, this group couldn’t even agree on where to go for dinner, but this week in San Jose, California they were united by a common cause: the Semantic Web… At the Semantic Technology and Business Conference in San Jose this week, researchers from around the world gathered to discuss how far they have come and the mountain of work still ahead of them.” Read more
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME) that the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
Among the mainstream content management systems, you could make the case that Drupal was the first open source semantic CMS out there. At next week’s Semantic Technology and Business Conference, software engineer Stéphane Corlosquet of Acquia, which provides enterprise-level services around Drupal, and Bock & Co. principal Geoffrey Bock will use this session to discuss Drupal’s role as a semantic CMS and how it can help organizations and institutions yearning to enrich their data with more semantics – for search engine optimization, yes, but also for more advanced use cases.
“It’s very easy to embed semantics in Drupal,” says Bock, who analyses and consults on digital strategies for content and collaboration. At its core, Drupal has the capability to manage semantic entities, and the upcoming version 8 takes things to a new level by including schema.org as a foundational data type. “It will become increasingly easier for developers to build and deliver semantically enriched environments,” he says, which can drive a better experience both for clients and stakeholders.
Corlosquet, who has taken a leadership role in building semantic web capabilities into Drupal’s core and maintains the RDF module in Drupal 7 and 8, explains that the closer embrace of schema.org in Drupal is of course a help when it comes to SEO and user engagement, for starters. Google uses content marked up using schema.org to power products like Rich Snippets and Google Now, too.
In Part 3 of this series, Jarek Wilkiewicz details activating the small Knowledge Graph (built on Cayley) with Schema.org Actions. He begins by explaining how Actions can be thought of as a combination of “Entities” (things) and “Affordances” (uses). As he defines it, “An affordance is a quality of an object, or an environment, which allows an individual to perform an action.”
For example, an action might be using the “ok Google” voice command on a mobile device. The even more specific example that Wilkiewicz gives in the video (spoiler alert) is that of using the schema.org concept of potentialAction to trigger the playing of a specific artist’s music in a small music store’s mobile app.
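A scenario like Wilkiewicz’s music-store example can be sketched in JSON-LD markup along these lines. The artist name, app package, and URL template here are invented for illustration, but potentialAction, ListenAction, and EntryPoint are genuine schema.org terms:

```json
{
  "@context": "https://schema.org",
  "@type": "MusicGroup",
  "name": "Example Artist",
  "potentialAction": {
    "@type": "ListenAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "android-app://com.example.musicstore/play?artist=example-artist",
      "actionPlatform": "https://schema.org/AndroidPlatform"
    }
  }
}
```

The EntryPoint tells a consuming application (or a voice assistant) how to invoke the action, while actionPlatform scopes it to a particular platform.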
To learn more, and to meet Jarek Wilkiewicz and his Google colleague, Shawn Simister, in person, register for the Semantic Technology & Business Conference where they will present “When 2 Billion Freebase Facts is Not Enough.”
Standard Analytics, which was a participant at the recent TechStars event in New York City, has a big goal on its mind: To organize the world’s scientific information by building a complete scientific knowledge graph.
The company’s co-founders, Tiffany Bogich and Sebastien Ballesteros,came to the conclusion that someone had to take on the job as a result of their own experience as researchers. A problem they faced, says Bogich, was being able to access all the information behind published results, as well as search and discover across papers. “Our thesis is that if you can expose the moving parts – the data, code, media – and make science more discoverable, you can really advance and accelerate research,” she says.
In a post yesterday at the official schema.org blog, Vicki Tardif Holland (Google) and Jason Johnson (Microsoft) have announced that schema.org has created a way to more richly describe relationships between entities in structured markup. The addition of the “Role” schema allows for the description of more complex relationships than were previously possible. In the post, the authors cite the business need as one that is often found in the domains of entertainment and sports.
For example, in schema.org, it can be asserted that Bill Murray was an actor in the film Ghostbusters [Fig. 1].
That’s all well and good, but how can one extend this relationship to include more detail such as the name of the character Mr. Murray played in the film? More on that in a moment.
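As a sketch, the new Role mechanism works by inserting an intermediate node that repeats the property on both sides of the relationship. In JSON-LD it looks roughly like the following, using the PerformanceRole subtype and its characterName property (both defined at schema.org):

```json
{
  "@context": "https://schema.org",
  "@type": "Movie",
  "name": "Ghostbusters",
  "actor": {
    "@type": "PerformanceRole",
    "characterName": "Dr. Peter Venkman",
    "actor": {
      "@type": "Person",
      "name": "Bill Murray"
    }
  }
}
```

The outer actor property points to the Role node rather than directly to the Person, and the Role node carries the extra detail about the relationship.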
In the winter of 2012, The New York Times began its implementation of the schema.org compatible version of rNews, a standard for embedding machine-readable publishing metadata into HTML documents, to improve the quality and appearance of its search results, as well as generate more traffic through algorithmically generated links. The semantic markup for news articles brought to its web pages structured data properties defining the author, the date a work was created, its editor, its headline, and so on.
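A minimal sketch of the kind of article metadata involved might look like this, shown in JSON-LD for brevity (the same properties can equally be embedded as microdata or RDFa attributes in HTML). The names, headline, and date are placeholders; headline, author, editor, and dateCreated are real schema.org properties of NewsArticle:

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example Headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "editor": { "@type": "Person", "name": "John Roe" },
  "dateCreated": "2012-01-15"
}
```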
But according to a leaked New York Times internal innovation report that appears here, there’s more work to be done in the structured data realm. That work is part of a grand plan to truly put digital first in the face of falling website and smartphone app readership, and hotter competition from both old-guard and new-age newsrooms and social media properties that are transforming how journalism is delivered for an audience increasingly invested in mobile, social, and personalized technologies.
The report was put together with insights from parties including Evan Sandhaus, director for search, archives and semantics at The NY Times, who was instrumental in the rNews/schema.org effort as well as the relaunch of TimesMachine, a digital archive of 46,592 issues of The New York Times used, among other things, to surround current news stories with historical context. While the report notes that the Gray Lady has not been standing still in the face of its challenges, citing newsroom advances to grow audience with efforts such as using data to inform decisions, it needs to do more – faster – to make it easy to get its content in front of digital readers.
A couple of weeks back The Semantic Web Blog reported on research from SEO vendor Searchmetrics about the virtues of semantic markup. Now the 2014 Content Search Marketers Survey, recently released by enterprise SEO platform vendor Brightedge, adds some more interesting statistics about what matters to optimized search.
Among them: Half of the respondents consider a page/content-based approach to driving page traffic, conversions and revenue to be much more important for SEO in 2014 than in 2013. The other 50 percent said it would be as important or more important this year than last.
“The page-based approach to SEO in the world of secure search is important for 100 percent of SEOs, and 85 percent stated that it would be more or much more important for them in 2014,” the report states. “SEOs are also still focused on the business impact of the keyword (90 percent), though the shift in focus to the page leaves only 50 percent stating that measuring the business impact of the keyword will be more important in 2014.”
MindMeld – you may know the term best from Star Trek and those fun-loving Vulcan practices. But it lives too at Expect Labs, as an app that listens to and understands conversations and finds relevant information within them, and as an API that lets developers create apps that leverage contextually-driven search and discovery – and may even find the information users need before they explicitly look for it.
Anticipatory computing is the term Expect Labs uses for that. “This is truly a shift in the way that search occurs,” says director of research Marsal Gavaldà. “Anticipatory computing is the most general term in the sense that we have so much information about what users are doing online that we can create accurate models to predict what a user might need based on long-ranging history of that user profile, but also about the context.”
The more specific set of functionality that contributes to the overarching theme of anticipatory computing, he explains, “means that you can create intelligent assistants that have contextual search capabilities, because our API makes it very easy to provide a very continuous stream of updates about what a user is doing or where a user is.”
This week saw schema.org introduce vocabulary that enables websites to describe the actions they enable and how these actions can be invoked, in the hope that these additions will help unleash new categories of applications, according to a new post by Dan Brickley.
This represents an expansion of the vocabulary’s focus from describing entities to taking action on those entities. The work has been in progress, Brickley explains here, for the last couple of years, building on the http://schema.org/Action types added last August by providing a way of describing the capability to perform actions in the future.
The ActionStatusType enumeration now includes three values: PotentialActionStatus, describing an action that is supported; ActiveActionStatus, for an in-progress action; and CompletedActionStatus, for an action that has already taken place.
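In markup, these statuses are attached to an action via the actionStatus property. The following JSON-LD is an illustrative sketch (the person and movie names are placeholders; WatchAction, actionStatus, agent, and object are real schema.org terms):

```json
{
  "@context": "https://schema.org",
  "@type": "WatchAction",
  "actionStatus": "https://schema.org/CompletedActionStatus",
  "agent": { "@type": "Person", "name": "Example User" },
  "object": { "@type": "Movie", "name": "Example Movie" }
}
```

Swapping in PotentialActionStatus would instead advertise that the action can be performed, which is what enables the forward-looking “capability” descriptions Brickley refers to.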