Posts Tagged ‘Dublin Core’

Metadata Manifesto

MetadataMatters.com has posted a Manifesto for Managing Metadata in an Open World. The manifesto begins, “Metadata is produced and stored locally, published globally, consumed and aggregated locally, and finally integrated and stored locally. This is the create/publish/consume/integrate cycle. Providing a framework for managing metadata throughout this cycle is the goal of the Dublin Core Abstract Model and the Dublin Core Application Profile (DCAM/DCAP).”

It continues, “The basic guidelines and requirements for this cycle are: (1) Metadata MUST be syntactically VALID and semantically COHERENT when it’s CREATED and PUBLISHED. (2) Globally PUBLISHED Metadata SHOULD be TRUE, based on the domain knowledge of the publisher. (3) PUBLISHERS of metadata MUST publish the semantics of the metadata, or reference publicly available semantics. (4) CONSUMERS of published metadata SHOULD assume that the global metadata is locally INVALID, INCOHERENT, and UNTRUE. (5) CONSUMED metadata MUST be checked for syntactic validity and semantic coherence before integration.” Read more
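Guideline (5) is the step a consumer can actually mechanize. Below is a minimal sketch, not taken from the manifesto itself, of what such a pre-integration check might look like in Python with rdflib; it assumes the consumed metadata arrives as RDF/XML and that “coherence” is defined locally as a couple of required Dublin Core fields, both of which are illustrative assumptions.

```python
# Hedged sketch of manifesto guideline (5): check consumed metadata for
# syntactic validity and (locally defined) semantic coherence before
# integrating it. The RDF/XML format and the required Dublin Core fields
# are assumptions for illustration, not part of the manifesto.
from rdflib import Graph, Namespace

DCTERMS = Namespace("http://purl.org/dc/terms/")
REQUIRED = (DCTERMS.title, DCTERMS.identifier)  # hypothetical local policy

def check_before_integration(payload: str) -> Graph:
    g = Graph()
    try:
        g.parse(data=payload, format="xml")  # syntactic validity (RDF/XML)
    except Exception as err:
        raise ValueError(f"syntactically invalid metadata: {err}")
    for subject in set(g.subjects()):        # semantic coherence, local rules
        missing = [p for p in REQUIRED if (subject, p, None) not in g]
        if missing:
            raise ValueError(f"{subject} is missing {missing}")
    return g  # only now is it safe to integrate into the local store
```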

SemTechBiz’s Schema.org Panel: Which Way Will It Go?

Perhaps one of the most anticipated panels at next week’s Semantic Technology & Business Conference in San Francisco is the Wednesday morning session on Schema.org. Since the announcement of Schema.org just prior to last year’s SemTech Business Conference on the west coast, using the Schema.org shared vocabularies along with the microdata format to mark up web pages has been much debated, raising questions in the minds of webmasters and web search marketers along the lines of, “Which way should we go: microdata or RDFa?”

Read more

The Problem With Names

New Amsterdam... or not?

Earlier this week I spent an enjoyable hour on the phone, discussing the work done by a venerable world-class museum in making data about its collections available to a new audience of developers and app-builders. Much of our conversation revolved around consideration of obstacles and barriers, and the most intractable of those proved something of a surprise.

Reluctance amongst senior managers to let potentially valuable data walk out the door? Nope. In fact, not even close; managers pushed museum staff to adopt a more permissive license for metadata (CC0) than the one (CC-BY) they had been considering.

Reluctance amongst curators to let their carefully crafted metadata be abused and modified by non-professionals? Possibly a little bit, but apparently nothing the team couldn’t handle.

A bean-counter’s obsession with measuring every click, every query, every download, such that the whole project became bogged down in working out what to count and when (and, sadly, that really is the case elsewhere!)? Again, no. “The intention was to create a possibility” by releasing data. The museum didn’t know what adoption would be like, and sees experimentation and risk-taking as part of its role. Monitoring is light, and there’s no intention to change that.

Read more

Lessons Learned On the Road To Linked Data

What’s the path from an XML-based e-government metadata application to a linked data version? At the upcoming Semantic Tech & Business Conference in Berlin, the road taken by the Dutch government will be described by Paul Hermans, lead architect of the Belgian project Erfgoedplus.be, which uses RDF/XML, OWL and SKOS to describe relationships to heritage types, concepts, objects, people, place and time.

Some 1,000 individual organizations compose the Dutch government, each with its own website. An effort a few years ago to provide a single point of access by having a search engine spider those separate websites didn’t work as anticipated. The next step toward bringing some order was to assign all the documents published on those sites a common kernel of metadata fields, which led to building an XML application to enable a structured approach. Linked Data entered the picture about a year and a half ago.
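The excerpt doesn’t list the actual fields in that common kernel, but the step from shared metadata fields to Linked Data can be pictured with a small rdflib sketch like the one below; the Dublin Core terms and every URI in it are placeholders chosen for illustration, not the Dutch government’s real schema.

```python
# Hedged sketch: one government document described with a "common kernel"
# of metadata fields and published as linked data. Terms and URIs are
# illustrative placeholders only.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, XSD

g = Graph()
doc = URIRef("http://example.gov.nl/document/12345")          # placeholder
g.add((doc, DCTERMS.title, Literal("Voorbeelddocument", lang="nl")))
g.add((doc, DCTERMS.creator, URIRef("http://example.gov.nl/org/ministerie-x")))
g.add((doc, DCTERMS.issued, Literal("2012-01-15", datatype=XSD.date)))
g.add((doc, DCTERMS.subject, URIRef("http://example.gov.nl/thema/verkeer")))

print(g.serialize(format="turtle"))  # the same kernel, now as linked data
```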

Read more

Discussing the Issues Surrounding Schema.org

A recent post by Ivan Herman, Semantic Web Activity Lead for the W3C, takes a look at the primary discussions that have been sparked since the emergence of schema.org and what the semantic web community needs to talk about next. The first major topic of discussion, according to the article, has been, “What is the evolution path of the schema.org vocabularies; how do they relate to vocabulary developments around the world that has already brought us such widely used vocabularies like Dublin Core, GoodRelations, FOAF, vCard, the different microformat vocabularies, etc?” Read more

Volkswagen: Das Auto Company is Das Semantic Web Company!


You know Volkswagen as Das Auto company. But perhaps it’s time to start thinking of it as “Das Semantic Web Company.”

William Greenly is the Volkswagen Technical Lead at integrated communications agency Tribal DDB, working on the auto vendor’s Volkswagen.co.uk online platform. In that capacity he is taking the partnership the two companies have had for more than four decades to a new level. His role there has encompassed managing data around Volkswagen’s products, its retailer and website content, and its interfaces with social networks and many third-party back-end systems, including those germane to the auto industry such as manufacturer consortiums.

Now, the focus is on using semantic web technology to drive a more elastic, flexible and streamlined digital world for “The Car” company.

The journey began as a strategic brief about contextual search engines serving content based on context within the site and possibly across affiliate sites, a big idea that was quite quickly bound to something more tactical: improving site search, Greenly says. “So the objectives were about site search and improving it, but in the long run it was always the idea to contextualize content, to facet content, to promote it in different contexts.”

Read more

Best Buy: Next Steps Into the Semantic Web

Just a few months ago Jay Myers, lead web development engineer at Best Buy, talked to The Semantic Web Blog about using RDFa to mark up the retailer’s product detail pages and more semantic things he’d like to do, including mashing up its online catalog data with some other data sources.

Well, in just the last week he’s been stoking the semantic data foundation, pushing Best Buy’s product visibility and discovery further along with the help of RDFa and pulling in some semantic data too, all geared to building up what he calls the company’s Insight Engine. And there’s more coming soon, as Myers has a personal agenda of stretching RDFa just about as far as he can in Best Buy product pages. “My goal is to make our web site as data-rich as possible while preserving the front-end user experience we have now,” he says. “It’s totally possible and I think we achieved that so far.”
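The excerpt doesn’t show the markup itself, so the sketch below is only a hedged illustration of the kind of triples RDFa-annotated product pages expose to consumers once lifted out of the HTML; the GoodRelations vocabulary and all URIs and values here are assumptions for illustration, not Best Buy’s actual data.

```python
# Hedged sketch of the kind of product data RDFa markup can surface.
# GoodRelations terms, URIs and values are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

GR = Namespace("http://purl.org/goodrelations/v1#")

g = Graph()
offer = URIRef("http://example.com/products/9999#offer")      # placeholder
price = URIRef("http://example.com/products/9999#price")
g.add((offer, RDF.type, GR.Offering))
g.add((offer, GR.name, Literal("Example 46-inch LED TV")))
g.add((offer, GR.hasPriceSpecification, price))
g.add((price, RDF.type, GR.UnitPriceSpecification))
g.add((price, GR.hasCurrency, Literal("USD")))
g.add((price, GR.hasCurrencyValue, Literal("599.99", datatype=XSD.float)))

print(g.serialize(format="turtle"))
```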

Read more

GoogleArt – Semantic Data Wrapper (Technical Update)

[EDITOR’S NOTE: Recently, we reported on the creation of a semantic data wrapper for the GoogleArt project. At the time, the wrapper only offered data for individual paintings and there was no good way to access the full data set. In this deeply technical guest post by the wrapper’s creator, Christophe Guéret, he outlines how to grab the full data set.

If you do something interesting with this data, we would love to hear about it! Leave a comment below.]

Some weeks ago, a first version of a wrapper for the GoogleArt project from Google was put online (see also this blog post).
This wrapper, initially offering semantic data only for individual paintings, has now been extended to museums. The front page of GoogleArt is also available as RDF, providing a machine-readable list of museums. This index page makes it possible, and easy, to download an entire snapshot of the data set, so let’s see how to do that.
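As a rough sketch of the idea before the full walkthrough: fetch the RDF index of museums, dereference each resource it mentions, and merge everything into one local graph. The index URL and the link structure below are assumptions; the actual wrapper may differ.

```python
# Hedged sketch of snapshotting the wrapper's data set: parse the RDF index
# of museums, dereference each linked resource, merge into one graph.
# The index URL is a placeholder and the link structure is assumed.
from rdflib import Graph, URIRef

INDEX_URL = "http://example.org/googleart/index.rdf"   # placeholder, not the real wrapper

index = Graph()
index.parse(INDEX_URL)            # rdflib picks the parser from the response

snapshot = Graph()
snapshot += index
for resource in {o for o in index.objects() if isinstance(o, URIRef)}:
    try:
        snapshot.parse(resource)  # pull in each museum's RDF description
    except Exception:
        pass                      # skip links that do not resolve to RDF

snapshot.serialize(destination="googleart-snapshot.ttl", format="turtle")
```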

Read more

GoogleArt Gets a Semantic Touch-up

On Tuesday in London, the Google Art Project was announced. The project features artworks from 17 of the world’s leading institutions, including New York’s Metropolitan Museum of Art, the Museum of Modern Art and the Frick Collection; the Smithsonian’s Freer Gallery of Art in Washington DC; London’s Tate Museum; and museums in Madrid, Moscow, Amsterdam and Florence, among others. The paintings are presented in high definition, and the site has a wonderful user interface for exploring the artworks.

Christophe Guéret noticed that there was something missing: machine-readable, semantic data. Read more

David Wood – O’Reilly Media Joins the Semantic Web

O’Reilly Media (http://oreilly.com/), the current name for the geek publishing giant founded by Tim O’Reilly, has finally joined the Semantic Web. O’Reilly’s coining of the term “Web 2.0” and early misunderstandings of the Semantic Web stack led some to think that he didn’t see much value in machine-readable information. That seems to have changed, at least within O’Reilly Labs (http://labs.oreilly.com/).

Read more
