Nextgov reports, “When government technology leaders first described a public repository for government data sets more than five years ago, the vision wasn’t totally clear. ‘I just didn’t understand what they were talking about,’ said Marion Royal of the General Services Administration, describing his first introduction to the project. ‘I was thinking, “this is not going to work for a number of reasons.”’ A few minutes later, he was the project’s program director. He caught on to and helped clarify that vision, and since then has worked with a small team to help shepherd online and aggregate more than 100,000 data sets compiled and hosted by agencies across federal, state and local governments.” Read more
Posts Tagged ‘data.gov’
…and make it machine readable. Jason Miller of Federal News Radio reports, “White House technology leaders are close to issuing a new policy that will change the way agencies release data to the public. Todd Park, the federal chief technology officer, said Friday the new policy is one of several steps to spur the release of more data from agencies. ‘We are going to continue to enlist additional federal agencies in the open data initiatives program as fast-track liberators of key existing data sets that could create large-scale economic benefit while protecting privacy,’ Park said at the President’s Council of Advisors on Science and Technology meeting in Washington. ‘We also, as per the Digital Government Strategy announced just this past summer, will with OMB be releasing policy soon that makes open and computer-readable the default status of new data created by the government going forward.’” Read more
The team at Nextgov reports, “The team that manages Data.gov is well on its way to making the government data repository open source, using a new back-end called the Open Government Platform, officials said during a Web discussion Wednesday. The governments of India and Ghana have already launched beta versions of their data catalogues on the open source platform, said Jeanne Holm, who heads the Data.gov team. Government developers from the U.S. and India built the OGPL jointly. They posted it to the code-sharing site GitHub, where other nations and developers can adopt it as is or amend it to meet their specific needs.” Read more
Yesterday we began our look back at the year in semantic technology here. Today we continue with more expert commentary on the year in review:
Ivan Herman, W3C Semantic Web Activity Lead:
I would mention two things (among many, of course).
- Schema.org had an important effect on semantic technologies. Of course, it is controversial (the role of one major vocabulary and its relations to others, the community discussions on the syntax, etc.), but I would rather concentrate on the positive aspects. A few years ago the topic of discussion was whether having ‘structured data’, as it is referred to (I would simply say having RDF in some syntax or other), as part of a Web page makes sense at all. There were fairly passionate discussions about this, and many were convinced that it would not make any sense: there was no use case for it, authors would not use it and could not deal with it, and so on. Well, this discussion is over. Structured data in Web sites is here to stay; it is important, and it has become part of the Web landscape. Schema.org’s contribution in this respect is very important; the discussions and disagreements I referred to are minor and transient compared to the success. And 2012 was the year when this issue was finally closed.
- On a very different front (and motivated by my own personal interest), I see exciting moves in the library and digital publishing worlds. Many libraries recognize the power of linked data, the value of standard cataloging techniques well adapted to linked data, and the role of metadata, in the form of linked data, adopted by journals and soon by electronic books… All of this will have a profound influence, bringing a huge amount of very valuable data onto the Web of Data and linking it to sources of accumulated human knowledge. I have witnessed different aspects of this evolution come to the fore in 2012, and I think it will become very important in the years to come.
Last week, the 11th International Semantic Web Conference (ISWC 2012) took place in Boston. It was an exciting week to learn about the advances of the Semantic Web and current applications.
The first two days, Sunday November 11 and Monday November 12, consisted of 18 workshops and 8 tutorials. The following three days (Tuesday November 13 – Thursday November 15) consisted of keynotes, presentations of academic and in-use papers, the Big Graph Data Panel, and industry presentations. It is basically impossible to attend every interesting presentation, so I am going to do my best to summarize and offer links to everything that I can.
Jim Hendler and Theresa A. Pardo have created a primer on machine readability for online documents on Data.gov. The pair explain, “Historically, efforts to make government information available to the public have focused on pushing static information about government programs and services to the web. The intended user has been a human who can read, print, and take actions based on reading the material or by engaging in a form-based transaction. In some cases, users were able to query the data or map the results using sophisticated geospatial displays. Access to the data itself, on the other hand, was rarely provided.” Read more
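The distinction the primer draws is between data a human must read off a page and data a program can address field by field. A minimal sketch of what "machine readable" buys you, using Python's standard csv module (the agency names and figures below are invented for illustration):

```python
import csv
import io

# A hypothetical excerpt of an agency catalog, published as CSV
# rather than embedded in a human-readable web page.
RAW = """agency,datasets,format
GSA,1200,CSV
EPA,850,JSON
"""

def parse_catalog(text):
    """Parse machine-readable CSV rows into structured records."""
    return list(csv.DictReader(io.StringIO(text)))

records = parse_catalog(RAW)
# Each value is now addressable by field name instead of being
# locked inside prose or a static display.
print(records[0]["agency"])  # prints "GSA"
```

The same information rendered as a formatted web page would require scraping; published as structured data, it can be queried, merged, and mapped directly.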
A call for comments is out on a proposed ‘Datasets’ addition to schema.org, via the W3C’s Web Schemas task force, which the schema.org project uses to collaborate with the wider community.
The proposal, which extends schema.org for describing datasets and data catalogs, introduces three new types, each with associated properties: Dataset, DataCatalog, and DataDownload.
Writing at the Schema.org blog, Dan Brickley calls it a “small but useful vocabulary,” with particular relevance to open government and public sector data.
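Markup of this kind is embedded directly in a catalog's web pages. A minimal sketch in HTML microdata using the proposed Dataset type (the dataset name, publisher, and download URL below are invented for illustration; check the final schema.org documentation for the exact property list):

```html
<!-- Hypothetical catalog entry using the proposed Dataset type.
     All names and URLs here are illustrative, not real resources. -->
<div itemscope itemtype="http://schema.org/Dataset">
  <h2 itemprop="name">Example Agency Quarterly Spending</h2>
  <p itemprop="description">Quarterly spending records published as open data.</p>
  <span itemprop="publisher">Example Agency</span>
  <a itemprop="distribution" href="http://example.gov/spending.csv">Download CSV</a>
</div>
```

With markup like this in place, search engines and aggregators can recognize a page as describing a dataset rather than treating it as undifferentiated text.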