The World Wide Web Consortium is looking for a Research Scientist in Cambridge, MA. According to the post, “Computer Science and Artificial Intelligence Laboratory (CSAIL)-World Wide Web Consortium (W3C) (part-time, 50%), to bring social and identity technologies on the Web to their full potential. Will conduct cutting-edge research on social and identity technologies and apply that research to the construction of new standards; provide leadership and management to implement these technologies into Web standards; collaborate across disciplines with other researchers at MIT, outside universities, and across industries to build the expert groups required to build and implement technological standards; and perform other duties as needed.” Read more
The World Wide Web Consortium has headline news today: the Semantic Web and eGovernment Activities are being merged and superseded by the new Data Activity, with Phil Archer serving as Activity Lead. Two new working groups have also been chartered: CSV on the Web and Data on the Web Best Practices.
What’s driving this? First, Archer explains, the Semantic Web technology stack is now mature, and it’s time to put those updated standards to use. With RDF 1.1, the Linked Data Platform, SPARQL 1.1, the RDB to RDF Mapping Language (R2RML), OWL 2, and Provenance all done or very close to it, it’s the right time “to take that very successful technology stack and try to implement it in the wider environment,” Archer says, rather than continue tinkering with the standards.
The second reason, he notes, is that a large community exists “that sees Linked Data, let alone the full Semantic Web, as an unnecessarily complicated technology. To many developers, data means JSON — anything else is a problem. During the Open Data on the Web workshop held in London in April, Open Knowledge Foundation co-founder and director Rufus Pollock said that if he suggested to the developers that they learn SPARQL he’d be laughed at – and he’s not alone,” Archer says. “We need to end the religious wars, where they exist, and try to make it easier to work with data in the format that people like to work in.”
The new CSV on the Web Working Group is an important step in that direction, following on the heels of efforts such as R2RML. It’s about providing metadata about CSV files, such as column headings, data types, and annotations, and, with it, making it easy to convert CSV into RDF (or other formats), easing data integration. “The working group will define a metadata vocabulary and then a protocol for how to link data to metadata (presumably using HTTP Link headers) or embed the metadata directly. Since the links between data and metadata can work in either direction, the data can come from an API that returns tabular data just as easily as it can a static file,” says Archer. “It doesn’t take much imagination to string together a tool chain that allows you to run SPARQL queries against ‘5 Star Data’ that’s actually published as a CSV exported from a spreadsheet.”
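To make the idea concrete, here is a minimal sketch of what metadata-driven CSV-to-RDF conversion could look like. The metadata vocabulary below is hypothetical — the working group had not yet published one at the time of writing — but it captures the essentials Archer describes: column headings mapped to property IRIs, plus optional datatypes.

```python
import csv
import io

# Hypothetical CSVW-style metadata: column names mapped to property IRIs
# and datatypes. Illustrative only; the working group's actual vocabulary
# was still to be defined when this was written.
METADATA = {
    "base": "http://example.org/city/",
    "columns": {
        "name": {"propertyUrl": "http://xmlns.com/foaf/0.1/name"},
        "population": {"propertyUrl": "http://example.org/def/population",
                       "datatype": "http://www.w3.org/2001/XMLSchema#integer"},
    },
}

def csv_to_ntriples(csv_text, metadata, key_column):
    """Turn each CSV row into N-Triples statements, guided by the metadata."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Mint a subject IRI from the key column's value.
        subject = "<%s%s>" % (metadata["base"], row[key_column].replace(" ", "_"))
        for column, value in row.items():
            spec = metadata["columns"].get(column)
            if spec is None:
                continue  # no mapping declared for this column
            predicate = "<%s>" % spec["propertyUrl"]
            if "datatype" in spec:
                obj = '"%s"^^<%s>' % (value, spec["datatype"])
            else:
                obj = '"%s"' % value
            triples.append("%s %s %s ." % (subject, predicate, obj))
    return triples

csv_text = "name,population\nLondon,8308000\nSydney,4667000\n"
for line in csv_to_ntriples(csv_text, METADATA, "name"):
    print(line)
```

Once the CSV is lifted into triples like these, it can be loaded into any SPARQL-capable store — which is exactly the "CSV exported from a spreadsheet" tool chain Archer imagines.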
Sem Tech Solution For Materials Design And Development Wins Small Business Innovation Research Grant
A Small Business Innovation Research grant, sponsored by the U.S. Air Force through the Office of the Secretary of Defense, has gone to a semantic technology solution for materials design and development. The project is aligned with the Materials Genome Initiative that’s targeted to accelerate the pace of discovery and deployment of advanced material systems.
“This will help to showcase the semantic web technologies and methodologies in a large and meaningful domain and application area,” says Sam Chance, Innovation Evangelist and Special Programs Lead at iNovex Information Systems, the principal investigator and technical agent for development and integration on the project, which will leverage W3C semantic standards. The team also includes the University of Queensland, Penn State University, and SRI International’s Materials Laboratory, with Cambridge Semantics providing access to its semantic tools to be bootstrapped onto the solution.
The University of Queensland brings to the effort a high-level domain ontology representing structured knowledge about materials, their structure and properties, and the processing steps involved in their composition and engineering, as well as case studies and data sources around jet fuels for hypersonic flight that fall within the materials design domain. The group at Penn State has been involved with the Materials Genome Initiative and also brings particular case studies and data sources, this time for materials with nickel alloy as the base system. SRI has case studies and data sources for turbine blade coating.
Interested in how schema.org has fared in the couple of years since its birth? If you were at the International Semantic Web Conference in Sydney a couple of weeks back, you may have caught Google Fellow Ramanathan V. Guha — the mind behind schema.org — present a keynote address about the initiative.
Of course, Australia’s a long way to go for a lot of people, so The Semantic Web Blog is happy to catch everyone up on Guha’s thoughts on the topic.
We caught up with him when he was back stateside:
The Semantic Web Blog: Tell us a little bit about the main focus of your keynote.
Guha: The basic discussion was a progress report on schema.org – its history and why it came about a couple of years ago. Other than a couple of panels at SemTech, we’ve maintained a rather low profile, and we figured it might be a good time to talk more about it, and to a crowd that is different from the SemTech crowd.
The short version is that the goal, of course, is to make it easier for mainstream webmasters to add structured data markup to web pages, so that they wouldn’t have to track down many different vocabularies, or think about what Yahoo or Microsoft or Google understands. Before, webmasters had to champion internally which vocabularies to use and how to mark up a site; we have reduced that burden, and it’s no longer an issue of which search engine to cater to.
It’s now a little over two years since launch and we are seeing adoption way beyond what we expected. In aggregate, the search engines see schema.org markup on about 15 percent of the pages we crawl. This is the first time we’ve seen markup at approximately the scale of the web….Now over 5 million sites are using it. That’s helped by mainstream platforms like Drupal and WordPress adopting it so that it becomes part of the regular workflow. Read more
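The markup Guha describes can be embedded in pages as microdata or RDFa attributes, or serialized as JSON-LD. A small, purely illustrative sketch of the latter — the property names come from the public schema.org vocabulary, while the values and the event itself are made up:

```python
import json

# Illustrative schema.org description of an event, serialized as a JSON-LD
# script block that a webmaster could embed in a page. The vocabulary terms
# (@type, name, location, startDate) are real schema.org properties; the
# data is invented for the example.
event = {
    "@context": "http://schema.org",
    "@type": "Event",
    "name": "International Semantic Web Conference",
    "location": {"@type": "Place", "name": "Sydney, Australia"},
    "startDate": "2013-10-21",
}

script_block = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(
    event, indent=2)
print(script_block)
```

Because the markup lives in one self-contained block rather than being threaded through the page's visible HTML, this style fits neatly into the Drupal- and WordPress-style publishing workflows Guha mentions.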
Silver Spring, MD (PRWEB) October 30, 2013 — PhUSE and CDISC are happy to announce the completion of Phase I of the FDA/PhUSE Semantic Technology Working Group Project. The PhUSE Semantic Technology Working Group aims to investigate how formal semantic standards can support the clinical and non-clinical trial data life cycle from protocol to submission. This deliverable includes a draft set of existing CDISC standards represented in RDF. Read more
Drew Turney of WA Today recently wrote, “Today your smartphone knows your location, so everything from the local weather to nearby Facebook friends is available. What about tomorrow when your jacket can measure your vital signs or a hat can extrapolate your mood from your brain activity? Connect it with information on your schedule (from your calendar), spatial information such as whether you’re running or at rest, the time of day and a hundred other factors, and machines everywhere can decide on, find and present the information they think you need. The field is opened even wider by search technology that finds abstract connections for you, rather than you starting a search at a given point. A system out of Bangalore, India, called CollabLayer lets you watch for specific keywords you assign to almost any kind of data in a network.” Read more
Ivan Herman of the W3C recently reported, “The RDF Working Group and the JSON-LD Community Group published the Candidate Recommendation of JSON-LD 1.0, and JSON-LD 1.0 Processing Algorithms and API. This signals the beginning of the call for implementations for JSON-LD 1.0. JSON-LD harmonizes the representation of Linked Data in JSON by describing a common JSON representation format for expressing directed graphs; mixing both Linked Data and non-Linked Data in a single document. The syntax is designed to not disturb already deployed systems running on JSON, but provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Linked Data Web services, and to store Linked Data in JSON-based storage engines.” Read more
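The "smooth upgrade path" works because a JSON-LD document is still ordinary JSON: a @context maps the document's short keys to full IRIs, and a processor expands them. The sketch below illustrates only that one idea with a flat context of string mappings; a conforming processor, per the Processing Algorithms and API specification, handles much more (nested contexts, @id/@type coercion, compact IRIs), so treat this as illustrative only.

```python
import json

def expand_terms(doc):
    """Tiny sketch of JSON-LD term expansion: replace each key that appears
    in the @context with its full IRI. Handles only flat string mappings."""
    context = doc.get("@context", {})
    expanded = {}
    for key, value in doc.items():
        if key == "@context":
            continue  # the context itself is not data
        expanded[context.get(key, key)] = value
    return expanded

doc = json.loads("""{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": "http://xmlns.com/foaf/0.1/homepage"
  },
  "name": "Ivan Herman",
  "homepage": "http://www.w3.org/People/Ivan/"
}""")
print(expand_terms(doc))
```

Strip the @context away and the document is plain JSON that existing systems can keep consuming unchanged — which is exactly the non-disturbance property the specification aims for.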
The W3C announced today that three specifications have reached Recommendation status:
RDFa 1.1 Core – Second Edition
XHTML+RDFa 1.1 – Second Edition
HTML+RDFa 1.1
As the W3C website explains, “The last couple of years have witnessed a fascinating evolution: while the Web was initially built predominantly for human consumption, web content is increasingly consumed by machines which expect some amount of structured data. Sites have started to identify a page’s title, content type, and preview image to provide appropriate information in a user’s newsfeed when she clicks the ‘Like’ button. Search engines have started to provide richer search results by extracting fine-grained structured details from the Web pages they crawl. In turn, web publishers are producing increasing amounts of structured data within their Web content to improve their standing with search engines.”
“A key enabling technology behind these developments is the ability to add structured data to HTML pages directly. RDFa (Resource Description Framework in Attributes) is a technique that allows just that: it provides a set of markup attributes to augment the visual information on the Web with machine-readable hints.”
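To see what "machine-readable hints" means in practice, here is a toy consumer of RDFa's property attribute, built on Python's standard html.parser. The HTML snippet is invented for the example, and a conforming RDFa processor does far more (vocabulary and prefix resolution, @resource, subject chaining); this sketch only pairs each property with its literal value.

```python
from html.parser import HTMLParser

class PropertyCollector(HTMLParser):
    """Toy extractor for RDFa 'property' attributes: records the value of a
    'content' attribute when present, otherwise the element's text."""
    def __init__(self):
        super().__init__()
        self.pairs = []
        self._open = None  # property name awaiting its element text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            if "content" in attrs:
                self.pairs.append((attrs["property"], attrs["content"]))
            else:
                self._open = attrs["property"]

    def handle_data(self, data):
        if self._open and data.strip():
            self.pairs.append((self._open, data.strip()))
            self._open = None

# Invented RDFa-annotated fragment using schema.org terms.
html = """<div vocab="http://schema.org/" typeof="Article">
  <meta property="datePublished" content="2013-08-22"/>
  <h1 property="headline">RDFa 1.1 Reaches Recommendation</h1>
</div>"""

collector = PropertyCollector()
collector.feed(html)
print(collector.pairs)
```

The same attributes that a search engine would read for rich results sit right alongside the page's visible markup — the "augment the visual information" idea in the W3C's description.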
Manu Sporny, the editor of the HTML+RDFa 1.1 specification, told us that, “The release of RDFa 1.1 for HTML5 establishes it as the first HTML-based Linked Data technology to achieve recognition as an official Web standard by the World Wide Web Consortium.” Read more
Danny Bradbury of our sister site, CoinDesk, recently wrote about Manu Sporny‘s presentation at the recent Inside Bitcoins Conference. Bradbury writes, “A representative working loosely with the Web’s standards body set out his vision for a web-based payments standard at the Inside Bitcoins conference today. Manu Sporny, who works with the World Wide Web Consortium (W3C), is part of a working group on Web Payments. He advocated a standard payment mechanism that would be currency-agnostic, and which would do away with traditional online payment methods such as entering credit card data, or making electronic payments through proprietary networks such as PayPal. ‘Credit card numbers are effectively passwords to your bank account. You’re giving that password away to every merchant you do business with,’ said Sporny.” Read more