Posts Tagged ‘linked data’

Marketing and Communications, April 10, 2014 — The Texas A&M University Libraries is preparing to launch VIVO, a web-based community of research profiles intended to enhance faculty collaboration. Because VIVO provides standard research profiles for all university faculty and graduate students, researchers can discover and contact individuals with similar interests, whether they are across campus or at another VIVO institution. Data entry and standardization will continue through the summer, with the VIVO debut planned for Open Access Week in October 2014. Read more
Supply chain and product standards organization GS1 – which this week joined the World Wide Web Consortium (W3C) to contribute to work on improving global commerce and logistics – has now released the GTIN (Global Trade Item Number) Validation Guide. In the United States the GTIN, the GS1-developed numbering sequence within bar codes for identifying products at point of sale, is known as the Universal Product Code (UPC).
The guide is part of the organization’s effort to drive awareness of “the business importance of having accurate product information on the web,” says Bernie Hogan, Senior Vice President, Emerging Capabilities and Industries. The guide has the endorsement of players including Google, eBay and Walmart, which are among the retailers that require onboarding suppliers to use GTINs. They support GTIN’s extension further into the online space to help ensure more accurate and consistent product descriptions that link to images and promotions, and to help customers better find, compare and buy products.
“This is an effort to help clean up the data and get it more accurate,” he says. “That’s so foundational to any kind of commerce, because if it’s not the right number, you can have the best product data and images and the consumer still won’t find it.” The search hook, indeed, is the link between the work GS1 is doing to encourage the use of its standards online for improved product identification data and semantic web efforts such as schema.org, which The Semantic Web discussed with Hogan here.
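The Validation Guide itself is not reproduced here, but the core mechanism behind GTIN validation is public and simple: a mod-10 check digit over the preceding digits, weighted alternately 3 and 1 from the right. A minimal sketch of that check, covering the common GTIN lengths:

```python
def gtin_check_digit_ok(gtin: str) -> bool:
    """Validate the mod-10 check digit of a GTIN-8, -12 (UPC), -13 or -14.

    Excluding the final check digit, digits are weighted 3, 1, 3, 1, ...
    starting from the digit immediately to its left; the check digit must
    bring the weighted sum up to the next multiple of 10.
    """
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    check = digits[-1]
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits[:-1])))
    return (10 - total % 10) % 10 == check
```

For example, the GTIN-13 `4006381333931` passes (weighted sum 89, so the check digit must be 1), while changing any single digit makes it fail.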
Following the newly minted “recommendation” status of RDF 1.1, Michael C. Daconta of GCN has asked, “What does this mean for open data and government transparency?” Daconta writes, “First, it is important to highlight the JSON-LD serialization format. JSON is a very simple and popular data format, especially in modern Web applications. Furthermore, JSON is a concise format (much more so than XML) that is well-suited to represent the RDF data model. An example of this is Google adopting JSON-LD for marking up data in Gmail, Search and Google Now. Second, like the rebranding of RDF to ‘linked data’ in order to capitalize on the popularity of social graphs, RDF is adapting its strong semantics to other communities by separating the model from the syntax. In other words, if the mountain won’t come to Muhammad, then Muhammad must go to the mountain.” Read more
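Daconta’s point about JSON-LD’s concision is easy to see in miniature. The sketch below renders a single RDF statement as JSON-LD using the schema.org vocabulary; the `example.org` identifiers are placeholders, and the document remains ordinary JSON to any consumer that ignores the `@context`:

```python
import json

# One RDF statement -- <#me> schema:name "Jane Doe" -- as JSON-LD.
# The @context maps plain JSON keys onto IRIs, so ordinary JSON tooling
# keeps working while RDF-aware tooling can expand the document to triples.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "colleague": {"@id": "http://schema.org/colleague", "@type": "@id"},
    },
    "@id": "http://example.org/people#me",
    "name": "Jane Doe",
    "colleague": "http://example.org/people#you",
}

serialized = json.dumps(doc, indent=2)
parsed = json.loads(serialized)  # round-trips as plain JSON
```

This separation of model (RDF) from syntax (JSON) is exactly the adaptation Daconta describes.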
The CHAIN-REDS FP7 project, co-funded by the European Commission, has as its goal building a knowledge base of information, gathered both from dedicated surveys and from other web and document sources, covering well over half of the countries in the world, which it presents to visitors through geographic maps and tables. Earlier this month, its Knowledge Base and Semantic Search Engine for exploring the more than 30 million documents in its Open Access Document Repositories (OADR) and Data Repositories (DR) became available as a smartphone and tablet app, and the results of its Semantic Search Engine are now ranked according to the January 2014 Ranking Web of Repositories. Users conducting searches should therefore see results from the highest-ranked repositories first.
The project has its roots in using semantic web technologies to correlate the data used to write scientific papers with the documents themselves whenever possible, says Prof. Roberto Barbera of the Department of Physics and Astronomy at the University of Catania, and with applications that can be used to analyse the information. To these ends, the CHAIN-REDS consortium semantically enriched its repositories and built its search engine on the related Linked Data. Users in search of information can get papers and data and, if applications are available, can be redirected to them on the project’s cloud infrastructure to reproduce and reanalyse the data.
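The paper–dataset–application correlation described above is, at bottom, a graph traversal over Linked Data triples. The toy sketch below illustrates the idea; the IRIs and predicate names are invented for illustration and are not CHAIN-REDS’s actual vocabulary:

```python
# A toy triple store linking a paper to its dataset and the dataset to an
# application that can reanalyse it. All identifiers are hypothetical.
triples = [
    ("ex:paper42",  "ex:usesDataset",  "ex:dataset7"),
    ("ex:paper42",  "ex:hasAuthor",    "ex:someAuthor"),
    ("ex:dataset7", "ex:analysableBy", "ex:app3"),
]

def objects(subject, predicate):
    """Follow one predicate from a subject, as a semantic search might."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Starting from a paper, reach the application that can reanalyse its data.
datasets = objects("ex:paper42", "ex:usesDataset")
apps = [a for d in datasets for a in objects(d, "ex:analysableBy")]
```

Real systems do this with SPARQL over an RDF store, but the traversal pattern is the same.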
“There is a huge effort in the scientific world about the reproducibility of science,” says Barbera.
Sebastian Hellmann recently announced the formation of The DBpedia Association. According to the group’s charter, the Association was founded “with the goal to support DBpedia and the DBpedia Contributors Community.” The DBpedia Association is located in Leipzig, Germany, and the group’s full charter can be read here.
The goals of the new Association are outlined as follows: “Coordinate the development efforts in the DBpedia community and language chapters. Support the maintenance of DBpedia resources with own staff and resources. Serve as a contact point and establish co-operations with other like-minded projects and organizations. Acquire and manage funds for the DBpedia Community. Support and manage the organisation of DBpedia Community meetings. Provide education and training on DBpedia. Uphold a free, public data infrastructure to exploit this wealth of data for the general public. Mediate commercial services of associated partners.” Read more
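The “free, public data infrastructure” the charter mentions is already visible in DBpedia’s public SPARQL endpoint. As a sketch, the snippet below only constructs a request URL against that endpoint (the query is illustrative; actually sending it requires network access):

```python
from urllib.parse import urlencode

# DBpedia exposes its extracted data at a public SPARQL endpoint.
ENDPOINT = "http://dbpedia.org/sparql"

# Illustrative query: fetch the English abstract for Leipzig, the
# Association's home city.
query = """PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Leipzig> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}"""

# Build the GET request URL; an HTTP client would fetch this to run
# the query and receive JSON results.
url = ENDPOINT + "?" + urlencode({"query": query,
                                  "format": "application/json"})
```

Any HTTP client can then issue the request, which is what makes the infrastructure usable by the general public.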
Anthony Clark of Gainesville.com reports, “A Gainesville startup company received a $1.1 million federal grant to develop a Web portal for chemists to better share information over the next generation of the World Wide Web. Neil Ostlund, CEO of Chemical Semantics, said he learned of the grant from the Department of Energy on Friday. Chemical Semantics is developing a portal and software for computational chemists to publish and find data over the semantic web, also referred to as Web 3.0 or the web of data.”
Clark continues, “Chemical Semantics has created the semantic web vocabulary — or ontology — for computational chemistry called the Gainesville Core.” Read more
Juan Carlos Perez of InfoWorld reports, “Microsoft will add new software, developer tools and capabilities to Office 365 in an attempt to make the cloud applications suite a ‘smarter’ product that is better at helping people interact at work. At its SharePoint Conference, which kicks off in Las Vegas on Monday, Microsoft will demonstrate a new machine learning application code-named Oslo designed to understand how employees work in Office 365 and with whom. Oslo will base its insights on a variety of signals gleaned from how people use Office 365’s components, like Exchange Online for email, OneDrive for Business for storage, Lync Online for IM and video conferencing, SharePoint Online for team collaboration and Yammer for enterprise social networking. Microsoft calls this information the Office Graph.” Read more