Washington, DC – January 21, 2014 – The new release (2.1) of Stardog, a leading RDF database, hits new scalability heights with a 50-fold increase over previous versions. Using commodity server hardware at the $10,000 price point, Stardog can manage, query, search, and reason over datasets as large as 50B RDF triples.
The new scalability puts Stardog into contention for the largest enterprise projects in semantic technology, linked data, and other graph data domains. Stardog’s unique feature set at large scale, including reasoning and integrity constraint validation, means it will increasingly serve as the basis for complex software projects.
“We’re really happy about the new scalability of Stardog,” says Mike Grove, Clark & Parsia’s Chief Software Architect, “which makes us competitive with a handful of top graph database systems. And our feature set is unmatched by any of them.”
The scalability work also required engineering to remove garbage collection pauses during query evaluation, which the 2.1 release accomplishes. Along with a new hot-backup capability, this makes Stardog more mature and production-capable than ever before.
We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:
The W3C documentation for the Data Catalog Vocabulary (DCAT) says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web…. By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
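As an illustrative sketch of the kind of description DCAT enables (all URIs, titles, and file names below are hypothetical), a catalog and one of its datasets might be described in Turtle like this:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# A catalog that lists one dataset (hypothetical URIs)
<http://example.org/catalog> a dcat:Catalog ;
    dct:title "Example Open Data Catalog" ;
    dcat:dataset <http://example.org/dataset/population-2013> .

# The dataset itself, with a downloadable CSV distribution
<http://example.org/dataset/population-2013> a dcat:Dataset ;
    dct:title "Population figures, 2013" ;
    dcat:keyword "population", "census" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/files/population-2013.csv> ;
        dcat:mediaType "text/csv"
    ] .
```

An aggregator can harvest descriptions like this from many catalogs and offer federated search across all of them.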
Meanwhile, the RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
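A minimal sketch of a Data Cube observation in Turtle (the dataset and the dimension/measure properties here are hypothetical; a real cube would also declare a qb:DataStructureDefinition):

```turtle
@prefix qb: <http://purl.org/linked-data/cube#> .
@prefix ex: <http://example.org/> .

# A hypothetical statistical dataset
ex:unemployment a qb:DataSet .

# One observation: a single cell of the cube
ex:obs1 a qb:Observation ;
    qb:dataSet   ex:unemployment ;
    ex:refArea   ex:Thessaloniki ;  # dimension (hypothetical property)
    ex:refPeriod "2013" ;           # dimension (hypothetical property)
    ex:rate      9.7 .              # measure (hypothetical property)
```

Each observation fixes a value for every dimension, which is what makes the data linkable and comparable across datasets.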
Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”
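A small sketch of the kind of structure the Organization Ontology describes (all names and URIs below are hypothetical), in Turtle:

```turtle
@prefix org:  <http://www.w3.org/ns/org#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# A hypothetical organization and one of its units
ex:acme a org:Organization ;
    rdfs:label "ACME Corp." ;
    org:hasUnit ex:research .

ex:research a org:OrganizationalUnit ;
    rdfs:label "Research Division" ;
    org:unitOf ex:acme .

# A person who is a member of that unit
ex:alice a foaf:Person ;
    org:memberOf ex:research .
```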
JSON-LD has reached the status of being an official “Recommendation” of the W3C. JSON-LD provides yet another way for web developers to add structured data into web pages, joining RDFa. The W3C documentation says, “JSON is a useful data serialization and messaging format. This specification defines JSON-LD, a JSON-based format to serialize Linked Data. The syntax is designed to easily integrate into deployed systems that already use JSON, and provides a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Web services, and to store Linked Data in JSON-based storage engines.” This addition should be welcome news for Linked Data developers familiar with JSON and/or faced with systems based on JSON.
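As a sketch, here is a minimal JSON-LD document using FOAF terms (the person and homepage URIs are hypothetical). The @context maps ordinary JSON keys to IRIs, so plain JSON consumers can ignore it while Linked Data tools interpret the same document as RDF:

```json
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  },
  "@id": "http://example.org/people/alice",
  "name": "Alice",
  "homepage": "http://example.org/alice/"
}
```

This is the “smooth upgrade path” the specification describes: an existing JSON payload becomes Linked Data by adding a context, without disturbing code that already parses it.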
SemanticWeb.com caught up with the JSON-LD specification editors to get their comments…
Manu Sporny (Digital Bazaar), told us, “When we created JSON-LD, we wanted to make Linked Data accessible to Web developers that had not traditionally been able to keep up with the steep learning curve associated with the Semantic Web technology stack. Instead, we wanted people that were comfortable working with great solutions like JSON, MongoDB, and REST to be able to easily integrate Linked Data technologies into their day-to-day work. The adoption of JSON-LD by Google and schema.org demonstrates that we’re well on our way to achieving this goal.”
EU Initiative OpenCube partner consortium to develop software tools for publishing and reusing Linked Open Statistical Data
Thermi, Thessaloniki, Greece, January 14th, 2014 – A consortium of partners headed by the Centre for Research and Technology – Hellas (CERTH), recently launched the OpenCube project, an EU Initiative for Publishing and Enriching Linked Open Statistical Data for the Development of Data Analytics and Enhanced Visualization Services. The project intends to make Linked Open Statistical Data (LOSD) more accessible to publishers and users and to facilitate mining these data so as to enable the extraction of interesting and previously hidden insights. As part of the project, these innovative new technologies will be tested at four pilot sites: three government agencies from across Europe and a large financial institution.
Linked Statistical Data
Governments, organizations and companies are increasingly releasing their data for others to reuse. A major part of open data concerns statistics, such as population figures and economic and social indicators. Analysis of statistical open data can create value for citizens and businesses in areas ranging from business intelligence to epidemiological studies and evidence-based policy-making.
Recently, Linked Data emerged as a promising paradigm for using the web as a platform for data integration. Linked Statistical Data has been proposed as the most suitable way to publish open statistical data on the web. However, publishing and mining LOSD poses particular challenges and requires appropriate tools and methods.
The Open Data Institute has announced that Jeni Tennison has been awarded an OBE in the “Queen’s New Year Honours.”
For those not familiar, King George V created these honours on 4 June 1917, during World War I. The honour was intended to reward services to the war effort by civilians at home in the UK and servicemen in support positions. Today, they are awarded for prominent national or regional roles and to those making distinguished or notable contributions in their own specific areas of activity. The Order of the British Empire has five classes; the three most commonly conferred are Commander (CBE), Officer (OBE) and Member (MBE). Tennison is being given the OBE.
The official release reads:
Open Data Institute (ODI) founders, Sir Nigel Shadbolt and Sir Tim Berners-Lee, have warmly welcomed news that the organisation’s Technical Director, Jeni Tennison, has received an OBE in the Queen’s New Year Honours.
Tennison, who grew up in Cambridge, first trained as a psychologist before gaining a PhD in collaborative ontology development from the University of Nottingham.
Before joining the ODI, she was the technical architect and lead developer for legislation.gov.uk, which pioneered the use of open data APIs within the public sector, set a new standard in the publication of legislation on the web, and formed the basis of The National Archives’ strategy for bringing the UK’s legislation up to date as open, public data.
Speaking about today’s Honour, ODI Chairman, Sir Nigel Shadbolt said: “Jeni inspires affection, loyalty and admiration in all who know her. She has a special blend of deep technical know-how and an intuitive sense of what works in the world of the Web. In Jeni the ODI has a fantastic CTO and the open data community a great role model. It has been a privilege to work with her for over two decades and it is wonderful to see her recognised in this way.”
Before taking up her post at the ODI, Tennison worked with Shadbolt on the early linked data work on data.gov.uk, helping to engineer new standards for the publication of statistics as linked data; building APIs for geographic, transport and education data; and supporting the publication of public sector organograms as open data.
DATAVERSITY and Wilshire Conferences acquire SemanticWeb.com and Semantic Technology and Business Conference
[LOS ANGELES, OCT. 24, 2013] DATAVERSITY Education LLC, a subsidiary of Wilshire Conferences, Inc. today announced the acquisition of the publishing and conference assets of the Semantic Technology and Business (SemTechBiz) Conferences and the SemanticWeb.com web site, from Mediabistro, Inc.
“We are pleased to be engaging again with the community of semantic technology professionals through these two high quality media channels,” said Tony Shaw, CEO of DATAVERSITY. “Semantic technology is an ideal match with our DATAVERSITY subject matter portfolio, which is focused primarily on the areas of enterprise data and big data.”
An agreement has been reached for Eric Franzon, the current Editor of SemanticWeb.com and Program Chair for the SemTechBiz Conference, to continue his role.
Further details regarding future business plans will be made available over the coming weeks. For questions, please contact Tony Shaw at email@example.com.
DATAVERSITY™ provides resources for information technology (IT) professionals, executives and business managers to learn about the uses and management of data. Our worldwide community of practitioners, advisers and customers participates in, and benefits from, DATAVERSITY’s educational conferences, discussions, articles, blogs, webinars, certification, news feeds and more. Members enjoy access to a deep knowledge base of presentations, research and training materials, plus discounts off many educational resources including webinars and conferences. For more information please visit: www.DATAVERSITY.net or email: firstname.lastname@example.org.
SemanticWeb.com is the voice of Semantic Technology Business: Linked Data, Big Data, Smart Data. We cover news about how companies are making money, saving money, and solving Big Data challenges using Semantic Technologies. SemanticWeb.com is the leading source for news and information about Semantic Technologies and the people and companies working with them today. Industry news, case studies and practical applications, company and product announcements, job postings, opinion, webcasts, podcasts, and a community-run Q&A platform called “Answers” are all found at SemanticWeb.com.
The W3C announced today that three specifications have reached Recommendation status:
RDFa 1.1 Core – Second Edition
XHTML+RDFa 1.1 – Second Edition
HTML+RDFa 1.1
As the W3C website explains, “The last couple of years have witnessed a fascinating evolution: while the Web was initially built predominantly for human consumption, web content is increasingly consumed by machines which expect some amount of structured data. Sites have started to identify a page’s title, content type, and preview image to provide appropriate information in a user’s newsfeed when she clicks the ‘Like’ button. Search engines have started to provide richer search results by extracting fine-grained structured details from the Web pages they crawl. In turn, web publishers are producing increasing amounts of structured data within their Web content to improve their standing with search engines.”
“A key enabling technology behind these developments is the ability to add structured data to HTML pages directly. RDFa (Resource Description Framework in Attributes) is a technique that allows just that: it provides a set of markup attributes to augment the visual information on the Web with machine-readable hints.”
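As an illustrative sketch (using schema.org terms; the person and URL are hypothetical), RDFa attributes such as vocab, typeof, and property annotate ordinary HTML without changing how it renders:

```html
<!-- A hypothetical person marked up with RDFa and schema.org terms -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Jane Doe</span> is a
  <span property="jobTitle">Web developer</span>; see
  <a property="url" href="http://example.org/jane">her homepage</a>.
</div>
```

A crawler extracting this markup obtains the same structured facts a human reads on the page, which is how the richer search results described above are built.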
Manu Sporny, the editor of the HTML+RDFa 1.1 specification, told us that, “The release of RDFa 1.1 for HTML5 establishes it as the first HTML-based Linked Data technology to achieve recognition as an official Web standard by the World Wide Web Consortium.” Read more
Almost as soon as we announced the dates for our New York and European Semantic Technology & Business Conferences, we began hearing from members of the community that they would prefer to have more space between the two events.
We heard you and we have taken action! We are moving the Semantic Technology & Business Conference – Europe, to February of 2014, and the event will still take place in Berlin. We are finalizing details with the conference venue and will have more specifics to announce soon.
Those who have already registered, submitted speaking proposals, or signed up for sponsorship of the European event will receive separate communications specific to those discussions next week, and we look forward to seeing you in New York October 2-3, 2013 and Berlin early next year!
This fall, we will offer two events in the popular Semantic Technology & Business Conference series (#SemTechBiz).
Eric Franzon, Conference Chair, says, “We have received many excellent proposals for both the European (Berlin, 18-19 September) and the New York (October 1-3) events. Perhaps due to Summer holidays, we have also received more requests than usual for deadline extensions. As a result, we are re-opening the CFP process and will now accept submissions for both events until end of day, Monday, July 22.”
Sponsorship opportunities and registration options are also available at this time. Details for each event are below.
We hope to see you at one or both events!