Dominik Schweiger, Zlatko Trajanoski and Stephan Pabinger recently wrote, “Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers.”
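The paper speaks in terms of visual graphs, but under the hood everything ends up as SPARQL against a public endpoint. As a rough illustration only, here is a minimal Python sketch (using SPARQLWrapper) of the kind of generated query such a builder might execute; the endpoint URL and the query itself are illustrative assumptions, not taken from the paper:

```python
# A hedged sketch of the kind of SPARQL a visual builder like SPARQLGraph
# might generate and run. The endpoint URL is an assumption (EBI RDF
# platform endpoints vary by dataset), as is the query.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://www.ebi.ac.uk/rdf/services/sparql")
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?subject ?label
    WHERE { ?subject rdfs:label ?label . }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

# Each binding is one row of the SELECT result.
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["subject"]["value"], "->", row["label"]["value"])
```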
WASHINGTON, D.C. – SYSTAP, LLC today announced that Syapse, the leading provider of software for enabling precision medicine, has selected Bigdata® as its backend semantic database. Syapse, which launched the Precision Medicine Data Platform in 2011, will use the Bigdata® database as a key element of its semantic platform. The Syapse Precision Medicine Data Platform integrates medical data, omics data, and biomedical knowledge for use in the clinic. Syapse software is delivered as a cloud-based SaaS, enabling access from anywhere with an internet connection, regular software updates and new features, and online collaboration and delivery of results, with minimal IT resources required. Syapse applications comply with HIPAA/HITECH, and data in the Syapse platform are protected according to industry standards.
Syapse’s Precision Medicine Data Platform features a semantic layer that provides powerful data modeling, query, and integration functionality. According to Syapse CTO and Co-Founder, Tony Loeser, Ph.D., “We have adopted SYSTAP’s graph database, Bigdata®, as our RDF store. Bigdata’s exceptional scalability, query performance, and high-availability architecture make it an enterprise-class foundation for our semantic technology stack.”
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business Conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME), which the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
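To make “models bibliographic data in RDF” concrete, here is a minimal Python sketch (using rdflib) of a record shaped along BIBFRAME’s Work/Instance split. The exact properties are illustrative rather than a definitive BIBFRAME record; consult the LOC vocabulary for the real terms:

```python
# A hedged sketch of bibliographic data in RDF, loosely following
# BIBFRAME's Work/Instance modeling. Treat the properties as
# illustrative, not authoritative.
from rdflib import Graph

turtle = """
@prefix bf: <http://id.loc.gov/ontologies/bibframe/> .
@prefix ex: <http://example.org/> .

ex:work1 a bf:Work ;          # the abstract creative work
    bf:title ex:title1 .

ex:title1 a bf:Title ;
    bf:mainTitle "Moby-Dick" .

ex:instance1 a bf:Instance ;  # a concrete publication of the work
    bf:instanceOf ex:work1 .
"""

g = Graph()
g.parse(data=turtle, format="turtle")
print(len(g), "triples parsed")
```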
Among the mainstream content management systems, you could make the case that Drupal was the first open source semantic CMS out there. At next week’s Semantic Technology & Business Conference, software engineer Stéphane Corlosquet of Acquia, which provides enterprise-level services around Drupal, and Bock & Co. principal Geoffrey Bock will discuss Drupal’s role as a semantic CMS in this session, and how it can help organizations and institutions that want to enrich their data with more semantics: for search engine optimization, yes, but also for more advanced use cases.
“It’s very easy to embed semantics in Drupal,” says Bock, who analyzes and consults on digital strategies for content and collaboration. At its core, Drupal can manage semantic entities, and the upcoming version 8 takes things to a new level by including schema.org as a foundational data type. “It will become increasingly easier for developers to build and deliver semantically enriched environments,” he says, which can drive a better experience both for clients and stakeholders.
Corlosquet, who has taken a leadership role in building semantic web capabilities into Drupal’s core and maintains the RDF module in Drupal 7 and 8, explains that the closer embrace of schema.org in Drupal is, for starters, a help when it comes to SEO and user engagement. Google also uses content marked up with schema.org to power products like Rich Snippets and Google Now.
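As a concrete (and hypothetical) example of what such markup boils down to, here is a schema.org description of an article expressed as JSON-LD and loaded with rdflib in Python. Drupal’s RDF module historically emits RDFa rather than JSON-LD, but both map to the same schema.org terms; the names below are invented for illustration:

```python
# A hedged sketch of schema.org markup of the sort a semantic CMS can
# emit. JSON-LD is used for brevity; assumes rdflib >= 6 for the
# bundled JSON-LD parser.
from rdflib import Graph

jsonld = """
{
  "@context": { "@vocab": "http://schema.org/" },
  "@type": "Article",
  "headline": "Drupal as a Semantic CMS",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
"""

g = Graph()
g.parse(data=jsonld, format="json-ld")

# The markup becomes ordinary RDF triples that search engines (or any
# other consumer) can interpret.
for s, p, o in g:
    print(s, p, o)
```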
If you’re interested in Linked Data, no doubt you’re planning to listen in on next week’s Semantic Web Blog webinar, Getting Started With The Linked Data Platform (register here), featuring Arnaud Le Hors, Linked Data Standards Lead at IBM and chair of the W3C Linked Data Platform WG and the OASIS OSLC Core TC. It also may be on your agenda to attend this month’s Semantic Technology & Business Conference, where speakers including Le Hors, Manu Sporny, Sandro Hawke, and others will be presenting Linked Data-focused sessions.
In the meantime, though, you might enjoy reviewing the results of the LOD2 Project, the European Commission co-funded effort whose four-year run, begun in 2010, aimed at advancing RDF data management; extracting, creating and enriching structured RDF data; interlinking data from different sources; and authoring, exploring and visualizing Linked Data. To that end, why not take a stroll through the recently released Linked Open Data – Creating Knowledge Out of Interlinked Data, edited by LOD2 Project participants Sören Auer of the Institut für Informatik III, Rheinische Friedrich-Wilhelms-Universität; Volha Bryl of the University of Mannheim; and Sebastian Tramp of the University of Leipzig?
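If you want a feel for the Linked Data Platform pattern before the webinar, the core of it is plain HTTP against RDF resources. Here is a minimal Python sketch; the resource URL is a placeholder, and a real LDP server also supports creating resources in containers via POST:

```python
# A hedged sketch of the basic LDP read interaction: fetch a resource
# with content negotiation. The URL is a placeholder, not a real server.
import requests

resource = "http://example.org/ldp/container/member1"  # placeholder
response = requests.get(resource, headers={"Accept": "text/turtle"})

print(response.status_code)
print(response.headers.get("Link"))  # LDP servers advertise types here
print(response.text)                 # the resource serialized as Turtle
```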
Is SPARQL the SQL for NoSQL? The question will be discussed at this month’s Semantic Technology & Business Conference in San Jose by Arthur Keen, VP of solution architecture at startup SPARQL City.
It’s not the first time that the industry has considered common database query languages for NoSQL (see this story at our sister site Dataversity.net for some perspective on that). But as Keen sees it, SPARQL has the legs for the job. “What I know about SPARQL is that for every database [SQL and NoSQL alike] out there, someone has tried to put SPARQL on it,” he says, whereas other common query language efforts may be limited in database support. A factor in SPARQL’s favor is query portability across NoSQL systems. Additionally, “you can achieve much higher performance using declarative query languages like SPARQL because they specify the ‘What’ and not the ‘How’ of the query, allowing optimizers to choose the best way to implement the query,” he explains.
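Keen’s “what, not how” point is easiest to see in an actual query. The SPARQL below states only which pattern to match; join order, indexes, and access paths are left entirely to the engine’s optimizer. A minimal Python sketch using SPARQLWrapper against DBpedia, chosen here only because it is a public endpoint:

```python
# A hedged sketch of a declarative SPARQL query: it states the pattern
# to match and lets the engine decide how to execute it.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?population
    WHERE {
        ?city a dbo:City ;
              dbo:country <http://dbpedia.org/resource/Japan> ;
              dbo:populationTotal ?population .
        FILTER (?population > 1000000)
    }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["population"]["value"])
```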
In mid-July Dataversity.net, the sister site of The Semantic Web Blog, hosted a webinar on Understanding The World of Cognitive Computing. Semantic technology naturally came up during the session, which was moderated by Steve Ardire, an advisor to cognitive computing, artificial intelligence, and machine learning startups. You can find a recording of the event here.
Here, you can find a more detailed discussion of the session at large, but below are some excerpts related to how the worlds of cognitive computing and semantic technology interact.
One of the panelists, IBM Big Data Evangelist James Kobielus, discussed his thinking around what’s missing from general discussions of cognitive computing to make it a reality. “How do we normally perceive branches of AI, and clearly the semantic web and semantic analysis related to natural language processing and so much more has been part of the discussion for a long time,” he said. When it comes to finding the sense in multi-structured – including unstructured – content that might be text, audio, images or video, “what’s absolutely essential is that as you extract the patterns you are able to tag the patterns, the data, the streams, really deepen the metadata that gets associated with that content and share that metadata downstream to all consuming applications so that they can fully interpret all that content, those objects…[in] whatever the relevant context is.”
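In RDF terms, the tag-and-share pattern Kobielus describes might look something like the following minimal Python sketch: metadata extracted from a content object is attached as triples and serialized for downstream consumers. All URIs and properties here are invented for illustration:

```python
# A hedged sketch of tagging extracted patterns as RDF metadata.
# Every URI and property below is illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/")
doc = URIRef("http://example.org/content/video-42")

g = Graph()
g.add((doc, RDF.type, EX.Video))
g.add((doc, DCTERMS.subject, EX.PrecisionMedicine))  # extracted topic tag
g.add((doc, EX.detectedEntity, Literal("insulin")))  # extracted entity
g.add((doc, EX.confidence, Literal(0.87)))           # extraction score

# Downstream applications consume the shared metadata as serialized RDF.
print(g.serialize(format="turtle"))
```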
At the upcoming Semantic Technology & Business Conference in San Jose, Dr. Terry Roach, principal of CAPSICUM Business Architects, and Dr. Dean Allemang, principal consultant at Working Ontologist, will host a session on A Semantic Model for an Electronic Health Record (EHR). It will focus on Australia’s electronic-Health-as-a-Service (eHaaS) national platform for personal electronic health records, provided by the CAPSICUM semantic framework for strategically aligned business architectures.
Roach and Allemang participated in an email interview with The Semantic Web Blog to preview the topic:
The Semantic Web Blog: Can you put the work you are doing on the semantic EHR model in context: How does what Australia is doing with its semantic framework compare with how other countries are approaching EHRs and healthcare information exchange?
Roach and Allemang: The eHaaS project that we have been working on is an initiative of Telstra, a large, traditional telecommunications provider in Australia. Its Telstra Health division, which is focused on health-related software, has for the past two years embarked on a set of strategic investments in the electronic health space. Since early 2013 it has acquired and/or established strategic partnerships with a number of local and international healthcare software providers, ranging from hospital information systems and mobile health applications to remote patient monitoring systems, personal health records, integration platforms and health analytics suites.
At the core of these investments is a strategy to develop a platform that captures and maintains diverse health-related interactions in a consolidated lifetime health record for individuals. The eHaaS platform facilitates interoperability and integration of several health service components over a common secure authentication service, data model, infrastructure, and platform. Starting from a base of stand-alone, vertical applications that manage fragmented information across the health spectrum, the eHaaS platform will establish an integrated, continuously improving, shared healthcare data platform that will aggregate information from a number of vertical applications, as well as an external gateway for standards-based eHealth messages, to present a unified picture of an individual’s health care profile and history.
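The aggregation idea is easy to picture in RDF, where records keyed by a shared patient identifier merge cleanly. Here is a minimal Python sketch; the vocabulary and URIs are invented for illustration and are not the CAPSICUM model:

```python
# A hedged sketch of aggregating health data from separate vertical
# applications into one unified record. Vocabulary is illustrative.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/health/")
patient = URIRef("http://example.org/patient/123")  # shared identifier

hospital = Graph()    # from a hospital information system
hospital.add((patient, EX.diagnosis, Literal("Type 2 diabetes")))

monitoring = Graph()  # from a remote patient monitoring system
monitoring.add((patient, EX.bloodGlucose, Literal("7.1 mmol/L")))

# Merging the graphs yields a unified view of the individual's record.
record = hospital + monitoring
for s, p, o in record:
    print(s, p, o)
```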
There’s a chance to learn everything you should know about RDF to get the most value from the W3C standard model for data interchange at the 10th annual Semantic Technology & Business Conference in San Jose next month. David Booth, senior software architect at Hawaii Resource Group, will be hosting a session explaining how the standard’s unique capabilities can have a profound effect on projects that seek to connect data coming in from multiple sources.
“One of the assumptions that people make looking at RDF is that it is analogous to any other data format, like JSON or XML,” says Booth, who is working on a contract the Hawaii Resource Group has with the U.S. Department of Defense to use semantic web technologies to achieve healthcare data interoperability. “It isn’t.” RDF, he explains, isn’t just another data format – rather, it’s about the information content that is encoded in the format.
“The focus is different. It is on the meaning of data vs. the details of syntax,” he says.
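Booth’s point can be demonstrated in a few lines: the same statement serialized as Turtle and as RDF/XML parses to the identical graph. A minimal Python sketch using rdflib:

```python
# Two different serializations of the same statement yield the same
# RDF graph: the information content, not the syntax, is what matters.
from rdflib import Graph
from rdflib.compare import isomorphic

turtle = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
"""

rdfxml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/">
  <rdf:Description rdf:about="http://example.org/alice">
    <ex:knows rdf:resource="http://example.org/bob"/>
  </rdf:Description>
</rdf:RDF>
"""

g1 = Graph().parse(data=turtle, format="turtle")
g2 = Graph().parse(data=rdfxml, format="xml")

print(isomorphic(g1, g2))  # True: same content, different formats
```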
New York’s Tektree Systems is in need of a Big Data Architect. The job description states, “Hadoop Data Architect with both hands-on Big Data and relational experience and deep knowledge of physical data modeling, data organization and storage technology, experienced with high volumes and able to architect and implement multi-tier solutions using the right technology in each tier, based on fit. Required Skills and Qualifications:
- Design and development of data models for a new HDFS Master Data Reservoir and one or more relational or object Current Data environments
- Design of optimum storage allocation for the data stores in the architecture
- Development of data frameworks for code implementation and testing across the program
- Knowledge of and experience with RDF and other Semantic technologies
- Participation in code reviews to assure that developed and tested code conforms with the design and architecture principles
- QA and testing of modules/applications/interfaces
- End-to-end project experience through to completion, including supervising turnover to Operations staff
- Preparation of documentation of data architecture, designs and implemented code”