Derrick Harris of GigaOM reports, “Nervana Systems, a San Diego-based startup building a specialized system for deep learning applications, has raised a $3.3 million series A round of venture capital. Draper Fisher Jurvetson led the round, which also included Allen & Co., AME Ventures and Fuel Capital. Nervana launched in April with a $600,000 seed round. The idea behind the company is that deep learning — the advanced type of machine learning that is presently revolutionizing fields such as computer vision and text analysis — could really benefit from hardware designed specifically for the types of neural networks on which it’s based and the amount of data they often need to crunch.” Read more
A “Drupal++” platform for semantic web biomedical data – that’s how Sudeshna Das describes eXframe, a reusable framework for creating online repositories of genomics experiments. Das – who among other titles is affiliate faculty of the Harvard Stem Cell Institute – is one of the developers of eXframe, whose second-generation version leverages Stéphane Corlosquet’s RDF module for Drupal to produce, index (into an RDF store powered by the ARC2 PHP library), and publish semantic web data.
“We used the RDF modules to turn eXframe into a semantic web platform,” says Das. “That was key for us because it hid all the complexities of semantic technology.”
One instance of the platform today can be found in the repository for stem cell data that is part of the Stem Cell Commons, the Harvard Stem Cell Institute’s community for stem cell bioinformatics. But Das notes that the real value of the platform’s reusability – repositories built on it automatically produce Linked Data as well as a SPARQL endpoint – is that new repository instances can be stood up with much less effort. Working off Drupal as its base, eXframe has been customized to support biomedical data and to integrate biomedical ontologies and knowledge bases.
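For a sense of what such an endpoint makes possible, here is a minimal Python sketch of querying a repository’s SPARQL endpoint over the standard SPARQL protocol. The endpoint URL and the dcterms:title pattern are illustrative assumptions, not details of the actual Stem Cell Commons deployment.

```python
import requests

# Hypothetical endpoint URL; a real eXframe instance would publish its own.
ENDPOINT = "https://example.org/sparql"

# List a few experiments by title; the dcterms:title pattern is illustrative.
QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?experiment ?title
WHERE { ?experiment dcterms:title ?title . }
LIMIT 10
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()

# The SPARQL protocol returns JSON bindings, one per result row.
for row in resp.json()["results"]["bindings"]:
    print(row["experiment"]["value"], "-", row["title"]["value"])
```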
David Hirsch, co-founder of Metamorphic Ventures, recently wrote for TechCrunch, “There has been a lot of talk in the venture capital industry about automating the home and leveraging Internet-enabled devices for various functions. The first wave of this was the use of the smartphone as a remote control to manage, for instance, a thermostat. The thermostat then begins to recognize user habits and adapt to them, helping consumers save money. A lot of people took notice of this first-generation automation capability when Google bought Nest for a whopping $3.2 billion. But this purchase was never about Nest; rather, it was Google’s foray into the next phase of the Internet of Things.” Read more
XSB and SemanticWeb.com Partner In App Developer Challenge To Help Build The Industrial Semantic Web
An invitation was issued to developers at last week’s Semantic Technology and Business Conference: XSB and SemanticWeb.com have joined to sponsor the Semantic Web Developer Challenge, which asks participants to build sourcing and product life cycle management applications leveraging XSB’s PartLink Data Model.
XSB is developing PartLink as a project for the Department of Defense Rapid Innovation Fund. It uses semantic web technology to create a coherent Linked Data model for all part information in the Department of Defense’s supply chain – some 40 million parts strong.
“XSB recognized the opportunity to standardize and link together information about the parts, manufacturers, suppliers, materials, [and] technical characteristics using semantic technologies. The parts ontology is deep and detailed with 10,000 parts categories and 1,000 standard attributes defined,” says Alberto Cassola, VP of sales and marketing at XSB, a leading provider of master data management solutions to large commercial and government entities. PartLink’s Linked Data model, he says, “will serve as the foundation for building the industrial semantic web.”
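To make the idea concrete, here is a minimal rdflib sketch of what a single part record might look like as Linked Data. The namespace, class, and property names below are hypothetical stand-ins, not the actual PartLink vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace standing in for the PartLink vocabulary.
PARTS = Namespace("http://example.org/partlink/")

g = Graph()
g.bind("parts", PARTS)

part = PARTS["part/MS35338-44"]           # hypothetical part identifier
g.add((part, RDF.type, PARTS.Fastener))   # one of many part categories
g.add((part, PARTS.material, Literal("stainless steel")))
g.add((part, PARTS.manufacturer, PARTS["org/acme-hardware"]))
g.add((part, PARTS.threadDiameter, Literal("0.25 in")))

# Serialize as Turtle, the usual Linked Data exchange syntax.
print(g.serialize(format="turtle"))
```

Because every part, manufacturer, and material gets a stable URI, records like this can be linked across the supply chain rather than reconciled by string matching.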
Dan Gillick and Dave Orr recently wrote, “Language understanding systems are largely trained on freely available data, such as the Penn Treebank, perhaps the most widely used linguistic resource ever created. We have previously released lots of linguistic data ourselves, to contribute to the language understanding community as well as encourage further research into these areas. Now, we’re releasing a new dataset, based on another great resource: the New York Times Annotated Corpus, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages use of the metadata for all kinds of things, and has set up a forum to discuss related research.”
The blog continues with, “We recently used this corpus to study a topic called “entity salience”. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people — we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article. One way to approach the problem is to look for words that appear more often than their ordinary rates.”
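As a rough illustration of that frequency-based idea, the Python sketch below scores a document’s words by how far their in-document rate exceeds a background (corpus-wide) rate. The smoothing and threshold are illustrative choices, not the method used in the research described above.

```python
from collections import Counter

def salient_words(doc_tokens, background_counts, background_total, min_count=2):
    """Rank words by the ratio of their in-document rate to their
    background rate; high ratios suggest the word is salient here."""
    counts = Counter(doc_tokens)
    doc_len = len(doc_tokens)
    scores = {}
    for word, count in counts.items():
        if count < min_count:
            continue
        doc_rate = count / doc_len
        # Add-one smoothing so unseen background words don't divide by zero.
        bg_rate = (background_counts.get(word, 0) + 1) / (background_total + 1)
        scores[word] = doc_rate / bg_rate
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage; real background counts would come from a large corpus
# such as the NYTimes Annotated Corpus described above.
background = {"the": 60000, "said": 5000, "senator": 40}
doc = "the senator said the senator voted against the bill".split()
print(salient_words(doc, background, background_total=1_000_000))
```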
Derrick Harris of GigaOM reports, “When it comes to the future of web content… Yahoo might just have the inside track on innovation. I spoke recently with Ron Brachman, the head of Yahoo Labs, who’s now managing a team of 250 (and growing) researchers around the world. They’re experts in fields such as computational advertising, personalization and human-computer interaction, and they’re all focused on the company’s driving mission of putting the right content in front of the right people at the right time. However, Yahoo Labs’ biggest focus appears to be on machine learning, a discipline that can easily touch nearly every part of a data-driven company like Yahoo. Labs now has a dedicated machine learning group based in New York; some are working on what Brachman calls ‘hardcore science and some theory,’ while others are building a platform that will open up machine learning capabilities across Yahoo’s employee base.” Read more
These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business Conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME), which the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
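For a flavor of what RDF-based bibliographic description looks like, here is a short rdflib sketch loosely following BIBFRAME’s Work/Instance pattern. The resource URIs are hypothetical and the property choices simplified; the official BIBFRAME vocabulary should be consulted for the authoritative terms.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Simplified, loosely BIBFRAME-style modeling; URIs below are hypothetical.
BF = Namespace("http://bibframe.org/vocab/")
EX = Namespace("http://example.org/bib/")

g = Graph()
g.bind("bf", BF)

work = EX["work/moby-dick"]                # the abstract creative work
instance = EX["instance/moby-dick-1851"]   # a particular published form

g.add((work, RDF.type, BF.Work))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))     # tie the Instance to its Work
g.add((instance, BF.title, Literal("Moby-Dick; or, The Whale")))

print(g.serialize(format="turtle"))
```

Unlike a MARC record, each Work and Instance here is a web-addressable resource, which is what lets bibliographic data participate in the broader Linked Data ecosystem.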
Research Information recently reported, “Symplectic Limited, a software company specialising in developing, implementing, and integrating research information systems, has become the first DuraSpace Registered Service Provider (RSP) for the VIVO Project. VIVO is an open-source, open-ontology, open-process platform for hosting information about the interests, activities and accomplishments of scientists and scholars. VIVO aims to support open development and integration of science and scholarship through simple, standard semantic web technologies.” Read more
If you’re interested in Linked Data, no doubt you’re planning to listen in on next week’s Semantic Web Blog webinar, Getting Started With The Linked Data Platform (register here), featuring Arnaud Le Hors, Linked Data Standards Lead at IBM and chair of the W3C Linked Data Platform WG and the OASIS OSLC Core TC. It may also be on your agenda to attend this month’s Semantic Technology & Business Conference, where speakers including Le Hors, Manu Sporny, Sandro Hawke, and others will be presenting Linked Data-focused sessions.
In the meantime, though, you might enjoy reviewing the results of the LOD2 Project, the European Commission co-funded effort whose four-year run, begun in 2010, aimed at advancing RDF data management; extracting, creating, and enriching structured RDF data; interlinking data from different sources; and authoring, exploring, and visualizing Linked Data. To that end, why not take a stroll through the recently released Linked Open Data – Creating Knowledge Out of Interlinked Data, edited by LOD2 Project participants Sören Auer of the Institut für Informatik III at Rheinische Friedrich-Wilhelms-Universität Bonn; Volha Bryl of the University of Mannheim; and Sebastian Tramp of the University of Leipzig?