Posts Tagged ‘linked data’

New Opps For Libraries And Vendors Open Up In BIBFRAME Transition

Opportunities are opening up in the library sector, both for the institutions themselves and for providers whose solutions and services can expand in that direction.

These vistas will be explored in a session hosted by Kevin Ford, digital project coordinator at the Library of Congress, at next week’s Semantic Technology & Business Conference in San Jose. The door is being opened by the Bibliographic Framework Initiative (BIBFRAME) that the LOC launched a few years ago. Libraries will be moving from the MARC standards, their lingua franca for representing and communicating bibliographic and related information in machine-readable form, to BIBFRAME, which models bibliographic data in RDF using semantic technologies.
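To make the transition concrete, here is a minimal sketch of BIBFRAME-style bibliographic data expressed as RDF, using Python's rdflib. The bf:Work/bf:Instance split and properties such as bf:title and bf:instanceOf come from the draft BIBFRAME vocabulary of the time; treat the exact names and the http://bibframe.org/vocab/ namespace as assumptions to check against the current model.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Draft BIBFRAME vocabulary namespace (assumed; the model was still evolving).
BF = Namespace("http://bibframe.org/vocab/")

g = Graph()
g.bind("bf", BF)

# BIBFRAME separates the conceptual Work from its concrete Instances.
work = URIRef("http://example.org/works/moby-dick")
instance = URIRef("http://example.org/instances/moby-dick-1851")

g.add((work, RDF.type, BF.Work))
g.add((work, BF.title, Literal("Moby-Dick; or, The Whale")))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))  # link the embodiment to the Work

print(g.serialize(format="turtle"))
```

Unlike a MARC record, the result is a set of linked, dereferenceable resources that other catalogs and applications can point into directly.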

Read more

A Look At LOD2 Project Accomplishments

If you’re interested in Linked Data, no doubt you’re planning to listen in on next week’s Semantic Web Blog webinar, Getting Started With The Linked Data Platform (register here), featuring Arnaud Le Hors, Linked Data Standards Lead at IBM and chair of the W3C Linked Data Platform WG and the OASIS OSLC Core TC. It may also be on your agenda to attend this month’s Semantic Web Technology & Business Conference, where Le Hors, Manu Sporny, Sandro Hawke, and others will be presenting Linked Data-focused sessions.

In the meantime, though, you might enjoy reviewing the results of the LOD2 Project, the European Commission co-funded effort whose four-year run, begun in 2010, aimed at advancing RDF data management; extracting, creating and enriching structured RDF data; interlinking data from different sources; and authoring, exploring and visualizing Linked Data. To that end, why not take a stroll through the recently released Linked Open Data – Creating Knowledge Out of Interlinked Data, edited by LOD2 Project participants Sören Auer of the Institut für Informatik III, Rheinische Friedrich-Wilhelms-Universität Bonn; Volha Bryl of the University of Mannheim; and Sebastian Tramp of the University of Leipzig?

Read more

Cognitive Computing And Semantic Technology: When Worlds Connect

In mid-July, Dataversity.net, the sister site of The Semantic Web Blog, hosted a webinar on Understanding The World of Cognitive Computing. Semantic technology naturally came up during the session, which was moderated by Steve Ardire, an advisor to cognitive computing, artificial intelligence, and machine learning startups. You can find a recording of the event here.

Here, you can find a more detailed discussion of the session at large, but below are some excerpts related to how the worlds of cognitive computing and semantic technology interact.

One of the panelists, IBM Big Data Evangelist James Kobielus, discussed his thinking around what’s missing from general discussions of cognitive computing to make it a reality. “How do we normally perceive branches of AI, and clearly the semantic web and semantic analysis related to natural language processing and so much more has been part of the discussion for a long time,” he said. When it comes to finding the sense in multi-structured – including unstructured – content that might be text, audio, images or video, “what’s absolutely essential is that as you extract the patterns you are able to tag the patterns, the data, the streams, really deepen the metadata that gets associated with that content and share that metadata downstream to all consuming applications so that they can fully interpret all that content, those objects…[in] whatever the relevant context is.”

Read more

Introduction to: Linked Data Platform

In its ongoing mission to lead the World Wide Web to its full potential, the W3C recently released the first specification for an entirely new kind of system. Linked Data Platform 1.0 defines a read-write Linked Data architecture, based on HTTP access to web resources described in RDF. To put that more simply, it proposes a way to work with pure RDF resources almost as if they were web pages.

Because the Linked Data Platform (LDP) builds upon the classic HTTP request and response model, and because it aligns well with things like REST, Ajax, and JSON-LD, mainstream web developers may soon find it much easier to leverage the power and benefits of Linked Data. It’s too early to know how big an impact it will actually have, but I’m confident that LDP is going to be an important bridge across the ever-shrinking gap between today’s Web of hyperlinked documents and the emerging Semantic Web of Linked Data. In today’s post, I’m going to introduce you to this promising newcomer by covering the most salient points of the LDP specification in simple terms. So, let’s begin with the obvious question…

 

What is a Linked Data Platform?

A Linked Data Platform is any client, server, or client/server combination that conforms in whole or in sufficient part to the LDP specification, which defines techniques for working with Linked Data Platform Resources over HTTP. That is to say, it allows Linked Data Platform Resources to be managed using HTTP methods (GET, POST, PUT, etc.). A resource is either something that can be fully represented in RDF, or something, such as a binary file, that may not have a useful RDF representation. When managed by an LDP, each is referred to as a Linked Data Platform Resource (LDPR), but further distinguished as either a Linked Data Platform RDF Source (LDP-RS) or a Linked Data Platform Non-RDF Source (LDP-NR).
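To give a feel for that request/response model, below is a minimal sketch, in Python with the requests library, of reading an LDP RDF Source and creating a new resource in an LDP container. The server URL and Turtle payload are hypothetical; the headers follow the patterns the LDP specification describes (type advertisement via Link, creation via POST with a Slug hint).

```python
import requests

BASE = "http://example.org/ldp"  # hypothetical LDP server

# Read an LDP RDF Source: an ordinary HTTP GET asking for Turtle.
resp = requests.get(f"{BASE}/container/", headers={"Accept": "text/turtle"})
print(resp.status_code)
# LDP servers advertise what a resource is via the Link header, e.g.
# <http://www.w3.org/ns/ldp#BasicContainer>; rel="type"
print(resp.headers.get("Link"))

# Create a new member resource by POSTing RDF to a container.
new_resource = """
@prefix dcterms: <http://purl.org/dc/terms/> .
<> dcterms:title "A brand new LDP resource" .
"""
resp = requests.post(
    f"{BASE}/container/",
    data=new_resource.encode("utf-8"),
    headers={
        "Content-Type": "text/turtle",
        "Slug": "new-resource",  # optional hint for the new resource's URI
    },
)
# On success the server replies 201 Created and puts the new URI in Location.
print(resp.status_code, resp.headers.get("Location"))
```

A binary upload (an LDP-NR) would look much the same, except the POST body and Content-Type would be the file's own media type rather than RDF.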

Read more

Spiderbook’s SpiderGraph: Linking Datasets To Help You Sell Better

Startup Spiderbook, which is building a linked dataset of companies and their partners, customers, suppliers, and the people involved in those deals, recently closed a $1 million seed round. The next-generation sales intelligence company was co-founded by CEO Alan Fletcher, previously a VP of product engineering, IT and operations at Oracle, and Aman Naimat, who has been working in the realm of CRM software since he was 19 years old and also has a background in natural language processing. Together with the rest of the core team, they put natural language processing and machine learning technology to work to help salespeople better connect the dots that explain business relationships, extracting information from unstructured text so they can sell more effectively.

State-of-the-art CRM, says Naimat, by itself doesn’t help salespeople sell. Since the days of Salesforce, CRM, which he worked on at IBM and Oracle, has remained the same thing, he says, “just evolving with better technology. But basically it is an internal-facing administration tool to give management visibility, not to help a salesperson sell or create business relationships.”

Built from billions of data elements extracted from everything from SEC filings to press releases to blogs to Facebook posts, Spiderbook’s SpiderGraph is taking on that challenge. It starts with the goal of helping salespeople understand who the right contact to talk to is, how to meet that person (through shared contacts, for instance), and who the competitors are, including those providing technology or other products already in use at the company. “We have created a graph of customers, competition, and suppliers for every company that is all interconnected,” he says.

Read more

New Open Source Graph Database Cayley Unveiled (Video – Part 2)

[Editor's note: This is Part 2 of a three-part series. See Part 1 and Part 3.]

Barak Michener, Software Engineer, Knowledge NYC, has posted on the Google Open Source Blog about “Cayley, an open source graph database”: “Four years ago this July, Google acquired Metaweb, bringing Freebase and linked open data to Google. It’s been astounding to watch the growth of the Knowledge Graph and how it has improved Google search to delight users every day. When I moved to New York last year, I saw just how far the concepts of Freebase and its data had spread through Google’s worldwide offices. I began to wonder how the concepts would advance if developers everywhere could work with similar tools. However, there wasn’t a graph available that was fast, free, and easy to get started working with. With the Freebase data already public and universally accessible, it was time to make it useful, and that meant writing some code as a side project.”

The post continues: “Cayley is a spiritual successor to graphd; it shares a similar query strategy for speed. While not an exact replica of its predecessor, it brings its own features to the table: RESTful API, multiple (modular) backend stores such as LevelDB and MongoDB, multiple (modular) query languages, easy to get started, simple to build on top of as a library, and of course open source. Cayley is written in Go, which was a natural choice. As a backend service that depends upon speed and concurrent access, Go seemed like a good fit.”
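For a taste of what querying Cayley looked like at launch, here is a hedged sketch in Python against its HTTP API. The endpoint path and the Gremlin-inspired JavaScript query reflect the early documentation; treat /api/v1/query/gremlin, the default port, and the g.V(...).Out(...) traversal as assumptions to verify against the repository.

```python
import requests

# Assumes a local Cayley instance with its HTTP server running,
# e.g. `cayley http --dbpath=testdata.nq` (flags may differ by version).
CAYLEY = "http://localhost:64210"  # Cayley's assumed default port

# Early Cayley shipped a Gremlin-inspired JavaScript query language:
# start at a node, walk outgoing "follows" edges, return every match.
query = 'g.V("alice").Out("follows").All()'

resp = requests.post(f"{CAYLEY}/api/v1/query/gremlin", data=query)
print(resp.json())  # e.g. {"result": [{"id": "bob"}, {"id": "carol"}]}
```

The modular backends mentioned in the post mean the same query runs whether the quads live in memory, LevelDB, or MongoDB.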

Read more

How to Build Your Own Knowledge Graph (Video – Part 1)

Straight out of Google I/O this week came some interesting announcements related to Semantic Web technologies and Linked Data. Included in the mix was a cool instructional video series about how to “Build a Small Knowledge Graph.” Part 1 was presented by Jarek Wilkiewicz, Knowledge Developer Advocate at Google (and SemTechBiz speaker).

Wilkiewicz fits a lot into the seven-and-a-half-minute piece, in which he presents a (sadly) hypothetical example of an online music store that he creates with his Google colleague Shawn Simister. During the example, he demonstrates the power and ease of leveraging multiple technologies, including the schema.org vocabulary (particularly the recently announced ‘Actions’), the JSON-LD syntax for expressing the machine-readable data, and the newly launched Cayley, an open source graph database (more on this in the next post in this series).
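To give a flavor of that approach, here is a minimal sketch in Python of the kind of JSON-LD a music store page might emit, pairing a schema.org MusicRecording with a ListenAction. The item and URLs are made up, and the exact shape of Action markup evolved after the announcement, so check schema.org for the current form.

```python
import json

# Hypothetical catalog entry for an online music store, expressed as
# JSON-LD with schema.org types (a MusicRecording plus a potential ListenAction).
track = {
    "@context": "http://schema.org",
    "@type": "MusicRecording",
    "name": "Example Song",
    "byArtist": {"@type": "MusicGroup", "name": "Example Band"},
    "potentialAction": {
        "@type": "ListenAction",
        "target": "http://example.org/player?track=example-song",
    },
}

# A page would embed this inside <script type="application/ld+json"> tags.
print(json.dumps(track, indent=2))
```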

Read more

Building The Scientific Knowledge Graph

Standard Analytics, which was a participant at the recent TechStars event in New York City, has a big goal on its mind: to organize the world’s scientific information by building a complete scientific knowledge graph.

The company’s co-founders, Tiffany Bogich and Sebastien Ballesteros, came to the conclusion that someone had to take on the job as a result of their own experience as researchers. A problem they faced, says Bogich, was being able to access all the information behind published results, as well as search and discover across papers. “Our thesis is that if you can expose the moving parts – the data, code, media – and make science more discoverable, you can really advance and accelerate research,” she says.

Read more

Semantic Tech Takes On Grants Funding, Portfolio Management

Whether the discussion is about public grants funding or government agencies’ portfolio management at large, semantic technology can help optimize departments’ missions and outcomes. Octo Consulting, whose engagement with the National Institutes of Health The Semantic Web Blog discussed here, sees the issue in terms of integrating and aggregating data across multiple pipes, vocabularies, and standards. The aim is to enable grant-makers or agency portfolio managers to get the right answers when they search: whether grants are being allocated to the right opportunities and executed properly, whether contracts are hired out to the right vendors, or whether licenses are being duplicated.

Those funding public grants, for instance, should also keep an eye on which projects private monies are going to, a job that may involve incorporating data in other formats from other public datasets, social media, and other sources in addition to their own information, in order to optimize decisions. “The nature of the public grant market is effectively understanding what the private grant market is doing and not doing the same thing,” says Octo executive VP Jay Shah.

Read more

Peer39 By Sizmek Launches Weather Targeting For Programmatic Buying


NEW YORK, June 4, 2014 (ADOTAS) – Sizmek Inc. (SZMK), a global open ad management company that delivers multiscreen campaigns, announced today that Peer39, its suite of data solutions, has made available new weather-targeting attributes for pre-bid buying platforms such as AppNexus. For the first time in the industry, advertisers, agencies and trading desks can target programmatic buys using a variety of pre-bid weather data attributes, including temperature ranges, presence of various weather events, current conditions, flu severity and, soon, pollen counts.

Read more
