Posts Tagged ‘linked data’

Building The Scientific Knowledge Graph

Standard Analytics, a participant in the recent TechStars event in New York City, has a big goal on its mind: to organize the world’s scientific information by building a complete scientific knowledge graph.

The company’s co-founders, Tiffany Bogich and Sebastien Ballesteros, came to the conclusion that someone had to take on the job as a result of their own experience as researchers. A problem they faced, says Bogich, was accessing all the information behind published results, as well as searching and discovering across papers. “Our thesis is that if you can expose the moving parts – the data, code, media – and make science more discoverable, you can really advance and accelerate research,” she says.

Read more

Semantic Tech Takes On Grants Funding, Portfolio Management

Whether the discussion is about public grants funding or government agencies’ portfolio management at large, semantic technology can help optimize departments’ missions and outcomes. Octo Consulting, whose engagement with the National Institutes of Health The Semantic Web Blog discussed here, sees the issue as one of integrating and aggregating data across multiple pipes, vocabularies and standards. That integration lets grant-makers and agency portfolio-managers get the right answers when they search to answer questions such as whether grants are being allocated to the right opportunities and executed properly, whether contracts are hired out to the right vendors, or whether licenses are being duplicated.

Those funding public grants, for instance, should keep an eye on what projects private monies are going to, as well – a job that may involve incorporating data in other formats from other public datasets, social media and other sources in addition to their own information, in order to optimize decisions. “The nature of the public grant market is effectively understanding what the private grant market is doing and not doing the same thing,” says Octo executive VP Jay Shah.

Read more

Peer39 By Sizmek Launches Weather Targeting For Programmatic Buying

NEW YORK, June 4, 2014 (ADOTAS) – Sizmek Inc. (SZMK), a global open ad management company that delivers multiscreen campaigns, announced today that Peer39, its suite of data solutions, has made available new weather targeting attributes for pre-bid buying platforms such as AppNexus. For the first time in the industry, advertisers, agencies and trading desks can target programmatic buys using a variety of pre-bid weather data attributes including temperature ranges, presence of various weather events, current conditions, flu severity and, soon, pollen counts. Read more
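The idea behind pre-bid weather targeting is simple: before bidding on an impression, check its weather attributes against the campaign's targeting rules. A minimal sketch of that check follows; the field names and record shapes are invented for illustration and are not Peer39's or AppNexus's actual API.

```python
# Hypothetical sketch of pre-bid weather targeting. None of these field
# names come from Peer39's real data feed; they only illustrate matching an
# impression's weather attributes against a campaign's targeting rules.

def matches_weather_targeting(impression, campaign):
    """Return True if the impression's weather attributes satisfy
    the campaign's pre-bid targeting rules."""
    temp = impression.get("temperature_f")
    lo, hi = campaign.get("temp_range", (float("-inf"), float("inf")))
    if temp is None or not (lo <= temp <= hi):
        return False
    # Require every targeted weather condition to be present on the impression.
    required = set(campaign.get("conditions", []))
    return required <= set(impression.get("conditions", []))

impression = {"temperature_f": 88, "conditions": ["sunny", "high_pollen"]}
campaign = {"temp_range": (80, 100), "conditions": ["sunny"]}
print(matches_weather_targeting(impression, campaign))  # True
```

In a real pre-bid platform this decision would run inside the bidder, with attributes such as flu severity or pollen counts treated the same way as the conditions above.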

Let Semantic Tech Help You Plan Your Summer Fun

Memorial Day brought with it the official kickoff of summer! Along with it comes everything from outdoor living to family vacations.

Semantic technology can be part of the fun, and over the next couple of days we’ll look at some ways it can chip in. Let’s start with food, since summer BBQs are likely top of mind. There are semantic solutions that can help on several fronts here. Edamam, for example, has built a food ontology that classifies ingredients, nutrients and foods, and applies it to recipes it scrapes from the web with the help of its natural language processing and machine learning functions.

As you’re breaking out the grill, you can break out the smartphone or iPad to search for grilled burger recipes that incorporate tomatoes in the 200-to-400-calorie range, for example, and take your pick of ranch salmon, Portobello mushroom, turkey with spiced tomato chutney or the classic beef with garden vegetables. “The nutrition information we append to recipes using natural language processing. This translates into people being able to filter recipes by diet/calories/allergies and be a bit more health-conscious this summer,” says Victor Penev, Edamam founder and CEO.
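That kind of search boils down to filtering structured recipe records by an ingredient and a nutrition range. The sketch below illustrates the idea with invented records and field names; it is not Edamam's actual data model or API.

```python
# Illustrative only: these recipe records and field names are made up to
# show filtering by ingredient and per-serving calorie range, the kind of
# query the ontology-annotated recipe data described above enables.

def find_recipes(recipes, ingredient, min_cal, max_cal):
    """Return names of recipes containing the ingredient within the calorie range."""
    return [
        r["name"]
        for r in recipes
        if ingredient in r["ingredients"] and min_cal <= r["calories"] <= max_cal
    ]

recipes = [
    {"name": "ranch salmon burger", "ingredients": ["salmon", "tomato"], "calories": 350},
    {"name": "classic beef burger", "ingredients": ["beef", "tomato"], "calories": 550},
]
print(find_recipes(recipes, "tomato", 200, 400))  # ['ranch salmon burger']
```

The value the ontology adds is upstream of this filter: normalizing scraped free-text ingredients and appending nutrition facts so that queries like this become possible at all.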

Read more

Data.gov Turns Five

Nextgov reports, “When government technology leaders first described a public repository for government data sets more than five years ago, the vision wasn’t totally clear. ‘I just didn’t understand what they were talking about,’ said Marion Royal of the General Services Administration, describing his first introduction to the project. ‘I was thinking, “this is not going to work for a number of reasons.”’ A few minutes later, he was the project’s program director. He caught on to and helped clarify that vision, and since then has worked with a small team to help shepherd online and aggregate more than 100,000 data sets compiled and hosted by agencies across federal, state and local governments.” Read more

Easing The Way To Linked Open Data In The Geosciences Domain

The OceanLink Project is bringing semantic technology to the geosciences domain – and it’s doing so without forcing that community to become experts in semantic tech in order to realize value from its implementation. Project lead Tom Narock of Marymount University, who recently participated in an online webinar on how semantics is being used to integrate ocean science data repositories, library holdings, conference abstracts, and funded research awards, noted that the effort is “tackling a particular problem in ocean sciences, but [can be part of a] more general change for researchers in discovering and integrating interdisciplinary resources, [when you] need to do federated and complex searches of available resources.”

The project has an interest in using more formal, stronger semantics – working with OWL, RDF and reasoners – but also acknowledges that a steeper learning curve comes with the territory. How to balance that with what the community is able to implement and use? The answer: “In addition to exposing our data using semantic technologies, a big part of OceanLink is building cyber infrastructure that will help lessen the burden on our end users.”
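The federated-search idea Narock describes can be reduced to a toy sketch: run the same query against several independent sources and merge the hits. The sources and records below are invented, and OceanLink's real stack (RDF, OWL, SPARQL endpoints, reasoners) is not modeled here; this only shows the shape of the problem.

```python
# Toy sketch of federated search across independent resource collections:
# the same term is queried against each source and the results are merged.
# The source names and records are invented for illustration.

def federated_search(term, sources):
    """Query every source with the same term and merge de-duplicated hits."""
    hits = []
    seen = set()
    for name, records in sources.items():
        for rec in records:
            if term.lower() in rec.lower() and rec not in seen:
                seen.add(rec)
                hits.append((name, rec))
    return hits

sources = {
    "data_repository": ["CTD cast, North Atlantic 2012"],
    "conference_abstracts": ["Modeling North Atlantic circulation"],
    "funded_awards": ["North Atlantic observing network grant"],
}
print(federated_search("north atlantic", sources))
```

The hard part OceanLink's cyber infrastructure takes on is everything this sketch glosses over: aligning vocabularies across sources and reasoning over them so one query means the same thing everywhere.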

Read more

The Importance of the Semantic Web To Our Cultural Heritage

Earlier this year The Semantic Web Blog reported that the Getty Research Institute has released the Art & Architecture Thesaurus (AAT) as Linked Open Data. One of the external advisors to its work was Vladimir Alexiev, who leads the Data and Ontology Management group at Ontotext and works on many projects related to cultural heritage.

Ontotext’s OWLIM family of semantic repositories supports large-scale knowledge bases of rich semantic information and powerful reasoning. The company, for example, delivered the first working implementation of search over CIDOC CRM, one of the rich ontologies for cultural heritage.

We caught up with Alexiev recently to gain some insight into semantic technology’s role in representing the cultural heritage sphere. Here are some of his thoughts about why it’s important for cultural institutions to adopt Linked Open Data and semantic technologies to enhance our digital understanding of cultural heritage objects and information:

Read more

IBM Watson is Going to the Dermatologist

Neal Ungerleider of Fast Company reports, “Big Blue wants you to get to know Watson better. And now that means you could soon encounter the super-computer in a most intimate place–your dermatologist’s office. Early last year IBM announced plans to invest $1 billion into its cognitive-computing platform Watson. That money included $100 million in venture capital for companies developing new ways to use Watson. Today IBM reveals that one of the companies they are investing in will bring artificial intelligence into dermatologists’ offices. Modernizing Medicine, a Florida-based firm which produces iPad software for electronic medical record-keeping, is partnering with IBM to integrate Watson into their software package for dermatologists.” Read more

How The Huffington Post Uses Semantic Technology

Alastair Reid of Journalism.co.uk reports, “In the last two and a half years, The Huffington Post has launched in 11 markets and doubled traffic to its sites from 45 million to 90 million unique monthly visitors. Jimmy Maymann, chief executive of The Huffington Post, shared those figures while speaking at the Reuters Institute Big Data for Media conference in London today. For Maymann, the key is using data to improve reader experience, a tactic that will bring both editorial and business benefits. ‘Because of how media has changed in the last five years with social and search we’ve gone from producing 500 to 1,600 news stories every day,’ Maymann told delegates, and editors have access to data that can inform newsroom decisions in a real-time analytics dashboard. The content is ‘optimised’ by data, he said, so the editor can understand reader habits better and respond accordingly.” Read more

WorldCat Releases 197 Million Nuggets of Linked Data

Richard Wallis of OCLC reports on his Data Liberate blog, “A couple of months back I spoke about the preview release of Works data from WorldCat.org. Today OCLC published a press release announcing the official release of 197 million descriptions of bibliographic Works. A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work. The description format is based upon some of the properties defined by the CreativeWork type from the Schema.org vocabulary. In the case of a WorldCat Work description, it also contains [Linked Data] links to individual, OCLC numbered, editions already shared from WorldCat.org.” Read more
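The shape Wallis describes is easy to sketch as JSON-LD: a Work entity typed with Schema.org's CreativeWork, carrying the edition-independent properties and links out to the individual OCLC-numbered editions. The identifiers and values below are invented placeholders, not real WorldCat records; only the use of Schema.org's `CreativeWork` type and `workExample` linking is drawn from the description above.

```python
import json

# A minimal JSON-LD-style sketch of a Work description: edition-independent
# fields (name, author, subjects) on a Schema.org CreativeWork, plus
# workExample links to individual editions. All identifiers are invented.

work = {
    "@context": "http://schema.org/",
    "@id": "http://worldcat.org/entity/work/id/0000000",  # hypothetical Work URI
    "@type": "CreativeWork",
    "name": "Example Title",
    "author": {"@type": "Person", "name": "Example Author"},
    "about": ["Example subject"],
    "workExample": [
        {"@id": "http://www.worldcat.org/oclc/0000001"},  # hypothetical edition
        {"@id": "http://www.worldcat.org/oclc/0000002"},
    ],
}
print(json.dumps(work, indent=2))
```

Putting the common description on the Work and linking down to editions is what lets 197 million descriptions deduplicate what would otherwise be repeated across every edition record.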
