Posts Tagged ‘semantic’

Off Semantic Tech Goes Into The Wild Blue Yonder

Look, up in the sky! It’s a bird, it’s a plane, no – it’s an Amazon drone!

Admittedly, commercial use of Amazon Prime Air’s unmanned aerial vehicles is still a little ways off. But such technology – along with other recent innovations, such as unmanned aircraft for crop-dusting or even Department of Homeland Security border applications, and future capabilities that extend auto-piloting in passenger airplanes by using autonomous machine logic to control airspace and the spacing between planes – needs to be accounted for in terms of its impact on the airspace. The Next-Generation Air Transportation System is taking on this change in the management and operation of the national air transportation system.

And semantic technology, natural language processing, and machine learning, too, will have a hand in helping out, by fostering collaboration among the agencies that will be working together to develop the system, including the Federal Aviation Administration, the U.S. Air Force, U.S. Navy, and the National Aeronautics and Space Administration, under the coordination of the Joint Planning and Development Office. These agencies will need to leverage each other’s knowledge and research, as well as ensure – as necessary – data privacy.

Read more

Open The Door To Bringing Linked Data To Real-World Projects

Linked Data: Structured Data on the Web is now available in a soft-cover edition. The book, authored by David Wood, Marsha Zaidman, Luke Ruth, and Michael Hausenblas, with a foreword by Tim Berners-Lee, aims to give mainstream developers without previous Linked Data experience practical techniques for integrating it into real-world projects, focusing on languages with which they’re likely to be familiar, such as JavaScript and Python.

Berners-Lee’s foreword gets the ball rolling in a big way, making the case for Linked Data and its critical importance in the web ecosystem: “The Web of hypertext-linked documents is complemented by the very powerful Linked Web of Data. Why linked? Well, think of how the value of a Web page is very much a function of what it links to, as well as the inherent value of the information within the Web page. So it is — in a way even more so — also in the Semantic Web of Linked Data. The data itself is valuable, but the links to other data make it much more so.”

The topic has clearly struck a chord, Wood believes, noting that today we are “at a point where structured data on the web is getting tremendous play” – from Google’s Knowledge Graph to the Facebook Open Graph protocol, to the growing use of the schema.org vocabulary, to the data still growing exponentially in the Linked Open Data Project, and more. “The industry is ready to talk about data and data processing in a way it never has been before,” he continues. There’s a growing realization that Linked Data fits in with and nicely complements technologies in the data science realm, such as machine learning algorithms and Hadoop, such that “you can suddenly build things you never could before with a tiny team, and that’s pretty cool….No technology is sufficient in and of itself but combine them and you can do really powerful things.”

Read more

Hello 2014 (Part 2)


Courtesy: Flickr/faul

Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.

Marco Neumann, CEO and co-founder, KONA, and director, Lotico: On the technology side I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.

Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists so it can be used by a broader group of data users.

Read more

Hello 2014


Courtesy: Flickr/Wonderlane

Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:

Phil Archer, Data Activity Lead, W3C:

For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.

I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion, along with the power and functionality of geographic information systems. The workshop brings together the W3C, the OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!

[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
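The dual nature Archer describes – plain JSON to a web developer, RDF to a semantic tool – can be seen in a small sketch. The document below is ordinary JSON, but its @context maps each key to a full IRI, which is what makes it Linked Data; the names and URLs are illustrative, not drawn from any particular dataset.

```python
import json

# A minimal JSON-LD document: the @context maps each plain key to an RDF
# term, so the same bytes serve both JSON tooling and RDF tooling.
doc_text = """
{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": {"@id": "http://schema.org/url", "@type": "@id"}
  },
  "@id": "http://example.org/people/alice",
  "name": "Alice",
  "homepage": "http://example.org/alice/"
}
"""

doc = json.loads(doc_text)

# Any JSON tooling can read the friendly keys directly...
print(doc["name"])  # Alice

# ...while an RDF processor can expand each key to a full IRI via the context.
context = doc["@context"]
expanded_name_predicate = context["name"]
print(expanded_name_predicate)  # http://schema.org/name
```

A JSON-LD processor would do this expansion (and much more) automatically; the point is simply that no separate RDF serialization is needed.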

Read more

Good-Bye 2013

Courtesy: Flickr/MadebyMark


As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.

Read on:

 

Phil Archer, Data Activity Lead, W3C:

The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.

I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.

Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier and allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.

Christine Connors, founder and information strategist, TriviumRLG:

What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.

There’s also been a huge increase in the attention of the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field, and via schema.org.

Read more

Dandelion’s New Bloom: A Family Of Semantic Text Analysis APIs

Dandelion, the service from SpazioDati whose goal is to deliver linked and enriched data for apps, has just introduced a new suite of products related to semantic text analysis.

Its dataTXT family of semantic text analysis APIs includes dataTXT-NEX, a named entity recognition API that links entities in the input sentence with Wikipedia and DBpedia and, in turn, with the Linked Open Data cloud; and dataTXT-SIM, an experimental semantic similarity API that computes the semantic distance between two short sentences. TXT-CL (now in beta) is a categorization service that classifies short sentences into user-defined categories, says SpazioDati CEO Michele Barbera.

“The advantage of the dataTXT family compared to existing text analysis tools is that dataTXT relies neither on machine learning nor NLP techniques,” says Barbera. “Rather it relies entirely on the topology of our underlying knowledge graph to analyze the text.” Dandelion’s knowledge graph merges together several Open Community Data sources (such as DBpedia) and private data collected and curated by SpazioDati. It’s still in private beta and not yet publicly accessible, though plans are to gradually open up portions of the graph in the future via the service’s upcoming Datagem APIs, “so that developers will be able to access the same underlying structured data by linking their own content with dataTXT APIs or by directly querying the graph with the Datagem APIs; both of them will return the same resource identifiers,” Barbera says. (See the Semantic Web Blog’s initial coverage of Dandelion here, including additional discussion of its knowledge graph.)
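To make the idea of named entity recognition with Linked Data identifiers concrete, here is a sketch of what consuming such a service might look like. The response shape below is hypothetical – the real dataTXT-NEX field names may differ – but it illustrates the core pattern: text spans (“spots”) linked to DBpedia resource URIs with confidence scores.

```python
import json

# Hypothetical response for a named-entity-extraction call in the style of
# dataTXT-NEX. The field names and values here are illustrative only; they
# show the idea of linking text spans to DBpedia/Linked Open Data resources.
sample_response = json.loads("""
{
  "annotations": [
    {"spot": "Mona Lisa", "start": 21, "end": 30,
     "uri": "http://dbpedia.org/resource/Mona_Lisa", "confidence": 0.91},
    {"spot": "Leonardo da Vinci", "start": 34, "end": 51,
     "uri": "http://dbpedia.org/resource/Leonardo_da_Vinci", "confidence": 0.97}
  ]
}
""")

def linked_entities(response, min_confidence=0.5):
    """Return (surface form, Linked Data URI) pairs above a confidence cutoff."""
    return [(a["spot"], a["uri"])
            for a in response["annotations"]
            if a["confidence"] >= min_confidence]

for spot, uri in linked_entities(sample_response):
    print(spot, "->", uri)
```

Because each entity resolves to a stable URI, the caller can follow those identifiers into DBpedia – and from there into the wider Linked Open Data cloud – rather than working with bare strings.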

Read more

GS1 Explores How Its Systems And Standards Will Fit Into The Semantic Web

GS1, the standards organization responsible for barcodes and the Global Data Synchronization Network (GDSN), among other things, is working to extend the standards used for identifying goods in the brick-and-mortar retail world into the web realm. As part of an overall conversation with its retail industry members about focusing more broadly on the digital space, it’s exploring how GS1 systems and standards fit into the semantic web.

What we call the UPC code in North America – and the GTIN (Global Trade Item Number) elsewhere – is a key part of the discussion. “The interesting thing is that the schema.org folks did some work to show how the GS1 system could be represented in their schemas,” says Bernie Hogan, Senior Vice President, Emerging Capabilities and Industries, who is spearheading GS1 US’s work in the online space. The schema.org/Product properties include quantitative values based on GTIN codes. “We started looking at that and started asking how we can build upon it.” (Barbara Starr’s recent SearchEngineLand column provides insight into the benefits today of using GS1 identifiers and structured data, including semantic markup on websites, for e-commerce.)
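A sketch may help show what representing the GS1 system in schema.org terms looks like in practice. The snippet below builds schema.org Product markup carrying a GTIN, of the kind a retailer might embed in a page as JSON-LD; gtin13 is a real schema.org property, but the product name, brand, and identifier values here are made up for illustration.

```python
import json

# Illustrative schema.org/Product markup carrying a GS1 GTIN. The gtin13
# property is part of the schema.org vocabulary; the values are invented.
product = {
    "@context": "http://schema.org/",
    "@type": "Product",
    "name": "Example Widget",
    "gtin13": "0614141123452",  # illustrative 13-digit GTIN, not a real product
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {"@type": "Offer", "price": "9.99", "priceCurrency": "USD"},
}

# Serialized, this is what would sit in a page's application/ld+json script tag
# for search engines to pick up.
markup = json.dumps(product, indent=2)
print(markup)
```

Because the GTIN is a globally unique GS1 identifier, markup like this lets a search engine recognize that two retailers’ pages describe the same physical product – the kind of linking the GS1 US effort is testing.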

GS1 US’s B2C Alliance is now working with its community to test some of the concepts around embedding the GS1 system in the web, and how that may positively or negatively impact how retailers’ and brand owners’ products are seen by search engines, says Hogan. “Everything with a unique identifier on the web is merging with Linked Open Data, and that gets pretty interesting, so we are working on a strategy to learn how we can fit into this whole thing,” he says, with the help of the GS1 Auto ID Labs research arm. “We ultimately want to make some standards recommendations, but first we are going through the process of testing and getting consensus and doing some research on how that might be done. But it is all about improving search and relevance for identifying products and finding related information.”

Read more

What’s Real In Personalized Mobile Healthcare

News came this week that a man accused of defrauding a financial group out of close to a million dollars over an investment in a fictional mobile medical device is scheduled to sign a plea agreement admitting that he committed mail fraud. The man, Howard Leventhal, had been promoting the Star Trek-influenced McCoy Home Health Care Tablet as a device that could instantaneously deliver detailed patient information to medical providers. (The product is discussed on the company’s still-surviving web site here.) He was arrested for the fraud in October and has been out on bail.

The interesting thing about this case is that the fraud he was perpetrating isn’t very far removed from reality regarding the role mobile apps and systems will play in healthcare. There are, of course, plenty of mobile apps already available that help users do everything from monitoring their hearts to recording their blood-oxygen levels during the night to see whether they have sleep apnea. Research and Markets, for example, says the wireless health market will grow to nearly $60 billion by 2018, up from $23.8 billion, with remote patient monitoring applications and diagnostics helping to drive the growth. But where things really get interesting is when mobile health takes on questions of the semantic interoperability of accumulated data, and assessing its meaning.

Read more

INSIGHT Centre Answers The Data Analytics Opportunity

Earlier this year, leading academics from well-known research centers in Ireland – the Digital Enterprise Research Institute (DERI), Clarity, Clique, 4C and TRIL – came together as part of the INSIGHT Centre for Data Analytics, with £42 million in government funding and £30 million in industry funding. With researchers based in a number of Ireland’s universities, including University College Dublin, Trinity College Dublin, NUI Galway, NUI Maynooth, Dublin City University, and University College Cork, INSIGHT “is Ireland’s answer to the data analytics opportunity that exists now,” says Stefan Decker, DERI director and founding director, Insight Galway.

Combining the different centers under one common brand is a way to build critical mass in the areas of Big Data and analytics, spanning categories including recommender systems, media and decision analytics, reasoning, personal sensing, connected health and discovery apps, and, of course, the semantic web and Linked Data, where DERI’s expertise lies, Decker notes. Industry partners are also a large part of the Centre, and will be able to avail themselves of the research expertise transitioning from the various research centers to INSIGHT. For example, the work DERI has been doing on W3C standards will continue under INSIGHT’s purview rather than DERI’s, Decker explains.

Read more

Senzari’s MusicGraph APIs Look To Enhance Musical Journeys


News came the other week that Senzari had announced MusicGraph, a knowledge engine for music. The Semantic Web Blog had a chance to learn a little more about what’s underway thanks to a chat with Senzari COO Demian Bellumio.

MusicGraph used to go by the geekier name of Adaptable Music Parallel Processing Platform – AMP3 for short – a platform for helping users control their Internet radio. “We wanted to put more knowledge into our graph. The idea was we have really cool and interesting data that is ontologically connected in ways never done before,” says Bellumio. “We wanted to put it out in the world and let the world leverage it, and MusicGraph is a production of that vision.”

Since its announcement earlier this month of the launch of the consumer version on the Firefox OS platform – which lets users make complex queries about music and then listen to the results – Senzari has submitted its technology to be offered for the iOS, Android, and Windows Mobile platforms. “You can ask anything you can think of in the music realm. We connect about 1 billion different points to respond to these queries,” he says. Its data covers more than twenty million songs, connected to millions of individual albums and artists across all genres, with extracted information on everything from keys to concepts derived from lyrics.
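Answering a query by connecting points like this is, at heart, a graph traversal. The toy sketch below is not Senzari’s actual data model – the artists, songs, and edge labels are invented – but it illustrates how a question such as “which jazz songs did this band record?” becomes a walk over ontologically connected data rather than a keyword lookup.

```python
# A toy music knowledge graph: (subject, predicate) -> list of objects.
# All names and edge labels here are invented for illustration.
graph = {
    ("The Example Band", "recorded"): ["Song A", "Song B"],
    ("Song A", "genre"): ["jazz"],
    ("Song A", "key"): ["C minor"],
    ("Song B", "genre"): ["rock"],
}

def answer(subject, predicate):
    """Follow one labeled edge in the graph; empty list if nothing is known."""
    return graph.get((subject, predicate), [])

# "Which jazz songs did The Example Band record?" - two hops through the graph.
jazz_songs = [song for song in answer("The Example Band", "recorded")
              if "jazz" in answer(song, "genre")]
print(jazz_songs)  # ['Song A']
```

Scaled up to a billion connections, the same idea – composing hops over typed relationships – is what lets a knowledge engine answer compound questions no flat song catalog could.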

Read more
