Posts Tagged ‘Wright State University’

Hello 2014 (Part 2)

Courtesy: Flickr/faul

Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.

Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side, I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.

Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.

Read more

Hello 2014

Courtesy: Flickr/Wonderlane

Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:

Phil Archer, Data Activity Lead, W3C:

For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.

I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use Geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of Geospatial Information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!

[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
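To illustrate the point about JSON-LD, here is a minimal sketch of what such a document looks like. The names, IRIs and values are invented for illustration; the key idea is that an ordinary Web developer can consume the document with any plain JSON parser, while the `@context` tells a JSON-LD processor how each key maps to an RDF property.

```python
import json

# A minimal (hypothetical) JSON-LD document: ordinary JSON to most
# consumers, while the @context maps each key to an RDF property IRI.
doc = """
{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": {"@id": "http://schema.org/url", "@type": "@id"}
  },
  "@id": "http://example.org/people/alice",
  "name": "Alice",
  "homepage": "http://example.org/alice"
}
"""

data = json.loads(doc)

# Plain-JSON view: just keys and values.
print(data["name"])              # Alice

# RDF view: the context says "name" means http://schema.org/name,
# so the very same bytes double as Linked Data.
print(data["@context"]["name"])  # http://schema.org/name
```

The same file works in both worlds, which is exactly why Archer expects more and more JSON to "actually be JSON-LD."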

Read more

Good-Bye 2013

Courtesy: Flickr/MadebyMark

As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.

Read on:

Phil Archer, Data Activity Lead, W3C:

The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.

I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.

Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier and allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.

Christine Connors, founder and information strategist, TriviumRLG:

What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.

There’s also been a huge increase in the attentions of the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field and via schema.org.

Read more

Help For HealthCare: Mapping Unstructured Clinical Notes To ICD-10 Coding Schemes

Photo of Amit Sheth

The health care industry – and the American citizenry at large – has been focused of late on the problems surrounding the implementation of the Affordable Care Act, the federal website’s issues foremost among them. But believe it or not, there are other things the healthcare industry needs to prepare for, among them the October 1, 2014 deadline for replacing the World Health Organization’s International Statistical Classification of Diseases and Related Health Problems ICD-9 code sets, used to report medical diagnoses and inpatient procedures, with ICD-10 code sets – a HIPAA (Health Insurance Portability and Accountability Act) code set requirement. The roughly 14,000 diagnosis codes in ICD-9 will grow to some 68,000 in ICD-10.

Natural language processing has had the primary role in many solutions aimed at transforming large volumes of unstructured clinical data into information that healthcare IT application vendors and their hospital customers can leverage. But there’s an argument being made that understanding the unstructured text of clinical notes, which contains a huge stash of information, and then mapping it to fine-grained ICD-10 coding schemes requires a combination of NLP, advanced linguistics, machine learning and semantic web technologies. Amit Sheth, professor of computer science and engineering at Wright State University and director of the Kno.e.sis Center, is making that argument. (See our story yesterday for a look at how the NLP market is evolving overall, including in healthcare.)

“ICD-10 has thousands of codes with millions of possible permutations and combinations. A rule-based approach is not effective to cover the huge number of ICD-10 codes,” Sheth says. Extracting the correct concepts, identifying the relationships between these concepts and mapping them to the correct code is a major challenge, with codes often formed from information drawn from various sections of a clinical document that is itself subject to individual physicians’ styles of recording information, among other factors.
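The extract-concepts-then-map pipeline described above can be sketched in miniature. This is a hypothetical toy, not Sheth’s system: the lexicon, the note and the surface matching stand in for real NLP concept extraction and semantic knowledge bases, and while the two ICD-10 codes shown are real, the mapping here is invented for illustration.

```python
# Hypothetical sketch: extract clinical concepts from free text, then
# map them to candidate ICD-10 codes. Real systems replace the naive
# string matching below with NLP, machine learning and ontologies.

# Toy lexicon mapping surface phrases to concept identifiers (invented).
CONCEPT_LEXICON = {
    "type 2 diabetes": "diabetes_mellitus_type_2",
    "hypertension": "essential_hypertension",
}

# Toy mapping from concepts to ICD-10 codes (codes are real examples).
CONCEPT_TO_ICD10 = {
    "diabetes_mellitus_type_2": "E11.9",
    "essential_hypertension": "I10",
}

def extract_concepts(note: str) -> list[str]:
    """Naive surface matching standing in for real concept extraction."""
    text = note.lower()
    return [cid for phrase, cid in CONCEPT_LEXICON.items() if phrase in text]

def map_to_icd10(note: str) -> list[str]:
    """Map each extracted concept to its candidate ICD-10 code."""
    return [CONCEPT_TO_ICD10[c] for c in extract_concepts(note)]

note = "Patient with type 2 diabetes and a history of hypertension."
print(map_to_icd10(note))  # ['E11.9', 'I10']
```

The point of the sketch is the scale problem: a dictionary like this cannot enumerate the millions of ICD-10 permutations Sheth describes, which is why a purely rule-based approach breaks down.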

Read more

Semantic Tech Outlook: 2013

Photo Courtesy: Flickr/Lars Plougmann

In recent blogs we’ve discussed where semantic technologies have gone in 2012, and a bit about where they will go this year (see here, here and here).

Here are some final thoughts from our panel of semantic web experts on what to expect to see as the New Year rings in:

John Breslin, lecturer at NUI Galway, researcher and unit leader at DERI, creator of SIOC, and co-founder of Technology Voice and StreamGlider:

Broader deployment of the schema.org terms is likely. In the study by Mühleisen and Bizer in July this year, we saw Open Graph Protocol, DC, FOAF, RSS, SIOC and Creative Commons still topping the ranks of semantic vocabularies in use. In 2013 and beyond, I expect to see schema.org jump to the top of that list.

Christine Connors, Chief Ontologist, Knowledgent:

I think we will see an uptick in the job market for semantic technologists in the enterprise; primarily in the Fortune 2000. I expect to see some M&A activity as well from systems providers and integrators who recognize the desire to have a semantic component in their product suite. (No, I have no direct knowledge; it is my hunch!)

We will see increased competition from data analytics vendors who try to add RDF, OWL or graphstores to their existing platforms. I anticipate saying, at the end of 2013, that many of these immature deployments will leave some project teams disappointed. The mature vendors will need to put resources into sales and business development, with the right partners for consulting and systems integration, to be ready to respond to calls for proposals and assistance.

Read more

Good-Bye to 2012: Continuing Our Look Back At The Year In Semantic Tech

Courtesy: Flickr/LadyDragonflyCC <3

Yesterday we began our look back at the year in semantic technology here. Today we continue with more expert commentary on the year in review:

Ivan Herman, W3C Semantic Web Activity Lead:

I would mention two things (among many, of course).

  •  Schema.org had an important effect on semantic technologies. Of course, it is controversial (the role of one major vocabulary and its relations to others, the community discussions on the syntax, etc.), but I would rather concentrate on the positive aspects. A few years ago the topic of discussion was whether having ‘structured data’, as it is referred to (I would simply say having RDF in some syntax or other), as part of a Web page makes sense or not. There were fairly passionate discussions about this, and many were convinced that doing so would not make any sense: there was no use case for it, authors would not use it and could not deal with it, etc. Well, this discussion is over. Structured data in Web sites is here to stay; it is important, and it has become part of the Web landscape. Schema.org’s contribution in this respect is very important; the discussions and disagreements I referred to are minor and transient compared to the success. And 2012 was the year when this issue was finally closed.
  •  On a very different front (and motivated by my own personal interest), I see exciting moves in the library and digital publishing worlds. Many libraries recognize the power of linked data, the value of standard cataloging techniques well adapted to linked data, and the role of metadata, in the form of linked data, adopted by journals and soon by electronic books… All of this will have a profound influence, bringing a huge amount of very valuable data onto the Web of Data and linking to sources of accumulated human knowledge. I have witnessed different aspects of this evolution coming to the fore in 2012, and I think this will become very important in the years to come.

Read more

Semantic Web Jobs: Wright State University

Wright State University is looking for a Research Computer Scientist in Celina, OH. The post states that the purpose of this position is “To perform research and development including mentoring graduate and undergraduate students working on funded research projects and provide coordination leading to meeting project deliverables for the funded project and support activities to secure future funding through writing proposals. Participate in writing research publications resulting from project team’s research.” Read more

Universities Put Cash Towards Helping HomeGrown Tech Startups Along

Photo Courtesy: Flickr/401(K) 2012

Universities play an important role in advancing the technology ecosystem, semantic technology included. Look for starters at work done at The Tetherless World Constellation at Rensselaer Polytechnic Institute, Wright State University’s Kno.e.sis Ohio Center of Excellence in Knowledge-enabled Computing, MIT, and the Digital Enterprise Research Institute located at the National University of Ireland, Galway.

In addition to driving technology ever forward, institutions like these and others also provide a home for incubating good ideas that could become good businesses. Music discovery service Seevl and the enterprise-focused SindiceTech are two examples of semantic spin-outs from DERI, for instance, while MIT Media Lab gave birth to commercial properties with semantic underpinnings including music intelligence platform The Echo Nest. The Kno.e.sis Center points its work in the commercial direction, too: Its LinkedIn profile description notes that its “work is predominantly multidisciplinary, and multi-institutional, often involving industry collaborations and significant systems developing, with an eye towards real-world impact, technology licensing, and commercialization.”

Given the projects with commercial prospects underway within their own houses, it would seem there’s opportunity for universities themselves to look for even more ways to contribute to that success. And that’s just what the University of Minnesota is doing: This week it said that it’s launching a $20 million seed fund over a ten-year timeframe to support the innovative ideas to which its campus plays host.

Read more

Twitris Awarded Patent for Semantic Analysis Methods

Our own Jennifer Zaino recently reported that semantic social web application Twitris, a program at Wright State University, was tackling coverage of the Occupy Wall Street movement as well as the presidential election. Now Twitris will be busier than ever: “Wright State University has been assigned a patent for core analysis methods used by the Twitris system.” Read more

Twitris Social Media Analysis Tackles Occupy Wall Street, 2012 Elections

Semantic social web application Twitris, a project of Kno.e.sis at Wright State University, recently added coverage of Occupy Wall Street to its social media analysis event lineup, and Election 2012 coverage is set to debut in the next couple of weeks.

These join earlier efforts such as the India Against Corruption Twitris site, and across all of them users can explore the popular topics about the event in the Twittersphere for that day; see related information by clicking on a tag; browse topics by location and see how they trend across different segments of society; search and explore questions related to a topic; view sentiments associated with a particular entity in the topic set; and more.

Leading the effort is Kno.e.sis Ohio Center of Excellence in Knowledge-enabled Computing director and LexisNexis Ohio Eminent Scholar Dr. Amit P. Sheth, who coined the term citizen-sensing and has written on the topic of continuous semantics to analyze real-time data.

Read more
