Posts Tagged ‘government’

A Look Ahead For Linked Data

At Swirrl, the focus continues to be bringing more users aboard the Linked Data train. The company is realizing this aim in part through its work with customers, primarily in the government sector. It is bringing its PublishMyData platform (which The Semantic Web Blog first discussed here) to customers such as the U.K.’s Department for Communities and Local Government, which is looking to Linked Data to help publish statistical data, useful reference data about local government, and information about the department’s own performance to increase transparency. That information is consumed primarily by other public sector organizations, charities, and entities that report to the department but are not part of it.

“Usually that was done in a mish-mash of technologies, and depended on individuals that do lots of hard work with spreadsheets to make it work,” says founder and CEO Bill Roberts. He characterizes the department’s move to Linked Data as a bit of a leap of faith, driven by its open data strategist Steve Peters and a vision of what can be achieved by moving in this direction. During engagements like this one, Roberts notes, Swirrl has also gained strong insight into how to improve its solution for people who aren’t “dyed-in-the-wool Linked Data heads. That’s fed into things we’re working on,” he says, including plans in the pipeline to build tools that make the self-service process easier.

Read more

HealthCare.gov: Progress Made, But Back-End Struggles Continue

The media has been reporting over the last few hours on the Obama administration’s self-imposed deadline for fixing HealthCare.gov. According to these reports, the site is now working more than 90 percent of the time, up from 40 percent in October; pages on the website are loading in less than a second, down from about eight; 50,000 people can use the site simultaneously, and it supports 800,000 visitors a day; and page-load failures are down to under 1 percent.

There’s also word, however, that while the front end may be improved, there are still problems on the back end. Insurance companies continue to complain that they aren’t correctly getting the information they need to support signups. “The key question,” according to CBS News reporter John Dickerson this morning, “is whether that link between the information coming from the website getting to the insurance company – if that link is not strong, people are not getting what was originally promised in the entire process.” If insurance companies aren’t getting the right information for processing plan enrollments, individuals going to the doctor after January 1 may find that they aren’t, in fact, covered.

Jeffrey Zients, the man spearheading the website fix, did point out at the end of November that work remains to be done on the back end for tasks such as coordinating payments and application information with insurance companies. The plan is for that to be in effect by mid-January.

As it turns out, according to this report in the NY Times, among the components of the site’s back-end technology is the MarkLogic Enterprise NoSQL database, whose recent Version 7 release added the ability to store and query data in RDF format using SPARQL syntax.
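For readers new to the model: RDF stores data as subject-predicate-object triples, and a SPARQL query matches a graph pattern against them. The following toy Python sketch illustrates that idea; the data and names are invented for the example, and this is not MarkLogic’s API.

```python
# A toy triple store: RDF data is a set of (subject, predicate, object) triples.
triples = {
    ("healthcare.gov", "rdf:type", "schema:WebSite"),
    ("healthcare.gov", "schema:provider", "hhs"),
    ("hhs", "rdf:type", "schema:GovernmentOrganization"),
}

def match(pattern, store):
    """Return variable bindings for a single triple pattern.

    Pattern terms starting with '?' are variables; everything else must
    match the stored value exactly, as in a SPARQL basic graph pattern.
    """
    results = []
    for triple in store:
        binding = {}
        for part, value in zip(pattern, triple):
            if part.startswith("?"):
                binding[part] = value
            elif part != value:
                break
        else:
            results.append(binding)
    return results

# Roughly analogous to: SELECT ?site WHERE { ?site rdf:type schema:WebSite }
print(match(("?site", "rdf:type", "schema:WebSite"), triples))
# [{'?site': 'healthcare.gov'}]
```

A real triple store like MarkLogic 7 adds indexing, joins across multiple patterns, and the full SPARQL grammar on top of this basic pattern-matching idea.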

Read more

Open Data: The Key To $3 Trillion In Economic Value

There’s money in that open data. A new report from the McKinsey Global Institute finds that machine-readable information that’s made available to others has the potential to generate significant economic value: $3 trillion annually in seven domains, to be exact.

The report, entitled Open Data: Unlocking Innovation And Performance With Liquid Information, sees the potential economic effect unfolding in education, transportation, consumer products, electricity, oil and gas, health care, and consumer finance. Data becomes more liquid, the report’s authors note, when it is open, widely available, and in shareable formats, and when advanced computing and analysis can yield novel insights from it, potentially in conjunction with proprietary data. The report doesn’t specifically mention Linked Data, but it homes in on government open data platforms – including the Linked Data-infused data.gov.uk, which it cites as having had 1.4 million page views this summer – as critical to the economic good tidings. It counts more than 40 countries with open data platforms, and up to 1 million data sets made open by governments worldwide.

Read more

Helping Citizen Searches For Government Services

Photo courtesy: Flickr/Arjan Richter

According to ABI Research, as much as $114 billion could be saved worldwide by 2016 through the implementation of online e-government services. The firm predicts that investment in these services will increase from $28 billion in 2010 to $57 billion in 2016, and that the number of users will nearly triple over the forecast period.

Here in the States, according to a 2012 survey by GovLoop, 83 percent of respondents say that they can access government-oriented customer service efforts via a website. And the number of people taking advantage of the ability to access information and services on government websites is significant, even going back to 2010, when the Pew Internet & American Life Project reported that 82 percent of American Internet users – 62 percent of all adults – were doing so. Among its findings at the time: 46 percent had looked up what services a government agency provides; 33 percent had renewed a driver’s license or auto registration; 23 percent had gotten information about or applied for government benefits; and 11 percent had applied for a recreational license, such as a fishing or hunting license.

Given the citizenry’s interest in accessing information about government services via the Internet – not to mention accessing the services themselves, and not only in the U.S. but abroad – it makes sense for governments to put an emphasis on customer service online. The GovLoop survey finds that there’s room for some improvement, with the majority of respondents rating service a 3 or 4 on a scale of 1 to 5. Perhaps additional help will come from some efforts in the semantic web space, like a vocabulary for describing civic services that government organizations can use to help citizens using search engines home in on the service that’s their true interest from the start.
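To make the idea concrete, here is a hedged sketch of what such civic-service markup could look like. It uses schema.org’s GovernmentService type as the vocabulary; the agency name and URL are invented for illustration, and real markup would typically be embedded in a page as JSON-LD or RDFa rather than generated in Python.

```python
import json

# Hypothetical structured-data description of a government service, built as
# JSON-LD so that search engines can recognize the service, its provider,
# and where citizens can actually use it.
service = {
    "@context": "http://schema.org",
    "@type": "GovernmentService",
    "name": "Driver's License Renewal",
    "serviceType": "License renewal",
    "provider": {
        "@type": "GovernmentOrganization",
        "name": "Department of Motor Vehicles",  # invented example agency
    },
    "availableChannel": {
        "@type": "ServiceChannel",
        "serviceUrl": "http://example.gov/dmv/renew",  # invented example URL
    },
}

# Serialize to the JSON-LD text a site would embed in a <script> block.
print(json.dumps(service, indent=2))
```

With markup like this on an agency’s page, a search for “renew driver’s license” can surface the specific service and its online channel directly, rather than a generic agency homepage.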

Read more

Semantic Tech Outlook: 2013

Photo Courtesy: Flickr/Lars Plougmann

In recent blogs we’ve discussed where semantic technologies have gone in 2012, and a bit about where they will go this year (see here, here and here).

Here are some final thoughts from our panel of semantic web experts on what to expect to see as the New Year rings in:

John Breslin, lecturer at NUI Galway, researcher and unit leader at DERI, creator of SIOC, and co-founder of Technology Voice and StreamGlider:

Broader deployment of the schema.org terms is likely. In the study by Mühleisen and Bizer in July this year, we saw Open Graph Protocol, DC, FOAF, RSS, SIOC and Creative Commons still topping the ranks of the semantic vocabularies in use. In 2013 and beyond, I expect to see schema.org jump to the top of that list.

Christine Connors, Chief Ontologist, Knowledgent:

I think we will see an uptick in the job market for semantic technologists in the enterprise; primarily in the Fortune 2000. I expect to see some M&A activity as well from systems providers and integrators who recognize the desire to have a semantic component in their product suite. (No, I have no direct knowledge; it is my hunch!)

We will see increased competition from data analytics vendors who try to add RDF, OWL or graph stores to their existing platforms. I anticipate that, by the end of 2013, many of these immature deployments will have left some project teams disappointed. The mature vendors will need to put resources into sales and business development, with the right partners for consulting and systems integration, to be ready to respond to calls for proposals and assistance.

Read more

Good-Bye to 2012: A Look Back At The Year In Semantic Tech, Part 1

Courtesy: Flickr/zoetnet

As we close out 2012, we’ve asked some semantic tech experts to give us their take on the year that was. Was Big Data a boon for the semantic web, or is the opportunity to capitalize on the connection still pending? Is structured data on the web not just the future but the present? What sector is taking a strong lead in the semantic web space?

We begin with Part 1, with our experts listed in alphabetical order:

John Breslin, lecturer at NUI Galway, researcher and unit leader at DERI, creator of SIOC, and co-founder of Technology Voice and StreamGlider:

I think it’s been fantastic to see the schema.org initiative gain real community support and a broader range of terms. It’s been great to see an easily understandable set of terms for describing the objects in web pages, one that leverages the experience of work like GoodRelations rather than ignoring what has gone before. It’s also been encouraging to see the growth of Drupal 7 (which produces RDFa data) in the government sector: estimates are that 24 percent of .gov CMS sites are now powered by Drupal.

Martin Böhringer, CEO & Co-Founder, Hojoki:

For us it was very important to see Jena, our Semantic Web framework, become an Apache top-level project in April 2012. We have seen a lot of development activity in the project recently, and we see a chance to build an open-source Semantic Web foundation that can handle cutting-edge requirements.

Still disappointing is the missing link between the Semantic Web and the “cool” technologies and buzzwords. From what we see, the Semantic Web gives answers to some of the industry’s most challenging problems, but it still doesn’t seem to have really found its place in relation to the cloud or big data (Hadoop).

Christine Connors, Chief Ontologist, Knowledgent:

One trend that I have seen is increased interest in the broader spectrum of semantic technologies in the enterprise. Graph stores, NoSQL, schema-less and more flexible systems, ontologies (& ontologists!) and integration with legacy systems. I believe the Big Data movement has had a positive impact on this field. We are hearing more and more about “Big Data Analytics” from our clients, partners and friends. The analytical power brought to bear by the semantic technology stack is sparking curiosity – what is it really? How can these models help me mitigate risk, more accurately predict outcomes, identify hidden intellectual assets, and streamline business processes? Real questions, tough questions: fun challenges!

Read more

New York City: Taking Smart — And Semantic — Steps To Its Digital Future

Every day New York City is getting closer to being the Digital City of the Future. It’s a long journey, though, and one that the semantic web community can lend a hand with.

At this week’s Semantic Technology & Business Conference in NYC, Andrew Nicklin of the Office of Strategic Technology and Development, NYC Department of Information Technology & Telecommunications (DoITT) provided a look at what has been accomplished so far, and what’s on the to-do roadmap. Recent months have seen accomplishments including the passage of Local Law 11 of 2012 – the “most progressive legislation in the U.S. as far as cities being mandated to open data,” Nicklin said in an interview with The Semantic Web Blog before his keynote address at SemTech. “It ensures permanency for our program past any administrative changes….The whole notion of open data doesn’t go away because it is written into law.”

Read more

The Semantic Link – October, 2012

Paul Miller, Bernadette Hyland, Ivan Herman, Eric Hoffer, Andraz Tori, Peter Brown, Christine Connors, Eric Franzon

On Friday, October 12, a group of Semantic thought leaders from around the globe met with their host and colleague, Paul Miller, for the latest installment of the Semantic Link, a monthly podcast covering the world of Semantic Technologies. This episode includes a discussion about various approaches to building semantic systems, and “the Linkers” were joined by two special guests: Hadley Beeman, expert in Government Linked Data and Open Data; and Joel Natividad, CEO & Co-Founder, Ontodia.

Read more

Ontodia Preps Smart City Data Marketplace; Platform Previews At SemTech NYC

Six months ago, Ontodia’s NYCFacets walked away with the win at New York City’s BigApps 3.0 competition. In the months since, the Smart Open Data Exchange that catalogs all the NYC-related data sources (which we first covered here) has been busy expanding its team, moving into the NYU-Poly hosted incubator, and getting ready to launch its Smart City platform for general use next year.

A preview of that platform will take place at the upcoming Semantic Technology & Business Conference in NYC. “We are going back to our original mission of really creating that data exchange using semantic technology,” says Ontodia co-founder Joel Natividad. The company is putting the focus not on raw data or learning new technologies, but on being a linked answers marketplace – converting raw data to answers rather than just linking raw data.

Read more

Beth Noveck on a More Open-Source Government

Helen Walters reports that Beth Noveck recently gave a TED talk regarding open-source government. Walters writes, “As the US’s first Deputy CTO, Beth Noveck founded the White House Open Government Initiative, which developed administration policy on transparency, participation and collaboration. She starts her talk by reminding us that in the old days, the White House was literally an open house. At the beginning of the 19th century, John Quincy Adams met a local dentist who happened in to shake his hand. Adams promptly dismissed the Secretary of State, with whom he was meeting, and asked the dentist to remove an aching tooth. ‘When I got to the White House in 2009, the White House was anything but open,’ she says. Bomb blast curtains covered the windows; they were running Windows 2000. Social media was verboten. Noveck’s mandate: to change this system.”

Read more

NEXT PAGE >>