In the video below, Dr. James Melton, a Lecturer in Comparative Politics at University College London, gives a presentation on Constitute. Constitute is a new way to explore the constitutions of the world. The origins of the project date back to 2005 with the Comparative Constitutions Project, which has the stated goal of cataloging the contents of all constitutions written in independent states since 1789. To date, that work has resulted in a collection of 900+ constitutions and 2,500+ amendments. A rigorous formal survey instrument comprising 669 questions was then applied to each of these “constitutional events,” producing the base data the team had to work with. Melton and his group wanted to create a system that allowed for open sharing of this information, not just with researchers but with anyone who wants to explore the world’s constitutions. They also needed the system to be flexible enough to handle changes, since, as Melton points out, “…roughly 15% of the countries in the world change their constitution every single year.”
Joel Gurin of InformationWeek recently asked, “Will 2014 finally become the year of open data? We’re certainly seeing evidence that open data is moving from the margins into the mainstream, with new uses for data that governments and other sources are making freely available to the public. But if we’re going to see open data’s promise fulfilled, it will be important for governments, and the federal government in particular, to make it easier for the public to access and use their open data.” Read more
The media has been reporting over the last few hours on the Obama administration’s self-imposed deadline for fixing HealthCare.gov. According to these reports, the site now works more than 90 percent of the time, up from 40 percent in October; pages on the website load in less than a second, down from about eight; 50,000 people can use the site simultaneously, and it supports 800,000 visitors a day; and page-load failures are down to under 1 percent.
There’s also word, however, that while the front-end may be improved, there are still problems on the back-end. Insurance companies continue to complain they aren’t getting information correctly to support signups. “The key question,” according to CBS News reporter John Dickerson this morning, “is whether that link between the information coming from the website getting to the insurance company – if that link is not strong, people are not getting what was originally promised in the entire process.” If insurance companies aren’t getting the right information for processing plan enrollments, individuals going to the doctor’s after January 1 may find that they aren’t, in fact, covered.
At the end of November, Jeffrey Zients, the man spearheading the website fix, did point out that work remains to be done on the backend for tasks such as coordinating payments and application information with insurance companies. Plans call for that to be in effect by mid-January.
As it turns out, according to this report in the NY Times, among the components of the site’s backend technology is the MarkLogic Enterprise NoSQL database, whose recent Version 7 release added the ability to store and query data in RDF format using SPARQL syntax.
That is the vision of Bart van Leeuwen, Amsterdam firefighter and founder of the software company Netage. We’ve covered Bart’s work before here at SemanticWeb.com and at the Semantic Technology & Business Conference, and today there is news that the work is advancing to a new stage.
In the Netherlands, there are 25 “Safety Regions” (pictured on the left). These organizations coordinate disaster management, fire services, and emergency medical teams. The regions are designed to enable various first responders to work together to deal with complex and severe crises and disasters.
Additionally, the Dutch Police acts as a primary partner organization in these efforts. The police is a national organization, separate from the safety regions and divided into ten regions of its own. Read more
Sean Gallagher of Ars Technica recently wrote, “The National Security Agency’s (NSA) apparatus for spying on what passes over the Internet, phone lines, and airways has long been the stuff of legend, with the public catching only brief glimpses into its Leviathan nature. Thanks to the documents leaked by former NSA contractor Edward Snowden, we now have a much bigger picture. When that picture is combined with federal contract data and other pieces of the public record—as well as information from other whistleblowers and investigators—it’s possible to deduce a great deal about what the NSA has built and what it can do.” Read more
According to ABI Research, as much as $114 billion could be saved worldwide by 2016 through the implementation of online e-government services. The firm predicts that investment in these services will increase from $28 billion in 2010 to $57 billion in 2016, and that the number of users will nearly triple over the forecast period.
Here in the States, according to a 2012 survey by GovLoop, 83 percent of respondents say that they can access government-oriented customer service efforts via a website. And the number of people taking advantage of the ability to access information and services on government websites is significant, even going back to 2010, when the Pew Internet & American Life Project reported that 82 percent of American Internet users – 62 percent of adults – were doing so. Among its findings at the time: 46 percent had looked up what services a government agency provides; 33 percent had renewed a driver’s license or auto registration; 23 percent had gotten information about or applied for government benefits; and 11 percent had applied for a recreational license, such as a fishing or hunting license.
Given the citizenry’s interest in accessing information about government services via the Internet, not to mention accessing the services themselves, both in the US and abroad, it makes sense for governments to put an emphasis on customer service online. The GovLoop survey finds that there’s room for improvement, with the majority of respondents rating service a 3 or 4 on a scale of 1 to 5. Perhaps additional help will come from efforts in the semantic web space, like a vocabulary for describing civic services that government organizations can use to help citizens searching the web home in on the service they actually want from the start.
Elliot Mass of Information Management reports, “A partnership between Auburn University and Intelligent Software Solutions is adding a novel wrinkle to the old adage of learning by doing. In this case, Auburn students will hone real-world data analytics skills by gathering military intelligence for the U.S. government. Auburn, a public university in Auburn, Alabama, with more than 25,000 students, trains students in data modeling and simulation, cyber forensics and cyber risk analysis at its Cyber Research Center. ISS, based in Colorado Springs, develops software solutions for the U.S. government. Its data analysis and visualization, geo-temporal analysis and semantic data processing products are used by the Department of Defense, the Department of Homeland Security as well as foreign governments.” Read more
The debate about PRISM continues. One of the latest volleys was posted in InformationWeek by Coverlet Meshing (a pseudonym used by “a senior IT executive at one of the nation’s largest banks.”) Meshing wrote: “Prism doesn’t scare me. On 9/11, my office was on the 39th floor of One World Trade. I was one of the many nameless people you saw on the news running from the towers as they collapsed. But the experience didn’t turn me into a hawk. In fact, I despise the talking heads who frame Prism as the price we pay for safety. And not just because they’re fear-mongering demagogues. I hate them because I’m a technologist and they’re giving technology a bad name.” Read more
World-wide interests of US banks first to be identified
The platform, developed by OpenCorporates, collects, extracts and makes usable global corporate data in an open and granular way. Large datasets, many of which were not previously available as open data, have been imported by the London-based company and used to develop corporate network visualisations that show the global corporate networks of businesses, including IBM, Starbucks and Barclays.
In addition to the corporate network visualisations, the new technology has produced maps which show the world-wide interests of four US banks – Bank of America, Citigroup, Goldman Sachs and Morgan Stanley. They reveal complex and deep networks, as well as the central position that the Cayman Islands have within them.
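Those network maps are, at heart, graph analysis. As a rough sketch of the idea, with an entirely made-up ownership structure rather than OpenCorporates’ data or code, a few lines of Python with the networkx library show how an offshore holding entity ends up structurally central to a corporate network:

```python
import networkx as nx

# A toy ownership graph (illustrative, invented entities): each edge points
# from a parent company to one of its subsidiaries.
G = nx.DiGraph()
G.add_edges_from([
    ("BankCorp (US)", "BankCorp Holdings (Cayman)"),
    ("BankCorp Holdings (Cayman)", "BankCorp Asia (Hong Kong)"),
    ("BankCorp Holdings (Cayman)", "BankCorp Europe (UK)"),
    ("BankCorp Europe (UK)", "BankCorp Ireland (IE)"),
])

# An intermediate holding entity sits on many parent-to-subsidiary paths,
# which betweenness centrality makes visible.
centrality = nx.betweenness_centrality(G)
most_central = max(centrality, key=centrality.get)
print(most_central)
```

On real filings the graph has thousands of nodes, but the same centrality measures surface the jurisdictions, such as the Cayman Islands, that the mapped networks route through.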
OpenCorporates Chief Executive Chris Taggart said:
“This platform is an incredibly powerful and innovative piece of technology. Prior to its development, many of the datasets we are using were only available as web pages or PDFs. Now we are bringing this data together into a useable format which will change the way people are able to access and view corporate networks.”
“The emphasis we place on detailed provenance and confidence scores with this platform is substantially better than existing efforts to identify corporate networks, which are essentially ‘black boxes’. These hide the underlying data used to derive the relationship links, give no indication of how likely the information is to be correct, or the date the information related to. We believe that in a world which is increasingly dependent on corporate data, this is critical – whether you are an investigative journalist, or calculating credit risk.”