Posts Tagged ‘open source’

Semantic Technology May Help NIH In Its Healthcare Advancement Mission

Octo Consulting, a technology solutions and management services company serving both the intelligence and healthcare sectors, recently published an infographic exploring the intersection of the Semantic Web, Linked Data and Health IT as it relates to accessing and interacting with data from an array of sources across the healthcare chain. “Our point of view is that in healthcare there are multiple data sources and so much data – especially when it comes to clinical trials, pharmaceuticals research and scientific data,” says CTO Ashok Nare. “It’s very possible that each of those data elements is represented in a different format, so how do you take them all and connect them to ask questions you aren’t able to ask otherwise? That’s where semantic technologies are extremely useful.”
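
To make the “connect heterogeneous sources and ask new questions” point concrete, here is a minimal sketch in Python using rdflib. It is an illustration rather than Octo’s or the NIH’s actual system; the ex: vocabulary and the sample records are invented.

```python
# Hypothetical sketch: once records from different sources are expressed as RDF,
# a single SPARQL query can span data that originally lived in separate formats.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/health/")  # invented vocabulary for illustration
g = Graph()

# Source 1: grant records (imagine these were loaded from a CSV export)
grant = URIRef(EX["grant/42"])
g.add((grant, RDF.type, EX.Grant))
g.add((grant, EX.fundsResearchOn, EX.Condition_Diabetes))

# Source 2: clinical trial records (imagine these came from an XML registry)
trial = URIRef(EX["trial/NCT0001"])
g.add((trial, RDF.type, EX.ClinicalTrial))
g.add((trial, EX.studiesCondition, EX.Condition_Diabetes))
g.add((trial, EX.status, Literal("recruiting")))

# A question that spans both sources: which funded conditions have recruiting trials?
query = """
PREFIX ex: <http://example.org/health/>
SELECT ?condition ?trial WHERE {
    ?grant a ex:Grant ; ex:fundsResearchOn ?condition .
    ?trial a ex:ClinicalTrial ; ex:studiesCondition ?condition ; ex:status "recruiting" .
}
"""
for row in g.query(query):
    print(f"{row.condition} is funded and has recruiting trial {row.trial}")
```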

One healthcare-sector project in which Octo is putting semantic technologies to use these days is an effort it has underway with the U.S. medical research agency, the National Institutes of Health, whose mission includes providing grants to the scientific community to engage in research and experiments “to enhance health, lengthen life, and reduce illness and disability,” as its website explains. Now, not only does the NIH want to understand what it’s funding and how those grants are progressing, but also “what opportunities it may be missing out on,” Nare explains.

That means continually assessing not only what’s in its portfolio but also what research gaps remain, which requires analyzing more and more data sources and investigating more queries. Without the help of semantic web technologies, that could mean more development and more expense.

Read more

Callimachus Open Source 1.2 Is Out


According to a new announcement from the company 3 Round Stones, Callimachus Open Source v1.2 has been released. “Callimachus is a Linked Data application server used by the Federal Government for publishing open data on the Web, and by the Fortune 1000 for consuming and visualizing a combination of enterprise and open content. 3 Round Stones is working to make Callimachus the choice of Web developers who want to rapidly write and host data-driven Web applications. Callimachus Open Source 1.2 provides a number of improvements across the board – some of which improve the user experience for building visualizations with the Chart Wizard, along with back-end changes aimed at improving scaling and performance. As always, for full details about the project and downloads check out http://callimachusproject.org. Here are some highlights.” Read more

The European Project “digital.me” Opens Its Code


09-24-2013 06:12 PM CET — The EU’s “digital.me” project brings Fraunhofer IAO together with seven research and industry partners to develop a system for user-controlled social networks and services that can serve as a central hub for managing a user’s various digital identities. The project has now released the source code from its software development work as an open-source project.

The use of personal information in private and business life is a growing trend in our increasingly information-driven society. With the rise of social media, individuals are revealing more personal data online than ever before. This disclosure provides value to users, such as enhancing social contacts or obtaining personalized services and products. However, the existing social internet makes it difficult to use personal information in a controlled way while retaining privacy where required. Read more

Jeff Hawkins on Open Source and Machine Learning Meeting Big Data

Simon Phipps of InfoWorld recently wrote, “At OSCON in Portland, Ore., last month, I had the chance to meet Jeff Hawkins, the inventor of the Palm Pilot and arguably the father of the smartphone. I learned that he is now pioneering the analysis of huge streams of real-time data using insights gained as a neuroscientist. His company offers a product that can learn the characteristics of data streams, predict their future actions, and identify anomalies. He has recently taken the core of that product and released it as a GPLv3-licensed open source project on GitHub so that anyone can build machine intelligence into their systems. Below is a video of our discussion, followed by an edited version of the interview.” Read more
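
As a very rough illustration of what “learning the characteristics of a data stream and identifying anomalies” can look like in code, here is a deliberately naive rolling-statistics detector in Python. It is a stand-in for the general idea only, not the cortex-inspired GPLv3 project Phipps describes.

```python
# Naive stand-in for stream anomaly detection: track running statistics of
# recent values and flag points that deviate sharply from that history.
from collections import deque
from statistics import mean, pstdev

class RollingAnomalyDetector:
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)   # recent history of the stream
        self.threshold = threshold           # how many std-devs counts as anomalous

    def observe(self, value):
        """Return True if `value` looks anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= 10:           # wait for a little history first
            mu, sigma = mean(self.window), pstdev(self.window)
            anomalous = sigma > 0 and abs(value - mu) > self.threshold * sigma
        self.window.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for i, v in enumerate([10, 11, 9, 10, 12, 10, 11, 10, 9, 10, 55, 10, 11]):
    if detector.observe(v):
        print(f"anomaly at position {i}: value {v}")
```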

Status Update on US Open Data Collaboration


Justin Kern of Information Management reports, “The U.S. government is outlining its new program for an open source repository to foster collaboration on getting more information to citizens in a faster manner. Federal CTO Todd Park formally introduced Project Open Data on Thursday in a blog post, and gave an update on its first days of activity. In the first 24 hours after Project Open Data was published, more than two dozen contributors submitted to its GitHub platform, including fixes to broken Web links and policy input. Other, meatier contributions, or ‘pull requests,’ included a tool that converts spreadsheets and databases into APIs for ease of use by developers, and code that translates geographic data from locked formats into open, available formats, according to Park.” Read more

Single Sign-On Can Improve Healthcare Systems

Shahid Qadri has written an article for MedCity News about how to use WebID to create single sign-on access for healthcare systems. He writes, “The Simple Sign-on challenge sponsored by the ONC through the Health 2.0 challenge was an exciting opportunity for us to learn about a sophisticated technology protocol and then to hack several open source systems to implement a single sign-on solution based on the protocol. This was a challenge that was truly a ‘challenge’ for me, but an exciting and rewarding one (our solution was the second place winner!).” Read more
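
For context, the core check behind WebID-based sign-on is that the relying system dereferences the WebID URI found in the client’s certificate and verifies that the certificate’s public key appears in that profile. The sketch below illustrates that check in Python with rdflib; it is an illustration of the protocol’s idea, not the challenge entry’s actual code, and TLS handling and error cases are omitted.

```python
# Sketch of the WebID-TLS profile check: fetch the RDF profile behind a WebID
# URI and compare its published RSA key with the key from the client certificate.
from rdflib import Graph, Namespace, URIRef

CERT = Namespace("http://www.w3.org/ns/auth/cert#")  # W3C cert ontology

def webid_key_matches(webid_uri, cert_modulus_hex, cert_exponent):
    """Return True if the profile at webid_uri lists the certificate's RSA key."""
    profile = Graph()
    profile.parse(webid_uri)  # dereference the WebID and parse the RDF profile

    webid = URIRef(webid_uri)
    for key in profile.objects(webid, CERT.key):
        modulus = profile.value(key, CERT.modulus)
        exponent = profile.value(key, CERT.exponent)
        if modulus is None or exponent is None:
            continue
        if (str(modulus).lower().lstrip("0") == cert_modulus_hex.lower().lstrip("0")
                and int(exponent) == int(cert_exponent)):
            return True
    return False

# Usage (placeholder values, not a real identity):
# webid_key_matches("https://example.org/profile#me", "c0ffee...", 65537)
```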

Search And Next-Gen Big Data Apps

Search is fundamental: a system building block that should be a critical part of enterprise architectures. That’s what Grant Ingersoll, co-founder and CTO at search, discovery and analytics vendor LucidWorks (which leverages the Apache Lucene/Solr open source search project), told an audience at last week’s GigaOM Structure Data event.

Late last year the company launched LucidWorks Big Data for developing Big Data applications, building on its heritage developing the LucidWorks Search solution. “It’s a platform for organizations and developers to build out next-generation data applications,” Ingersoll said in a conversation with the Semantic Web Blog. The focus is on tightly integrating key Apache open source projects and layering a REST API on top, giving developers a single point of access to the stack for creating applications that deliver comprehensive search, discovery and analysis of an organization’s content and user interactions.

LucidWorks Big Data is made up of Apache Hadoop; the Apache Mahout machine-learning library; Hive, a data warehouse system for Hadoop that facilitates data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop-compatible file systems; and Apache OpenNLP, a machine-learning based toolkit for the processing of natural language text that supports common NLP tasks.
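
To give a flavor of the search layer such a stack exposes, here is a small Python example that queries an Apache Solr index over HTTP using Solr’s standard /select handler. The host, collection name and fields are placeholders, and this is plain Solr usage rather than the LucidWorks Big Data REST API itself.

```python
# Sketch: querying a Solr index (the search layer beneath LucidWorks) over HTTP.
# The URL, collection name and fields below are placeholders.
import requests

SOLR_SELECT = "http://localhost:8983/solr/enterprise_content/select"

params = {
    "q": 'body:"machine learning"',   # full-text query
    "fq": "source:wiki",              # filter query, e.g. restrict to one source
    "rows": 10,                       # number of documents to return
    "wt": "json",                     # response format
}

resp = requests.get(SOLR_SELECT, params=params)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```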

Read more

Chicago Uses GitHub to Open Up Data

Alex Howard of O’Reilly Radar reports, “GitHub has been gaining new prominence as the use of open source software in government grows. Earlier this month, I included a few thoughts from Chicago’s chief information officer, Brett Goldstein, about the city’s use of GitHub, in a piece exploring GitHub’s role in government. While Goldstein says that Chicago’s open data portal will remain the primary means through which Chicago releases public sector data, publishing open data on GitHub is an experiment that will be interesting to watch, in terms of whether it affects reuse or collaboration around it. In a followup email, Goldstein, who also serves as Chicago’s chief data officer, shared more about why the city is on GitHub and what they’re learning. Our discussion follows.” Read more

A Look Back at the 2012 TimesOpen Events

Greg Bates of Programmable Web reports, “The Gray Lady is getting her code on. In Andre Behrens’s New York Times blog, Open, billed as ‘All the code that’s fit to print,’ he recounts events on coding and science held in 2012. Two of the notable events were the well-attended one on Big Data and Smarter Scaling and their Open Source Science Fair. Three speakers graced the Big Data event: Andrew Montalenti, the CTO of Parse.ly…; James Boehmer, Manager of Search Technology at the New York Times; and Allan Beaufour, CTO of Chartbeat.” Read more

Data.gov Moving to Open Source Platform

The team at Nextgov reports, “The team that manages Data.gov is well on its way to making the government data repository open source using a new back-end called the Open Government Platform, officials said during a Web discussion Wednesday. The governments of India and Ghana have already launched beta versions of their data catalogues on the open source platform, said Jeanne Holm, who heads the Data.gov team. Government developers from the U.S. and India built the OGPL jointly. They posted it to the code-sharing site GitHub, where other nations and developers can adopt it as is or amend it to meet their specific needs.” Read more
