Posts Tagged ‘NASA’

Winners at NASA Space Apps Challenge Demonstrate Star Trek-Like Technology


Reno, NV (PRWEB) April 15, 2014 — Developers at the NASA Space Apps Challenge won the Plexi Natural Language Processing challenge by demonstrating an app that allows astronauts to use voice commands for mission-critical tasks by speaking to a wearable device. The International Space Apps Challenge is an international mass collaboration focused on space exploration that takes place over 48 hours in cities on six continents.


Next door at the Microsoft-sponsored Reno Hackathon, a team of University of Nevada students claimed that event’s prize for best use of Natural Language Processing. The second Reno Hackathon, a competition in which programmers race to build a new product in a very short period of time, was held April 12-13, 2014, at the University of Nevada’s DeLaMare Library. Read more

Semantic Tech Goes Into The Wild Blue Yonder

Look, up in the sky! It’s a bird, it’s a plane, no – it’s an Amazon drone!

Admittedly, Amazon Prime Air’s unmanned aerial vehicles are still a little ways off from commercial use. But such technology needs to be accounted for in terms of its impact on the airspace – along with other recent innovations, such as the use of unmanned aircraft in crop-dusting or in Department of Homeland Security border applications, and future capabilities that extend the notion of auto-piloting in passenger airplanes by using autonomous machine logic to control airspace and spacing between planes. The Next Generation Air Transportation System is taking on this change in the management and operation of the national air transportation system.

And semantic technology, natural language processing, and machine learning, too, will have a hand in helping out, by fostering collaboration among the agencies that will be working together to develop the system, including the Federal Aviation Administration, the U.S. Air Force, U.S. Navy, and the National Aeronautics and Space Administration, under the coordination of the Joint Planning and Development Office. These agencies will need to leverage each other’s knowledge and research, as well as ensure – as necessary – data privacy.

Read more

NASA Turns to Charles River Analytics to Detect Volcanic Eruptions, Storms


Sara Castellanos of Biz Journals reports, “Charles River Analytics on Thursday announced a contract to develop technology for NASA that detects volcanic eruptions, storms and algae blooms from satellite imagery. The Cambridge, Mass.-based firm develops computational intelligence technology, which is used to interpret data for the purpose of improving decision-making in real-time. The NASA contract is for a system called DIPSARS, or the Discovery of Interesting Patterns and Semantic Analysis in Remote Space. The contract is valued at $125,000.” Read more

NASA Moves to the Cloud


Kathleen Hickey reports, “NASA’s OpenNEX is one of the latest federal research projects moving to the cloud to improve collaboration with the academic, public and private sectors. In doing so, the space agency is using Amazon Web Services to make terabytes worth of climate and Earth science data available to researchers, app developers, academia and the public. The first data sets became available in March and include temperature, precipitation and climate change projections, as well as data processing tools from NASA’s Earth Exchange (NEX), a research and collaboration platform from NASA’s Advanced Supercomputing Facility at Ames Research Center in California.” Read more

Google Gets Into Quantum Computing; Advancing Machine Learning Is A Goal

Google, in the midst of its I/O conference (see our story here), also has teamed up with NASA to form the Quantum Artificial Intelligence Lab at the agency’s Ames Research Center.

According to a post on Google’s Research Blog, the lab will house a D-Wave Systems quantum computer. The goal is to study how quantum computing can solve some of the most challenging computer science problems, with a focus on advancing machine learning. Machine learning, as Director of Engineering Hartmut Neven writes, “is all about building better models of the world to make more accurate predictions,” but it’s hard work to build a really good model. Real-world applications that he discusses include building a more useful search engine by better understanding spoken questions and what’s on the web to provide the best answer.

Read more

NASA Challenge Seeks Solution to Big Data Problems

Derrick Harris of GigaOM reports that NASA has launched a series of Big Data challenges aimed at finding innovative solutions to some of the nation’s most pressing Big Data problems. He writes, “Some of the U.S. government’s most research-intensive agencies want your help to come up with better ways to analyze their expansive data sets. NASA, along with the National Science Foundation and the Department of Energy, launched a competition on TopCoder called the Big Data Challenge series. Essentially, it’s a competition to crowdsource a solution to the very big problem of fragmented and incompatible federal data.” Read more

Rhizomer Wants Users To Revel In Working With Linked Data

How can users – especially those who don’t have deep roots in the semantic web community – make Linked Data useful to them? It’s not always apparent, says Roberto Garcia, a mind behind the Rhizomik initiative that has produced a tool called Rhizomer. Its approach is to take advantage of the structures that organize data (schemas, thesauri, ontologies, and so on) and use them to drive the automatic generation of user interfaces tailored to each semantic dataset to be explored.

The initiative is a project led by members of the GRIHO (Human-Computer Interaction and Data Integration) research group in the Computer Science and Industrial Engineering Department of the University of Lleida, where Garcia is an associate professor. It has also led to projects including ReDeFer, a set of tools to move data in and out of the Semantic Web, and various ontologies for multimedia, e-business and news. As for Rhizomer, it accommodates publishing and exploration of Linked Data. As the site explains, dataset exploration is helped by features including an overview to get a full picture of the dataset at hand; zooming and filtering to zoom in on items of interest and filter out uninteresting ones; details to arrive at concrete resources of interest; and visualizations tailored to the kind of resource at hand.

In other words, its features are “organized so they support the typical data analysis tasks,” he says. “We are more a contributor from the user perspective of how you interact with that data.”
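The schema-driven approach Garcia describes – letting the structure already present in the data determine which exploration facets a user sees – can be sketched in a few lines. The following is a minimal, library-free illustration over hypothetical triples, not Rhizomer’s actual code: it discovers which properties the instances of a class use, and the distinct values each property takes, which is the raw material for an auto-generated faceted interface.

```python
# Sketch: derive UI facets from the structure of a semantic dataset.
# The triples and vocabulary ("ex:Mission", etc.) are hypothetical.
from collections import defaultdict

# Triples as (subject, predicate, object) tuples.
triples = [
    ("ex:msl",      "rdf:type",      "ex:Mission"),
    ("ex:msl",      "ex:launchYear", 2011),
    ("ex:msl",      "ex:target",     "Mars"),
    ("ex:voyager1", "rdf:type",      "ex:Mission"),
    ("ex:voyager1", "ex:launchYear", 1977),
]

def facets_for(class_uri, triples):
    """Properties used by instances of class_uri, each with its
    set of distinct values - candidates for filtering facets."""
    instances = {s for s, p, o in triples
                 if p == "rdf:type" and o == class_uri}
    facets = defaultdict(set)
    for s, p, o in triples:
        if s in instances and p != "rdf:type":
            facets[p].add(o)
    return dict(facets)

print(facets_for("ex:Mission", triples))
# e.g. {'ex:launchYear': {2011, 1977}, 'ex:target': {'Mars'}}
```

In a real Linked Data setting the same discovery step would be a SPARQL query against the dataset’s endpoint, but the principle is identical: the interface is generated from the data’s own structure rather than hand-built per dataset.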

Read more

Stardog RDF Database Bites Into Fat Part Of The Market

Clark & Parsia’s Stardog lightweight RDF database is moving into release candidate 1.0 mode just in time for next week’s Semantic Technology & Business Conference in San Francisco. The product has been stable and usable for a while now, but a 1.0 designation still carries weight with a good number of IT buyers.

The focus for the product, says cofounder and managing principal Kendall Clark, is to be optimized for what he calls the fat part of the market – and that’s not the part that is dealing with a trillion RDF triples. “Most people and organizations don’t need to scale to trillions of anything,” though scaling up, and up, and up, is where most of Clark & Parsia’s competitors have focused their attention, he says. “We’ve seen a significant percentage of what people are doing with semantic technology and most applications are not at a billion triples today.” Take as an example Clark & Parsia’s customer NASA, which built an expertise location system based on semantic technology that today is still not more than 20 million triples. “You might say that’s a little toy, but if you are at NASA and need to find experts, it is a real, valuable thing, and we see this all the time,” he says.

Read more

Smartlogic Highlights Content Intelligence Over Enterprise Semantics

Smartlogic recently released a new version of its Semaphore software, which took home the 2011 European Frost & Sullivan Technology Innovation Award. Version 3.3 adds new semantically rich features, but the company itself has been shifting its strategy to talk about its solution less as an enterprise semantic platform and more as a content intelligence platform for identifying, classifying, extracting, analyzing and utilizing hard-to-find information from among unstructured assets in existing information management systems like Microsoft SharePoint.

Why? According to marketing VP Maya Natarajan, it’s a route to better customer access. “Whenever you think of the word semantic, there’s such a small percentage of the population that understands what it is,” she says. “But amazingly the uptake for content intelligence is so great. People immediately understand that so much quicker” – that is, she says, content intelligence describes all the business reasons and benefits for deploying an enterprise semantic platform.

Another way to make the virtues of content intelligence even more obvious: Smartlogic is planning to introduce prebuilt starter taxonomies to kickstart the process in some vertical sectors. Meanwhile, Version 3.3 has brought to its customers features that still proclaim its semantic heritage, including a semantic visualization tool.

Read more

The Semantics of NASA’s POPS Project

The W3C recently interviewed Jeanne Holm, Chief Knowledge Architect at the Jet Propulsion Laboratory at Caltech. Holm also leads the Knowledge Management team at NASA. In the interview, Holm stated, “The goal of our project was to make it easy to find expertise within an organization, or, as you’ll see, across organizational boundaries. The project is called POPS for ‘People, Organizations, Projects, and Skills.’ The acronym does not include E for Expert for a good reason: we tried three times to create a system with data specifically about expertise, but failed each time for different social reasons. Each attempt relied on self-generated lists of expertise. In the first attempt, people over- or under-inflated their expertise, sometimes to bolster their resumes. The second attempt prompted labor unions to get overly involved because greater expertise could be tied to higher pay. The third approach involved profiles verified by management, and that led to a number of human resources grievances when there was a disagreement. In all cases, the data became suspect.” Read more