Posts Tagged ‘data’

Data is Shaping the Future of Product Development

Max Engel of Hypebot recently identified the smart use of data as one of three major trends currently shaping the future of product development. He writes, “The role that data plays goes beyond analytics. Open platforms and APIs allow for the creation of product mash-ups that have broken down barriers to content availability. One cannot underestimate the brilliance of platform-focused companies like SoundCloud and Spotify that allow innovation to occur rapidly and unfettered. Similarly, structured data is going to be increasingly impactful. Facebook’s Open Graph, for example, projects the foundation for the oft-mentioned semantic web.” Read more
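The Open Graph markup Engel mentions is just ordinary HTML `<meta>` tags carrying `og:` properties, which is what makes pages machine-readable. A minimal sketch of extracting them with Python’s standard library (the sample page below is invented for illustration):

```python
from html.parser import HTMLParser

class OpenGraphParser(HTMLParser):
    """Collect og:* properties from <meta property="og:..." content="..."> tags."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        prop = attr_map.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = attr_map.get("content", "")

# A made-up page head with Open Graph annotations.
page = """<html><head>
<meta property="og:title" content="Data is Shaping Product Development">
<meta property="og:type" content="article">
</head><body></body></html>"""

parser = OpenGraphParser()
parser.feed(page)
# parser.og now maps each og: property to its content string.
```

The same structured properties are what Facebook and other consumers read to build the typed links of the graph.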

Yandex Social Search App Blocked from Accessing Facebook Data

Josh Constine of TechCrunch reports, “Yandex begged Facebook not to shut down its social search app Wonder that launched [last week]. But the explanation Yandex’s lawyers sent us for why it’s compliant with Facebook’s policies didn’t stop Facebook from blocking all API calls from Wonder, Yandex confirms. Facebook tells me it’s now discussing policy with Yandex. The move follows a trend of Facebook aggressively protecting its data. Wonder has, or should I say had, big potential. When I broke the news that Yandex was readying Wonder earlier this month, I detailed how the voice-activated social search app for iOS let people see what local businesses friends had visited or taken photos at, what music they’d been listening to, and what news they had been reading. It essentially reorganized Facebook’s data into a much more mobile, discoverable format.” Read more

14 Data Trends to Watch for in 2013

Alex Howard recently shared 14 trends to watch for in 2013. He writes, “The idea of ‘lean government’ gained some traction in 2012, as cities and agencies experimented with applying the lean startup approach to the public sector. With GOV.UK, the British government both redefined the online government platform and showed how citizen-centric design can be done right. In 2013, the worth of a lean government approach will be put to the test when the work of the White House Innovation Fellows is released.” Similarly, “Gartner analyst Andrea DiMaio is now looking at the intersection of government and technology through the lens of ‘smart government.’ In 2013, I expect to hear much more about that, from smartphones to smarter cities to smart disclosure.” Read more

Data in 2013: What Will it Look Like?

Marjorie Teresa R. Perez of the Business Mirror recently questioned what data will look like in 2013. She writes, “There are many, many people who talk about this issue. For instance, Director, Market Insight and Strategy at Amdocs Michal Harris—who is awesome, by the way—says that we’re going to see a move away from service providers just worrying about the operational challenge of managing data to operators beginning to realize the business opportunity that big data brings. And if you look at the trends toward data and video traffic, you can see that the number of people enjoying LTE coverage is going to skyrocket in the next couple of years. Unlike the Web today, the semantic Web won’t just reside within computers, laptops and mobile devices. Instead, it will be part of electronics like refrigerators, cars and televisions.” Read more

Hadoop Meets Semantic Technology: Data Scientists Win

Hadoop is on almost every enterprise’s radar – even if they’re not yet actively engaged with the platform and its advantages for Big Data efforts. Analyst firm IDC earlier this year said the market for software related to the Hadoop and MapReduce programming frameworks for large-scale data analysis will have a compound annual growth rate of more than sixty percent between 2011 and 2016, rising from $77 million to more than $812 million.

Yet, challenges remain to leveraging all the possibilities of Hadoop, an Apache Software Foundation open source project, especially as it relates to empowering the data scientist. Hadoop is composed of two sub-projects: HDFS, a distributed file system built on a cluster of commodity hardware so that data stored in any node can be shared across all the servers, and the MapReduce framework for processing the data stored in those files.
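The division of labor between those two sub-projects can be illustrated with the canonical MapReduce example, word count. The sketch below simulates the map, shuffle, and reduce phases locally in plain Python (a real job would run distributed across a cluster, e.g. via Hadoop Streaming, with HDFS supplying the input splits):

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle/sort: group all emitted values by key, as the framework would
    # before handing each key's values to a single reducer.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data meets hadoop", "hadoop stores big data in HDFS"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

The point of the framework is that the mapper and reducer are the only code a developer writes; distribution, fault tolerance, and data locality against HDFS are handled by Hadoop.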

Semantic technology can help solve many of these challenges, Michael A. Lang Jr., VP, Director of Ontology Engineering Services at Revelytix, Inc., told an audience gathered at the Semantic Technology & Business Conference in New York City yesterday.

Read more

The Semantic Web Has Gone Mainstream! Wanna Bet?

In 2005, I started learning about the so-called Semantic Web. It wasn’t till 2008, the same year I started my PhD, that I finally understood what the Semantic Web was really about. At the time, I made a $1000 bet with 3 college buddies that the Semantic Web would be mainstream by the time I finished my PhD. I know I’m going to win! In this post, I will argue why.

Read more

.data Proposal by Stephen Wolfram Gets Responses From Semantic Community

It cannot be denied that Stephen Wolfram knows data. As the person behind Mathematica and Wolfram|Alpha, he has been working with data — and the computation of that data — for a long time. As he said in his blog yesterday, “In building Wolfram|Alpha, we’ve absorbed an immense amount of data, across a huge number of domains. But—perhaps surprisingly—almost none of it has come in any direct way from the visible internet. Instead, it’s mostly from a complicated patchwork of data files and feeds and database dumps.”

The main topic of Wolfram’s post is a proposal about the form and placement of raw data on the internet. In the post, he proposes that .data be created as a new generic Top-Level Domain (gTLD) to hold data in a “parallel construct.”

Read more

Boston City Hall – Doctoral Fellowship Available

The Harvard Boston Research Initiative [HBRI], in conjunction with the City of Boston, has announced that it is seeking applicants for a part-time graduate fellowship. The fellowship runs from February 1, 2012 through August 31, 2012, and applications are due January 1, 2012. The announcement states, “The fellowship is funded by the Radcliffe Institute for Advanced Study at Harvard University, but doctoral students from any school in the greater Boston area with strong skills in data management and analysis, and an interest in computational social science are encouraged to apply.”

“Fellows will work 15-25 hours/week, mainly at Boston City Hall, and will be paid $20/hour. While at City Hall, fellows will spend much of their time working closely with a team of policy makers and researchers interested in using new types of data to carry out analyses that can improve both public policy and scholarship about key urban issues.” Read more

LMI Announces the Climate Change Knowledge Engine

A recent article announced that “LMI, a leader in helping the public sector address energy and environmental issues, has launched its Climate Change Knowledge Engine™ (LMI-CliCKE™), a groundbreaking tool for the easy consumption of climate change data. The tool combines open-source semantic web technology and data from the public domain in a way that is accessible to nonscientific leaders in the public and private sectors. LMI-CliCKE (pronounced ‘click’) is free to use and available to the public now at http://clicke.lmi.org/.” Read more

Sentiment Analysis v. Semantic Analysis

A recent article examines the shortcomings of sentiment analysis and how semantic analysis can help. According to the article, “For years, sentiment has been a widely used measure of how customers view a company’s products and services. But sentiment analysis has inherent flaws. First is what it cannot tell you because it only considers a small amount of the available data. Only about 25 percent of posts actually contain sentiment, either positive or negative, which means three out of four posts are neutral, revealing no sentiment, and are effectively being ignored by the analysis. Thus, decisions are being based on what only a quarter of the posts are saying. Another problem with sentiment is statistical confidence in the data. Simply stated, all methods of sentiment analysis rely on example data that, whittled down, reveals a low level of confidence about the sentiment being identified, either positive or negative. Data with such low confidence is a poor foundation for sentiment analysis.” Read more
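The coverage problem the article describes can be sketched with a toy lexicon-based classifier (the lexicon and posts below are invented for illustration, not taken from any real sentiment system):

```python
# Tiny illustrative sentiment lexicons.
POSITIVE = {"love", "great", "excellent"}
NEGATIVE = {"hate", "terrible", "broken"}

def classify(post):
    # Naive lexicon lookup: a post is neutral unless it contains a cue word.
    words = set(post.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

posts = [
    "I love this phone",
    "Just ordered the new model",
    "Shipping took four days",
    "The battery case arrived today",
]
labels = [classify(p) for p in posts]
neutral_share = labels.count("neutral") / len(posts)
# Most posts carry no sentiment cue at all, so a sentiment-only analysis
# silently discards them and bases its conclusions on the remainder.
```

Here three of four posts are neutral, mirroring the roughly 75 percent the article says sentiment analysis ignores; semantic analysis aims to extract meaning from those neutral posts too.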
