Orbis is looking for a Software Developer – Cloud and Big Data in Annapolis, MD. The post states, “Join a team-oriented environment, participate in Scrum meetings, drive and execute various technical project phases from development through implementation of advanced web technology solutions for commercial and government clients. Many projects include integration of Open Source software with developed web services to create custom solutions for our clients. Develop high quality software designs and programs for customized customer solutions using Java. Use Agile/SCRUM methodologies and help implement semantic web technologies (OWL, RDF).” Read more
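The RDF data model mentioned in the posting boils down to subject–predicate–object triples and pattern matching over them. As a minimal, purely illustrative sketch (real projects would use a library such as rdflib; the resource names here are invented):

```python
# Minimal sketch of an RDF-style triple store in plain Python.
# Names like "ex:acme" are invented for illustration.

triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# A few facts about a fictional client project:
add("ex:acme", "rdf:type", "ex:Client")
add("ex:acme", "ex:usesProduct", "ex:SearchService")
add("ex:SearchService", "rdf:type", "ex:OpenSourceIntegration")

# Which resources are clients?
print(query(predicate="rdf:type", obj="ex:Client"))
# → [('ex:acme', 'rdf:type', 'ex:Client')]
```

Query languages like SPARQL generalize exactly this kind of wildcard pattern matching over triples.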
Ron Miller of TechCrunch reports, “IBM today announced a new product called Watson Analytics, one they claim will bring sophisticated big data analysis to the average business user. Watson Analytics is a cloud application that does all of the heavy lifting related to big data processing by retrieving the data, analyzing it, cleaning it, building sophisticated visualizations and offering an environment for communicating and collaborating around the data. And lest you think that IBM is just slapping on the Watson label because it’s a well-known brand (as I did), Eric Sall, VP of worldwide marketing for business analytics at IBM, says that’s not the case. The technology underlying the product, including the ability to process natural language queries, is built on Watson technology.” Read more
Big Data has been getting its fair share of commentary over the last couple of months. Surveys from multiple sources have commented on trends and expectations. The Semantic Web Blog provides some highlights here:
- From Accenture Analytics’ new Big Success With Big Data report: There remain some gaps in what constitutes Big Data for respondents to its survey: Just 43 percent, for instance, classify unstructured data as part of the package. That option included open text, video and voice. Those are gaps that could be filled leveraging technologies such as machine learning, speech recognition and natural language understanding, but they won’t be unless executives make these sources a focus of Big Data initiatives to start with.
- From Teradata’s new survey on Big Data Analytics in the UK, France and Germany: Close to 50 percent of respondents in the latter two countries are using three or more data types (from sources ranging from social media, to video, to web blogs, to call center notes, to audio files and the Internet of Things) in their efforts, compared to just 20 percent in the UK. A much higher percentage of UK businesses (51 percent) are currently using just a single type of new data, such as video data, compared with France and Germany, where only 21 percent are limiting themselves to one type of new data, it notes. Forty-four percent of execs in Germany and 35 percent in France point to social media as the source of the new data. About one-third of respondents in each of those countries are investigating video, as well.
Apigee wants the development community to be able to seamlessly take advantage of predictive analytics in their applications.
“One of the biggest things we want to ensure is that the development community gets comfortable with powering their apps with data and insights,” says Anant Jhingran, Apigee’s VP of products and formerly CTO for IBM’s information management and data division. “That is the next wave that we see.”
Apigee wants to help them ride that wave, enabling their business to better deal with customer issues, from engagement to churn, in more personal and contextual ways. “We are in the business of helping customers take their digital assets and expose them through clean APIs so that these APIs can power the next generation of applications,” says Jhingran. But in thinking through that core business, “we realized the interactions happening through the APIs represent very powerful signals. Those signals, when combined with other contextual data that may be in the enterprise, enable some very deep insights into what is really happening in these channels.”
With today’s announcement of a new version of its Apigee Insights big data platform, all those signals generated – through API and web channels, call centers, and more – can come together in the service of predictive analytics for developers to leverage.
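The idea of treating API interactions as "signals" can be made concrete: raw events are aggregated per user into counts that a predictive model (for churn, engagement, and so on) can consume as features. A minimal sketch, with invented event names and a deliberately naive risk rule standing in for a real model:

```python
from collections import Counter

# Hypothetical API interaction log: (user_id, event) pairs.
events = [
    ("u1", "api_call"), ("u1", "api_call"), ("u1", "support_ticket"),
    ("u2", "api_call"),
    ("u3", "support_ticket"), ("u3", "support_ticket"),
]

def signal_features(log):
    """Aggregate raw events into per-user counts usable as model features."""
    features = {}
    for user, event in log:
        features.setdefault(user, Counter())[event] += 1
    return features

feats = signal_features(events)

# A naive churn flag standing in for a trained model: users filing more
# support tickets than they make API calls.
at_risk = [u for u, c in feats.items()
           if c["support_ticket"] > c["api_call"]]
print(sorted(at_risk))  # → ['u3']
```

In practice these feature vectors would be joined with other contextual enterprise data before being fed to an actual predictive model.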
What best practices should inform your company’s text analytics initiatives? Executive Lessons on Modern Text Analytics, a new white paper prepared by Geoff Whiting, principal at GWhiting.com, and Alesia Siuchykava, project director at Data Driven Business, provides some insight. Contributors to the lessons shared in the report include Ramkumar Ravichandran, Director of Analytics at Visa, and Matthew P.T. Ruttley, Manager of Data Science at Mozilla Corp.
One of the interesting points made in the paper is that text analytics can be applied to many use cases: customer satisfaction and management effectiveness, product design insights, and enhancing predictive data modeling as well as other data processes. But at the same time, a takeaway is that it is better for text analytics teams to follow a narrow path than to try to accommodate a wide-ranging deployment. “All big data initiatives, and especially initial text analytics, need a specific strategy,” the writers note, preferably focusing on “low-hanging fruit through simple business problems and use cases where text analytics can provide a small but fast ROI.”
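A narrowly scoped "low-hanging fruit" pass can be as simple as flagging support tickets that mention dissatisfaction keywords, giving a fast, measurable starting point before investing in heavier NLP. A minimal sketch, with keywords and tickets invented for illustration:

```python
# Narrow text-analytics sketch: flag tickets containing dissatisfaction
# keywords. The term list and tickets are invented for illustration.

NEGATIVE_TERMS = {"refund", "cancel", "broken", "slow"}

tickets = [
    "App is slow after the update",
    "Thanks, everything works great",
    "Please cancel my subscription and issue a refund",
]

def flag_unhappy(texts):
    """Return texts whose words intersect the negative-term list."""
    flagged = []
    for text in texts:
        words = set(text.lower().split())
        if words & NEGATIVE_TERMS:
            flagged.append(text)
    return flagged

print(len(flag_unhappy(tickets)))  # → 2
```

Even this crude keyword match yields a trackable metric (share of unhappy tickets), which is exactly the kind of small-but-fast ROI the paper recommends targeting first.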
IBM Taps Global Network of Innovation Centers to Fuel Linux on Power Systems for Big Data and Cloud Computing
CHICAGO, Aug. 22, 2014 /PRNewswire/ — At the LinuxCon North America conference last week, IBM (NYSE: IBM) announced it is tapping into its global network of over 50 IBM Innovation Centers and IBM Client Centers to help IBM Business Partners, IT professionals, academics, and entrepreneurs develop and deliver new Big Data and cloud computing software applications for clients using Linux on IBM Power Systems servers. Read more
Gil Press of Forbes reports, “Gartner released last week its latest Hype Cycle for Emerging Technologies. Last year, big data reigned supreme, at what Gartner calls the ‘peak of inflated expectations.’ But now big data has moved down the ‘trough of disillusionment’ replaced by the Internet of Things at the top of the hype cycle. In 2012 and in 2013 Gartner’s analysts thought that the Internet of Things had more than 10 years to reach the ‘plateau of productivity’ but this year they give it five to ten years to reach this final stage of maturity. The Internet of Things, says Gartner, ‘is becoming a vibrant part of our, our customers’ and our partners’ business and IT landscape’.” Read more
Jonathan Vanian of GigaOM reports, “EverString, a big data startup that helps companies identify prospective sales leads and new clients through predictive analytics, has raised $12 million in a series A funding round. Lightspeed Venture Partners led the round, which also included existing investors Sequoia Capital and IDG Ventures. While there are a host of marketing analytics services in the market like Silverpop and Eloqua that businesses use to aggregate numerous sales leads and find potential customers, EverString’s technology goes beyond whatever data is hosted internally within a company and branches out to the open web, explained EverString’s co-founder and CEO, Vincent Yang.” Read more
Deborah Gage of The Wall Street Journal reports, “Making big data stores as easy to search as Internet data has been a holy grail for the software industry, and it’s become a more pressing problem since the growth of the big data software Hadoop, which holds enormous amounts of data. Adatao Inc., a startup based in Sunnyvale, Calif., has raised nearly $13 million in Series A funding led by Andreessen Horowitz to take on the challenge. Founded in 2012 by veterans of Google Inc., Yahoo Inc. and the Army Research Lab, the company combines machine learning, natural language processing and in-memory (i.e. fast) computing to create a system in which users can write queries in ordinary English or one of several computer languages (Smart Query, SQL, Scala, Java, Python or R) and get results in less time than it takes to speak their questions.” Read more
Chloe Green of Information Age recently wrote, “Handling immense data sets requires a combination of scientific and technological skills to determine how data is stored, searched and accessed. In science, the importance of data scientists in ensuring that data is handled correctly from the outset is not underestimated; other industries can learn from the scientific approach. Text-mining tools and the use of relevant taxonomies are essential. If we think about big data as a huge number of data points in some multi-dimensional space, the problem is one of analysis, i.e. frequently finding very similar or very dissimilar points which cannot be compared. In life sciences, taxonomies assign data points a class, thus comparison of two points is as easy as looking up other data points in the same class.” Read more
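The class-lookup idea described above can be sketched directly: once each data point carries a taxonomy class, "find similar points" becomes a dictionary lookup rather than a pairwise comparison across the whole data set. The classes and point names below are invented for illustration:

```python
from collections import defaultdict

# Taxonomy sketch: each data point is assigned a class, so similarity
# within a class is a lookup. Points and classes are invented examples.

points = [
    ("BRCA1", "gene"),
    ("TP53", "gene"),
    ("aspirin", "compound"),
    ("ibuprofen", "compound"),
]

by_class = defaultdict(list)
for name, taxonomy_class in points:
    by_class[taxonomy_class].append(name)

def similar(name):
    """Return other points sharing the query point's taxonomy class."""
    for members in by_class.values():
        if name in members:
            return [m for m in members if m != name]
    return []

print(similar("TP53"))  # → ['BRCA1']
```

Building the index is a single pass over the data, after which each similarity query touches only the members of one class instead of every point in the multi-dimensional space.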