Posts Tagged ‘Oracle’
Oversight Systems is in the business of Big Data analytics. Come June, it will also be in the business of having its technology serve as a platform behind third-party, on-demand business intelligence and analytics applications – including its ontology approach for integrating data from disparate enterprise systems.
The company currently provides packaged solutions that let front-line employees involved in processes such as procure-to-pay or order-to-cash run continuous transaction analysis, surfacing transactions that violate business rules so the business can act to close gaps and ensure compliance with operational and regulatory requirements. The ontology it has developed over the years, which includes proprietary semantic and relationship information and infers some additional information, helps with the acquisition and preparation of data.
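To make the idea of continuous transaction analysis concrete, here is a minimal sketch of rule-based screening over invoices. All names, types, and rules below are invented for illustration; they are not Oversight's actual API or rule set.

```python
# Minimal sketch of rule-based transaction screening: each invoice is
# checked against simple business rules, and violations are reported so
# someone can act on them. The rules here are illustrative only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Invoice:
    vendor: str
    amount: float
    po_number: Optional[str]  # purchase order reference, if any

def violations(inv: Invoice) -> list:
    """Return the business rules this invoice breaks (empty list = clean)."""
    found = []
    if inv.po_number is None:
        found.append("invoice lacks a matching purchase order")
    if inv.amount > 10_000:
        found.append("amount exceeds single-invoice approval limit")
    return found

# A large invoice with no purchase order trips both rules.
flagged = violations(Invoice(vendor="Acme", amount=25_000, po_number=None))
```

In a real deployment the rules would be far richer and run continuously against feeds from the enterprise systems, but the shape of the check is the same.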
As we close out 2012, we’ve asked some semantic tech experts to give us their take on the year that was. Was Big Data a boon for the semantic web, or is the opportunity to capitalize on the connection still pending? Is structured data on the web not just the future but the present? What sector is taking a strong lead in the semantic web space?
We begin with Part 1, with our experts listed in alphabetical order:
John Breslin, lecturer at NUI Galway, researcher and unit leader at DERI, creator of SIOC, and co-founder of Technology Voice and StreamGlider:
I think it’s been fantastic to see the schema.org initiative gain real community support and a broader range of terms. It’s been great to see an easily understandable set of terms for describing the objects in web pages, one that leverages the experience of work like GoodRelations rather than ignoring what has gone before. It’s also been encouraging to see the growth of Drupal 7 (which produces RDFa data) in the government sector: estimates are that 24 percent of .gov CMS sites are now powered by Drupal.
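The payoff of markup like RDFa with schema.org terms is that a page yields machine-readable triples. The sketch below fakes that output with plain (subject, predicate, object) tuples rather than a real extractor; the page URL and values are hypothetical, and a real pipeline would use an RDF library such as rdflib.

```python
# Sketch: the kind of triples that RDFa markup using schema.org terms
# produces once extracted from a page. Triples are plain Python tuples
# here; the URLs and literal values are invented for illustration.

SCHEMA = "http://schema.org/"

page = "http://example.org/widget-page"  # hypothetical product page
triples = [
    (page, SCHEMA + "name", "Acme Widget"),
    (page, SCHEMA + "offers", page + "#offer"),
    (page + "#offer", SCHEMA + "price", "19.99"),
]

# A one-pattern query: find every schema:name in the graph.
names = [o for (s, p, o) in triples if p == SCHEMA + "name"]
```

Because the predicates come from a shared vocabulary, any consumer that knows schema.org can run the same query against any site's extracted data.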
Martin Böhringer, CEO & Co-Founder Hojoki:
For us it was very important to see Jena, our Semantic Web framework, become an Apache top-level project in April 2012. We have seen rapid development in this project recently, and we see a chance to build an open-source Semantic Web foundation that can handle cutting-edge requirements.
Still disappointing is the missing link between the Semantic Web and the “cool” technologies and buzzwords. From what we see, the Semantic Web offers answers to some of the industry’s most challenging problems, but it still hasn’t really found its place in relation to the cloud or Big Data (Hadoop).
Christine Connors, Chief Ontologist, Knowledgent:
One trend that I have seen is increased interest in the broader spectrum of semantic technologies in the enterprise. Graph stores, NoSQL, schema-less and more flexible systems, ontologies (& ontologists!) and integration with legacy systems. I believe the Big Data movement has had a positive impact on this field. We are hearing more and more about “Big Data Analytics” from our clients, partners and friends. The analytical power brought to bear by the semantic technology stack is sparking curiosity – what is it really? How can these models help me mitigate risk, more accurately predict outcomes, identify hidden intellectual assets, and streamline business processes? Real questions, tough questions: fun challenges!
Bob Evans of Oracle has written an article for Forbes regarding the future of Big Data. He writes, “If you think we’ve got Big Data problems now—with “only” about 9 billion devices connected to the Internet—what’s the situation going to be like when that number soars to 50 billion at the end of the decade? Oracle president Mark Hurd recently raised the possibility that unless businesses and government agencies can seize control over that Big Data explosion, then they’ll run the risk of simply being overwhelmed by vast volumes of data that they can’t find, control, manage, or secure—let alone analyze and exploit.”
He goes on, “What happens when that already-tricky situation is compounded dramatically as an additional 40 billion devices get connected to the Internet over the next several years and begin streaming out massive volumes of data about speeds and location and performance degradation and volume of usage – and even such vital but narrowly focused applications as whether or not your morning coffee is ready?”
Callimachus is getting an update. It’s been quiet for a few months over at the framework for data-driven applications based on Linked Data principles, but with good reason, says David Wood, CTO of Callimachus project sponsor 3 Round Stones. 3 Round Stones also offers Callimachus Enterprise, winner of this year’s Startup Competition at the Semantic Technology and Business Conference in San Francisco.
Big things were on the menu for this release, which should emerge from beta today. To date, all the RDF that Callimachus has dealt with has been local to it, Wood explains. “People have been saying for ages, ‘But I don’t want to copy the LOD cloud into Callimachus to deal with it. I want to deal with a lot of data out there in the world, in enterprise systems, an Oracle server, or the LOD cloud,’” he says.
The new release takes on the challenge of dealing with data that’s external to Callimachus.
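A common standards-based way to reach data outside a local store is SPARQL 1.1 federation, where the `SERVICE` keyword delegates part of a query to a remote endpoint. The sketch below just composes such a query string; it shows the general mechanism, not Callimachus's actual internals, and the endpoint URL is only an example.

```python
# Sketch: composing a SPARQL 1.1 federated query. The SERVICE clause asks
# a remote endpoint to evaluate part of the pattern, so local and external
# data can be joined in one query. Endpoint and vocabulary are examples.

def federated_query(remote_endpoint: str) -> str:
    """Build a query that joins local ?person bindings with names
    fetched from a remote SPARQL endpoint."""
    return f"""
    SELECT ?person ?name WHERE {{
      ?person a <http://xmlns.com/foaf/0.1/Person> .
      SERVICE <{remote_endpoint}> {{
        ?person <http://xmlns.com/foaf/0.1/name> ?name .
      }}
    }}
    """

q = federated_query("http://dbpedia.org/sparql")
```

Whatever mechanism Callimachus itself uses under the hood, the user-visible goal is the same: query external RDF without first copying it into the local store.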
Oracle is looking for Software Developers in Hillsboro, Oregon. This position will “Design, develop, troubleshoot and/or test/QA software. Specifically, write design specifications. Provide scoping, estimation of bugs/enhancements. Perform software development tasks associated with developing, debugging, or designing software applications or operating systems according to provided design specifications. Build enhancements within an existing software architecture and occasionally suggest improvements to the architecture. Work with Quality Assurance team to review QA test plans for projects being developed. Work with documentation team to review documentation for relevant areas. Work with development manager and other Oracle team members on assigned projects. Utilizes graduate-level research and analysis skills.”
Oracle has released a new report entitled From Overload to Impact: An Industry Scorecard on Big Data Business Challenges. The report shows that the data deluge is here, and that companies are failing to fully capitalize on it because their tools are too few and too weak – a gap that Semantic Web technologies are starting to fill.
Oracle states, “Executives say they are not prepared to handle the increasing amount of data they face. Twenty-nine percent of executives give their organization a ‘D’ or ‘F’ in preparedness to manage the data deluge, and 93 percent believe their organization is losing revenue opportunities – representing on average, 14 percent of revenue – by not being able to fully leverage the information they collect.”
Chris Kanaracus reports that Oracle has acquired Collective Intellect for the company’s social intelligence tools. Kanaracus writes, “Collective Intellect’s platform includes a semantic analytics engine that derives insights from ‘tens of millions of conversations daily, transforming social conversations into actionable intelligence,’ according to an FAQ document on the acquisition. The Boulder, Colorado, company was founded in 2005 and counts CNBC, Viacom Media Networks and General Mills among its customers.”
He goes on, “This is the second social media-related acquisition for Oracle within the past month, coming shortly after the purchase of Vitrue, a maker of tools for running marketing campaigns through social media. Combined, the Vitrue and Collective Intellect technologies will create the industry’s ‘most advanced and comprehensive social relationship platform,’ Oracle said in a statement.”
Clark & Parsia’s Stardog lightweight RDF database is moving into release candidate 1.0 mode just in time for next week’s Semantic Technology & Business Conference in San Francisco. The product has been stable and usable for a while now, but a 1.0 designation still carries weight with a good number of IT buyers.
The focus for the product, says co-founder and managing principal Kendall Clark, is to be optimized for what he calls the fat part of the market – and that’s not the part dealing with a trillion RDF triples. “Most people and organizations don’t need to scale to trillions of anything,” though scaling up, and up, and up, is where most of Clark & Parsia’s competitors have focused their attention, he says. “We’ve seen a significant percentage of what people are doing with semantic technology, and most applications are not at a billion triples today.” Take as an example Clark & Parsia’s customer NASA, which built an expertise-location system based on semantic technology that today is still not more than 20 million triples. “You might say that’s a little toy, but not if you are at NASA and need to find experts; it is a real, valuable thing, and we see this all the time,” he says.
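An expertise-location system of the kind described reduces, at its core, to matching triple patterns over a modest graph. The sketch below shows that shape with a tiny in-memory triple set; the people, topics, and the `ex:` vocabulary are all invented for illustration and have nothing to do with NASA's actual data.

```python
# Minimal sketch of an expertise-location lookup over a small triple set.
# A real system would hold millions of triples in an RDF store like
# Stardog and answer this with a SPARQL query; the logic is the same.

EX = "http://example.org/vocab#"

triples = [
    ("ex:alice", EX + "hasExpertise", "cryogenics"),
    ("ex:bob",   EX + "hasExpertise", "propulsion"),
    ("ex:carol", EX + "hasExpertise", "cryogenics"),
]

def experts_in(topic: str) -> list:
    """Return subjects with the given expertise (a one-pattern query)."""
    return [s for (s, p, o) in triples
            if p == EX + "hasExpertise" and o == topic]

found = experts_in("cryogenics")
```

At 20 million triples this kind of query is comfortably within reach of a single-machine RDF store, which is exactly Clark's point about the fat part of the market.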