Research this month from MindMetre Research shows that 89 percent of organizations believe they need greater insight into their growing volumes of unstructured data to improve their commercial position and gain a competitive edge. That insight, the research reports, could feed a number of business-boosting scenarios. “This content can be used to provide insights for proposals and projects, to inform business relationships, to enable collaboration, to avoid repetition of research, to repurpose content, and generally to streamline the flow of enterprise knowledge and avoid replication of work already done,” says Paul Lindsell, Managing Director of MindMetre.
Posts Tagged ‘Oracle’
Connect The Dots: Embarcadero Technologies’ Update Integrates Metadata Governance Repository Knowledge With Its Database Tools
Embarcadero Technologies has released an update of its database tools – ER/Studio, DBArtisan, Rapid SQL, DB Optimizer and DB Change Manager XE5 – that, among its new features, includes integration with the company’s Connect metadata governance repository. Connect, which The Semantic Web Blog covered here, keeps all the information about an enterprise’s data – what it means and where it is – to bridge the gap between the work of governance teams and that of day-to-day operations.
“We are providing terrific metadata integration right in the product,” says Henry Olson, director of product management. It is effectively the first instance of collaboration, syndication and integration across ER/Studio, Embarcadero’s data architecture and modeling tools; DB PowerStudio, its database development, administration and performance-tuning solutions; and Connect. “That’s a deep theme for us because it is a perennial problem in large organizations to make the work of the data architect team more broadly available,” he says, “and to make others more aware of the data assets and better able to use them.”
Startup Elementum wants to take supply chains into the 21st century. Incubated at Flextronics, the second-largest contract manufacturer in the world, and launching today with $44 million in Series B funding from that company and Lightspeed Ventures, the startup’s approach is to get supply chain participants to drop their one-off relational database integrations and instead see the supply chain fundamentally as a complex graph, or web, of connections. Those participants span the OEMs that generate product ideas and designs, the contract manufacturers that build to those specs, the component makers that supply the ingredients to make the product, the various logistics hubs that move finished product to market, and the retail customer.
“It’s no different thematically from how Facebook thinks of its social network or how LinkedIn thinks of what it calls the economic graph,” says Tyler Ziemann, head of growth at Elementum. Built on Amazon Web Services, Elementum’s “mobile-first” apps for real-time visibility, shipment tracking and carrier management, risk monitoring and mitigation, and order collaboration have a back-end built to consume and make sense of both structured and unstructured data on the fly. That back-end is based on real-time Java; a MongoDB NoSQL document database, to scale simply and less expensively across a global supply chain that fundamentally involves many trillions of records; and a flexible-schema graph database, to store and map the nodes and edges of the supply chain graph.
“Relational database systems can’t scale to support the types of data volumes we need and the flexibility that is required for modeling the supply chain as a graph,” Ziemann says.
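To make the graph framing concrete, here is a minimal sketch of a supply chain modeled as nodes and directed edges, with a traversal that answers the kind of risk-monitoring question Elementum’s apps address (which downstream parties a disruption could affect). The node names and structure are hypothetical illustrations, not Elementum’s actual schema or code.

```python
from collections import defaultdict, deque

# Hypothetical participants; edges point downstream, from a supplier
# to the party it ships to.
edges = [
    ("ComponentCo", "ContractMfg"),   # component maker supplies the builder
    ("ContractMfg", "OEM"),           # contract manufacturer builds to spec
    ("OEM", "LogisticsHub"),          # finished goods move to a hub
    ("LogisticsHub", "Retailer"),     # hub ships to the retail customer
]

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def downstream(node):
    """Breadth-first traversal: every party reachable from `node`,
    i.e. everyone a disruption at `node` could affect."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A failure at the component maker ripples through the whole chain.
print(sorted(downstream("ComponentCo")))
```

In a relational schema this query would mean recursive self-joins across partner-specific integration tables; in a graph model it is a single traversal, which is the flexibility Ziemann is pointing to.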
The media has been reporting in the last few hours on the Obama administration’s self-imposed deadline for fixing HealthCare.gov. According to these reports, the site now works more than 90 percent of the time, up from 40 percent in October; pages load in less than a second, down from about eight; 50,000 people can use the site simultaneously, and it supports 800,000 visitors a day; and page-load failures are down to under 1 percent.
There’s also word, however, that while the front-end may be improved, there are still problems on the back-end. Insurance companies continue to complain that they aren’t correctly getting the information they need to support signups. “The key question,” according to CBS News reporter John Dickerson this morning, “is whether that link between the information coming from the website getting to the insurance company – if that link is not strong, people are not getting what was originally promised in the entire process.” If insurance companies aren’t getting the right information for processing plan enrollments, individuals going to the doctor after January 1 may find that they aren’t, in fact, covered.
Jeffrey Zients, the man spearheading the website fix, pointed out at the end of November that work remains to be done on the back-end for tasks such as coordinating payments and application information with insurance companies. Plans are for that to be in effect by mid-January.
As it turns out, among the components of the site’s back-end technology, according to this report in the NY Times, is the MarkLogic Enterprise NoSQL database, which in its recent Version 7 release added the ability to store and query data in RDF format using SPARQL syntax.
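For readers unfamiliar with the RDF model that Version 7 adds, data is stored as subject–predicate–object triples and queried by matching patterns in which any position can be a variable. The toy sketch below illustrates that idea in plain Python; the identifiers are invented for illustration and this is not MarkLogic’s API or the actual HealthCare.gov data model.

```python
# RDF data as subject-predicate-object triples (hypothetical identifiers).
triples = [
    ("app:1234", "ex:enrolledIn", "plan:silver-ga"),
    ("app:1234", "ex:submittedOn", "2013-12-01"),
    ("plan:silver-ga", "ex:issuedBy", "ins:acme"),
]

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly equivalent to: SELECT ?plan WHERE { app:1234 ex:enrolledIn ?plan }
plans = [o for _, _, o in match(s="app:1234", p="ex:enrolledIn")]
print(plans)  # ['plan:silver-ga']
```

In a real triple store the same pattern matching runs over indexed billions of triples, and SPARQL adds joins across patterns, but the query model is the one shown.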
Oversight Systems is in the business of Big Data analytics. Come June, it also will be in the business of having its technology serve as a platform behind third-party business intelligence and analytics applications on-demand – including its ontology approach for integrating data from disparate enterprise systems.
The company currently provides packaged solutions that let front-line employees involved in processes such as procure-to-pay or order-to-cash conduct continuous transaction analysis for insights into transactions that violate business rules, so that the business can take action to close gaps and ensure compliance with operational and regulatory requirements. The ontology it has developed over the years, which includes proprietary semantic and relationship information and infers some additional information, helps with the acquisition and preparation of data.
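The continuous transaction analysis described above amounts to running every transaction through a set of business rules and surfacing the violations. Here is a minimal sketch of that pattern; the rules, field names and records are hypothetical, since Oversight’s actual rule engine and ontology are proprietary.

```python
# Hypothetical procure-to-pay records.
transactions = [
    {"id": "T1", "vendor": "Acme", "amount": 4_800, "po": "PO-77"},
    {"id": "T2", "vendor": "Acme", "amount": 52_000, "po": None},
    {"id": "T3", "vendor": "Beta", "amount": 1_200, "po": "PO-81"},
]

# Each rule is a predicate that flags a transaction violating it.
rules = {
    "missing_po": lambda t: t["po"] is None,          # payment with no purchase order
    "over_limit": lambda t: t["amount"] > 50_000,     # exceeds approval threshold
}

def violations(txns):
    """One monitoring pass: map each rule name to the IDs of the
    transactions that break it, so the business can act on the gaps."""
    return {name: [t["id"] for t in txns if check(t)]
            for name, check in rules.items()}

print(violations(transactions))  # {'missing_po': ['T2'], 'over_limit': ['T2']}
```

The ontology’s role in Oversight’s products sits upstream of a pass like this, normalizing fields from disparate enterprise systems so the same rules can apply across them.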
As we close out 2012, we’ve asked some semantic tech experts to give us their take on the year that was. Was Big Data a boon for the semantic web, or is the opportunity to capitalize on the connection still pending? Is structured data on the web not just the future but the present? What sector is taking a strong lead in the semantic web space?
We begin with Part 1, with our experts listed in alphabetical order:
John Breslin, lecturer at NUI Galway, researcher and unit leader at DERI, creator of SIOC, and co-founder of Technology Voice and StreamGlider:
I think it has been fantastic to see the schema.org initiative really gaining community support and a broader range of terms. It’s been great to see an easily understandable set of terms for describing the objects in web pages, and to see it leveraging the experience of work like GoodRelations rather than ignoring what has gone before. It’s also been encouraging to see the growth of Drupal 7 (which produces RDFa data) in the government sector: Estimates are that 24 percent of .gov CMS sites are now powered by Drupal.
Martin Böhringer, CEO & Co-Founder Hojoki:
For us it was very important to see Jena, our Semantic Web framework, become an Apache top-level project in April 2012. We have seen a lot of development activity in this project recently, and see a chance to build an open-source Semantic Web foundation that can handle cutting-edge requirements.
Still disappointing is the missing link between the Semantic Web and the “cool” technologies and buzzwords. From what we see, the Semantic Web gives answers to some of the industry’s most challenging problems, but it still doesn’t seem to have really found its place in relation to the cloud or Big Data (Hadoop).
Christine Connors, Chief Ontologist, Knowledgent:
One trend that I have seen is increased interest in the broader spectrum of semantic technologies in the enterprise. Graph stores, NoSQL, schema-less and more flexible systems, ontologies (& ontologists!) and integration with legacy systems. I believe the Big Data movement has had a positive impact on this field. We are hearing more and more about “Big Data Analytics” from our clients, partners and friends. The analytical power brought to bear by the semantic technology stack is sparking curiosity – what is it really? How can these models help me mitigate risk, more accurately predict outcomes, identify hidden intellectual assets, and streamline business processes? Real questions, tough questions: fun challenges!
Bob Evans of Oracle has written an article for Forbes regarding the future of Big Data. He writes, “If you think we’ve got Big Data problems now—with “only” about 9 billion devices connected to the Internet—what’s the situation going to be like when that number soars to 50 billion at the end of the decade? Oracle president Mark Hurd recently raised the possibility that unless businesses and government agencies can seize control over that Big Data explosion, then they’ll run the risk of simply being overwhelmed by vast volumes of data that they can’t find, control, manage, or secure—let alone analyze and exploit.”
He goes on, “What happens when that already-tricky situation is compounded dramatically as an additional 40 billion devices get connected to the Internet over the next several years and begin streaming out massive volumes of data about speeds and location and performance degradation and volume of usage and even such vital but narrowly focused applications as whether or not your morning coffee is ready?”