Posts Tagged ‘NoSQL’

Blurred Lines: RDBMS Vendors Will Add NoSQL Database Features

Research firm Forrester at the end of September issued its Forrester Wave: NoSQL Key-Value Databases, Q3 2014 report. The report looked at seven enterprise-class vendors in the space: Amazon Web Services, Aerospike, Basho Technologies, Couchbase, DataStax, MapR Technologies, and Oracle.

Noting that current adoption of NoSQL stands at 20 percent and is likely to double by 2017, Forrester principal analyst and report author Noel Yuhanna and his co-authors explain that top use cases for key-value databases include social and mobile apps, scale-out apps, Web 2.0, line-of-business apps, big data apps, and operational and analytical apps.

That said, the authors also note that the lines between key-value store, document database, and graph database NoSQL solutions are blurring, as vendors look to satisfy broader enterprise needs and better appeal to app developers. “Relational database management system vendors, such as Oracle, IBM, Microsoft and SAP, will broaden their current relational database products to include key-value, graph and document features and functionality to deliver more comprehensive data management platforms in the coming years,” the report states.

Read more

Big Data Review: What The Surveys Say

Big Data has been getting its fair share of commentary over the last couple of months. Surveys from multiple sources have commented on trends and expectations. The Semantic Web Blog provides some highlights here:

  • From Accenture Analytics’ new Big Success With Big Data report: There remain some gaps in what constitutes Big Data for respondents to its survey: Just 43 percent, for instance, classify unstructured data as part of the package. That option included open text, video and voice. Those are gaps that could be filled by leveraging technologies such as machine learning, speech recognition and natural language understanding, but they won’t be unless executives make these sources a focus of Big Data initiatives to start with.
  • From Teradata’s new survey on Big Data Analytics in the UK, France and Germany: Close to 50 percent of respondents in the latter two countries are using three or more data types (from sources ranging from social media, to video, to web blogs, to call center notes, to audio files and the Internet of Things) in their efforts, compared to just 20 percent in the UK. A much higher percentage of UK businesses (51 percent) are currently using just a single type of new data, such as video data, compared with France and Germany, where only 21 percent are limiting themselves to one type of new data, it notes. Forty-four percent of execs in Germany and 35 percent in France point to social media as the source of the new data. About one-third of respondents in each of those countries are investigating video, as well.

Read more

SPARQL And NoSQL: A Match On Many Levels

Is SPARQL the SQL for NoSQL? The question will be discussed at this month’s Semantic Technology & Business Conference in San Jose by Arthur Keen, VP of solution architecture at startup SPARQL City.

It’s not the first time that the industry has considered common database query languages for NoSQL (see this story at our sister site Dataversity.net for some perspective on that). But as Keen sees it, SPARQL has the legs for the job. “What I know about SPARQL is that for every database [SQL and NoSQL alike] out there, someone has tried to put SPARQL on it,” he says, whereas other common query language efforts may be limited in database support. A factor in SPARQL’s favor is query portability across NoSQL systems. Additionally, “you can achieve much higher performance using declarative query languages like SPARQL because they specify the ‘What’ and not the ‘How’ of the query, allowing optimizers to choose the best way to implement the query,” he explains.
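
To make the “‘What’ and not the ‘How’” point concrete, here is a minimal sketch using the open-source Python rdflib library (our choice for illustration, not SPARQL City’s engine). The SPARQL query declares a graph pattern to match and leaves the execution strategy entirely to the engine’s optimizer, which is the portability and performance argument Keen makes:

```python
# A minimal sketch: run a declarative SPARQL query over a tiny in-memory
# graph with rdflib. The query says WHAT to find, not HOW to find it.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
""", format="turtle")

# Property path: everyone reachable via one or more ex:knows links.
results = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?person WHERE { ex:alice ex:knows+ ?person . }
""")

for row in results:
    print(row.person)
```

The same query text could, in principle, run against any SPARQL-capable store, which is the portability across NoSQL systems the article describes.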

Read more

Semantically Aligned Design Principles At Core of Australian Electronic Health Records Platform

At the upcoming Semantic Technology & Business Conference in San Jose, Dr. Terry Roach, principal of CAPSICUM Business Architects, and Dr. Dean Allemang, principal consultant at Working Ontologist, will host a session on A Semantic Model for an Electronic Health Record (EHR). It will focus on Australia’s electronic-Health-as-a-Service (eHaaS) national platform for personal electronic health records, provided by the CAPSICUM semantic framework for strategically aligned business architectures.

Roach and Allemang participated in an email interview with The Semantic Web Blog to preview the topic:

The Semantic Web Blog: Can you put the work you are doing on the semantic EHR model in context: How does what Australia is doing with its semantic framework compare with how other countries are approaching EHRs and healthcare information exchange?

Roach and Allemang: The eHaaS project that we have been working on is an initiative of Telstra, a large, traditional telecommunications provider in Australia. Its Telstra Health division, which is focused on health-related software investments, has for the past two years embarked on a set of strategic investments in the electronic health space. Since early 2013 it has acquired and/or established strategic partnerships with a number of local and international healthcare software providers, ranging from hospital information systems to mobile health applications, remote patient monitoring systems, personal health records, integration platforms, and health analytics suites.

At the core of these investments is a strategy to develop a platform that captures and maintains diverse health-related interactions in a consolidated lifetime health record for individuals. The eHaaS platform facilitates interoperability and integration of several health service components over a common secure authentication service, data model, infrastructure, and platform. Starting from a base of stand-alone, vertical applications that manage fragmented information across the health spectrum, the eHaaS platform will establish an integrated, continuously improving, shared healthcare data platform that aggregates information from a number of vertical applications, as well as from an external gateway for standards-based eHealth messages, to present a unified picture of an individual’s health care profile and history.
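
As a rough illustration of the consolidation idea (the feeds, field names, and structures below are entirely hypothetical, not the eHaaS platform’s actual model), a few lines of Python show how events from separate vertical applications might be merged into one lifetime record keyed on a shared patient identifier:

```python
# Hypothetical sketch: merge events from stand-alone vertical systems
# into a consolidated lifetime record per patient, keeping provenance.
from collections import defaultdict

# Events as they might arrive from separate vertical applications.
hospital_feed = [
    {"patient_id": "P1", "source": "hospital", "event": "admission", "date": "2014-03-02"},
]
monitoring_feed = [
    {"patient_id": "P1", "source": "remote_monitor", "event": "blood_pressure", "date": "2014-03-05"},
]

lifetime_record = defaultdict(list)
for feed in (hospital_feed, monitoring_feed):
    for event in feed:
        # Consolidate on the shared patient identifier.
        lifetime_record[event["patient_id"]].append(event)

# A unified, chronologically ordered view of one individual's history.
for event in sorted(lifetime_record["P1"], key=lambda e: e["date"]):
    print(event["date"], event["source"], event["event"])
```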

Read more

Big Data Challenges In Banking And Securities

Photo courtesy: Johan Hansson, https://www.flickr.com/photos/plastanka/

A new report from the Securities Technology Analysis Center (STAC), Big Data Cases in Banking and Securities, looks to understand big data challenges specific to banking by studying 16 projects at 10 of the top global investment and retail banks.

According to the report, about half the cases involved a petabyte or more of data. That includes both natural language text and highly structured formats that themselves presented a great deal of variety (such as different departments using the same field for different purposes, or for the same purpose but with different vocabularies) and therefore posed an integration challenge in some cases. The analytic complexity of the workloads studied, the Intel-sponsored report notes, covered everything from basic transformations at the low end to machine learning at the high end.
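
A tiny, invented example of that “same field, different vocabulary” problem, and the kind of normalization step integration then requires (the desks, field names, and status codes are made up for illustration):

```python
# Two departments record trade status with different vocabularies in the
# same field; integration means mapping both onto one canonical set.
CANONICAL_STATUS = {
    # equities desk codes        # fixed-income desk codes
    "FILLED": "settled",         "STL": "settled",
    "OPEN": "pending",           "PND": "pending",
    "CXL": "cancelled",          "CANC": "cancelled",
}

def normalize(record: dict) -> dict:
    """Rewrite a per-desk status code into the shared vocabulary."""
    record = dict(record)  # don't mutate the caller's record
    record["status"] = CANONICAL_STATUS.get(record["status"], "unknown")
    return record

print(normalize({"trade_id": 1, "status": "FILLED"}))  # -> settled
print(normalize({"trade_id": 2, "status": "STL"}))     # -> settled
```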

Read more

Financial Execs Worry About Data Lineage; Triple Stores Can Calm Fears

 

Photo courtesy: Flickr/ FilterForge

The Aite Group, which provides research and consulting services to the international financial services market, spends its fair share of time exploring the data and analytics challenges the industry faces. Senior analyst Virginie O’Shea commented on many of them during a webinar this week sponsored by enterprise NoSQL vendor MarkLogic.

Dealing with multiple data feeds from a variety of systems; feeding information to hundreds of end users with different priorities about what they need to see and how they need to see it; the lack of a common internal taxonomy across the organization that would enable a single identifier for particular data items; the toll that ETL, cleansing, and reconciliation can take on agile data delivery; the limitations in cross-referencing and linking instruments and data to other data, which exact a price on data governance and quality: all of these factor into the picture she sketched out.
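
To sketch how a triple store addresses the lineage worry, here is a minimal example in Python with rdflib and the W3C PROV vocabulary. MarkLogic’s own stack differs, but the basic triple pattern is the essence: every data item gets a single identifier, and each derived item points back to its source, so lineage becomes a simple traversal:

```python
# Minimal lineage sketch: record "derived from" links as triples, then
# walk the chain to answer "which upstream items fed this report?"
from rdflib import Graph, Namespace

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/data/")

g = Graph()
# One identifier per data item makes cross-referencing direct.
g.add((EX.cleansed_price_42, PROV.wasDerivedFrom, EX.vendor_feed_42))
g.add((EX.risk_report_7, PROV.wasDerivedFrom, EX.cleansed_price_42))

# Transitively follow wasDerivedFrom links (the traversal also yields
# the starting node itself).
for item in g.transitive_objects(EX.risk_report_7, PROV.wasDerivedFrom):
    print(item)
```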

Read more

Gartner Uncovers Who’s Cool In The Supply Chain

Photo courtesy: Flickr/a loves dc

Gartner recently released its “Cool Vendors in Supply Chain Services” report, which gives kudos to providers that use cloud computing as an enabler or delivery mechanism for capabilities that help enterprises better manage their supply chains.

On that list of vendors building cloud solutions and leveraging big data and analytics to optimize the supply chain is startup Elementum, which The Semantic Web Blog initially covered here and which envisions the supply chain as a complex graph of connections. As we reported previously, Elementum’s back end is based on real-time Java, a MongoDB NoSQL document database, and a flexible-schema graph database that stores and maps the nodes and edges of a supply chain graph. A URI identifies data resources and metadata, and a federated platform query language makes it possible to access multiple types of data using that URI, regardless of which type of database it is stored in. Mobile apps let end users manage transportation networks, respond to supply chain risks, and monitor the health of the supply chain.
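
As a rough sketch of the supply-chain-as-graph idea (using the open-source networkx library and invented URN-style identifiers, not Elementum’s actual platform), a risk question becomes a graph traversal:

```python
# Sketch: suppliers, plants, and products as graph nodes identified by
# URI-style keys; supply relationships as directed edges.
import networkx as nx

g = nx.DiGraph()
g.add_edge("urn:supplier/acme-resins", "urn:plant/shenzhen-1")
g.add_edge("urn:plant/shenzhen-1", "urn:product/widget-x")
g.add_edge("urn:supplier/acme-resins", "urn:plant/juarez-2")

# Risk query: if this supplier is disrupted, which downstream nodes
# (plants, products) are affected?
affected = nx.descendants(g, "urn:supplier/acme-resins")
print(sorted(affected))
```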

Gartner analyst Michael Dominy writes in the report that Elementum earns its cool designation in part for its exploitation of Gartner’s Nexus of Forces, which the research firm describes as the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios.

Read more

Cruxly Analytics Technology Drives Actions From Intents

Image courtesy: Flickr/ M4D GROUP

What are your customers – or potential clients – saying or asking online, often in short texts and streaming posts, or in emails about your products, services, or their own particular interests or desires? If you can understand their actionable intents in realtime, then you have a good shot at responding swiftly and appropriately to those expressed intents, requests, or queries. That could add up to new sales, new customers, and better marketing and product management.

Startup Cruxly, which is presenting at this week’s Sentiment Analysis Symposium in NYC, believes it’s taking the oft-touted concept of social media monitoring in a new direction with its platform that applies natural language processing techniques for intent detection in realtime. “The idea is to be actionable,” says CEO Aloke Guha. “If it’s not actionable, at most [monitoring] is a nice-to-have [capability].”
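
Cruxly’s NLP pipeline is its own; as a toy stand-in, a keyword-pattern sketch in Python conveys what mapping a short text to an actionable intent looks like in the simplest possible form (the intent labels and patterns are invented for illustration):

```python
# Toy intent detection over short texts: first matching pattern wins.
# Real systems use NLP models, not keyword lists; this just shows the
# input/output shape of the task.
import re

INTENT_PATTERNS = {
    "purchase_intent": re.compile(r"\b(looking to buy|want to buy|shopping for)\b", re.I),
    "support_request": re.compile(r"\b(doesn'?t work|broken|help with)\b", re.I),
    "recommendation_seek": re.compile(r"\b(any suggestions|recommend(ation)?s?)\b", re.I),
}

def detect_intent(text: str) -> str:
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return intent
    return "no_actionable_intent"

print(detect_intent("Looking to buy a new tablet, any suggestions?"))
# -> purchase_intent (the first pattern that matches)
```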

Read more

NoSQL’s Data Modeling Advantages


Jennifer Zaino recently wrote an article for our sister website DATAVERSITY on the evolving field of NoSQL databases. Zaino wrote, “Hadoop HBase. MongoDB. Cassandra. Couchbase. Neo4J. Riak. Those are just a few of the sprawling community of NoSQL databases, a category that originally sprang up in response to the internal needs of companies such as Google, Amazon, Facebook, LinkedIn, Yahoo and more – needs for better scalability, lower latency, greater flexibility, and a better price/performance ratio in an age of Big Data and Cloud computing. They come in many forms, from key-value stores to wide-column stores to data grids and document, graph, and object databases. And as a group – however still informally defined – NoSQL (considered by most to mean ‘not only SQL’) is growing fast. The worldwide NoSQL market is expected to reach $3.4 billion by 2018, growing at a CAGR of 21 percent between last year and 2018, according to Market Research Media.”
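
As a small illustration of the data modeling flexibility credited to NoSQL here, the following sketch (assuming a local MongoDB server and the pymongo driver; the collection and field names are invented) stores two differently shaped documents in one collection, with no schema migration step in between:

```python
# Schema flexibility sketch: documents in the same collection need not
# share fields. Assumes MongoDB is running on localhost:27017.
from pymongo import MongoClient

products = MongoClient("mongodb://localhost:27017")["demo"]["products"]

# Two records with different shapes, no ALTER TABLE required.
products.insert_one({"sku": "A1", "name": "Kettle", "wattage": 2000})
products.insert_one({"sku": "B2", "name": "Novel", "author": "N. K. Jemisin", "pages": 412})

# Queries can target whichever fields a document happens to have.
for doc in products.find({"pages": {"$gt": 300}}):
    print(doc["name"])
```

Read more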

SindiceTech Relaunch Features SIREn Search System, PivotBrowser Relational Faceted Browser

Last week news came from SindiceTech about the availability of its SindiceTech Freebase Distribution for the cloud (see our story here). SindiceTech has finalized its separation from the university setting in which it incubated, the former DERI institute, now a part of the Insight Center for Data Analytics, and is now relaunching its activities, with more new solutions and capabilities on the way.

“The first thing was to launch the Knowledge Graph distribution in the cloud,” says CEO Giovanni Tummarello. “The Freebase distribution showcases how it is possible to quickly have a really large Knowledge Graph in one’s own private cloud space.” The distribution comes instrumented with some of the tools SindiceTech has developed to help users both understand and make use of the data, he says, noting that “the idea of the Knowledge Graph is to have a data integration space that makes it very simple to add new information, but all that power is at risk of being lost without the tools to understand what is in the Knowledge Graph.”

Included in the first round of the distribution’s tools for composing queries and understanding the data as a whole are the Data Types Explorer (in both tabular and graph versions), and the Assisted SPARQL Query Editor. The next releases will increase the number of tools and provide updated data. “Among the tools expected is an advanced Knowledge Graph entity search system based on our newly released SIREn search system,” he says.
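
As a generic illustration of the kind of overview a data types explorer surfaces (this is plain SPARQL run through the Python rdflib library, not SindiceTech’s tooling), one aggregate query lists the classes in a graph and how many instances each has:

```python
# Sketch: summarize a graph by type, the way a "data types" view might.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
ex:a rdf:type ex:Film .
ex:b rdf:type ex:Film .
ex:c rdf:type ex:Person .
""", format="turtle")

# Count instances per class, most common first.
for rdf_type, count in g.query("""
SELECT ?type (COUNT(?s) AS ?n)
WHERE { ?s a ?type }
GROUP BY ?type ORDER BY DESC(?n)
"""):
    print(rdf_type, count)
```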

Read more
