Posts Tagged ‘NoSQL’

The Supply Chain Is One Big Graph In Start-up Elementum’s Platform

Startup Elementum wants to take supply chains into the 21st century. Incubated at Flextronics, the second largest contract manufacturer in the world, and launching today with $44 million in Series B funding from that company and Lightspeed Ventures, the company wants supply chain participants – the OEMs that generate product ideas and designs, the contract manufacturers that build to those specs, the component makers that supply the ingredients to make the product, the various logistics hubs that move finished product to market, and the retail customer – to drop their one-off relational database integrations and instead see the supply chain for what it fundamentally is: a complex graph, or web, of connections.

“It’s no different thematically from how Facebook thinks of its social network or how LinkedIn thinks of what it calls the economic graph,” says Tyler Ziemann, head of growth at Elementum. Built on Amazon Web Services, Elementum’s “mobile-first” apps for real-time visibility, shipment tracking and carrier management, risk monitoring and mitigation, and order collaboration have a back end built to consume and make sense of both structured and unstructured data on the fly. That back end pairs a real-time Java and MongoDB NoSQL document database – chosen to scale simply and less expensively across a global supply chain that involves many trillions of records – with a flexible-schema graph database that stores and maps the nodes and edges of the supply chain graph.

“Relational database systems can’t scale to support the types of data volumes we need and the flexibility that is required for modeling the supply chain as a graph,” Ziemann says.
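
To make the graph framing concrete, here is a deliberately tiny, hypothetical sketch in Python. The node names, fields and traversal are our own illustration, not Elementum’s schema (which the company describes keeping in MongoDB and a graph database): participants become nodes, product flows become edges, and a risk question like “who is affected if this supplier goes down?” becomes a simple traversal.

```python
# Hypothetical sketch of "supply chain as a graph" -- not Elementum's actual
# schema. Nodes are participants, edges are flows between them; in production
# the documents would live in a store such as MongoDB and the topology in a
# graph database.
from collections import defaultdict, deque

# Each participant is a node document; the keys here are illustrative.
nodes = {
    "acme-oem":      {"type": "OEM"},
    "flex-cm":       {"type": "contract_manufacturer"},
    "chipco":        {"type": "component_supplier"},
    "hk-hub":        {"type": "logistics_hub"},
    "bigbox-retail": {"type": "retailer"},
}

# Directed edges: material and product flow from supplier to consumer.
edges = [
    ("chipco", "flex-cm"),
    ("acme-oem", "flex-cm"),      # designs/specs flow to the manufacturer
    ("flex-cm", "hk-hub"),
    ("hk-hub", "bigbox-retail"),
]

adjacency = defaultdict(list)
for src, dst in edges:
    adjacency[src].append(dst)

def downstream(node_id):
    """Breadth-first traversal: everything affected if `node_id` is disrupted."""
    seen, queue = set(), deque([node_id])
    while queue:
        current = queue.popleft()
        for neighbour in adjacency[current]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# A component supplier outage ripples through manufacturing, logistics, retail.
print(downstream("chipco"))   # {'flex-cm', 'hk-hub', 'bigbox-retail'}
```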

Read more

Semantic Web Jobs: Tagged


Tagged is looking for a Big Data Engineer. According to the post, “Technology is at the point where ubiquitous devices mitigate the problem of physical distance between people.  In other words, the internet is part of our physical world and our physical world embraces the internet.  The round-trip of reality to digital bits and back means that all life is now data – it is all about the capture, extraction, augmentation, interpretation, transformation, composition, propagation and last but not least, the volume of data. The Data Engineer will be the most important software engineering position for decades to come.” Read more

Hello 2014 (Part 2)


Courtesy: Flickr/faul

Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.

Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side, I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.

Bill Roberts, CEO, Swirrl:   Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems.  I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.

Read more

Hello 2014


Courtesy: Flickr/Wonderlane

Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:

Phil Archer, Data Activity Lead, W3C:

For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.
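
For readers who haven’t followed the CSV on the Web work, the gist is mapping plain tabular data into the RDF world. The snippet below is only a hand-rolled illustration of that idea using the rdflib library – it is not the WG’s mapping language, and the column names and URIs are invented.

```python
# Illustrative only: a hand-rolled CSV-to-RDF conversion, not the CSV on the
# Web WG's mapping. Assumes rdflib (6.x) is installed; URIs are invented.
import csv
import io
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

csv_text = """id,name,population
nyc,New York,8400000
ldn,London,8800000
"""

g = Graph()
for row in csv.DictReader(io.StringIO(csv_text)):
    subject = EX[row["id"]]                                    # one resource per row
    g.add((subject, EX.name, Literal(row["name"])))            # string literal
    g.add((subject, EX.population, Literal(int(row["population"]))))  # typed literal

print(g.serialize(format="turtle"))
```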

I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use Geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion along with the power and functionality of Geospatial Information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!

[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
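
As a quick illustration of why JSON-LD makes both camps happy, the fragment below is ordinary JSON that also parses as RDF. This is just a minimal sketch assuming rdflib 6.x, which bundles a JSON-LD parser; the vocabulary and identifiers are illustrative.

```python
# Parse a tiny JSON-LD document as RDF. Assumes rdflib 6.x (built-in JSON-LD
# support); the context, vocabulary and values are illustrative only.
from rdflib import Graph

doc = """
{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": {"@id": "http://schema.org/url", "@type": "@id"}
  },
  "@id": "http://example.org/people/alice",
  "name": "Alice",
  "homepage": "http://example.org/alice"
}
"""

g = Graph().parse(data=doc, format="json-ld")
for s, p, o in g:
    print(s, p, o)   # the same document, now as plain RDF triples
```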

Read more

Good-Bye 2013

Courtesy: Flickr/MadebyMark

As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.

Read on:

 

Phil Archer, Data Activity Lead, W3C:

The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.

I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.

Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier, and it allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.

Christine Connors, founder and information strategist, TriviumRLG:

What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.

There’s also been a huge increase in the attentions of the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field and via schema.org.

Read more

Applied Relevance Announces Epinomy Version 7


Menlo Park, California (PRWEB) December 23, 2013 — Applied Relevance announces Epinomy optimized for MarkLogic 7, a leading NoSQL database platform for managing big data. Epinomy is an advanced information management application for organizing, tagging and classifying structured and unstructured big data content. Epinomy’s semantic engine allows organizations to easily build ontologies and auto-tag documents with metadata, enabling information managers to harness the power of ‘triple stores’ so that users can quickly search and find all relevant structured and unstructured information, all the time. Read more
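
For readers unfamiliar with the pattern the release describes – auto-tagging documents and keeping the tags in a triple store – here is a generic, hypothetical sketch using rdflib. It is not Epinomy’s or MarkLogic’s API; the miniature taxonomy, keyword matching and predicates are invented purely for illustration.

```python
# Generic sketch of "auto-tag documents and store the tags as triples".
# Not Epinomy or MarkLogic APIs; taxonomy and matching rule are invented.
from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/")

# A miniature taxonomy: concept URI -> keywords that signal it.
taxonomy = {
    EX.NoSQL:     ["nosql", "document database", "triple store"],
    EX.Logistics: ["shipment", "carrier", "warehouse"],
}

documents = {
    EX.doc1: "MarkLogic is a NoSQL document database with a triple store.",
    EX.doc2: "The carrier delayed the shipment at the warehouse.",
}

g = Graph()
for doc_uri, text in documents.items():
    lowered = text.lower()
    for concept, keywords in taxonomy.items():
        if any(keyword in lowered for keyword in keywords):   # naive keyword match
            g.add((doc_uri, DCTERMS.subject, concept))        # the tag, as a triple

print(g.serialize(format="turtle"))
```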

HealthCare.Gov: Progress Made But BackEnd Struggles Continue

The media has been reporting over the last few hours on the Obama administration’s self-imposed deadline for fixing HealthCare.gov. According to these reports, the site is now working more than 90 percent of the time, up from 40 percent in October; pages are loading in less than a second, down from about eight; 50,000 people can use the site simultaneously, and it supports 800,000 visitors a day; and page-load failures are down to under 1 percent.

There’s also word, however, that while the front end may be improved, there are still problems on the back end. Insurance companies continue to complain that they aren’t getting the correct information to support signups. “The key question,” according to CBS News reporter John Dickerson this morning, “is whether that link between the information coming from the website getting to the insurance company – if that link is not strong, people are not getting what was originally promised in the entire process.” If insurance companies aren’t getting the right information for processing plan enrollments, individuals going to the doctor after January 1 may find that they aren’t, in fact, covered.

Jeffrey Zients, the man spearheading the website fix, did point out at the end of November that work remains to be done on the back end for tasks such as coordinating payments and application information with insurance companies. Plans are for that work to be in place by mid-January.

As it turns out, according to this report in the NY Times, among the components of the site’s back-end technology is the MarkLogic Enterprise NoSQL database, which in its recent Version 7 release also added the ability to store and query data in RDF format using SPARQL syntax.
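
For those curious what querying such a store looks like in practice, the sketch below sends a SPARQL SELECT to an endpoint using the SPARQLWrapper client library. The endpoint URL, predicate and result shape are placeholders of our own – consult MarkLogic’s documentation for its actual SPARQL endpoint and authentication.

```python
# Hedged sketch of querying an RDF-capable store over SPARQL.
# The endpoint URL and predicate are placeholders, not MarkLogic specifics.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://localhost:8000/sparql")   # placeholder URL
endpoint.setQuery("""
    SELECT ?application ?status
    WHERE { ?application <http://example.org/hasStatus> ?status . }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["application"]["value"], binding["status"]["value"])
```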

Read more

Semantic Web Jobs: Mapjects


Mapjects is looking for a Data Analyst in Washington, DC. The post states, “Mapjects is a leading centralized logistics operations portal platform. The platform serves franchises with ERP components that suit the franchise’s business needs. The Mapjects Clearview platform provides one-click distribution, logistics and analysis products to enrich and visualize big data sets from warehousing, fulfillment, fraud detection, payment technology and b2b eCommerce. Mapjects is seeking a NoSQL data analyst or data developer.” Read more

NoSQL and The Next Generation of Databases


Mark van Rijmenam of Smart Data Collective recently wrote, “For the past decades, organisations have been working with relational databases to store their structured data. In the big data era, however, these types of databases are not sufficient anymore. Although they made a huge difference in the database world and unlocked data for many applications, relational databases miss some important characteristics for the big data era. NoSQL databases are the answer that solves many of these problems. It is a completely new way of thinking about databases…” Read more

MarkLogic® 7 Enhances Enterprise NoSQL with Game Changing Capabilities


SAN CARLOS, Calif. — October 10, 2013 — MarkLogic Corporation today announced the latest version of its Enterprise NoSQL database platform, MarkLogic® 7. To help organizations gain better operational agility and optimize storage costs, MarkLogic 7 supports cloud computing, can run natively on the Hadoop Distributed File System (HDFS), and has new features that enable database elasticity and searchable tiered storage. Additionally, to help users understand and gain more meaning from their data, MarkLogic 7 introduces MarkLogic® Semantics. MarkLogic Semantics combines the power of documents, data and RDF triples (also known as Linked Data) to enable analysts to understand, discover and make better-informed decisions, and to power the delivery of the most comprehensive, contextually relevant information to users. Read more
