AppOrchid Inc. recently announced the industry’s first cognitive computing app builder. The announcement states, “Emerging from stealth mode, AppOrchid Inc. announced today its disruptive new technology for developing cognitive apps that targets the multi-billion dollar ‘Internet of Everything’ (IoE) market.” It continues: “The future for enterprise computing lies in intelligent or cognitive apps. In this new ‘Internet of Everything’ world, connected devices, social data and massive volumes of free-form documents integrate with enterprise applications in real time. AppOrchid’s groundbreaking products employ Big Data technology and a scalable knowledge graph model powered with intelligent natural language processing. The end result is human-like intelligence, with a gamified user experience spanning conventional, handheld and wearable devices. This is a watershed moment in enterprise computing,” said Krishna Kumar, Founder and CEO of AppOrchid Inc.
DATAVERSITY™ and SemanticWeb.com have announced the first Cognitive Computing Forum in San Jose, California, on August 20-21, 2014. This two-day conference was developed to help attendees understand the new world of Cognitive Analytics, Machine Learning, Deep Learning, Reasoning and next generation Artificial Intelligence. Visit www.cognitivecomputingforum.com to view speakers, the agenda, registration options, and to learn more about this unique event.
Cognitive systems are the next stage in the evolution of smarter computing and are often described as emulating the human brain. Built upon recent advances in technologies such as natural language processing, machine learning, sensors, and neural networks, and combined with massive computational power, cognitive computing promises to bring staggering improvements to applications. The biggest improvements are expected in predictive analytics, robot intelligence, computer-based reasoning, and human annotation. New technologies and companies are on the horizon, and these top technologies will be represented and available to attendees throughout the event.
DATAVERSITY has enlisted a world-class group of speakers to lead the in-depth presentations at the conference. Tom Mitchell, Professor of AI and Learning at Carnegie Mellon University; Chris Welty, Research Scientist at IBM T.J. Watson Research Center; Ted Dunning, Chief Application Architect at MapR; and Google Fellow R.V. Guha are among the industry experts on the schedule.
Discover the potential of Cognitive Computing for your organization and register your staff for this event. Register two staff members from the same organization and the third attends free. See www.cognitivecomputingforum.com for details on this and other discounts available now.
The inaugural Cognitive Computing Forum will be co-located with the 10th Annual Semantic Technology & Business Conference and the fourth annual NoSQL Now! conference.
If you are a member of the press and would like to attend, please request a Press Pass by contacting Samantha Taylor at email@example.com.
Read the full press release here.
The Learning Resource Metadata Initiative (LRMI) has released a technical briefing about schema.org. The paper was co-authored by Phil Barker and Lorna M. Campbell of Cetis, the Centre for Educational Technology, Interoperability and Standards.
LRMI, which we have reported on here, “has developed a common metadata framework for describing or ‘tagging’ learning resources on the web.”
The Cetis website says, “This briefing describes schema.org for a technical audience. It is aimed at people who may want to implement schema.org markup in websites or other tools they build but who wish to know more about the technical approach behind schema.org and how to implement it. We also hope that this briefing will be useful to those who are evaluating whether to implement schema.org to meet the requirements of their own organization.”
In making the announcement in a W3C list, Barker explained, “We often find that when explaining the technology approach of LRMI we are mostly talking about schema.org, so this briefing, which describes the schema.org specification for a technical audience should be of interest to anyone thinking about implementing or using LRMI in a website or other tool. It should also be of interest to people who plan to use schema.org for describing other types of resources.”
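As a rough illustration of the kind of markup the briefing covers, the sketch below describes a learning resource with schema.org properties that LRMI contributed to the vocabulary (`learningResourceType`, `typicalAgeRange`, `educationalAlignment`), serialized here as JSON-LD. The resource name and all values are invented for this example.

```python
import json

# Illustrative schema.org description of a learning resource, serialized
# as JSON-LD. learningResourceType, typicalAgeRange and educationalAlignment
# are among the properties LRMI added to schema.org; values are invented.
resource = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "name": "Introduction to Photosynthesis",
    "learningResourceType": "lesson plan",
    "typicalAgeRange": "11-14",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "educationalSubject",
        "targetName": "Biology",
    },
}

markup = json.dumps(resource, indent=2)
print(markup)
```

The same description could equally be embedded as HTML microdata or RDFa; the briefing discusses the trade-offs between those syntaxes.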
The technical brief can be downloaded from:
Cary, NC, March 20, 2014 – Saffron Technology, a cognitive systems company helping Fortune 1000 businesses understand the value of transforming disconnected data into actionable knowledge, today announced that it has closed a $7 million Series B investment round. Funds are earmarked to accelerate business growth, including opening new global headquarters in Silicon Valley.
“Data becomes infinitely more powerful when you tie together its meaning from a multiplicity of disparate sources. Our patented Natural Intelligence Platform unifies all kinds of data – structured and unstructured – in real time, from a large variety of sources and continuously learns about the things in the data without the need for pre-determined rules or models,” said Gayle Sheppard, Saffron Technology CEO. “Now you can automatically see converging and other patterns to anticipate outcomes and prepare to act. These capabilities, combined with our customers’ success with Saffron, position us well for growth. With this additional funding, we will expand customer-centric next generation service teams, build a strong brand presence, create scalability across our business, and establish a Silicon Valley headquarters in spring 2014.”
Almost exactly 10 years after the publication of RDF 1.0 (10 Feb 2004, http://www.w3.org/TR/rdf-concepts/), the World Wide Web Consortium (W3C) announced today that RDF 1.1 has become a “Recommendation.” In fact, the RDF Working Group has published a set of eight Resource Description Framework (RDF) Recommendations and four Working Group Notes. One of those notes, the RDF 1.1 Primer, is a good starting place for those new to the standard.
Markus Lanthaler, a co-editor of the RDF 1.1 specifications, said of the recommendation, “Semantic Web technologies are often criticized for their complexity, mostly because RDF is being conflated with RDF/XML. Thus, with RDF 1.1 we put a strong focus on simplicity. The new specifications are much more accessible and there’s a clear separation between RDF, the data model, and its serialization formats. Furthermore, the primer provides a great introduction for newcomers. I’m convinced that, along with the standardization of Turtle (and previously JSON-LD), this will mark an important point in the history of the Semantic Web.”
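To make that separation between data model and serialization concrete, the snippet below writes one and the same RDF statement in two of the formats standardized alongside RDF 1.1, Turtle and JSON-LD. The example.org URI is invented for illustration. Because JSON-LD is plain JSON, Python’s standard library can at least load it without any RDF tooling:

```python
import json

# One RDF statement -- <http://example.org/rdf11> has the title "RDF 1.1" --
# in two serialization formats. The data model (a single triple) is
# identical; only the concrete syntax differs.

turtle = """\
@prefix dcterms: <http://purl.org/dc/terms/> .
<http://example.org/rdf11> dcterms:title "RDF 1.1" .
"""

jsonld = """\
{
  "@context": {"dcterms": "http://purl.org/dc/terms/"},
  "@id": "http://example.org/rdf11",
  "dcterms:title": "RDF 1.1"
}
"""

# JSON-LD parses as ordinary JSON with the standard library.
doc = json.loads(jsonld)
print(doc["@id"])  # prints http://example.org/rdf11
```

A full RDF toolkit would parse both strings into the same abstract graph; that equivalence is exactly the point of the 1.1 restructuring.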
Washington, DC – January 21, 2014 – The new release (2.1) of Stardog, a leading RDF database, hits new scalability heights with a 50-fold increase over previous versions. Using commodity server hardware at the $10,000 price point, Stardog can manage, query, search, and reason over datasets as large as 50B RDF triples.
The new scalability increases put Stardog into contention for the largest semantic technology, linked data, and other graph data enterprise projects. Stardog’s unique feature set, including reasoning and integrity constraint validation, at large scale means it will increasingly serve as the basis for complex software projects.
“We’re really happy about the new scalability of Stardog,” says Mike Grove, Clark & Parsia’s Chief Software Architect, “which makes us competitive with a handful of top graph database systems. And our feature set is unmatched by any of them.”
The new scalability work required software engineering to remove garbage collection pauses during query evaluation, which the 2.1 release also accomplishes. Along with a new hot backup capability, Stardog is more mature and production-capable than ever before.
We reported yesterday on the news that JSON-LD has reached Recommendation status at W3C. Three formal vocabularies also reached that important milestone yesterday:
The W3C documentation for the Data Catalog Vocabulary (DCAT) says that DCAT “is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web….By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation.”
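As a sketch of what such catalog metadata can look like, the snippet below describes an invented dataset and one downloadable distribution using core DCAT terms (`dcat:Dataset`, `dcat:distribution`, `dcat:downloadURL`), serialized as JSON-LD. Every example.org identifier is a placeholder.

```python
import json

# Invented DCAT description of one dataset with one distribution.
# All example.org identifiers are placeholders, not real resources.
dataset = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dcterms": "http://purl.org/dc/terms/",
    },
    "@id": "http://example.org/dataset/imports-2013",
    "@type": "dcat:Dataset",
    "dcterms:title": "Imports of goods, 2013",
    "dcat:keyword": ["imports", "trade"],
    "dcat:distribution": {
        "@type": "dcat:Distribution",
        "dcat:downloadURL": {"@id": "http://example.org/files/imports-2013.csv"},
        "dcat:mediaType": "text/csv",
    },
}
print(json.dumps(dataset, indent=2))
```

A harvester aggregating records like this from several catalogs is the federated-search scenario the specification describes.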
Meanwhile, the RDF Data Cube Vocabulary addresses the following issue: “There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations. The Data Cube vocabulary is a core foundation which supports extension vocabularies to enable publication of other aspects of statistical data flows or other multidimensional data sets.”
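A minimal sketch of that model: each `qb:Observation` ties dimension values (where, when) and a measure value to a dataset. The `eg:` property names and the population figure below are invented purely for illustration.

```python
import json

# A single qb:Observation in an invented statistical dataset. refArea and
# refPeriod play the role of dimensions, population the role of a measure;
# the eg: properties and the figure itself are made up for this example.
observation = {
    "@context": {
        "qb": "http://purl.org/linked-data/cube#",
        "eg": "http://example.org/def/",
    },
    "@id": "http://example.org/obs/pop-2013",
    "@type": "qb:Observation",
    "qb:dataSet": {"@id": "http://example.org/dataset/population"},
    "eg:refArea": "Wales",
    "eg:refPeriod": "2013",
    "eg:population": 3082400,
}
print(json.dumps(observation, indent=2))
```

In a full Data Cube publication, the dataset would also carry a `qb:DataStructureDefinition` declaring which properties are dimensions and which are measures.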
Lastly, W3C now recommends use of the Organization Ontology, “a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.”
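As a hedged sketch of that core ontology, the snippet below links an invented organizational unit back to its parent organization with `org:unitOf`, using `skos:prefLabel` for human-readable names. All identifiers and names are placeholders.

```python
import json

# An invented organization with one organizational unit. org:unitOf links
# the unit back to its parent; skos:prefLabel supplies display names.
org_graph = {
    "@context": {
        "org": "http://www.w3.org/ns/org#",
        "skos": "http://www.w3.org/2004/02/skos/core#",
    },
    "@graph": [
        {
            "@id": "http://example.org/org/acme",
            "@type": "org:Organization",
            "skos:prefLabel": "ACME Corp.",
        },
        {
            "@id": "http://example.org/org/acme-research",
            "@type": "org:OrganizationalUnit",
            "skos:prefLabel": "Research Division",
            "org:unitOf": {"@id": "http://example.org/org/acme"},
        },
    ],
}
print(json.dumps(org_graph, indent=2))
```

Domain-specific extensions of the kind the Recommendation anticipates would subclass these terms, for example to distinguish government departments from agencies.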