Posts Tagged ‘Big Data’
Chloe Green of Information Age recently wrote, “Handling immense data sets requires a combination of scientific and technological skills to determine how data is stored, searched and accessed. In science, the importance of data scientists in ensuring that data is handled correctly from the outset is not underestimated; other industries can learn from the scientific approach. Text-mining tools and the use of relevant taxonomies are essential. If we think about big data as a huge number of data points in some multi-dimensional space, the problem is one of analysis, i.e. frequently finding very similar or very dissimilar points which cannot be compared. In life sciences, taxonomies assign data points a class, thus comparison of two points is as easy as looking up other data points in the same class.” Read more
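Green's point about class-based lookup is easy to see in code. Below is a minimal Python sketch, with entirely hypothetical records and field names, showing how indexing data points by their taxonomy class turns "find comparable points" into a dictionary lookup instead of a scan of every pair.

```python
from collections import defaultdict

# Hypothetical records: each data point carries a taxonomy class label.
points = [
    {"id": "p1", "class": "kinase",   "value": 0.42},
    {"id": "p2", "class": "kinase",   "value": 0.45},
    {"id": "p3", "class": "protease", "value": 0.91},
]

# Index points by their taxonomy class so that finding comparable points
# becomes a single dictionary lookup rather than a scan of the whole set.
by_class = defaultdict(list)
for p in points:
    by_class[p["class"]].append(p)

def comparable_points(point):
    """Return the other points sharing the query point's taxonomy class."""
    return [q for q in by_class[point["class"]] if q["id"] != point["id"]]

print(comparable_points(points[0]))  # -> the other 'kinase' point, p2
```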
Dana Gardner of CRM Buyer recently wrote, “The power of Big Data technology is being successfully applied to understanding such complex unknowns as consumer sentiment and even intent. That understanding then vastly improves how retailers and myriad service providers manage their users’ experiences — increasingly in real time. Fortunately, today’s consumers are quite willing to share their intents and sentiments via social media, if you can gather and process the information. Hence the rapidly developing field of social customer relationship management, or Social CRM.” Read more
Sarah Austin, founder of Peak Energies, recently wrote for Forbes, “The Valley bubble seems obsessed with the Internet of Things. Things are getting smarter. Devices talk to each other and people are now starting to talk to them. Things are evolving to make decisions, gather information and just take care of stuff for us. For a lot of people, it sounds too crazy. But we already benefit from the start of this shift. Take a Thai restaurant for example. You ask your phone to find you nearby Thai food, and it gives you a list of options. But why doesn’t it filter out obviously bad ones? Or determine that it only need show the closest location of the restaurant chain, and that if two places have a nearly identical menu, but one is 20 percent more expensive and a mile further away, it shouldn’t come up in your immediate results? This is the future of tech. As humans, we want choices, but we don’t want 100 choices.” Read more
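The "drop the obviously worse option" heuristic Austin sketches can be illustrated with a short Python snippet. The restaurant records, similarity measure, and thresholds below are all hypothetical; they simply mirror the quoted rule that a place with a nearly identical menu, a roughly 20 percent higher price, and an extra mile of distance should not make the first cut.

```python
# Hypothetical restaurant records; the menu, price and distance fields
# are illustrative, not a real places API.
restaurants = [
    {"name": "Thai Basil",   "distance_mi": 0.5, "avg_price": 12.0, "menu": {"pad thai", "green curry"}},
    {"name": "Basil & Rice", "distance_mi": 1.5, "avg_price": 14.5, "menu": {"pad thai", "green curry"}},
]

def menu_similarity(a, b):
    """Jaccard similarity of two menus (sets of dish names)."""
    return len(a & b) / len(a | b)

def prune_dominated(options, menu_threshold=0.9, price_factor=1.2, extra_miles=1.0):
    """Drop any option that a near-identical competitor beats on both price and distance."""
    keep = []
    for r in options:
        dominated = any(
            other is not r
            and menu_similarity(r["menu"], other["menu"]) >= menu_threshold
            and r["avg_price"] >= other["avg_price"] * price_factor
            and r["distance_mi"] >= other["distance_mi"] + extra_miles
            for other in options
        )
        if not dominated:
            keep.append(r)
    return keep

print([r["name"] for r in prune_dominated(restaurants)])  # -> ['Thai Basil']
```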
Is SPARQL the SQL for NoSQL? The question will be discussed at this month’s Semantic Technology & Business Conference in San Jose by Arthur Keen, VP of solution architecture at startup SPARQL City.
It’s not the first time that the industry has considered common database query languages for NoSQL (see this story at our sister site Dataversity.net for some perspective on that). But as Keen sees it, SPARQL has the legs for the job. “What I know about SPARQL is that for every database [SQL and NoSQL alike] out there, someone has tried to put SPARQL on it,” he says, whereas other common query language efforts may be limited in database support. A factor in SPARQL’s favor is query portability across NoSQL systems. Additionally, “you can achieve much higher performance using declarative query languages like SPARQL because they specify the ‘What’ and not the ‘How’ of the query, allowing optimizers to choose the best way to implement the query,” he explains.
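Keen's "what, not how" point is easiest to see with an actual SPARQL query. Below is a minimal sketch using Python's rdflib library; the tiny example graph and property names are made up for illustration, and any SPARQL-capable store could evaluate the same query text, which is the portability argument.

```python
from rdflib import Graph

# A toy RDF dataset in Turtle; the ex: terms are invented for this sketch.
TURTLE = """
@prefix ex: <http://example.org/> .
ex:alice ex:worksFor ex:acme .
ex:bob   ex:worksFor ex:acme .
ex:acme  ex:locatedIn ex:san_jose .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# The query states *what* we want (people whose employer is in San Jose);
# the engine decides *how* to evaluate it, which is where an optimizer
# is free to pick the best execution strategy.
QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE {
    ?person ex:worksFor ?org .
    ?org    ex:locatedIn ex:san_jose .
}
"""

for row in g.query(QUERY):
    print(row.person)
```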
In mid-July Dataversity.net, the sister site of The Semantic Web Blog, hosted a webinar on Understanding The World of Cognitive Computing. Semantic technology naturally came up during the session, which was moderated by Steve Ardire, an advisor to cognitive computing, artificial intelligence, and machine learning startups. You can find a recording of the event here.
Here you can find a more detailed discussion of the session at large; below are some excerpts related to how the worlds of cognitive computing and semantic technology interact.
One of the panelists, IBM Big Data Evangelist James Kobielus, discussed his thinking around what’s missing from general discussions of cognitive computing to make it a reality. “How do we normally perceive branches of AI, and clearly the semantic web and semantic analysis related to natural language processing and so much more has been part of the discussion for a long time,” he said. When it comes to finding the sense in multi-structured – including unstructured – content that might be text, audio, images or video, “what’s absolutely essential is that as you extract the patterns you are able to tag the patterns, the data, the streams, really deepen the metadata that gets associated with that content and share that metadata downstream to all consuming applications so that they can fully interpret all that content, those objects…[in] whatever the relevant context is.”
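The tag-and-share-downstream pattern Kobielus describes could be sketched, very roughly, as content objects that carry their extraction metadata with them. The Python below is an illustrative toy, not IBM's implementation; the sentiment "extractor" and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class TaggedContent:
    content: str                                   # the raw text, audio path, image path, etc.
    metadata: dict = field(default_factory=dict)   # tags produced during extraction

def extract_and_tag(text):
    """Toy pattern extraction: tag the text with a crude sentiment label."""
    sentiment = "positive" if "great" in text.lower() else "neutral"
    return TaggedContent(content=text, metadata={"sentiment": sentiment, "lang": "en"})

def downstream_consumer(item: TaggedContent):
    # Consuming applications read the shared metadata, not just the raw content.
    print(f"{item.metadata['sentiment']:>8}: {item.content}")

for doc in ["The new phone is great", "Store hours are 9 to 5"]:
    downstream_consumer(extract_and_tag(doc))
```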
Perficient is looking for a Big Data Solution Architect to work anywhere in the United States — this person will travel about 75% of the time. According to the post, “Perficient is looking for Solution Architecture Consultants who are passionate about data and can lead the building of next generation Big Data and predictive analytic applications. You will be working with Perficient business units and technology partners to understand client business objectives and drive solutions that efficiently meet the needs of the business. You will be responsible for guiding the full lifecycle of delivery of information management systems, identifying, quantifying, and winning Perficient consulting opportunities, and providing industry thought-leadership; all while building one of the industry’s leading-edge consulting practices.” Read more
SAN DIEGO — Teradata (NYSE: TDC), the analytic data platforms, marketing applications, and services company, today announced two acquisitions that accelerate the growth of its big data capabilities.
On July 16th, Teradata acquired assets of Revelytix, a leader in information management products for big data with unique metadata management technology and deep expertise in integrating information across the enterprise. On July 17th, Teradata acquired assets of Hadapt, including experienced big data technologists and intellectual property. Read more
New York’s Tektree Systems is in need of a Big Data Architect. The job description states, “Hadoop Data Architect with both hands-on Big Data and relational experience and deep knowledge of physical data modeling, data organization and storage technology, experienced with high volumes and able to architect and implement multi-tier solutions using the right technology in each tier, based on fit. Required Skills and Qualifications:
- Design and development of data models for a new HDFS Master Data Reservoir and one or more relational or object Current Data environments
- Design of optimum storage allocation for the data stores in the architecture.
- Development of data frameworks for code implementation and testing across the program
- Knowledge and experience with RDF and other Semantic technologies
- Participation in code reviews to assure that developed and tested code conforms with the design and architecture principles
- QA and testing of modules/applications/interfaces.
- End-to-End project experience through to completion and supervise turnover to Operations staff.
- Preparation of documentation of data architecture, designs and implemented code.”
App Orchid Inc. recently announced the industry’s first Cognitive Computing app builder. The announcement states, “Emerging from stealth mode, AppOrchid Inc. announced today its disruptive new technology for developing cognitive apps that targets the multi-billion dollar ‘Internet of Everything’ (IoE) market. ‘The future for enterprise computing lies in intelligent or cognitive apps. In this new “Internet of everything” world, connected devices, social data and massive volumes of free form documents integrate with enterprise applications in real-time. AppOrchid’s groundbreaking products employ Big Data technology and a scalable knowledge graph model powered with intelligent natural language processing. The end result is human-like intelligence, with a gamified user experience spanning conventional, handheld and wearable devices. This is a watershed moment in enterprise computing,’ said Krishna Kumar, Founder and CEO of AppOrchid Inc.”
According to Mike Kavis of Forbes, “Companies are jumping on the Internet of Things (IoT) bandwagon and for good reasons. McKinsey Global Institute reports that the IoT business will deliver $6.2 trillion of revenue by 2025. Many people wonder if companies are ready for this explosion of data generated for IoT? As with any new technology, security is always the first point of resistance. I agree that IoT brings a wave of new security concerns but the bigger concern is how woefully prepared most data centers are for the massive amount of data coming from all of the “things” in the near future.”
Kavis went on to write that, “Some companies are still hanging on to the belief that they can manage their own data centers better than the various cloud providers out there. This state of denial should all but go away when the influx of petabyte scale data becomes a reality for enterprises. Enterprises are going to have to ask themselves, “Do we want to be in the infrastructure business?” because that is what it will take to provide the appropriate amount of bandwidth, disk storage, and compute power to keep up with the demand for data ingestion, storage, and real-time analytics that will serve the business needs. If there ever was a use case for the cloud, the IoT and Big Data is it. Processing all of the data from the IoT is an exercise in big data that boils down to three major steps: data ingestion (harvesting data), data storage, and analytics.”
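Kavis's three steps can be mocked up end to end in a few lines of Python. The queue and list below are stand-ins for real streaming and storage systems (for example Kafka and HDFS or S3); the sensor data and the averaging analytic are invented for illustration.

```python
import random
from queue import Queue

def ingest(stream: Queue, n_readings=100):
    """Step 1: harvest readings from 'things' (simulated sensors) onto a queue."""
    for i in range(n_readings):
        stream.put({"sensor": f"s{i % 5}", "temp_c": round(random.uniform(15, 35), 1)})

def store(stream: Queue):
    """Step 2: drain the ingest queue into durable storage (a list, for this sketch)."""
    storage = []
    while not stream.empty():
        storage.append(stream.get())
    return storage

def analyze(storage):
    """Step 3: a simple analytic, the average temperature per sensor."""
    totals = {}
    for r in storage:
        s, t = r["sensor"], r["temp_c"]
        count, total = totals.get(s, (0, 0.0))
        totals[s] = (count + 1, total + t)
    return {s: round(total / count, 1) for s, (count, total) in totals.items()}

stream = Queue()
ingest(stream)
print(analyze(store(stream)))
```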
To read a different perspective on these challenges and how Semantic Web technologies play a role in them, read Irene Polikoff’s recent guest post, “RDF is Critical to a Successful Internet of Things.”