Opinion

Lucena Research Launches QuantDesk™ Back Tester

Lucena Research, a leading provider of investment decision support technology, today announced the launch of QuantDesk™ Back Tester, the trading strategy simulator component of QuantDesk™. QuantDesk™ Back Tester is a realistic market simulator that allows investors to test trading strategies over critical market periods. Back Tester becomes the fourth component of Lucena’s flagship cloud-based QuantDesk™ platform, which lets investment professionals build, validate, and refine quantitative investment strategies using Lucena’s modular algorithms for scanning, forecasting, optimizing, and hedging. Read more
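
Lucena has not published the internals of its simulator, but the core idea of a back test is straightforward: replay a trading rule over historical (or, here, synthetic) prices and see how its equity compares to simply holding the asset. The sketch below is purely illustrative; the moving-average rule, parameters, and function names are assumptions for the example, not QuantDesk™’s API.

```python
# Minimal, illustrative back test: a moving-average crossover strategy
# run against synthetic prices. This is NOT QuantDesk's API; every name
# and parameter here is a stand-in for demonstration only.
import random

def synthetic_prices(n=500, start=100.0, drift=0.0003, vol=0.01, seed=42):
    """Generate a toy random-walk price series."""
    random.seed(seed)
    prices = [start]
    for _ in range(n - 1):
        prices.append(prices[-1] * (1 + drift + random.gauss(0, vol)))
    return prices

def moving_average(series, window):
    """Average of the previous `window` values, or None until enough data exists."""
    return [sum(series[i - window:i]) / window if i >= window else None
            for i in range(len(series))]

def backtest(prices, fast=20, slow=50):
    """Hold the asset while the fast MA is above the slow MA; track equity."""
    fast_ma, slow_ma = moving_average(prices, fast), moving_average(prices, slow)
    equity, position = 1.0, 0  # start with 1 unit of capital, no position
    for i in range(1, len(prices)):
        if position:  # apply the day's return while invested
            equity *= prices[i] / prices[i - 1]
        if fast_ma[i] is not None and slow_ma[i] is not None:
            position = 1 if fast_ma[i] > slow_ma[i] else 0
    return equity

if __name__ == "__main__":
    prices = synthetic_prices()
    print(f"Buy-and-hold return: {prices[-1] / prices[0] - 1:+.2%}")
    print(f"Crossover strategy return: {backtest(prices) - 1:+.2%}")
```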

3 Transformations of IT

David Hill of Network Computing recently shared his theory on the three transformations of IT. He writes, “The first was the digitization of business. The second is the continuing digitization of human experience. The third stage is the digitization of machines. Each transformation is ongoing, builds upon the others, and may overlap. Thus, some technologies that formed a foundation earlier are still active. For example, the mainframe is still alive and well, even in the time of mobile computing. Even though specific technologies provide a frame of reference, these transformations span a broad perspective and are not dependent upon any one technology. Please also note that there is not a smooth transition to each transformation, but that elements of a later transformation may be present while the key transformation of an earlier era is still more prominent.” Read more

The Downfall of Facebook Graph Search?

Matthew Syrett of PBS Media Shift recently shared his thoughts on the flaws of Facebook’s graph search. He writes, “With Graph Search, Facebook intends to mine our social networks to unlock knowledge stored among our online friends to create a recommendation search engine. Instead of crawling the Internet to build search indices, Graph Search will use our social media connections, likes, and Check-ins to make search indices to respond to our queries. Using Graph Search, we should be able to find restaurants that have been checked into or liked by our Facebook friends. We can discover movies based upon the likes of our connections, or relevant music. The project strives to be an alternative to everyday search queries we all do to discover things to do or get in our lives — all validated by people we know and trust, and not by unknown reviewers or a faceless algorithm.” Read more
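
Facebook’s actual data model is not public, but the kind of query Graph Search answers can be illustrated with a toy social graph. Everything below (the friend lists, likes, and check-ins) is invented for the example.

```python
# Toy illustration of the kind of query Graph Search answers:
# "restaurants my friends have liked or checked into".
# This is a simplified sketch, not Facebook's data model or API.
friends = {"alice": {"bob", "carol"}, "bob": {"alice"}, "carol": {"alice"}}

likes = {
    "bob": {("Luigi's", "restaurant"), ("The Matrix", "movie")},
    "carol": {("Green Bowl", "restaurant")},
}
checkins = {
    "bob": {("Green Bowl", "restaurant")},
    "carol": set(),
}

def restaurants_endorsed_by_friends(user):
    """Collect restaurants that the user's friends liked or checked into."""
    results = set()
    for friend in friends.get(user, set()):
        endorsements = likes.get(friend, set()) | checkins.get(friend, set())
        for name, category in endorsements:
            if category == "restaurant":
                results.add(name)
    return results

print(restaurants_endorsed_by_friends("alice"))  # {"Luigi's", "Green Bowl"}
```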

The Future of E-Commerce Data Interpretation: Semantic Markup, or Computer Vision?

How will webpage data be interpreted in the next few years? The Semantic Web community has high hopes that ever-evolving semantic standards will help systems identify and extract the rich data found on the web, ultimately making it more useful. With the announcement of Schema.org support for GoodRelations in November, it seems clear that semantic progress is being made on the e-commerce front, and at an accelerating rate. Martin Hepp, founder of GoodRelations, expects the adoption of rich, structured e-commerce data to increase significantly this year.

However, Mike Tung, founder and CEO of the data parsing service DiffBot, has less faith that the standards necessary for a true Semantic Web will ever be completely and effectively implemented. In an interview on Xconomy, he states that for semantic standards to work correctly, content owners must mark up their content once for the web and a second time for the semantic standards. This requires extra work, and affords them the opportunity to perform content stuffing (SEO spam).

Read more
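
As a concrete illustration of the “mark it up twice” point, structured e-commerce data is typically embedded in a page as RDFa, Microdata, or JSON-LD alongside the visible HTML. The sketch below uses a made-up product page and only Python’s standard library to show how a consumer might pull schema.org-style offer data out of a JSON-LD block; real GoodRelations deployments often use RDFa and would need a proper RDF parser.

```python
# Sketch: extracting schema.org/GoodRelations-style offer data embedded
# in a page as JSON-LD. The HTML below is a made-up example; real pages
# may use RDFa or Microdata instead, which require a different parser.
import json
from html.parser import HTMLParser

SAMPLE_HTML = """
<html><head>
<script type="application/ld+json">
{"@context": "http://schema.org", "@type": "Offer",
 "itemOffered": {"@type": "Product", "name": "Espresso Machine"},
 "price": "249.00", "priceCurrency": "USD"}
</script>
</head><body>...</body></html>
"""

class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

parser = JSONLDExtractor()
parser.feed(SAMPLE_HTML)
for offer in parser.blocks:
    print(offer["itemOffered"]["name"], offer["price"], offer["priceCurrency"])
```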

Open Data: The Tough Questions

Tom Slee of The Guardian recently raised some of his concerns regarding who profits most from open data. He states, “In the technology world, ‘openness!’ has long been a battle cry of the underprivileged. It’s the language of bottom up freedom against top-down control; of Linux against Microsoft and Wikipedia against Britannica. And now, open government data is the demand of those who would drag government data out from behind locked doors. But the world has changed since Linus Torvalds started his hobby operating system, and now openness is heard from the top as much as the bottom.” Read more

Google’s Structured Data Takeover

Barbara Starr of Search Engine Land recently posed the question: is Google hijacking semantic markup and structured data? She writes, “In 2012, I started a series, How The Major Search And Social Engines Are Using The Semantic Web, which took us to a point in time around September 2012. Since then, there have been further interesting developments. In this article, I am going to focus on recent developments that are search engine and/or Google specific, then take a further look back in search engine history with the assumption (for you history and strategy lovers) that a successful strategy used once may well be used again in similar circumstances.” Read more

An Opinion of Semantic Travel Search

Phillip Butler of Argophilia.com recently shared his opinion of semantic search in the travel industry, particularly as it relates to Desti, a new travel search startup. Butler writes, “The other day Tnooz reported on Expedia testing their own variant of natural language search, now available in a Powerset-like sandbox called YourVisit. In that article Kevin May aptly points to other supposed ‘natural language search’ developments in travel, namely Evature and Hopper. The problem with this is, these systems are not AI nor true semantic search, in fact ‘natural language search’ is a buzz term actually used by Powerset to differentiate from hakia and Google semantic search experiments.” Read more

Thinking Differently with Graph Databases

Emil Eifrem, CEO of Neo Technology, recently opined that graphs offer a new way of thinking. He explains, “Faced with the need to generate ever-greater insight and end-user value, some of the world’s most innovative companies — Google, Facebook, Twitter, Adobe and American Express among them — have turned to graph technologies to tackle the complexity at the heart of their data. To understand how graphs address data complexity, we need first to understand the nature of the complexity itself. In practical terms, data gets more complex as it gets bigger, more semi-structured, and more densely connected.” Read more
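
The shift Eifrem describes is from joining tables to traversing relationships. The plain-Python sketch below (not Neo4j or Cypher; the nodes and connections are invented for the example) shows the graph way of asking a question: follow connections outward from a node rather than matching rows.

```python
# Sketch of "thinking in graphs": data modeled as nodes and relationships,
# queried by traversal rather than by table joins. Plain Python only;
# the people and connections below are made up.
from collections import deque

# Adjacency list: who is connected to whom.
graph = {
    "Alice": ["Bob", "Carol"],
    "Bob": ["Alice", "Dave"],
    "Carol": ["Alice", "Dave"],
    "Dave": ["Bob", "Carol", "Eve"],
    "Eve": ["Dave"],
}

def shortest_path(start, goal):
    """Breadth-first search: the shortest chain of connections between two nodes."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(shortest_path("Alice", "Eve"))  # ['Alice', 'Bob', 'Dave', 'Eve']
```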

How the Internet of Things Will Reshape the World in 2013

Aron Kramer of The Guardian recently predicted the long-term changes that will reshape the world in 2013. He writes, “A healthy dose of scepticism is in order whenever one attempts to foresee the future. Events usually make great sense in retrospect, but are difficult to predict at the time. The daily hum of headlines, breaking news and Twitter feeds may distract us from the underlying changes taking place. With this in mind, the best way to think about 2013 is to consider the long-term changes that are reshaping our world – some with visible effect, and some under the radar.” Read more

Why WordPress Needs to Embrace Machine Readability

Benjamin J. Balter recently opined that WordPress needs to do a better job of expressing content in a machine-readable format. Balter begins with an explanation of REST: “The idea is simple: a URL should uniquely identify the underlying data it represents. If I have a URL, I shouldn’t need anything else to view or otherwise manipulate the information behind it. WordPress, for the most part, does this well. Each post is given a unique permalink (e.g., 2012-12-15-why-wordpress…) that always points to that post. The problem is, however, in WordPress’s sense, it points to the display of that content, not the content itself.” Read more
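
Today this gap is typically closed by exposing post data over a JSON API. As a rough sketch, assuming a site that runs the now-standard WordPress REST API at /wp-json/wp/v2/ (the domain and slug below are placeholders), a client can fetch the content itself rather than its rendered page:

```python
# Sketch: fetching a WordPress post's underlying content as JSON rather
# than its rendered HTML page. Assumes the site exposes the (now standard)
# REST API at /wp-json/wp/v2/; the domain and slug below are placeholders.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def fetch_post(site, slug):
    """Return the raw post record (title, content, dates) for a given slug."""
    url = f"https://{site}/wp-json/wp/v2/posts?{urlencode({'slug': slug})}"
    with urlopen(url) as response:
        posts = json.load(response)
    return posts[0] if posts else None

if __name__ == "__main__":
    post = fetch_post("example.com", "why-wordpress-needs-machine-readability")
    if post:
        print(post["title"]["rendered"])
        print(post["content"]["rendered"][:200])
```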
