Sean Golliher

SemanticWeb.com “Innovation Spotlight” Interview with Kevin O’Connor, CEO of FindTheBest and Founder of DoubleClick

Kevin O’Connor, founder of DoubleClick, which Google purchased in 2007 for around $3.1 billion, spoke with me about his newest venture, FindTheBest.com. Founded in 2009, FindTheBest.com makes recommendations and comparisons for just about anything of interest on the web. Kevin tells us all about FindTheBest.com, recommendation engines, and his future plans for the company.

If you would like your company to be considered for an interview please email editor[ at ]semanticweb[ dot ]com.


Sean: Hi Kevin, after leaving DoubleClick, which was sold to Google, what made you decide to start another company?

Kevin: I resigned as the CEO of DoubleClick in 2000, although I did remain the Chairman until the company was sold in 2005. I had spent 17 years working 80-hour weeks—and loved it—but ultimately decided I wanted more balance in my life; that meant spending more time with my family.

My passion for tech, however, never faded. I wanted to find a way to get back into the tech world, but still have time for all the other important things in life. So I decided to start my own venture capital firm—O’Connor Ventures—and began investing in promising startups like Surfline, Meet-Up, Procore and Travidia.

I honestly didn’t think I would get back into the tech world as a founder, but I was becoming more and more frustrated by the chaos of the Web and I wanted to find a way to organize it.

Sean: What inspired you to start FindTheBest.com? What problem were you trying to solve?

Kevin: FindTheBest was founded out of three fundamental problems I saw with the Web:

Read more

SemanticWeb.com “Innovation Spotlight” Interview with Andreas Blumauer, CEO of Semantic Web Company

In this segment of our “Innovation Spotlight” we spoke with Andreas Blumauer, the CEO of Semantic Web Company. Semantic Web Company is headquartered in Vienna, Austria, and its software extracts meaning from big data using linked data technologies. In this interview Andreas describes some of their core products to us in more detail.

Sean: Hi Andreas. Can you give us a little background on your company? When did you get started in the Semantic Web?

Andreas: As an offspring of a ‘typical’ web agency from the early days of the internet, we became a specialized provider in 2004: the ‘Semantic Web School’ focused on research, consulting and training in the area of the semantic web. We quickly learned that the idea of a ‘semantic web’ could trigger a lot of great project visions, but also that most of the tools from the early days of the semantic web were rather scary for enterprises. In 2007 we saw that information professionals were beginning to search for grown-up semantic web solutions to improve their information infrastructure. We were excited that ‘our’ main topics obviously began to play a role in the development of IT strategies in many organizations. We refocused on the development of software and renamed our company.

Read more

SemanticWeb.com “Innovation Spotlight” Interview with Elliot Turner, CEO of AlchemyAPI

In this segment of our “Innovation Spotlight” we spoke with Elliot Turner (@eturner303), the founder and CEO of AlchemyAPI.com. AlchemyAPI’s cloud-based platform processes around 2.5 billion requests per month. Elliot describes how their API helps companies with sentiment analysis, entity extraction, linked data, text mining, and keyword extraction.

Sean: Hi Elliot, thanks for joining us, how did AlchemyAPI get started?

Elliot: AlchemyAPI was founded in 2005 and in the past seven years has become one of the most widely used semantic analysis APIs, processing billions of transactions monthly for customers across dozens of countries.

I am the founder and CEO, and a serial entrepreneur who comes from the information security space. My previous company built and sold high-speed network security appliances. After it was acquired, I started AlchemyAPI to focus on the problem of understanding natural human language and written communications.

Sean: Can you describe how your API works? What does it allow your customers to accomplish?

Elliot: Customers submit content via a cloud-based API, and AlchemyAPI analyzes that information in real-time, transforming opaque blobs of text into structured data that can be used to drive a number of business functions. The service is capable of processing thousands of customer transactions every second, enabling our customers to perform large-scale text analysis and content analytics without significant capital investment.
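The pattern Elliot describes, posting opaque text and getting structured data back, can be sketched as follows. The endpoint and field names here are illustrative placeholders, not AlchemyAPI's actual API:

```python
import json

# Hypothetical endpoint for illustration only; consult the service's
# documentation for its real API.
API_URL = "https://example.com/text/analyze"

def build_request(api_key: str, text: str) -> dict:
    """Package raw text for a cloud text-analysis call."""
    return {"apikey": api_key, "text": text, "outputMode": "json"}

def parse_entities(raw_response: str) -> list:
    """Turn the JSON blob into (entity, type, sentiment) rows."""
    doc = json.loads(raw_response)
    return [(e["text"], e["type"], e.get("sentiment", "neutral"))
            for e in doc.get("entities", [])]

# A mocked response standing in for what the service might return.
sample = json.dumps({"entities": [
    {"text": "AlchemyAPI", "type": "Company", "sentiment": "positive"},
    {"text": "Denver", "type": "City"},
]})

print(parse_entities(sample))
```

The point of the design is that the client never runs its own NLP pipeline; it trades an HTTP round trip for structured annotations.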

Read more

Introducing SemanticWeb.com “Innovation Spotlight” Series with Pingar

[Editor's Note: This interview, conducted by guest Sean Golliher, is our first in the new series entitled "Innovation Spotlight." It's part of our initiative to introduce the semantic web community to innovative companies working on important problems using Semantic Technologies.

If you would like your company to be considered for an interview please email editor[ at ]semanticweb[ dot ]com.]

Pingar Interview:

Alyona Medelyan (@zelandiya) joined Pingar (@PingarHQ) in 2010 and is the company’s chief research officer. She holds a PhD in Natural Language Processing, completed at the University of Waikato and funded by Google. Her areas of expertise are keyword and entity extraction, as well as Wikipedia mining.

In this interview we find out more about Pingar’s research, their products, and the clients they work with.

Sean: Hi Alyona. Thanks for speaking with us today.  When was Pingar founded and can you explain a little bit about what Pingar does?

Alyona: Pingar was founded in 2007 and in the past 5 years we have developed innovative software for document management and text analytics. I joined the company in 2010 and have been focusing more specifically on automated metadata assignment by adding keyword extraction, named entity recognition and taxonomy mapping capabilities.

Sean: What techniques do you use for keyword extraction and named entity recognition? Are you using any existing databases to aid with entity recognition?

Read more

Google Just Hijacked the Semantic Web Vocabulary

[Editor's Note: This guest editorial is provided by Sean Golliher. He can be found on Twitter at @seangolliher]

The Semantic Web’s LOD Cloud

Google announced they’re rolling out new enhancements to their search technology, which they’re calling the “Knowledge Graph.” For those involved in the Semantic Web, Google’s “Knowledge Graph” is nothing new. Watching the video and reading through the announcements, those familiar with this field get the impression that Google’s engineers are claiming to have created something new and innovative.

Google’s “new” Knowledge Graph

While it’s commendable that Google is improving search, it’s interesting to note the direct translations of Google’s “new language” into the existing semantic web vocabulary. Normally engineers and researchers quote, or at least reference, the original sources of their ideas, yet one can’t help but notice that the semantic web isn’t mentioned in any of Google’s announcements. Watching the reactions from the semantic web community, I found that many took notice of the language Google used and how ideas from the semantic web were repackaged as “new” and discovered by Google.

Read more

Paper Review: “Recovering Semantics of Tables on the Web”

A simple table with no semantics

A paper entitled “Recovering Semantics of Tables on the Web” was presented at the 37th Conference on Very Large Databases (VLDB) in Seattle, WA. The paper’s authors included six Google engineers along with Petros Venetis of Stanford University and Gengxin Miao of UC Santa Barbara. The paper describes an approach for recovering the semantics of tables by adding annotations beyond what a table’s author has provided. It is of interest to developers working on the semantic web because it gives insight into how programmers can use semantic data (a database of triples) and Open Information Extraction (OIE) to enhance unstructured data on the web. In addition, the authors compare a “maximum-likelihood” model for assigning class labels to tables against the “database of triples” approach, and show that their method is capable of labeling “an order of magnitude more tables on the web than is possible using Wikipedia/YAGO and many more than Freebase.”
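The “database of triples” idea can be sketched as a majority vote: look up each cell value in an isA database and pick the class supported by the most cells. This toy illustration is not the paper's actual scoring model, and the tiny isA table below is invented for the example:

```python
from collections import Counter

# Toy isA database mapping values to candidate classes; the paper
# mines such (value, class) pairs from the web at a far larger scale.
ISA = {
    "Seattle": {"city", "port"},
    "Boston": {"city"},
    "Chicago": {"city"},
}

def label_column(cells):
    """Assign a class label to a table column by majority vote."""
    votes = Counter()
    for cell in cells:
        for cls in ISA.get(cell, ()):
            votes[cls] += 1
    return votes.most_common(1)[0][0] if votes else None

print(label_column(["Seattle", "Boston", "Chicago"]))  # "city" wins 3-1 over "port"
```

The paper's maximum-likelihood model refines this by weighting each vote by how likely a class is given the cell value, rather than counting every match equally.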

Read more

The Future of Video on the Web: A Discussion with Googler Thomas Steiner About the “Semantic Web Video (SemWebVid)” Project

During the recent SemTech SF event I met with Thomas Steiner of Google Italy to discuss a recent project entitled “SemWebVid.” In addition to working for Google, Steiner is working on his Ph.D. at UPC.edu in Barcelona, Spain. I found Steiner’s project fascinating because it provides a glimpse at how semantic technology will change the way we view, find, and interact with online video in the future.

The SemWebVid project is part of a European Union research project entitled “I-SEARCH.”  Google, in addition to other companies, is one of the industry sponsors of this project. All of the research from this project is being done “in the open” and the findings are available to the public via published papers.

Read more

SemTech2011 Coverage: SemWebbers, LODers: What PubSubHubbub Can Do For You

I sat down with Dr. Alexandre Passant, of DERI NUI Galway, to discuss his recent research projects entitled “sparqlPuSH” and “SMOB.” SparqlPuSH combines Google’s PubSubHubbub protocol, SPARQL queries, and SPARQL updates for proactive notifications in RDF stores; the interface was designed to be plugged on top of an RDF store. SMOB is a distributed microblogging system that reuses similar principles to enable privacy in microblogging, and it won a Google research award. The full specification of sparqlPuSH can be found on the Google Code website.
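The PubSubHubbub layer that sparqlPuSH builds on works by a client registering a callback URL with a hub for a topic of interest. A minimal sketch of forming that subscription request (the URLs are placeholders, and this shows the generic PubSubHubbub protocol rather than sparqlPuSH's own interface):

```python
from urllib.parse import urlencode

def build_subscription(callback: str, topic: str) -> str:
    """Form-encode the subscribe request a client POSTs to a PuSH hub."""
    return urlencode({
        "hub.mode": "subscribe",      # subscribe or unsubscribe
        "hub.callback": callback,     # where the hub pushes notifications
        "hub.topic": topic,           # the feed being watched
        "hub.verify": "async",        # hub verifies intent asynchronously
    })

payload = build_subscription(
    "https://client.example.org/notify",
    "https://store.example.org/feeds/updates",
)
print(payload)
```

Once subscribed, the client no longer polls: the hub pushes new content to the callback, which is what makes proactive notification of RDF store updates practical.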

Passant began by discussing the real-time web and applications that utilize citizen sensing. Practical applications are being developed that combine sensors and social data to solve real-world problems. Passant said, “The web is now being looked at like a large information stream. We can use this stream to build real-time semantic applications.” He cited a recent paper from WWW2010 entitled “Earthquake Shakes Twitter Users: Real-time Event Detection by Social Sensors.” In this paper the authors discuss how they semantically analyze a tweet and how each user is regarded as a “sensor” in the application.

Read more

SemTech 2011 Coverage: PayPal Discusses Social Commerce and the Semantic Web

PayPal’s Praveen Alavilli presented at SemTech 2011, San Francisco, on Social Commerce and the Semantic Web. Alavilli explained that “ecommerce involves much more than just buying and selling.” There are many aspects to ecommerce, yet many online retailers focus only on the transactional part of an online sale, for example, the point where a consumer is on a website making a purchase in a shopping cart. As Alavilli explained, “the buying cycle of a consumer is much larger than the point at which a transaction occurs.” In each step of the buying cycle, there is an opportunity to influence the consumer. Online consumers may go through one, or all, of the following steps in an ecommerce buying cycle:

  • Inspiration: A consumer “wants to do something,” for example, to become a writer.
  • Investigation: The consumer looks for products that help accomplish that task.
  • Research: The consumer researches products and solutions.
  • Search: The consumer searches the web for a product.
  • Transaction: The experience the consumer has when purchasing online.
  • Use: How the consumer uses the product.
  • End of Life: What happens when the consumer is finished using the product.

Read more

SemTech 2011 Coverage: The RDFa/SEO Wave – How to Catch It and Why

Barbara H. Starr

In Barbara Starr’s (Ontologica) session this week at SemTech 2011, San Francisco, she presented a detailed timeline outlining the adoption of RDFa and semantic search enhancements by the major search engines. In addition to mentioning the rapid growth of the Linked Open Data (LOD) cloud, she showed a movement by the search engines, in particular Google, to support semantic search: a shift away from a web of documents connected by hyperlinks toward a web of data connected by semantic links. Her timeline traced the evolution of search engine support for structured data formats such as Resource Description Framework in attributes (RDFa) through last week’s announcement of the Schema.org alliance.

In her talk, she demonstrated how semantic technology is used in commercial search by running the query “Barack Obama Birthday” on both Google and Bing. Google returns answers from typical sources like answers.com and Wikipedia, as shown below in Fig. 1.0.
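RDFa answers queries like the one above by embedding machine-readable facts directly in a page's HTML. The fragment below is a minimal illustration (the vocabulary terms and values are chosen for the example, not taken from Starr's talk), with a naive extraction showing the data a search engine can lift out; real consumers would use a proper RDFa parser:

```python
import re

# A minimal RDFa fragment of the kind search engines began to index.
html = """
<div xmlns:foaf="http://xmlns.com/foaf/0.1/" about="#barack-obama">
  <span property="foaf:name">Barack Obama</span>
  <span property="foaf:birthday">08-04</span>
</div>
"""

# Naively pull out (property, value) pairs from the property attributes.
triples = re.findall(r'property="([^"]+)">([^<]+)<', html)
print(triples)
```

The same page still renders as ordinary HTML for human readers; the `property` attributes are what turn it from a document into data.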

Read more