Seth Stevenson of The Wall Street Journal recently wrote, “Upon hearing, in March of this year, reports that a 17-year-old schoolboy had sold a piece of software to Yahoo! for $30 million, you might well have entertained a few preconceived notions about what sort of child this must be. A geeky specimen, no doubt. A savant with zero interests outside writing lines of code. A twitchy creature, prone to mumbling, averse to eye contact. Thus it’s rather a shock when you first encounter Nick D’Aloisio striding into London’s Bar Boulud restaurant, firmly shaking hands and proceeding to outline his entrepreneurial vision.” Read more
[UPDATE: This panel has a new panelist! Mike Arnesen, SEO Team Manager of SwellPath will participate in New York.]
On October 3 at the New York Semantic Technology & Business Conference (#SemTechBiz), a panel of experts will tackle the question of how Semantic Web technologies are rapidly changing the landscape of Search Engine Optimization. The panel, titled “The Semantic Web Has Killed SEO. Long Live SEO.”, features Aaron Bradley, David Amerland, Barbara Starr, Duane Forrester, and Mike Arnesen.
The session will address numerous issues at the intersection of the Semantic Web and SEO. As the description reads, “From rich snippets to the Google Knowledge Graph to Bing Snapshots, semantic technology has transformed the look, feel and functionality of search engines.”
Have these changes undermined the ways in which websites are optimized for search, effectively “killing” SEO? Or are tried-and-true SEO tactics still effective? And what does the future hold for SEO in a semantic world?
David Amerland, author of Google Semantic Search and speaker at the upcoming Semantic Technology & Business Conference in New York, has given his take on Microsoft’s acquisition of Nokia’s Devices & Services division. In his analysis, he talks about the Semantic Web and how the company that stands to lose the most in this deal is neither Microsoft nor Nokia, but Yahoo!
Amerland posits, “In the semantic web there are specific vectors driving growth that revolve around relevance and the end-user experience. In order to guarantee both, you need a means to constantly acquire quality data and control the environment. Apple gets this to the extent that it locks out virtually all third-party providers from its iPhones and iOS; Facebook got it, which is why it launched its own app designed to help it take over users’ phones; and Google gets it, having recently launched the Moto X, in addition to the Android environment being present in many third-party phones.”
Sing along with me to this classic hit from 1980: “Knowledge graphs are everywhere; They’re everywhere; My mind describes them to me.”
That’s a riff on Our Daughter’s Wedding’s song “Lawn Chairs.” But it’s a good description of some of the activity at the Semantic Technology & Business Conference this week, which saw Google, Yahoo and Wikidata chatting up the topic of Knowledge Graphs. On Tuesday, for example, Google’s Jason Douglas provided insight into how the search giant’s Knowledge Graph is critical to meeting a new world of search requirements that’s focused on providing answers and acting in an anticipatory way (see story here), while Wednesday’s closing keynote had Wikimedia Deutschland e.V. project director Denny Vrandecic getting the audience up to date with Wikidata – aka, Wikipedia’s Knowledge Graph For, And By, Everyone.
There are some 280 language versions of Wikipedia for which Wikidata serves as the common source of structured data. Wikidata now has an entity base of more than 12 million items that represent the topics of Wikipedia articles, Vrandecic said during his presentation.
At the Semantic Technology & Business Conference in San Francisco Monday, OCLC technology evangelist Richard Wallis broke the news that content negotiation has been implemented for the publication of Linked Data for WorldCat resources. Last June, WorldCat.org began publishing Linked Data for its bibliographic treasure trove, a global catalog of more than 290 million library records and some 2 billion holdings, leveraging schema.org to describe the assets.
“Now you can use standard Linked Data technologies to bring back information in RDF/XML, JSON, or Turtle,” Wallis said. Or triples. “People can start playing with this today.” As he writes in his blog post discussing the news, users can manually specify their preferred serialization format to work with or display, or do it from within a program by setting the HTTP Accept header to the desired format when accessing the URI.
“Two hundred ninety million records on the web of Linked Data is a pretty good chunk of stuff when you start talking content negotiation,” Wallis told the Semantic Web Blog.
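The programmatic route Wallis describes comes down to one HTTP header. As a minimal sketch (the OCLC number below is illustrative, and the format-to-media-type mappings are the conventional RDF MIME types rather than anything taken from OCLC documentation), a client asks for a particular serialization like this:

```python
# Sketch of Linked Data content negotiation against a resource URI.
# The OCLC number and the FORMATS table are illustrative assumptions.
from urllib.request import Request

FORMATS = {
    "turtle": "text/turtle",           # Turtle
    "rdfxml": "application/rdf+xml",   # RDF/XML
    "json": "application/ld+json",     # JSON-LD
    "triples": "text/plain",           # N-Triples
}

def negotiated_request(uri: str, fmt: str) -> Request:
    """Build a request whose Accept header asks the server for a
    chosen RDF serialization instead of the default HTML page."""
    req = Request(uri)
    req.add_header("Accept", FORMATS[fmt])
    return req

req = negotiated_request("http://www.worldcat.org/oclc/41266045", "turtle")
print(req.get_header("Accept"))  # text/turtle
```

Opening the request with `urllib.request.urlopen(req)` would then return the record in the negotiated serialization, assuming the server honors that Accept value.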
David Amerland of Imassera recently wrote, “It seems that $1 billion plus change these days is what’s required to buy a photo-sharing app (if you’re Facebook), a global phone manufacturer (if you’re Google) or a microblogging site (if you happen to be Yahoo). Beyond the jaw-dropping numbers that are casually bandied around for these acquisitions lies a game plan that has every major player struggling to position themselves for relevance and longevity in the semantic web. This ‘new’ web is characterized by two related things, data and connectivity, and these happen to be the exact same building blocks out of which web verticals are created.” Read more
Yahoo! is looking for a Senior Research Scientist, Scalable Machine Learning in New York, NY. According to the post, “Machine learning and data mining are ubiquitous in Web research and stand at the core of Yahoo!’s products and platforms, involving important functions such as classification, prediction, ranking, clustering, sampling, compression, recommendation, dimension reduction, indexing, pattern mining and regression. The technical challenges associated with these tasks have been extensively studied using standard computational models. However, they still represent an open research area in large-scale, high-throughput data environments such as Yahoo!’s.” Read more
Schema.org, Learning Resource Metadata Initiative Join Hands In Boost To Educational Content Searches
Earlier this month word came of a revision to schema.org: Version 1.0a additions, according to this posting from Dan Brickley, include the Datasets vocabulary and some supporting utility terms for describing schema.org types, properties and their inter-relationships. One of the gems in the update is a set of additions related to the Learning Resource Metadata Initiative (LRMI), an effort led by the Association of Educational Publishers and Creative Commons, which has as its goals making it easier to publish, discover and deliver quality educational resources on the web. The Bill and Melinda Gates Foundation and the William and Flora Hewlett Foundation helped fund the work.
With schema.org serving as a catalyst for its work, the LRMI developed a common metadata framework for tagging online learning resources, with the idea of having that metadata schema incorporated into Schema.org. With that now the case, it’s possible for publishers or curators of educational content to use LRMI markup and have that metadata recognized by the major search engines.
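As an illustration of what that tagging looks like in practice (the lesson described below is invented, but the property names are among those LRMI contributed to schema.org), a learning resource might be described like this, expressed here as JSON-LD for compactness; the same properties can also be embedded as microdata or RDFa:

```python
import json

# Hypothetical learning resource tagged with LRMI properties
# (learningResourceType, typicalAgeRange, educationalAlignment)
# now part of the schema.org vocabulary. The resource is invented.
resource = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "name": "Introduction to Fractions",
    "learningResourceType": "lesson plan",
    "typicalAgeRange": "9-11",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "educationalSubject",
        "targetName": "Mathematics",
    },
}

print(json.dumps(resource, indent=2))
```

Because those properties live in schema.org proper, a search engine that already parses schema.org markup picks up the educational metadata with no extra vocabulary to support.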
“One of the reasons why education was one of the first extensions of schema.org is that the education industry is going through some very interesting times,” says Madi Weland Solomon, head of Data Architecture Standards at education company Pearson plc, one of the LRMI project launch partners.
Barbara Starr of Search Engine Land reports, “In a June 2010 Semantic Web Meetup in San Diego, Peter Mika of Yahoo!’s research division gave a presentation entitled, ‘The future face of Search is Semantic for Facebook, Google and Yahoo!’ As the title suggests, the presentation focused on the ever-growing use of semantic markup as a means for helping computers parse and understand content. The talk focused on what was then the current state of the Semantic Web, as well as upcoming formats/technologies in development and the research being done in the field of semantic search.” Read more
Google has scooped up news aggregation summary service Wavii for $30 million, according to Reuters. (Google and Wavii haven’t officially commented yet.) Wavii’s service builds on expert machine learning and natural language processing work, as founder and CEO Adrian Aoun explained in our interview here. In February, a post on the company’s blog also explained its use of classification for NLP tasks such as disambiguating entities, automatically learning new entities, and relationship extraction. Late last year Wavii announced its iPhone app.
Reports have it that Google and Apple were in a bidding war over acquiring the venture, which has been likened to Yahoo’s Summly buyout in March (see story here). TechCrunch says the Wavii team will join Google’s Knowledge Graph division.
When it comes to delivering personalized intelligence about what’s up in the world, Wavii aims to better understand users and what they’ll want to see in their feeds not just via explicit topic follows, but also via various signals. These include which other topics are involved in the events users comment on, how often they click into events about each topic, what topics they search for, and what topic pages they visit. Other signals include attributes of stories users care about beyond the topics themselves, as well as a user’s interest level in one topic as a way to estimate interest in related topics.
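None of Wavii’s internals are public, but the kind of signal blending described above can be sketched in toy form. The signal names and weights here are invented for illustration, not drawn from Wavii:

```python
# Toy interest ranking from per-topic engagement signals of the
# kinds mentioned above: follows, comments, clicks, searches, visits.
# Weights are arbitrary illustrative values.
WEIGHTS = {
    "follows": 5.0,
    "comments": 3.0,
    "clicks": 1.5,
    "searches": 2.0,
    "visits": 1.0,
}

def interest_score(signals: dict) -> float:
    """Weighted sum of a user's engagement signals for one topic."""
    return sum(WEIGHTS[name] * count for name, count in signals.items())

def rank_topics(topic_signals: dict) -> list:
    """Order topics by estimated interest, strongest first."""
    return sorted(topic_signals,
                  key=lambda t: interest_score(topic_signals[t]),
                  reverse=True)

feed = rank_topics({
    "Google": {"follows": 1, "clicks": 4},   # score 5.0 + 6.0 = 11.0
    "Yahoo":  {"searches": 1, "visits": 2},  # score 2.0 + 2.0 = 4.0
})
print(feed)  # ['Google', 'Yahoo']
```

A production system would of course learn such weights rather than hard-code them, and would add the related-topic spillover the excerpt mentions; this only shows the shape of the idea.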