Posts Tagged ‘Yahoo!’
Yahoo! is searching for a Research Scientist in Sunnyvale, CA. According to the post, “We are looking for a few research scientists with strong backgrounds in machine learning, data mining, or natural language processing to join this team. Your essential responsibility is to focus on search relevance, query understanding, query triggering, email spam, login fraud, etc. Your long-term responsibilities include deeply understanding and analyzing search relevance issues, mining large corpora of click logs and query logs, designing and developing better ways to understand query intent and improve search relevance, and detecting spam emails in real time, which may involve machine learning for ranking, natural language processing, entity extraction, and text classification.” Read more
Deborah Todd of the Pittsburgh Post-Gazette reports, “An initiative to use Yahoo’s data and Carnegie Mellon University’s brain trust to build the smartphone apps of the future has launched with a multimillion-dollar jump start. Project InMind — a five-year, $10 million partnership between CMU and multinational Internet corporation Yahoo Inc. — gives university researchers access to a “mobile toolkit” of Yahoo’s real-time data services and its infrastructure in order to advance machine learning and personalization of smartphone apps. Once new experimental mobile products are created, students and faculty on campus will be able to opt in as alpha testers. The goal is to create customized services able to anticipate users’ needs and interests on an ongoing basis, whether a user is at home playing video games or navigating the streets of a foreign country.” Read more
Jim Edwards of Business Insider reports, “Yahoo has acquired Aviate, a company that provides ‘contextual’ app search and organization for mobile phone users, Marissa Mayer announced at CES — the huge tech conference in Las Vegas — today. ‘Contextual’ search is becoming a huge deal at the major tech brands. Google, Microsoft’s Bing, Apple and Facebook all have contextual or ‘semantic’ search efforts under way.” Read more
Interested in how schema.org has fared in the couple of years since its birth? If you were at the International Semantic Web Conference in Sydney a couple of weeks back, you may have caught Google Fellow Ramanathan V. Guha — the mind behind schema.org — present a keynote address about the initiative.
Of course, Australia is a long way to go for a lot of people, so The Semantic Web Blog is happy to catch everyone up on Guha’s thoughts on the topic.
We caught up with him when he was back stateside:
The Semantic Web Blog: Tell us a little bit about the main focus of your keynote.
Guha: The basic discussion was a progress report on schema.org – its history and why it came about a couple of years ago. Other than a couple of panels at SemTech we’ve maintained a rather low profile and figured it might be a good time to talk more about it, and to a crowd that is different from the SemTech crowd.
The short version is that the goal, of course, is to make it easier for mainstream webmasters to add structured data markup to web pages, so that they wouldn’t have to track down many different vocabularies, or think about what Yahoo or Microsoft or Google understands. Before, webmasters had to champion internally which vocabularies to use and how to mark up a site. We have reduced that, and now it’s also not an issue of which search engine to cater to.
It’s now a little over two years since launch and we are seeing adoption way beyond what we expected. The aggregate search engines see about 15 percent of the pages we crawl have schema.org markup. This is the first time we see markup approximately on the order of the scale of the web….Now over 5 million sites are using it. That’s helped by the mainstream platforms like Drupal and WordPress adopting it so that it becomes part of the regular workflow. Read more
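The markup Guha describes can take several syntaxes, including microdata, RDFa, and JSON-LD. As a rough illustration only — the `@type` and property names below are standard schema.org vocabulary terms, but the article itself is hypothetical — here is a minimal sketch in Python of the kind of structured data a webmaster might embed in a page:

```python
import json

# A minimal schema.org description of a (hypothetical) news article,
# expressed as JSON-LD. "@context" points at the schema.org vocabulary;
# "Article", "headline", "author", and "datePublished" are standard
# schema.org terms that search engines can recognize.
article = {
    "@context": "http://schema.org",
    "@type": "Article",
    "headline": "Schema.org Adoption Passes 5 Million Sites",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2013-07-01",
}

# Serialize; a webmaster would embed this string in the page inside
# a <script type="application/ld+json"> ... </script> element.
markup = json.dumps(article, indent=2)
print(markup)
```

Platforms like Drupal and WordPress generate markup of this sort automatically, which is part of why adoption has become part of the regular publishing workflow rather than a manual task.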
Seth Stevenson of The Wall Street Journal recently wrote, “Upon hearing, in March of this year, reports that a 17-year-old schoolboy had sold a piece of software to Yahoo! for $30 million, you might well have entertained a few preconceived notions about what sort of child this must be. A geeky specimen, no doubt. A savant with zero interests outside writing lines of code. A twitchy creature, prone to mumbling, averse to eye contact. Thus it’s rather a shock when you first encounter Nick D’Aloisio striding into London’s Bar Boulud restaurant, firmly shaking hands and proceeding to outline his entrepreneurial vision.” Read more
[UPDATE: This panel has a new panelist! Mike Arnesen, SEO Team Manager of SwellPath will participate in New York.]
On October 3 at the New York Semantic Technology & Business Conference (#SemTechBiz), a panel of experts will tackle the issue of how Semantic Web technologies are rapidly changing the landscape of Search Engine Optimization. The panel, titled “The Semantic Web Has Killed SEO. Long Live SEO.,” is made up of Aaron Bradley, David Amerland, Barbara Starr, Duane Forrester, and Mike Arnesen.
The session will address numerous issues at the intersection of the Semantic Web and SEO. As the description reads, “From rich snippets to the Google Knowledge Graph to Bing Snapshots, semantic technology has transformed the look, feel, and functionality of search engines.”
Have these changes undermined the ways in which websites are optimized for search, effectively “killing” SEO? Or are tried-and-true SEO tactics still effective? And what does the future hold for SEO in a semantic world?
David Amerland, author of Google Semantic Search and speaker at the upcoming Semantic Technology & Business Conference in New York, has given his take on Microsoft’s acquisition of Nokia’s Devices & Services division. In his analysis, he talks about the Semantic Web and how the company that stands to lose the most in this deal is neither Microsoft nor Nokia, but Yahoo!
Amerland posits, “In the semantic web there are specific vectors driving growth that revolve around relevance and the end-user experience. In order to guarantee both you need a means to constantly acquire quality data and control the environment. Apple gets this to the extent that it locks out virtually all third-party providers from its iPhones and iOS, Facebook got it, which is why it launched its own app designed to help it take over users’ phones, and Google gets it, having recently launched the Moto X, in addition to the Android environment being present in many third-party phones.”
Sing along with me to this classic hit from 1980: “Knowledge graphs are everywhere; They’re everywhere; My mind describes them to me.”
That’s Our Daughter’s Wedding’s song “Lawn Chairs.” But it’s a good description of some of the activity at the Semantic Technology & Business Conference this week, which saw Google, Yahoo and Wikidata chatting up the topic of Knowledge Graphs. On Tuesday, for example, Google’s Jason Douglas provided insight into how the search giant’s Knowledge Graph is critical to meeting a new world of search requirements that is focused on providing answers and acting in an anticipatory way (see story here), while Wednesday’s closing keynote had Wikimedia Deutschland e.V. project director Denny Vrandecic getting the audience up to date with Wikidata – aka, Wikipedia’s Knowledge Graph For, And By, Everyone.
There are some 280 language versions of Wikipedia for which Wikidata serves as the common source of structured data. Wikidata now has an entity base of more than 12 million items that represent the topics of Wikipedia articles, Vrandecic said during his presentation.