Posts Tagged ‘search’

Google Buys DeepMind Technologies, Growing Its Deep Learning Portfolio And Expertise

Google’s letting the cash flow. Fresh off its $3.2 billion acquisition of “conscious home” company Nest, which makes the Nest Learning Thermostat and the Protect smoke and carbon monoxide detector, it’s spending some comparative pocket change ($400 million) on artificial intelligence startup DeepMind Technologies.

The news was first reported by re/code, where one source describes DeepMind as “the last large independent company with a strong focus on artificial intelligence.” The London startup, funded by Founders Fund, was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman with the stated goal of combining machine learning techniques and neuroscience to build powerful general-purpose learning algorithms.

Its web page notes that its first commercial applications are in simulations, e-commerce and games, and this posting for a part-time paid computer science internship from this past summer casts it as “a world-class machine learning research company that specializes in developing cutting edge algorithms to power massively disruptive new consumer products.”

Read more

Google’s Popping Up Information About Search Result Sources

Google’s Knowledge Graph took on some new work this week, driving popups of information about some of the website sources that users see in their search results.

According to a posting at Google’s Search blog, clicking on the name of the information source that appears next to the link delivers details about that source. “You’ll see this extra information when a site is widely recognized as notable online, when there is enough information to show or when the content may be handy for you,” reports Bart Niechwiej, the software engineer who wrote up the news.

The feature’s been getting a lot of buzz. Much of the information informing Google’s Knowledge Graph comes from Wikipedia, as well as from Freebase and the CIA World Factbook, so Wikipedia is likely to turn up as a popup source somewhere in most searches’ results. In fact, observers like Matt McGee over at Search Engine Land have noted that “the popups rely heavily on Wikipedia.”

Read more

Hello 2014 (Part 2)

Courtesy: Flickr/faul

Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.

Marco Neumann, CEO and co-founder, KONA, and director, Lotico: On the technology side I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.
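As a concrete illustration of the endpoint deployments Neumann mentions: the SPARQL 1.1 Protocol lets a client send a query to an endpoint as a simple URL-encoded HTTP parameter. The sketch below only builds such a request URL; the endpoint address and the query itself are illustrative, not from any real deployment.

```python
from urllib.parse import urlencode

def build_sparql_request(endpoint, query):
    """Return the GET URL for a SPARQL query against an endpoint.

    The SPARQL 1.1 Protocol lets clients pass the query as a
    URL-encoded 'query' parameter; many endpoints also honor a
    'format' hint for the response serialization.
    """
    params = urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    return endpoint + "?" + params

# Illustrative query: list ten resources typed as foaf:Person.
QUERY = """\
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person WHERE { ?person a foaf:Person } LIMIT 10
"""

url = build_sparql_request("http://example.org/sparql", QUERY)
```

Fetching that URL (with `urllib.request.urlopen`, for example) would return the results as JSON; that step is omitted here because the endpoint is hypothetical.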

Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.

Read more

Hello 2014

Courtesy: Flickr/Wonderlane

Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:

Phil Archer, Data Activity Lead, W3C:

For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with Sem Web.

I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion, along with the power and functionality of geospatial information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!
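To make the GeoSPARQL mention concrete: the standard extends SPARQL with spatial vocabulary and filter functions, so a query can mix ordinary graph patterns with geometry tests. The query below is a sketch of that idea only; the dataset it would run against and the polygon coordinates are illustrative.

```python
# A GeoSPARQL SELECT that finds places whose WKT geometry falls
# inside a bounding polygon (coordinates here are arbitrary).
# geo: is the GeoSPARQL ontology; geof: holds its filter functions.
GEO_QUERY = """\
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

SELECT ?place WHERE {
  ?place geo:hasGeometry ?g .
  ?g geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((-0.5 51.3, 0.3 51.3, 0.3 51.7, -0.5 51.7, -0.5 51.3))"^^geo:wktLiteral))
}
"""
```

`geof:sfWithin` is one of the standard’s simple-features relations; a GeoSPARQL-aware triple store evaluates it against the stored geometries, which is exactly the GIS-style functionality Archer says needs to become easier to combine with Linked Data publishing.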

[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
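Archer’s point that JSON-LD is both ordinary JSON and RDF can be seen in a small sketch. The document below (names and URLs are made up for illustration) parses with any JSON library, while its @context maps the keys to vocabulary IRIs so RDF tooling can read the same bytes as triples:

```python
import json

# Plain JSON to a Web developer; RDF to a Linked Data tool. The
# @context maps each key to a vocabulary IRI (schema.org here),
# and "@type": "@id" marks the homepage value as an IRI, not a string.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice",
}

serialized = json.dumps(doc, indent=2)
```

Run through a JSON-LD processor (the PyLD library, for instance), the same document yields RDF triples with schema.org predicates; a developer who ignores the @context still gets a perfectly usable JSON object.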

Read more

Good-Bye 2013

Courtesy: Flickr/MadebyMark

As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.

Read on:


Phil Archer, Data Activity Lead, W3C:

The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.

I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study is useful IMO.

Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier and allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.

Christine Connors, founder and information strategist, TriviumRLG:

What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember in a long time. I think that this is a noteworthy trend if my assessment is accurate.

There’s also been a huge increase in the attention of the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field and via schema.org.

Read more

OpenText Details Enterprise Information Management Project For Getting More Value From Enterprise Information

OpenText, the enterprise information management (EIM) vendor that acquired Nstein’s text mining and analytics technology a few years back and was ranked strong in go-to-market strength in a January report from Hurwitz & Associates, today is revealing details about its new Project Red Oxygen at its annual customer conference. Now, its core content management and analytics technology will have a featured role in the Discovery Suite component of the project, which the vendor bills as the first-ever harmonized release of new EIM software advancements for extracting value from information and accelerating time to competitive advantage.

“We see that information management is becoming extremely strategic,” says Lubor Ptacek, VP of Strategic Marketing. There’s no competitive differentiation to be gained in the mundane any longer: in retail banking, for example, everybody offers savings, checking with zero cost, and so on, he notes, so “How do you compete? You do it by how well you apply information to recruiting more customers and turning money over faster. Efficiency and customer experience matter, but many companies still manage their businesses in a very siloed way.” But it’s only when there’s the ability to combine information from among all these disparate apps “that we can start helping our organizations to drive innovation and growth,” Ptacek says.

Read more

Redlink Brings The Semantic Web To Integrators

A cloud-based platform for semantic enrichment, linked data publishing and search technologies is underway now at startup Redlink, which bills it as the world’s first project of its kind.

The company has a heritage in the European Commission-funded IKS (Interactive Knowledge Stack) Open Source Project, created to provide a stack of semantic features for use in content management systems and the birthplace of Apache Stanbol, as well as in the Linked Media Framework project, from which Apache Marmotta derived. The founding developers of those open source projects are founders of Redlink. Core to the platform are Apache Stanbol, which provides a set of reusable components for semantic content management (including adding semantic information to “non-semantic” pieces of content), and Apache Marmotta, which provides Linked Data platform capabilities, along with Apache Solr for enterprise search.
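To give a feel for the “adding semantic information to non-semantic content” step: Stanbol exposes its enhancement engines over REST, so a client POSTs plain text and gets back RDF annotations. The sketch below only builds such a request; the localhost URL assumes a hypothetical locally running instance, and no request is actually sent.

```python
from urllib import request

def build_enhancer_request(base_url, text):
    """Build (but do not send) a POST request for a Stanbol-style
    enhancer endpoint, asking for the enhancements as JSON-LD."""
    req = request.Request(
        base_url + "/enhancer/chain/default",
        data=text.encode("utf-8"),
        method="POST",
    )
    req.add_header("Content-Type", "text/plain; charset=utf-8")
    req.add_header("Accept", "application/ld+json")
    return req

req = build_enhancer_request(
    "http://localhost:8080",
    "Apache Stanbol adds semantic annotations to plain content.",
)
```

Opening the request with `urllib.request.urlopen(req)` against a real instance would return entity annotations (detected people, places, organizations) linking the text to Linked Data resources.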

Read more

UPDATE: The Semantic Web Has Killed SEO. Long Live SEO.

[UPDATE: This panel has a new panelist! Mike Arnesen, SEO Team Manager of SwellPath will participate in New York.]

On October 3 at the New York Semantic Technology & Business Conference (#SemTechBiz), a panel of experts will tackle the issue of how Semantic Web technologies are rapidly changing the landscape of Search Engine Optimization. The panel, titled “The Semantic Web Has Killed SEO. Long Live SEO.,” is made up of Aaron Bradley, David Amerland, Barbara Starr, Duane Forrester, and Mike Arnesen.

The session will address numerous issues at the intersection of the Semantic Web and SEO. As the description reads, “From rich snippets to the Google Knowledge Graph to Bing Snapshots, semantic technology has transformed the look, feel and functionality of search engines.”

Have these changes undermined the ways in which websites are optimized for search, effectively “killing” SEO? Or are tried-and-true SEO tactics still effective? And what does the future hold for SEO in a semantic world?
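The rich snippets the session description mentions are driven by structured markup that publishers embed in their pages. A hedged sketch of what that markup can look like, using made-up product data and schema.org types (one common way to express it is JSON-LD in a script element):

```python
import json

# Hypothetical product markup of the kind that powers rich snippets;
# the type and property names come from the schema.org vocabulary.
markup = {
    "@context": "http://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.2",
        "reviewCount": "87",
    },
}

# Embedded in a page, the JSON sits inside a script element that
# crawlers read but browsers do not render.
snippet = ('<script type="application/ld+json">'
           + json.dumps(markup)
           + "</script>")
```

Markup like this is what lets a search engine decorate a result with stars and review counts, which is why panels like this one argue that structured data, not just keywords, now shapes how pages appear in results.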

Read more

Microsoft, Nokia, and What this Means for Yahoo!

David Amerland, author of Google Semantic Search and speaker at the upcoming Semantic Technology & Business Conference in New York, has given his take on Microsoft’s acquisition of Nokia’s Devices & Services division. In his analysis, he talks about the Semantic Web and how the company that stands to lose the most in this deal is neither Microsoft nor Nokia, but Yahoo!

Amerland posits, “In the semantic web there are specific vectors driving growth that revolve around relevance and the end-user experience. In order to guarantee both, you need a means to constantly acquire quality data and control the environment. Apple gets this to the extent that it locks out virtually all third-party providers from its iPhones and iOS; Facebook got it, which is why it launched its own app designed to help it take over users’ phones; and Google gets it, having recently launched the Moto X, in addition to the Android environment being present in many third-party phones.”

Read more

Analysis of Brand-Related Knowledge Graph Search

In a recent post on the Moz.com blog, Dr. Peter J. Meyers wrote about an apparent change on the morning of July 19th in how Google processes Knowledge Graph entities. “My gut feeling is that Google has bumped up the volume on the Knowledge Graph, letting KG entries appear more frequently,” Meyers posted.

The morning of July 19th was specifically identified because, Meyers explained, “Overnight, the number of queries we track in the MozCast 10K beta system that show some kind of Knowledge Graph jumped from 17.8% to 26.7%, an increase of over 50%. This was not a test or a one-day fluke — here’s a graph for all of July 2013 (as of August 20th, the number has remained stable near 27%).”

Read more
