Posts Tagged ‘Facebook’
Facebook is looking for a Partner Engineer (Data Applications) in Menlo Park, CA. According to the post, “Partner Engineering is a highly technical team that works with our strategic partners to integrate Facebook Platform into their Web sites, applications, and devices. This role demands an in-depth understanding of complex issues related to semantics, data modeling, platform architecture, application development, and management. The ideal candidate will have 15+ years of professional data analysis and systems architecture experience, including both relational database and semantic modeling work.”
Not everyone gets to have quite the affectionate relationship with technology that Joaquin Phoenix has with Samantha in Her. But it’s nearly Valentine’s Day, so it’s as good a time as any to review some of the ways that semantic and related technologies are helping us find love, and stay in it:
- Graphing relationships is the game at dating app Hinge, which works to connect Facebook friends with friends’ friends, using their histories and likes to build a graph that gets the love conversation started. The free, data-driven mobile matchmaking app is available in NYC, DC, Philadelphia, and Boston, and most recently came online in San Francisco, too.
- Folks in search of romance also have the Freebase-powered LoveFlutter to check into. It, too, makes use of your Facebook interests, and extends them with the help of the Freebase database to fill out other details about those interests – such as what movie genres you like – to semantically connect your interests with those of other users, and you with them. It will use that data to suggest a great first date spot for you, too. Costs range from free to $29.99 a month.
Elasticsearch 1.0 launches today, combining Elasticsearch real-time search and analytics, Logstash (which collects logs and other event data from your systems and stores them in a central place), and Kibana (for graphing and analyzing logs) in an end-to-end stack designed to be a complete platform for data interaction. This first major release of the solution, which delivers actionable real-time insights from almost any type of structured or unstructured data source, follows on the heels of Elasticsearch Marvel, a commercial monitoring solution that gives users insight into the health of their Elasticsearch clusters.
Organizations from Wikimedia to Netflix to Facebook today take advantage of Elasticsearch, which VP of engineering Kevin Kluge says has been distinguished, since its open-source start four years ago, by its focus on distributed real-time search. The native JSON and RESTful search tool “has intelligence where when it gets a new field that it hasn’t seen before, it discerns from the content of the field what type of data it is,” he explains. Users can optionally define schemas if they want, or stay more freeform and very quickly add new styles of data while still profiting from easier management and administration, he says.
Models also exist for using JSON-LD to represent RDF in a manner that Elasticsearch can index. The BBC World Service Archive prototype, in fact, uses an Elasticsearch index constructed from the RDF data held in a central triple store to make sure its search engine and aggregation pages are quick enough.
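The approach described above can be sketched in a few lines. This is a hypothetical illustration, not the BBC’s actual schema: an RDF statement about a programme is expressed as a JSON-LD document whose `@context` maps plain JSON keys onto RDF predicate URIs, leaving an ordinary JSON document that Elasticsearch can index like any other.

```python
import json

# Hypothetical JSON-LD document: the @context maps each key to an RDF
# predicate (Dublin Core terms here), while the body stays plain JSON.
doc = {
    "@context": {
        "title": "http://purl.org/dc/terms/title",
        "subject": "http://purl.org/dc/terms/subject",
    },
    "@id": "http://example.org/programmes/p001",
    "title": "World Service Archive Sample",
    "subject": ["radio", "archive"],
}

# Serialising proves the document is ordinary JSON, ready to be sent in
# an HTTP request body to an Elasticsearch index endpoint.
body = json.dumps(doc)
print(json.loads(body)["title"])
```

The point of the pattern is that no translation layer is needed at query time: the same document is simultaneously valid RDF (via the context) and a flat, searchable JSON record.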
Startup Elementum wants to take supply chains into the 21st century. Incubated at Flextronics, the world’s second-largest contract manufacturer, and launching today with $44 million in Series B funding from that company and Lightspeed Ventures, it wants supply chain participants – the OEMs that generate product ideas and designs, the contract manufacturers that build to those specs, the component makers that supply the ingredients, the logistics hubs that move finished product to market, and the retail customer – to drop their one-off relational database integrations and instead see the supply chain for what it fundamentally is: a complex graph, or web, of connections.
“It’s no different thematically from how Facebook thinks of its social network or how LinkedIn thinks of what it calls the economic graph,” says Tyler Ziemann, head of growth at Elementum. Built on Amazon Web Services, Elementum’s “mobile-first” apps for real-time visibility, shipment tracking and carrier management, risk monitoring and mitigation, and order collaboration have a back-end built to consume and make sense of both structured and unstructured data on the fly. It pairs a real-time, Java-based MongoDB NoSQL document store, which scales simply and inexpensively across a global supply chain that fundamentally involves many trillions of records, with a flexible-schema graph database that stores and maps the nodes and edges of the supply chain graph.
“Relational database systems can’t scale to support the types of data volumes we need and the flexibility that is required for modeling the supply chain as a graph,” Ziemann says.
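The graph framing above can be made concrete with a toy sketch. This is not Elementum’s actual data model; it simply shows how the participants Ziemann lists become nodes, supply relationships become directed edges, and a question like “who is affected if this supplier fails?” becomes a graph traversal rather than a multi-table join.

```python
from collections import defaultdict

# Toy supply chain: edges point downstream, from supplier to consumer.
edges = [
    ("component_maker", "contract_manufacturer"),
    ("oem", "contract_manufacturer"),        # the OEM supplies designs/specs
    ("contract_manufacturer", "logistics_hub"),
    ("logistics_hub", "retailer"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def downstream(node, graph):
    """All nodes reachable from `node`: everyone affected if it fails."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(downstream("component_maker", graph)))
```

In a relational schema the same question would require either a recursive query or a fixed number of joins decided in advance; in the graph model, adding a new tier of suppliers is just more edges.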
Google’s letting the cash flow. Fresh off its $3.2 billion acquisition of “conscious home” company Nest, which makes the Nest Learning Thermostat and the Protect smoke and carbon monoxide detector, it’s spending some comparative pocket change ($400 million) on artificial intelligence startup DeepMind Technologies.
The news was first reported at re/code here, where one source describes DeepMind as “the last large independent company with a strong focus on artificial intelligence.” The London startup, funded by Founders Fund, was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman, with the stated goal of combining machine learning techniques and neuroscience to build powerful general purpose learning algorithms.
Its web page notes that its first commercial applications are in simulations, e-commerce and games, and this posting for a part-time paid computer science internship from this past summer casts it as “a world-class machine learning research company that specializes in developing cutting edge algorithms to power massively disruptive new consumer products.”
J. O’Dell of Venture Beat reported last week, “Facebook launched Trending, a new feature that shows you relevant-to-you topics that are spiking in popularity. It’s like Twitter’s trending topics feature, except that every person on the network sees a different list of topics based on their own personal interests, Likes, friends, location, etc. In a conversation with Chris Struhar, a software engineer on News Feed, we learned a bit about what makes Trending tick.”
Picking up from where we left off yesterday, we continue exploring where 2014 may take us in the world of semantics, Linked and Smart Data, content analytics, and so much more.
Marco Neumann, CEO and co-founder, KONA and director, Lotico: On the technology side, I am personally looking forward to making use of the new RDF 1.1 implementations and the new SPARQL endpoint deployment solutions in 2014. The Semantic Web idea is here to stay, though you might call it by a different name (again) in 2014.
Bill Roberts, CEO, Swirrl: Looking forward to 2014, I see a growing use of Linked Data in open data ‘production’ systems, as opposed to proofs of concept, pilots and test systems. I expect good progress on taking Linked Data out of the hands of specialists to be used by a broader group of data users.
Yesterday we said a fond farewell to 2013. Today, we look ahead to the New Year, with the help, once again, of our panel of experts:
Phil Archer, Data Activity Lead, W3C:
For me the new Working Groups (WG) are the focus. I think the CSV on the Web WG is going to be an important step in making more data interoperable with the Semantic Web.
I’d also like to draw attention to the upcoming Linking Geospatial Data workshop in London in March. There have been lots of attempts to use geospatial data with Linked Data, notably GeoSPARQL of course. But it’s not always easy. We need to make it easier to publish and use data that includes geocoding in some fashion, along with the power and functionality of geospatial information systems. The workshop brings together W3C, OGC, the UK government [Linked Data Working Group], Ordnance Survey and the geospatial department at Google. It’s going to be big!
[And about] JSON-LD: It’s JSON so Web developers love it, and it’s RDF. I am hopeful that more and more JSON will actually be JSON-LD. Then everyone should be happy.
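Archer’s point about JSON-LD being both JSON and RDF is easy to show. In the hypothetical snippet below (using the schema.org vocabulary), a web developer can parse and use the document as ordinary JSON, while the `@context` and `@type` keywords let an RDF processor read the same bytes as Linked Data.

```python
import json

# A JSON-LD document is plain JSON to any parser; the @context maps its
# keys onto schema.org terms, so it is simultaneously RDF.
snippet = """
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Phil Archer",
  "affiliation": "W3C"
}
"""

data = json.loads(snippet)   # no RDF tooling required to consume it...
print(data["name"])
print(data["@context"])      # ...but the context makes it Linked Data
```

This dual nature is exactly why "more and more JSON will actually be JSON-LD" is plausible: publishers can add a context to existing JSON APIs without breaking any consumer that ignores it.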
As we prepare to greet the New Year, we take a look back at the year that was. Some of the leading voices in the semantic web/Linked Data/Web 3.0 and sentiment analytics space give us their thoughts on the highlights of 2013.
Phil Archer, Data Activity Lead, W3C:
The completion and rapid adoption of the updated SPARQL specs, the use of Linked Data (LD) in life sciences, the adoption of LD by the European Commission, and governments in the UK, The Netherlands (NL) and more [stand out]. In other words, [we are seeing] the maturation and growing acknowledgement of the advantages of the technologies.
I contributed to a recent study into the use of Linked Data within governments. We spoke to various UK government departments as well as the UN FAO, the German National Library and more. The roadblocks and enablers section of the study (see here) is useful IMO.
Bottom line: Those organisations use LD because it suits them. It makes their own tasks easier, and it allows them to fulfill their public tasks more effectively. They don’t do it to be cool, and they don’t do it to provide 5-Star Linked Data to others. They do it for hard-headed and self-interested reasons.
Christine Connors, founder and information strategist, TriviumRLG:
What sticks out in my mind is the resource market: We’ve seen more “semantic technology” job postings, academic positions and M&A activity than I can remember seeing in a long time. I think that this is a noteworthy trend, if my assessment is accurate.
There’s also been a huge increase in attention from the librarian community, thanks to long-time work at the Library of Congress, from leading experts in that field, and via schema.org.