Just What We’ve All Been Waiting For: Bing Now Offers Emoji Search

Zach Miners of PC World reports, “Bing now supports searches with emoji, meaning you can insert or paste a range of emoji icons like hearts, smiley faces, food graphics, or any combination thereof, for some interesting, though not always useful, results. It’s a novelty feature, yes, but still fun. And one that could help Bing draw at least some attention away from Google. Google at the moment does not give results for emoji searches, though its auto-complete technology does recognize them. Yahoo, meanwhile, does support emoji searches. Bing’s tool is available in all English markets, the search engine said, offered as an homage to the shorthand’s popularity.” Read more

HTML5: The Party Is Officially On!

Word came from the World Wide Web Consortium (W3C) yesterday that it has published the 5th major revision of HTML, the core language of the web. While HTML5 is already in use by developers (having become a W3C candidate recommendation a couple of years ago), the recommendation for the standard is a lynchpin for the community, as it now formalizes stable guidelines for the development of innovative and cross-platform web sites and applications.

A key feature of HTML5 – the first major new HTML standard in more than a decade – is that it provides the ability to describe the structure of a web document with standard semantics. It uses semantic tags for things like page headers, footers, body, ordered lists, time, and more to better identify an element and how it is being used. Greater use of these tags should improve a browser’s ability to understand content for display across a range of devices and screen sizes without requiring any development rejiggering, and search engines’ ability to more effectively index a page, which could lead to better rankings.
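As a rough, hypothetical illustration of that machine-readability point (not code from the W3C or the article), the short Python sketch below uses the standard library’s HTMLParser to pull the HTML5 semantic elements, such as header, article, time, and footer, out of a small sample page.

```python
# A minimal sketch: collect the HTML5 semantic elements found in a document,
# illustrating how standard tags expose a page's structure to machines
# without any custom markup conventions. Sample page content is invented.
from html.parser import HTMLParser

SEMANTIC_TAGS = {"header", "footer", "nav", "article", "section", "aside", "time"}

class SemanticOutline(HTMLParser):
    """Records every HTML5 semantic element encountered, in document order."""
    def __init__(self):
        super().__init__()
        self.outline = []

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC_TAGS:
            self.outline.append(tag)

doc = """
<article>
  <header><h1>HTML5 Is a W3C Recommendation</h1></header>
  <time datetime="2014-10-28">October 28, 2014</time>
  <section><p>The core language of the web gets stable guidelines.</p></section>
  <footer><p>Filed under: standards</p></footer>
</article>
"""

parser = SemanticOutline()
parser.feed(doc)
print(parser.outline)  # ['article', 'header', 'time', 'section', 'footer']
```

A browser or crawler reading those tags can tell the publication date from the byline and the body from the footer, which is exactly the kind of structural hint the new standard formalizes.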

Read more

NEW WEBINAR Announced: Yosemite Project – Part 3

“Transformations for Integrating VA data with FHIR in RDF”

SemanticWeb.com recently launched a series of webinars on the topic of “RDF as a Universal Healthcare Exchange Language.”

Part 1 of that series, “The Yosemite Project: An RDF Roadmap for Healthcare Information Interoperability,” is available as a recorded webinar and slide deck.

Part 2, “The Ideal Medium for Health Data? A Dive into Lab Tests,” will take place on November 7, 2014 (registration is open as of this writing).

Announcing Part 3:

click here to register now!
TITLE: Transformations for Integrating VA data with FHIR in RDF
DATE: Wednesday, November 12, 2014
TIME: 2 PM Eastern / 11 AM Pacific
PRICE: Free to all attendees
DESCRIPTION: In our series on The Yosemite Project, we explore RDF as a data standard for health data. In this installment, we will hear from Rafael Richards, Physician Informatician, Office of Informatics and Analytics in the Veterans Health Administration (VHA), about “Transformations for Integrating VA data with FHIR in RDF.”

The VistA EHR has its own data model and vocabularies for representing healthcare data. This webinar describes how SPARQL Inferencing Notation (SPIN) can be used to translate VistA data to the data representation used by FHIR, an emerging interchange standard.
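As a hypothetical sketch of the general idea (not the VHA’s actual mapping, and with made-up vocabularies throughout), the snippet below uses rdflib to run a SPARQL CONSTRUCT query that rewrites triples from a pretend “vista:” vocabulary into a pretend FHIR-style vocabulary. SPIN packages rules like this as RDF; a plain CONSTRUCT query is used here only to keep the sketch self-contained.

```python
# Hypothetical VistA-to-FHIR style translation with rdflib.
# All namespaces, properties, and values are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

VISTA = Namespace("http://example.org/vista#")   # hypothetical source vocabulary
FHIR = Namespace("http://example.org/fhir#")     # hypothetical target vocabulary

# A single source triple in the "VistA-shaped" model.
source = Graph()
source.add((URIRef("http://example.org/vista/obs/1"),
            VISTA.systolicBloodPressure, Literal(128)))

# The transformation rule, expressed as a SPARQL CONSTRUCT query.
mapping = """
PREFIX vista: <http://example.org/vista#>
PREFIX fhir:  <http://example.org/fhir#>
CONSTRUCT {
  ?obs a fhir:Observation ;
       fhir:code "systolic-blood-pressure" ;
       fhir:valueQuantity ?value .
}
WHERE {
  ?obs vista:systolicBloodPressure ?value .
}
"""

# Run the rule and collect the "FHIR-shaped" triples it produces.
target = Graph()
for triple in source.query(mapping):
    target.add(triple)

print(target.serialize(format="turtle"))
```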

 

Read more

Semantic Web Jobs: IBM

The IBM Collaborative Discovery Research Team is looking for a Research Scientist in San Jose, CA. The post states, “We are looking for Research Staff Members with experience in building larger software systems and collaborative software development to help build our platform. Candidates in this role generate highly novel ideas (theoretical or experimental) in a specific engineering or scientific discipline and invent and design complex products and processes. This position may be involved in engineering these ideas to an advanced state of feasibility by evaluating ideas and plans and participating in their implementation. The full cycle of innovation to delivery is typically a multiple-year effort.” Read more

University of Amsterdam Researchers Use Neural Networks to Improve Machine Translations

Loek Essers of Tech World recently wrote, “Researchers at the University of Amsterdam are using neural networks to help statistical machine translation systems learn what all human translators know — that the best translation of a word often depends on the context. Machine translation systems such as Google Translate or those at iTranslate4.eu guess how to translate words and phrases based on how often they appear in a large corpus of human-translated texts. Such tools are increasingly important as individuals and businesses seek to access information or buy products and services from other countries where different languages are spoken.” Read more
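As a toy illustration of the limitation being addressed (not the researchers’ system), the sketch below shows a frequency-only phrase table that always returns whichever translation was seen most often in the training data, regardless of the surrounding sentence; all words and counts are invented.

```python
# Toy frequency-based translation lookup, the kind of context-blind choice
# that context-aware (e.g. neural) scoring is meant to improve on.
from collections import Counter

# Hypothetical counts from a corpus of human-translated sentence pairs:
# English "bank" seen translated into German as "Bank" or "Ufer" (river bank).
translation_counts = {
    "bank": Counter({"Bank": 930, "Ufer": 210}),
}

def translate_by_frequency(word):
    """Return the most frequent translation, ignoring the surrounding sentence."""
    return translation_counts[word].most_common(1)[0][0]

# Always picks 'Bank', even when the sentence is about the bank of a river.
print(translate_by_frequency("bank"))
```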

Google Partners with Oxford on NLP and Image Recognition Research

Ben Woods of The Next Web reports, “Google has joined forces with the University of Oxford in the UK in order to better study the potential of artificial intelligence (AI) in the areas of image recognition and natural language processing. The hope is that by joining forces with an esteemed academic institution, the research will progress more rapidly than going it alone for its DeepMind project. In total, Google has hired seven individuals (who also happen to be world experts in deep learning for natural language understanding), three of which will remain as professors holding joint appointments at Oxford University.” Read more

Semantic Web Jobs: MITRE

MITRE Corporation is looking for an Artificial Intelligence Engineer in McLean, VA. The post states, “This position involves application of a variety of ontology, knowledge representation and semantic web tools and techniques to address sponsor problems. Candidates with practical experience designing, developing, and extending ontologies and data modeling systems preferred. This role involves technical consulting, implementing and advisement to our sponsors and therefore requires strong interpersonal and communication skills, and the ability to work with highly skilled scientists and engineers. The successful candidate must have strong skills in semantic modeling techniques and applicable technologies and have a proven record of system development abilities or published research in relevant areas.” Read more

Gadgets, Devices, and Appliances That Listen to Your Every Command

Rachel Metz of the MIT Technology Review recently wrote, “It’s not unusual to find yourself talking to an uncooperative appliance or gadget. Soon, though, it could be more common for those devices to actually pay attention. A startup called Wit.ai [read our previous coverage] plans to make it easy for hardware makers and software developers to add custom voice controls to everything from smartphones and smart watches to Internet-connected thermostats and drones.”

Metz continues, “With Wit.ai, developers type a handful of plain-English commands they want it to recognize, such as ‘Wake me up tomorrow at 6’ or ‘Wake me up in 20 minutes,’ and note what they want to accomplish through each command—in this case, set the alarm on a hypothetical voice-controlled smart watch.” Read more
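As a toy, purely illustrative sketch of that idea (not Wit.ai’s actual API), the snippet below maps a couple of hard-coded alarm phrasings to an intent plus parameters. A real service would rely on trained language models rather than regular expressions, and every name here is hypothetical.

```python
# Toy command-to-intent mapper: turns a typed/spoken phrase into an intent
# name ("set_alarm") and structured parameters a device could act on.
import re
from datetime import datetime, timedelta

def parse_command(text):
    """Return (intent, parameters) for a couple of hard-coded alarm phrasings."""
    m = re.match(r"wake me up in (\d+) minutes", text, re.IGNORECASE)
    if m:
        return "set_alarm", {"at": datetime.now() + timedelta(minutes=int(m.group(1)))}
    m = re.match(r"wake me up tomorrow at (\d+)", text, re.IGNORECASE)
    if m:
        tomorrow = datetime.now() + timedelta(days=1)
        alarm = tomorrow.replace(hour=int(m.group(1)), minute=0, second=0, microsecond=0)
        return "set_alarm", {"at": alarm}
    return "unknown", {}

print(parse_command("Wake me up in 20 minutes"))
print(parse_command("Wake me up tomorrow at 6"))
```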

0xdata’s CEO on the Intersection of Big Data and Machine Learning

Josiah Motley recently wrote, “Big data is big money, and a relative newcomer to the game is trying to make a big impact. 0xdata (pronounced hexadata), started by SriSatish Ambati, is that newcomer. Their current flagship product, simply titled H2O, is an open source platform used to crunch huge amounts of data to more accurately display analytic results. It is able to compute these large data sets by combining machine learning with advanced mathematical algorithms. H2O allows customers to use their entire data sets, instead of the sample sets which are traditionally used for such processes. We recently had a chance to talk with SriSatish Ambati, CEO and co-founder of 0xdata, to help shed more light on their product.” Read more
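As a loose sketch of what training on a full dataset might look like with H2O’s Python bindings (assuming the h2o-py package; the file name, column names, and model settings are placeholders, and exact APIs may differ between versions), one could imagine something like:

```python
# Hypothetical example: load an entire dataset into an H2O cluster and train
# on all of it, rather than on a down-sampled slice.
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()  # start or connect to a local H2O cluster

frame = h2o.import_file("transactions.csv")          # hypothetical full dataset
frame["is_fraud"] = frame["is_fraud"].asfactor()     # hypothetical binary target
train, valid = frame.split_frame(ratios=[0.8], seed=42)

model = H2OGradientBoostingEstimator(ntrees=100)
model.train(x=["amount", "merchant", "hour"],        # hypothetical feature columns
            y="is_fraud",
            training_frame=train,
            validation_frame=valid)

print(model.auc(valid=True))
```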

Metreeca Offers Easier Interaction With Semantic Knowledge Bases

Accessing an enterprise’s semantic knowledge base poses challenges for the general business population. Development teams may already have integrated specific SPARQL queries inside a customer app or custom dashboard, or otherwise accommodated some very task-oriented activities and searches, but that has its limits for non-technical users who want to explore outside the box. These users aren’t likely to write new SPARQL queries on their own, nor do they necessarily want to wait for their IT departments to pull that together for them. Interactive query builders are an option, but some may find these still a little too technically oriented.
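For concreteness, a hedged sketch of the kind of task-specific query that often gets baked into an app or dashboard might look like the following; the endpoint URL and vocabulary are hypothetical.

```python
# A hard-coded, task-specific SPARQL query of the sort a developer might
# embed in a customer dashboard. Endpoint and schema are made up.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/enterprise/sparql")  # hypothetical endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX ex: <http://example.org/schema#>
    SELECT ?customer ?openTickets
    WHERE {
      ?customer a ex:Customer ;
                ex:openTicketCount ?openTickets .
      FILTER (?openTickets > 5)
    }
    ORDER BY DESC(?openTickets)
    LIMIT 20
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["customer"]["value"], row["openTickets"]["value"])
```

Useful for the one question it answers, but any new question means another round trip to the development team, which is the gap a self-service tool aims to close.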

This is the problem Metreeca is looking to solve with Graph Rover, a self-service search and analysis tool that enables non-technical users to interact visually with semantic knowledge bases. Metreeca has just released the latest beta update of the product, which lets users build queries through a graphical interface. Graph Rover has been in development for two years while the company was in stealth mode, and tech lead Alessandro Bollini says it is already a mature solution that should be commercially available in the first quarter of 2015.

Read more
