Orbis Technologies is looking for a Software Developer – Cloud and Big Data in Annapolis, MD. According to the post, “Orbis Technologies, located in Annapolis, MD, is a leader in providing cloud computing-based semantic text analytics, using MapReduce, to support entity extraction, relationship identification, and semantic search in a Hadoop cloud-processing environment. We are interested in speaking to talented candidates who desire to further their careers in big data and open source development predominantly in Java, and work with cloud computing including the use of Hadoop, Accumulo, CloudBase, HBase, and core semantic web technologies.” Read more
REDWOOD CITY, CA–(Marketwired – Oct 28, 2014) – Yummly (http://www.yummly.com), the leading innovator in recipe search & discovery, announced today the introduction of contextual recommendations in its iPhone and iPad apps. When users open the app, in addition to personalizing content to a user’s tastes, Yummly will now tailor it to a person’s time, place, and patterns.
Yummly’s proprietary Food Genome and patent-pending Food Intelligence technology already blend together to create an unmatched user experience, with data-driven features such as personalized recommendations, semantic search, and a smart shopping list. With the new contextual recommendations functionality, Yummly delivers more relevant, dynamic content to users by combining contextual signals such as time of day, day of week, season, location, trends, and more. Read more
Zach Miners of PC World reports, “Bing now supports searches with emoji, meaning you can insert or paste a range of emoji icons like hearts, smiley faces, food graphics, or any combination thereof, for some interesting, though not always useful, results. It’s a novelty feature, yes, but still fun. And one that could help Bing draw at least some attention away from Google. Google at the moment does not give results for emoji searches, though its auto-complete technology does recognize them. Yahoo, meanwhile, does support emoji searches. Bing’s tool is available in all English markets, the search engine said, offered as an homage to the shorthand’s popularity.” Read more
Word came from the World Wide Web Consortium (W3C) yesterday that it has published the 5th major revision of HTML, the core language of the web. While HTML5 is already in use by developers (having become a W3C candidate recommendation a couple of years ago), the recommendation for the standard is a lynchpin for the community, as it now formalizes stable guidelines for the development of innovative and cross-platform web sites and applications.
A key feature of HTML5 – the first major new HTML standard in more than a decade – is that it provides the ability to describe the structure of a web document with standard semantics. It uses semantic tags for things like page headers, footers, body, ordered lists, time, and more to better identify an element and how it is being used. Greater use of these tags should improve a browser’s ability to understand content for display across a range of devices and screen sizes without requiring any development rejiggering, and search engines’ ability to more effectively index a page, which could lead to better rankings.
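To illustrate, here is a short, hypothetical page fragment using the semantic tags described above; the headline, date, and content are placeholders:

```html
<!-- Illustrative only: HTML5 semantic tags describe the role of each element -->
<article>
  <header>
    <h1>Example Headline</h1>
    <p>Published <time datetime="2014-10-28">October 28, 2014</time></p>
  </header>
  <section>
    <p>Body content goes here.</p>
    <ol>
      <li>First point</li>
      <li>Second point</li>
    </ol>
  </section>
  <footer>
    <p>Author and copyright information</p>
  </footer>
</article>
```

Because tags like `<header>`, `<article>`, and `<time>` carry meaning rather than just layout, browsers and search engines can identify what each piece of content is without relying on class names or visual styling.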
“Transformations for Integrating VA data with FHIR in RDF”
Part 1 of that series, “The Yosemite Project: An RDF Roadmap for Healthcare Information Interoperability,” is available as a recorded webinar and slide deck.
Part 2, “The Ideal Medium for Health Data? A Dive into Lab Tests,” will take place on November 7, 2014 (registration is open as of this writing).
Announcing Part 3:
TITLE: Transformations for Integrating VA data with FHIR in RDF
DATE: Wednesday, November 12, 2014
TIME: 2 PM Eastern / 11 AM Pacific
PRICE: Free to all attendees
DESCRIPTION: In our series on The Yosemite Project, we explore RDF as a data standard for health data. In this installment, we will hear from Rafael Richards, Physician Informatician, Office of Informatics and Analytics in the Veterans Health Administration (VHA), about “Transformations for Integrating VA data with FHIR in RDF.”
The VistA EHR has its own data model and vocabularies for representing healthcare data. This webinar describes how SPARQL Inferencing Notation (SPIN) can be used to translate VistA data to the data representation used by FHIR, an emerging interchange standard.
The IBM Collaborative Discovery Research Team is looking for a Research Scientist in San Jose, CA. The post states, “We are looking for Research Staff Members with experience in building larger software systems and collaborative software development to help build our platform. Candidates in this role generate highly novel ideas (theoretical or experimental) in a specific engineering or scientific discipline and invent and design complex products and processes. This position may be involved in engineering these ideas to an advanced state of feasibility by evaluating ideas and plans and participating in their implementation. The full cycle of innovation to delivery is typically a multiple-year effort.” Read more
Loek Essers of Tech World recently wrote, “Researchers at the University of Amsterdam are using neural networks to help statistical machine translation systems learn what all human translators know — that the best translation of a word often depends on the context. Machine translation systems such as Google Translate or those at iTranslate4.eu guess how to translate words and phrases based on how often they appear in a large corpus of human-translated texts. Such tools are increasingly important as individuals and businesses seek to access information or buy products and services from other countries where different languages are spoken.” Read more
Ben Woods of The Next Web reports, “Google has joined forces with the University of Oxford in the UK in order to better study the potential of artificial intelligence (AI) in the areas of image recognition and natural language processing. The hope is that by joining forces with an esteemed academic institution, the research will progress more rapidly than going it alone for its DeepMind project. In total, Google has hired seven individuals (who also happen to be world experts in deep learning for natural language understanding), three of which will remain as professors holding joint appointments at Oxford University.” Read more
MITRE Corporation is looking for an Artificial Intelligence Engineer in McLean, VA. The post states, “This position involves application of a variety of ontology, knowledge representation and semantic web tools and techniques to address sponsor problems. Candidates with practical experience designing, developing, and extending ontologies and data modeling systems preferred. This role involves technical consulting, implementing and advisement to our sponsors and therefore requires strong interpersonal and communication skills, and the ability to work with highly skilled scientists and engineers. The successful candidate must have strong skills in semantic modeling techniques and applicable technologies and have a proven record of system development abilities or published research in relevant areas.” Read more
Rachel Metz of the MIT Technology Review recently wrote, “It’s not unusual to find yourself talking to an uncooperative appliance or gadget. Soon, though, it could be more common for those devices to actually pay attention. A startup called Wit.ai [read our previous coverage] plans to make it easy for hardware makers and software developers to add custom voice controls to everything from smartphones and smart watches to Internet-connected thermostats and drones.”
Metz continues, “With Wit.ai, developers type a handful of plain-English commands they want it to recognize, such as ‘Wake me up tomorrow at 6’ or ‘Wake me up in 20 minutes,’ and note what they want to accomplish through each command—in this case, set the alarm on a hypothetical voice-controlled smart watch.” Read more