Scientific and Research Applications
Symplectic Takes Another Step In Helping Universities Engage In Research Collaboration And Discovery
This summer, Symplectic Limited became the first DuraSpace Registered Service Provider (RSP) for the VIVO Project, an open-source, open-ontology, open-process platform for hosting semantically structured information about the interests, activities and accomplishments of scientists and scholars. (See our coverage here.) “Universities want to capture all that their researchers do, collaborate and reuse the data the research brings out,” says Sabih Ali, head of brand at Symplectic. “A lot of them are looking to be a part of something like VIVO and join the whole semantic web technology movement, but they don’t have the capacity to do it themselves.”
Symplectic brings that capacity to the table through its role as a services provider and through the expertise in data quality, organization and transfer it has gained as the developer of Elements, software that captures, collects and showcases institutional research and is used by many leading universities, including Cambridge and Oxford. It also offers clients an open-source VIVO harvester that allows the ingestion of information into VIVO profiles using the rich data that Elements captures.
More recently, Symplectic has also taken on the role of authorized services provider for Profiles Research Networking Software (Profiles RNS), an NIH-funded, open-source tool that speeds the process of finding researchers with specific areas of expertise for collaboration and professional networking. It is based on the VIVO 1.4 ontology, with support for RDF, SPARQL and Linked Open Data.
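The "semantically structured information" both platforms manage is stored as RDF-style triples (subject–predicate–object statements) that can be searched to find researchers by expertise. The pure-Python sketch below mimics that triple model with invented researcher data and simplified predicate names; a real VIVO or Profiles RNS installation would use full ontology URIs and a SPARQL endpoint rather than this toy lookup.

```python
# Toy triple store in the spirit of VIVO / Profiles RNS profile data.
# Subjects, predicates and objects here are hypothetical and simplified;
# real installations use ontology URIs and SPARQL queries.
triples = [
    ("alice", "hasResearchArea", "machine_learning"),
    ("alice", "memberOf", "dept_cs"),
    ("bob", "hasResearchArea", "machine_learning"),
    ("bob", "memberOf", "dept_stats"),
    ("carol", "hasResearchArea", "genomics"),
]

def find(triples, pred, obj):
    """Return all subjects linked to `obj` via predicate `pred`."""
    return sorted(s for s, p, o in triples if p == pred and o == obj)

# Expertise search: who works on machine learning?
print(find(triples, "hasResearchArea", "machine_learning"))  # ['alice', 'bob']
```

The same pattern-matching idea underlies SPARQL queries over Linked Open Data: a query is a set of triple patterns with variables, matched against the graph.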
Derrick Harris of GigaOM reports, “Researchers from the University of California, Irvine, have published a paper demonstrating the effectiveness of deep learning in helping discover exotic particles such as Higgs bosons and supersymmetric particles. The research, which was published in Nature Communications, found that modern approaches to deep neural networks might be significantly more accurate than the types of machine learning scientists traditionally use for particle discovery and might also save scientists a lot of work. To get a sense of how challenging particle discovery is, consider that a collider can produce 100 billion collisions per hour and only about 300 will produce a Higgs boson. Because the particles decay almost immediately, scientists can’t expressly identify them, but instead must analyze (and sometimes infer) the products of their decay.” Read more
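The rarity quoted above is worth working out, since it is what makes this a needle-in-a-haystack classification problem. A quick back-of-the-envelope calculation from the article's own figures:

```python
# Signal rarity implied by the figures quoted above:
# ~300 Higgs-producing events out of 100 billion collisions per hour.
collisions_per_hour = 100_000_000_000
higgs_per_hour = 300

signal_fraction = higgs_per_hour / collisions_per_hour
print(signal_fraction)  # 3e-09, i.e. about one signal event per 333 million collisions

# At that rate, a sample of 10,000 candidate signal events requires:
hours_needed = 10_000 / higgs_per_hour
print(round(hours_needed, 1))  # 33.3 hours of collider time
```

A classifier hunting for signal at a three-in-a-billion rate must be extraordinarily selective, which is why the accuracy gains reported for deep networks matter.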
John Biggs of TechCrunch reports, “A new research project by a computer science team at Cornell University is using human volunteers to train robots to perform tasks. How is it unique? They’re showing robots how to infer actions based on very complex, human comments. Instead of having to say ‘move arm left 5 inches’ they are hoping that, one day, robots will respond to ‘Make me some ramen’ or ‘Clean up my mess.’ The commands are quite rudimentary right now and focus mostly around loose requests like ‘boil the ramen for a few minutes’ which, with enough processing, can be turned into a step-by-step set of commands. For example, in the video above a subject asks for an affogato, basically coffee with ice cream. The robot has learned the basic recipe and so uses what is at hand — a barrel of ice cream, a bowl, and a coffee dispenser — to produce a tasty treat for its human customer.” Read more
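The Cornell system learns these mappings from human demonstrations, but the core idea of expanding a loose request into an explicit step-by-step plan can be sketched with a simple lookup. The dishes and steps below are invented purely for illustration, not taken from the actual system:

```python
# Toy planner: expand a loose natural-language request into explicit
# robot steps. The real Cornell system learns such mappings from
# human demonstrations; this hard-coded table is purely illustrative.
RECIPES = {
    "ramen": [
        "fill pot with water",
        "boil water",
        "add noodles",
        "wait a few minutes",
        "pour into bowl",
    ],
    "affogato": [
        "scoop ice cream into bowl",
        "dispense espresso over ice cream",
    ],
}

def plan(request):
    """Return the step list for the first known dish named in the request."""
    for dish, steps in RECIPES.items():
        if dish in request.lower():
            return steps
    return []  # unknown request: no plan

print(plan("Make me some ramen"))
```

The hard part, of course, is replacing the lookup table with something learned: inferring which plan a vague sentence refers to, and adapting the steps to whatever objects are actually at hand.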
CLAREMONT, Calif.–(BUSINESS WIRE)–Claremont McKenna College assistant professor of mathematics Deanna Needell has been awarded a prestigious, five-year National Science Foundation CAREER grant of more than $413,000 for her research on the practical application of compressive signal processing (CSP). The grant, from the NSF’s Faculty Early Career Development Program, supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education, and integration of education and research within the context of the mission of their organizations. Read more
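Compressive signal processing rests on the observation that a signal which is sparse (mostly zeros) can be recovered from far fewer linear measurements than its length. Here is a minimal sketch, assuming a hypothetical hand-picked 4-by-8 measurement matrix and a signal with a single nonzero entry; practical CSP work, including algorithms such as CoSaMP that Needell co-developed, uses large random matrices and more sophisticated sparse-recovery solvers.

```python
# Toy compressive-sensing sketch: recover a 1-sparse length-8 signal
# from only 4 linear measurements by matched filtering against the
# columns of the measurement matrix. The matrix below is hand-picked
# so its columns have low mutual coherence; real CSP uses large
# random matrices and solvers such as CoSaMP or l1 minimization.
A = [
    [1,  1,  1,  1,  1,  1,  1, -1],
    [1, -1,  1, -1,  1,  1, -1,  1],
    [1,  1, -1, -1,  1, -1,  1,  1],
    [1, -1, -1,  1, -1,  1,  1,  1],
]

def measure(A, x):
    """Compress x (length 8) into len(A) = 4 measurements, y = A @ x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def recover_1sparse(A, y):
    """Find the column best correlated with y and place its amplitude there."""
    m, n = len(A), len(A[0])
    corr = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    j = max(range(n), key=lambda k: abs(corr[k]))
    x_hat = [0.0] * n
    x_hat[j] = corr[j] / sum(A[i][j] ** 2 for i in range(m))
    return x_hat

x = [0, 0, 0, 0, 0, 2.5, 0, 0]   # sparse signal: one nonzero entry
y = measure(A, x)                # only 4 stored numbers, not 8
print(recover_1sparse(A, y))     # [0.0, 0.0, 0.0, 0.0, 0.0, 2.5, 0.0, 0.0]
```

Because the columns of A are nearly uncorrelated with one another, the nonzero entry's column stands out in the correlations, and both its position and amplitude are recovered exactly despite the 2x undersampling.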
Dante D’Orazio of The Verge reports, “Eugene Goostman seems like a typical 13-year-old Ukrainian boy — at least, that’s what a third of judges at a Turing Test competition this Saturday thought. Goostman says that he likes hamburgers and candy and that his father is a gynecologist, but it’s all a lie. This boy is a program created by computer engineers led by Russian Vladimir Veselov and Ukrainian Eugene Demchenko. That a third of judges were convinced that Goostman was a human is significant — at least 30 percent of judges must be swayed for a computer to pass the famous Turing Test. The test, created by legendary computer scientist Alan Turing in 1950, was designed to answer the question ‘Can machines think?’ and is a well-known staple of artificial intelligence studies.” Read more
Katie Fehrenbacher of GigaOM recently asked, “What happens when you leverage technologies like IBM’s artificial intelligence engine Watson for clean power? The answer is the awesomely named Watt-sun project, a machine learning platform that IBM Research has quietly been building over the last year, and which is now highly accurate at predicting how cloud cover, weather and atmosphere (among many other data points) affect the way solar panel systems operate. Solar forecasting has been around as long as solar panels have been plugged into the grid. But the forecasting systems historically haven’t been all that accurate, given that so many factors can contribute to the amount of sunlight that’s able to descend from the sky and onto the solar panel and then get converted into electricity.” Read more
Donald B. Johnston of Phys.org reports, “Catalyst, a first-of-a-kind supercomputer at Lawrence Livermore National Laboratory (LLNL), is available to industry collaborators to test big data technologies, architectures and applications. Developed by a partnership of Cray, Intel and Lawrence Livermore, this Cray CS300 high performance computing (HPC) cluster is available for collaborative projects with industry through Livermore’s High Performance Computing Innovation Center (HPCIC). ‘Over the next decade, global data volume is forecasted to reach more than 35 zettabytes,’ (a zettabyte is a trillion gigabytes) said Fred Streitz, director of the HPCIC. ‘That enormous amount of unstructured data provides an opportunity. But how do we extract value and inform better decisions out of that wealth of raw information?’ ” Read more
Alexander Saltarin of Tech Times reports, “Computer science researchers have developed a new computer system that has the capability of solving word problems automatically. The new system was developed by researchers from the Massachusetts Institute of Technology (MIT) with the help of other researchers from the University of Washington. Most of the research to develop the new system was conducted at the Computer Science and Artificial Intelligence Laboratory at MIT. Linguistic problems have always been a tricky subject for computer scientists. Unlike math, which is considered by many experts as a pure and accurate ‘language,’ computers often have difficulties in understanding the sometimes vague and confusing languages that humans use on a daily basis. However, the new computer system can actually be used to solve word problems often seen in basic math lessons at schools.” Read more
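The MIT/UW system learns to map problem text onto equation templates from worked examples. A vastly simpler keyword-based sketch (the rules below are invented purely for illustration) shows the flavor of turning words into arithmetic:

```python
import re

# Toy word-problem solver: pull the numbers out of the text, then pick
# an operation from keywords. The actual MIT/UW system learns to map
# full sentences onto systems of equations; these hard-coded rules are
# hypothetical and illustrative only.
def solve(problem):
    nums = [int(n) for n in re.findall(r"\d+", problem)]
    if any(kw in problem for kw in ("gives away", "loses", "eats")):
        return nums[0] - sum(nums[1:])  # removal keywords -> subtraction
    return sum(nums)                    # default -> addition

print(solve("Maria has 5 pencils and buys 3 more."))   # 8
print(solve("Maria has 5 pencils and gives away 2."))  # 3
```

The gap between this sketch and the real system is exactly the hard part: brittle keyword rules fail as soon as phrasing varies, which is why the researchers learn the text-to-equation mapping statistically instead.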