Scientists Can Blind a Self-Driving Car From Seeing Pedestrians

Vocativ

Researchers at the University of Freiburg and the Bosch Center for Artificial Intelligence in Germany have demonstrated it is possible to prevent machine-vision systems from seeing specific categories of objects in a scene, such as pedestrians in the road. The method works by strategically flooding an image with noise that degrades the artificial intelligence's (AI) ability to recognize objects while keeping the image looking unaltered to humans. The researchers say these "universal perturbations" are generated by an algorithm and work regardless of the image, scene, or computer-vision system to which they are applied. Rather than blocking recognition of the entire image, the attack exploits "semantic segmentation," a process that divides the image into groups of pixels in order to identify the different types of objects present, enabling the researchers to hide specific objects, such as pedestrians, from the AI.

From "Scientists Can Blind a Self-Driving Car From Seeing Pedestrians"
Vocativ (04/24/17) Joshua Kopstein
View Full Article
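The "universal perturbation" idea can be sketched in a few lines: a single pre-computed noise pattern is reused on every frame, with each pixel shifted by at most a few intensity levels so the change stays invisible to humans. This is an illustrative stand-in only; the function name and shapes are hypothetical, and random noise here stands in for the pattern the researchers' algorithm actually trains.

```python
import numpy as np

def apply_universal_perturbation(image, perturbation, epsilon=8.0):
    """Add a fixed, pre-computed perturbation to any image, clipping it
    so no pixel changes by more than `epsilon` (out of 255), which keeps
    the alteration imperceptible to humans."""
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + delta, 0, 255)

# Hypothetical example: the same noise pattern is applied to every frame.
rng = np.random.default_rng(0)
perturbation = rng.uniform(-8, 8, size=(480, 640, 3))  # stand-in for a trained pattern
frame = rng.integers(0, 256, size=(480, 640, 3)).astype(float)
adversarial = apply_universal_perturbation(frame, perturbation)
assert np.max(np.abs(adversarial - frame)) <= 8.0  # change stays tiny per pixel
```

The point of the "universal" property is that this one `perturbation` array would degrade recognition across many images and models, which is what makes the attack practical.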

A Massive New Library of 3D Images Could Help Your Robot Butler Get Around Your House

Technology Review

Researchers at Stanford and Princeton universities and the Technical University of Munich (TUM) in Germany have created ScanNet, an immense new database of three-dimensional (3D) images with millions of annotated objects. Their goal is for ScanNet to help train machines to better understand the physical world via deep learning. The team used a 3D camera to scan 1,513 scenes and construct the dataset, while volunteers provided annotations via Amazon's Mechanical Turk platform. The application of deep learning enabled ScanNet to reliably identify many objects using only their shape or depth information, says TUM professor Matthias Niessner. Carnegie Mellon University professor Siddhartha Srinivasa says the new dataset could be a "good start" toward enabling machines to understand the interiors of homes. "Although simulating real-life imagery is often unrealistic, as you can see from the [computer-generated imagery] in movies, simulating depth is quite realistic," he says.

From "A Massive New Library of 3D Images Could Help Your Robot Butler Get Around Your House"
Technology Review (04/24/17) Will Knight
View Full Article

Alastair Donaldson Announced as Winner of BCS Roger Needham Award

BCS-The Chartered Institute for IT

Alastair Donaldson, from the Department of Computing at Imperial College London, has received this year's BCS Roger Needham Award for his work in many-core programming. Donaldson says his design and application of program analysis techniques to this emerging discipline focuses on "formal verification, to mathematically prove that parallel programs are free from bugs; systematic testing, to automatically find and reproduce bugs; programming language design, to equip developers with better languages that make it easier to write correct parallel code; and compiler technology, so that high performance, correct parallel code can be generated automatically." Donaldson notes such efforts reflect his commitment to improving the design process for efficient and reliable software. Microsoft Research Cambridge principal researcher Andy Gordon says, "formal verification and programming methodology have long been strengths of U.K. computer science, so it is great to see Donaldson's many achievements from applying these techniques to achieve practical benefits for parallel programming."

From "Alastair Donaldson Announced as Winner of BCS Roger Needham Award"
BCS-The Chartered Institute for IT (04/19/17)
View Full Article
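The "systematic testing" Donaldson describes can be illustrated with a toy sketch (not his actual tools): enumerate every interleaving of two threads that each read and then write a shared counter, and the classic lost-update bug surfaces deterministically rather than by luck.

```python
from itertools import permutations

# Each thread performs two atomic steps: read shared x, then write
# (its local copy + 1). Enumerating all interleavings is the essence
# of systematic concurrency testing.

def run(schedule):
    x = 0
    local = {}
    for thread, step in schedule:
        if step == "read":
            local[thread] = x
        else:  # "write"
            x = local[thread] + 1
    return x

steps = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
results = set()
for order in permutations(steps):
    # keep only orders where each thread reads before it writes
    if order.index(("A", "read")) < order.index(("A", "write")) and \
       order.index(("B", "read")) < order.index(("B", "write")):
        results.add(run(list(order)))

print(sorted(results))  # → [1, 2]: the lost update (1) appears alongside the intended 2
```

Real many-core programs have astronomically many interleavings, which is why the scalable verification and testing techniques the award recognizes are needed.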

Canada Is Quietly Adding 10 Petaflops to Its Network of Academic Supercomputers

TOP500.org

Canada's government is moving forward with upgrades to its high-performance supercomputing network for academia with the official launch of "Cedar," a 3.6-petaflop system at Simon Fraser University. The upgrade will entail a 10-petaflop increase in aggregate performance to 12 petaflops and a boost in storage capacity to more than 50 petabytes from the current 20 petabytes. Cedar's rollout will be followed by the May launch of the University of Waterloo's petascale "Graham" supercomputer, which, like Cedar, will be a heterogeneous cluster. The installation of a third petascale system, "Niagara," is slated for later this year at the University of Toronto, while a fourth system, "Arbutus," has been online at the University of Victoria since September. The systems are expected to help accommodate the research needs of approximately 10,000 scientists spread across more than 70 institutions.

From "Canada Is Quietly Adding 10 Petaflops to Its Network of Academic Supercomputers"
TOP500.org (04/23/17) Michael Feldman
View Full Article

New Method to Ensure Reproducibility in Computational Experiments

Center for Genomic Regulation (Spain)

Researchers at the Center for Genomic Regulation (CRG) in Spain have developed Nextflow, a workflow management system designed to guarantee reproducibility in the computational analysis of large genomics datasets. The team says Nextflow plays a role in establishing good scientific practices and is an important framework for projects in which the analysis of large datasets informs decisions, for example in precision medicine. "When doing computational analysis, tiny variations across computational platforms can induce numerical instability that result in irreproducibility," says lead CRG researcher Cedric Notredame. "Nextflow allows scientists to avoid these variations and contributes to standardizing good practices in computational experiments." To address irreproducibility, Nextflow uses containerization of a complete pre-configured execution environment to manage a computational workflow and its dependencies. "It is like freezing the experiment, so everyone aiming at reproducing it can do it the same way without having to manually re-introduce complex configurations," the researchers note.

From "New Method to Ensure Reproducibility in Computational Experiments"
Center for Genomic Regulation (Spain) (04/25/17) Laia Cendros
View Full Article
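The "freezing the experiment" idea can be sketched as follows (this is an illustration of the principle, not Nextflow's actual implementation; the function and image names are hypothetical): derive a deterministic fingerprint for each task from its script, inputs, and exact container image, so any change in the environment yields a different, traceable result rather than a silently altered one.

```python
import hashlib
import json

def task_fingerprint(script, inputs, container_image):
    """Derive a deterministic cache key for a workflow task from its
    script, its input files, and the exact container image it runs in.
    If any of these change, the fingerprint changes, which captures the
    essence of 'freezing' a computational experiment."""
    payload = json.dumps(
        {"script": script, "inputs": sorted(inputs), "image": container_image},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Hypothetical genomics task: the same command in two tool versions.
fp1 = task_fingerprint("align ref.fa reads.fq", ["reads.fq"], "aligner:1.0")
fp2 = task_fingerprint("align ref.fa reads.fq", ["reads.fq"], "aligner:1.1")
assert fp1 != fp2  # a different tool version is detected, not hidden
```

Because the fingerprint is deterministic, anyone re-running the frozen configuration reproduces exactly the same execution environment without manually re-introducing it.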

Finally, a Peek Inside the 'Black Box' of Machine Learning Systems

Stanford University

Researchers at Stanford University have developed a new tool designed to shed light on the little-understood mechanics of neural networks by mathematically testing their validity. The Reluplex error-checking tool probes the possible network configurations, periodically spotting and removing invalid pathways so only a small segment of the network's tree-like expanse is fully explored. The researchers say this approach saves an enormous amount of time. For example, the application of Reluplex to a 300-node network required the assessment of only about 1 million possibilities, which "is not a tough number for a computer to deal with," says Stanford professor Clark Barrett. The researchers say Reluplex users can make specific queries about the properties of the system for which the network is used. They note the tool cannot yet test networks with millions of nodes, but among its possible uses is making networks more resilient against "adversarial cases."

From "Finally, a Peek Inside the 'Black Box' of Machine Learning Systems"
Stanford University (04/25/17) Marina Krakovsky
View Full Article
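The case-splitting idea behind tools like Reluplex can be shown on a toy network (the weights below are hypothetical; this is not the Stanford tool itself): once each ReLU is fixed as active or inactive, the network is linear, so a property can be verified region by region instead of by sampling.

```python
# One input x in [0, 1], two hidden ReLUs, linear output layer.
w1, b1 = [2.0, -1.0], [-0.5, 0.8]   # hidden layer (hypothetical weights)
w2, b2 = [1.0, 1.0], 0.0            # output layer

def net(x):
    h = [max(0.0, w * x + b) for w, b in zip(w1, b1)]
    return w2[0] * h[0] + w2[1] * h[1] + b2

# Each ReLU switches phase at x = -b/w; between switch points the
# network is a single linear function, so checking the endpoints of
# every linear piece verifies the property over the whole interval.
switches = sorted({0.0, 1.0} | {-b / w for w, b in zip(w1, b1) if 0 < -b / w < 1})
assert all(net(x) >= 0.0 for x in switches), "property violated at a vertex"
print("verified: net(x) >= 0 for all x in [0, 1]")
```

Naively, a network with n ReLUs has 2^n such cases; Reluplex's contribution is pruning invalid cases early so only a small fraction of that tree is ever explored.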

Scientific Discovery Game Significantly Speeds Up Neuroscience Research Process

UW Today

Researchers at the University of Washington (UW) Center for Game Science and the Allen Institute for Brain Science have developed Mozak, a scientific discovery game enabling citizen scientists to generate three-dimensional reconstructions of neurons in different regions of human and animal brains. "It's really exciting that regular people out in the world can, in a short period of time, be taught how to reconstruct neurons on the same level as experts who have been doing this a long time," says the Allen Institute's Staci Sorensen. Since Mozak's November launch, about 200 daily players and Allen Institute neuroscientists have rebuilt neurons 3.6 times faster than previous techniques, and are outperforming computers at tracing neuronal structures. "There's a big bottleneck in processing and analyzing the data coming in, which is where the Mozak community is making a big impact," says UW professor Zoran Popovic.

From "Scientific Discovery Game Significantly Speeds Up Neuroscience Research Process"
UW Today (04/24/17) Jennifer Langston
View Full Article

MIT Mathematician Spins Up 220,000-Core Google Compute Cluster

HPC Wire

Massachusetts Institute of Technology professor Andrew V. Sutherland has set a record for the largest Google Compute Engine (GCE) task. Sutherland ran the massive mathematics workload on 220,000 GCE cores using preemptible virtual machine (VM) instances, which represents the largest known high-performance computing cluster to run in the public cloud. Sutherland used Google's cloud to explore generalizations of the Sato-Tate Conjecture and the conjecture of Birch and Swinnerton-Dyer to curves of higher genus, according to Google researchers Alex Barrett and Michael Basilyan. They note Sutherland explored roughly 10^17 hyperelliptic curves of genus 3 in an attempt to find curves with L-functions that can be easily computed and have potentially interesting Sato-Tate distributions. The process yielded about 70,000 curves of interest, each of which eventually will have its own entry in the L-functions and Modular Forms Database. Sutherland now is planning a larger run of 400,000 cores.

From "MIT Mathematician Spins Up 220,000-Core Google Compute Cluster"
HPC Wire (04/21/17) Tiffany Trader
View Full Article
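Preemptible VMs are cheap because the cloud provider can reclaim them at any moment, so workloads like this one are structured as many small, idempotent tasks that are simply requeued when their VM disappears. The sketch below is a hypothetical illustration of that pattern, not Sutherland's actual code.

```python
import random

rng = random.Random(42)  # deterministic "preemption" for the demo

def search_task(curve_id):
    # stand-in for examining one candidate curve; real tasks would
    # compute L-function data and flag interesting distributions
    return curve_id % 10 == 0

def run_with_preemption(task_ids, preempt_rate=0.3):
    """Run every task to completion, requeueing tasks whose VM was
    reclaimed mid-flight. Because tasks are idempotent, a retry is
    always safe and no other work is lost."""
    done, pending = {}, list(task_ids)
    while pending:
        retry = []
        for tid in pending:
            if rng.random() < preempt_rate:
                retry.append(tid)        # VM reclaimed: just requeue
            else:
                done[tid] = search_task(tid)
        pending = retry
    return done

results = run_with_preemption(range(1000))
assert len(results) == 1000              # every task eventually completes
```

This retry-friendly structure is what makes it practical to assemble hundreds of thousands of discounted cores for an embarrassingly parallel search.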

AI Learns to Play Video Game from Instructions in Plain English

New Scientist

Stanford University researchers have developed an artificial intelligence (AI) system that learned to play the game "Montezuma's Revenge" by taking instructions in plain English. The game has been challenging for other AI systems to learn because it offers sparse rewards, requiring players to make several moves before earning any points. To help the new AI system learn the game more quickly, the researchers gave the reinforcement-learning system assistance in the form of natural language instructions. Teaching the AI in this way could have far-reaching applications, because using natural language means anyone could advise the AI, not just programmers. The researchers trained the AI to associate instructions with screenshots of the same action being carried out in the game. The system scored 3,500 points, easily beating the top score of 2,500 on OpenAI Gym, an online platform for testing AIs in virtual environments.

From "AI Learns to Play Video Game from Instructions in Plain English"
New Scientist (04/24/17) Edd Gent
View Full Article
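The assistance described above amounts to reward shaping: the sparse game score is augmented with a bonus whenever the agent completes the current English instruction. In this hedged sketch, a toy keyword check stands in for the learned model that associates instructions with screenshots; the names and instruction list are hypothetical.

```python
instructions = ["climb down the ladder", "jump over the skull", "get the key"]

def instruction_satisfied(instruction, state):
    # stand-in for a model trained to match instructions to game frames
    return instruction.split()[-1] in state["events"]

def shaped_reward(game_reward, state, progress):
    """Add a bonus to the sparse game reward when the current
    instruction is complete, and advance to the next instruction."""
    bonus = 0.0
    if progress < len(instructions) and \
       instruction_satisfied(instructions[progress], state):
        bonus, progress = 1.0, progress + 1
    return game_reward + bonus, progress

reward, progress = shaped_reward(0.0, {"events": {"ladder"}}, 0)
assert (reward, progress) == (1.0, 1)  # bonus granted, next instruction queued
```

The dense intermediate bonuses give the reinforcement learner a gradient to follow long before the game itself awards any points, which is exactly what sparse-reward games like "Montezuma's Revenge" lack.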

Facebook Is Building Brain-Computer Interfaces for Typing and Skin-Hearing

TechCrunch

A team of 60 engineers at Facebook's Building 8 research laboratory is building a brain-computer interface to enable mind-controlled typing. The interface will be designed to scan the user's brain 100 times a second via optical imaging, translating thoughts into text. "This is about decoding the words you've already decided to share by sending them to the speech center of your brain," according to Facebook. Project participants include specialists in machine learning for deciphering speech and language, building optical neuroimaging systems with advanced spatial resolution, and next-generation neural prosthetics. The goal is to eventually construct non-invasive interface devices that do not require a physical implant. Another project at Building 8 is concentrating on hardware and software that enables a person's skin to emulate the ear's cochlea, which could enable hearing-impaired people to "hear" by circumventing their ears.

From "Facebook Is Building Brain-Computer Interfaces for Typing and Skin-Hearing"
TechCrunch (04/19/17) Josh Constine
View Full Article

Organ Donation: A New Frontier for AI?

UdeM News

Researchers at the University of Montreal (UdeM) and its Montreal Polytechnic engineering school in Canada are developing a computerized machine-learning method for better predicting how well a typical organ transplant will proceed. The risk calculator the researchers aim to design via machine learning would be used by physicians and patients to decide whether an organ is well suited to the recipient. They are employing the U.S. Scientific Registry of Transplant Recipients database to retrospectively review all U.S. patients who received a new kidney over 15 years, comparing old and new survival time modeling methods. The next step is classifying the information physicians and patients use to make better organ-transplant choices based on risk. UdeM's Heloise Cardinal thinks machine learning will be an improvement over statistical analysis in weighing the myriad interactions between donated organs and recipients. UdeM's Andrea Lodi expects their research to be a game-changer in improving the accuracy of organ-transplant predictions.

From "Organ Donation: A New Frontier for AI?"
UdeM News (04/21/17) Salle de Presse
View Full Article

Using Data Science to Understand Global Climate Systems

University of Rochester NewsCenter

Researchers at the University of Rochester are using data science to understand the phenomena driving the global climate system. Rochester professor Lee Murray builds computer models of the dynamics and composition of the atmosphere, which he compares to satellite data and other surface observations worldwide. Murray employs high-performance computing systems to model and predict how air pollution and the climate system influence each other. Meanwhile, Rochester professor Tom Weber uses large datasets compiled at sea and by satellite sensors to generate numerical models and understand how marine ecosystems, elemental cycles, and the climate interact, and how perturbations impact this system. Weber is focused on the sequence of processes that transfers carbon from the atmosphere to the deep ocean. Murray and Weber also will be collaborating on a joint project that uses computer models and satellite data to examine the global methane cycle.

From "Using Data Science to Understand Global Climate Systems"
University of Rochester NewsCenter (04/21/17) Lindsey Valich
View Full Article

Artificial Intelligence Expert Shares His Vision of the Future of Education

EdTech Magazine

In an interview, University of Idaho professor Joseph Qualls says he envisions education being transformed by artificial intelligence (AI). He anticipates AI effecting a "massive change in education" across the board, with the traditional model of large universities teaching students a likely casualty in the long term as personalized education emerges. "You will have a student interact with an AI system that will understand him or her and provide an educational path for that particular student," Qualls predicts. He says teachers may stop educating students and start educating AIs. However, Qualls also believes human educators will stay relevant by providing underlying intuition to AIs. As far as conscious AI is concerned, Qualls says, "it will be AI systems writing new AI in ways we have never thought about. That's when you will have a system that's thinking on its own and forming its own agenda to do whatever it chooses to do."

From "Artificial Intelligence Expert Shares His Vision of the Future of Education"
EdTech Magazine (04/21/17) Amy Burroughs
View Full Article