Neuromorphic Computing investigates the computational principles that enable high-level sensory processing and cognition in the human brain by implementing those principles in large-scale, high-performance computer models.
Even after several decades of exponential growth in processing power, computers still cannot match the brain's ability to interpret, respond to, and learn from natural sensory inputs. Rapid progress in neuroscience, however, is enabling an alternative strategy for achieving brain-like behavior: identifying the computational primitives that underlie processing in biological neural circuits. The enormous scale of biological neural systems means that neuromorphic computing research requires high-performance neural simulation tools to test complex scientific hypotheses at scale.

Collaborative Research: A Neurally Inspired, Event-Based Computer Vision Pipeline
David Mascarenas, NMC Research Scientist
This project aims to develop deep sparse generative autoencoders (DSGAs) for processing video feeds from mobile platforms such as drones, autonomous vehicles, and cell phones. The project endeavors to show that DSGAs can support video-processing tasks such as 3D reconstruction, object detection, compression, and spatiotemporal upsampling. The work is a close collaboration between a theoretical neuroscientist, a team of neuromorphic chip designers, and a mechanical engineer who will build a prototype computer vision system using a silicon retina.
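As an illustration of the general approach, the following is a minimal NumPy sketch of a sparse autoencoder that reconstructs an input patch under an L1 sparsity penalty. The layer sizes, tied encoder/decoder weights, and training loop here are assumptions for illustration only; the project's actual DSGA architecture is not specified in this description.

```python
# Minimal sketch of a sparse autoencoder with an L1 penalty on the hidden
# code. Illustrative assumptions only: sizes, tied weights, and constants
# are NOT the project's actual DSGA architecture.
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 256, 64      # e.g., flattened 16x16 video patches
lam, lr = 0.1, 0.01              # sparsity weight and learning rate

W = rng.normal(0, 0.1, (n_hidden, n_input))   # tied encoder/decoder weights

def encode(x):
    # ReLU keeps the code nonnegative; the L1 term below drives sparsity.
    return np.maximum(0.0, W @ x)

def step(x):
    """One gradient step on 0.5*||x - W.T a||^2 + lam*||a||_1 w.r.t. W."""
    global W
    a = encode(x)
    err = x - W.T @ a                     # reconstruction error
    active = (a > 0).astype(float)        # units past the ReLU kink
    # Gradient flows through both the decoder (W.T a) and the encoder
    # (a = relu(W x)), plus the L1 subgradient on active units.
    grad = (-np.outer(a, err)
            - np.outer(active * (W @ err), x)
            + lam * np.outer(active, x))
    W -= lr * grad
    return 0.5 * err @ err + lam * np.abs(a).sum()

# Train on random "patches" as a stand-in for real video frames.
for t in range(1000):
    loss = step(rng.normal(0, 1, n_input))
print("final loss on a random patch:", loss)
```

Sparsity is what makes such codes attractive for mobile platforms: only a few hidden units are active per frame, which reduces both the bandwidth and the energy needed downstream.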

Synthetic Cognition through Petascale Models of the Primate Visual Cortex
Garrett Kenyon, Principal Investigator, NMC Affiliate Researcher, LANL Staff Scientist
Pete Schultz, NMC Research Scientist
Austin Thresher, NMC Research Scientist
The PetaVision project seeks to develop an open-source, high-performance neural simulation toolbox as part of an active investigation of the computational principles underlying human sensory cognition.
The ultimate goal of PetaVision is to create a synthetic cognition system that emulates the functional architecture of the primate visual cortex. Using petascale computational resources and a growing knowledge of the structure and function of biological neural systems, this research has begun to reproduce the information-processing capabilities of cortical circuits in the brain. This work pushes the limits of supercomputers, and advances in this area are closely coupled with research in advanced and exascale computing. Funding for this research comes from the National Science Foundation.
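For concreteness, below is a generic NumPy sketch of the locally competitive algorithm (LCA) of Rozell et al. (2008), a standard sparse-coding dynamics used in models of the visual cortex of this kind. The dictionary, sizes, and constants are illustrative assumptions, not PetaVision's actual API or defaults.

```python
# Generic sketch of LCA sparse-coding dynamics (Rozell et al., 2008).
# Dictionary, sizes, and constants are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

n_input, n_neurons = 128, 512          # overcomplete dictionary
D = rng.normal(size=(n_input, n_neurons))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary elements

def lca(x, thresh=0.1, dt=0.01, steps=500):
    """Infer a sparse code a minimizing 0.5*||x - D a||^2 + thresh*||a||_1."""
    u = np.zeros(n_neurons)            # membrane potentials
    G = D.T @ D - np.eye(n_neurons)    # lateral inhibition (competition)
    b = D.T @ x                        # feedforward drive
    for _ in range(steps):
        # Soft threshold: only neurons with enough evidence become active.
        a = np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)
        u += dt * (b - u - G @ a)      # leaky integration with inhibition
    return a

x = rng.normal(size=n_input)
a = lca(x)
print("active neurons:", np.count_nonzero(a), "of", n_neurons)
print("reconstruction error:", np.linalg.norm(x - D @ a))
```

The soft-threshold step is what produces sparsity: most potentials never cross the threshold, so only a small fraction of neurons participate in any given reconstruction. Running dynamics like these over cortically realistic neuron counts is what drives the petascale computing requirements described above.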

SALLSA: Sparse Adaptive Local Learning for Sensing and Analytics
Inference Models (IMs) of the primate visual cortex seek to learn their deep network structure directly from environmental sensory inputs.
By representing the deep structure of the visual environment, SALLSA seeks to enable more accurate performance of basic visual judgments, such as object detection and tracking. Because IMs learn to detect objects directly from the data, these models can compensate for internal structural damage and adapt to changing environmental conditions. By focusing on biologically inspired neural architectures, SALLSA seeks to enable target detection and tracking in streaming video obtained by mobile, lightweight platforms. In particular, SALLSA aims to improve intelligent video processing on mobile platforms such as drones, robots, and CubeSats by exploiting the ultra-lightweight, low-power, low-bandwidth character of biological neural circuits. Together with collaborators at the University of Michigan, SALLSA seeks to implement neurally inspired architectures in mixed-signal, memristor-based circuits for ultra-low-power operation. Funding for this joint project comes from the Defense Advanced Research Projects Agency (DARPA) through the DARPA UPSIDE program.
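To make the hardware mapping concrete, the sketch below shows in NumPy how a memristor crossbar can evaluate a neural layer's matrix-vector product in the analog domain: weights are stored as device conductances, inputs are applied as row voltages, and column currents sum the products. The differential-pair encoding of signed weights and the noise model are assumptions for illustration, not the UPSIDE hardware's specifications.

```python
# Illustrative sketch of a memristor-crossbar matrix-vector multiply:
# input voltages on rows, conductances as weights, summed currents on
# columns (Ohm's and Kirchhoff's laws). Device ranges and noise are
# assumptions, NOT the UPSIDE hardware's specifications.
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out = 64, 16
W = rng.normal(0, 0.5, (n_out, n_in))
# Conductances are nonnegative, so signed weights are mapped onto a
# differential pair of crossbar columns (positive and negative parts).
G_pos = np.maximum(W, 0.0)
G_neg = np.maximum(-W, 0.0)

def crossbar_mvm(v_in, read_noise=0.01):
    """Column currents I = G @ V, with additive per-device read noise."""
    noisy_pos = G_pos + rng.normal(0, read_noise, G_pos.shape)
    noisy_neg = G_neg + rng.normal(0, read_noise, G_neg.shape)
    return noisy_pos @ v_in - noisy_neg @ v_in

v = rng.uniform(0, 1, n_in)            # input voltages
print("analog output:", crossbar_mvm(v)[:4])
print("ideal output: ", (W @ v)[:4])
```

Because the multiply-accumulate happens in physics rather than in digital logic, each inference consumes very little energy, which is the appeal for the lightweight, low-power platforms SALLSA targets.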