Framework for Spiking Neural Network Modeling on High-Performance Architectures
by Jayram Moorkanikara Nageswaran
Prof. Nikil Dutt (chair)
Prof. Jeffrey L. Krichmar
Prof. Alex Nicolau
Prof. Alex Veidenbaum
Spiking neural network (SNN) models are emerging as a plausible paradigm for characterizing neural dynamics in the cerebral cortex. Traditionally, these SNN models have been simulated on large-scale clusters, supercomputers, or dedicated VLSI architectures. Graphics Processing Units (GPUs) offer an alternative: a low-cost, programmable, high-performance computing platform for simulating SNNs. This thesis proposes a systematic framework for modeling and simulating biologically realistic, large-scale spiking neural networks on high-performance graphics processors.
The first part of the framework is a high-level specification for quickly building arbitrary, large-scale spiking neural networks for different applications. The high-level SNN specification is then converted to a sparse adjacency-matrix representation and mapped onto the GPU. We further present a collection of new techniques for parallelism extraction, mapping of irregular communication, and compact adjacency-matrix representation for efficient simulation of SNNs on GPUs. The last part of the framework proposes an evolutionary approach to automate parameter tuning in spiking neural networks. We demonstrate the validity of this framework by applying it to three different spike-based computation problems, achieving significant improvements in modeling, simulation, and parameter-tuning time.
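To make the sparse adjacency-matrix idea concrete, the sketch below converts a dense synaptic weight matrix into a compressed sparse row (CSR) layout, one common compact format for irregular connectivity; the function and variable names here are illustrative, not the framework's actual API.

```python
def dense_to_csr(weights):
    """Convert a dense synaptic weight matrix (pre x post, a list of rows)
    into CSR arrays: row pointers, column indices, and nonzero weights.
    Illustrative sketch; not the thesis's actual data structures."""
    row_ptr = [0]   # row_ptr[i]..row_ptr[i+1] spans neuron i's synapses
    col_idx = []    # postsynaptic neuron index of each synapse
    values = []     # synaptic weight of each synapse
    for row in weights:
        for post, w in enumerate(row):
            if w != 0.0:
                col_idx.append(post)
                values.append(w)
        row_ptr.append(len(col_idx))  # close out this presynaptic row
    return row_ptr, col_idx, values

# Toy network: 3 presynaptic neurons projecting to 4 postsynaptic neurons.
W = [[0.0, 0.5, 0.0, 0.2],
     [0.0, 0.0, 0.0, 0.0],
     [0.1, 0.0, 0.3, 0.0]]
row_ptr, col_idx, values = dense_to_csr(W)
```

Only nonzero synapses are stored, so memory scales with the number of connections rather than the square of the neuron count, and each presynaptic neuron's fan-out is a contiguous slice, which suits coalesced GPU memory access.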
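The flavor of the evolutionary tuning stage can be sketched as a simple (mu + lambda) strategy that mutates candidate parameter vectors and keeps the fitter half each generation. The objective, bounds, and names below are hypothetical stand-ins, not the thesis's actual tuning targets.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Minimize `fitness` over box-bounded parameters with a toy
    (mu + lambda) evolutionary strategy. Illustrative sketch only."""
    rng = random.Random(seed)
    # Initialize a random population within the parameter bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]  # keep fitter half
        children = []
        for p in parents:
            # Gaussian mutation, clamped back into the bounds.
            child = [min(max(x + rng.gauss(0.0, sigma), lo), hi)
                     for x, (lo, hi) in zip(p, bounds)]
            children.append(child)
        pop = parents + children  # elitism: parents survive unchanged
    return min(pop, key=fitness)

# Toy objective: recover hypothetical target neuron-model parameters.
target = [0.02, 0.2]
best = evolve(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)),
              bounds=[(0.0, 0.1), (0.0, 1.0)])
```

Because the fitness evaluation of each candidate network is independent, a search like this parallelizes naturally, which is what makes GPU-accelerated simulation attractive for automated tuning.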