
CNS*2011 Workshop: "Supercomputational Neuroscience – tools and applications"

July 28, 2011

 

Organisers
Abigail Morrison (Functional Neural Circuits Group, Faculty of Biology and BCF, University of Freiburg, Germany)
Markus Diesmann (Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Germany and RIKEN Brain Science Institute, Wako City, Japan)
Anders Lansner (Computational Biology, Kungliga Tekniska högskolan, Stockholm, Sweden)

 

Topic description
The increasing availability of supercomputers presents the neuroscientific community with unprecedented opportunities to perform novel and groundbreaking brain research, but also with formidable challenges. These challenges include integrating data from multiple brain areas to define large-scale models and developing appropriately scalable tools to specify, simulate, visualize and analyze such complex models. This workshop will feature front-line research on the use of supercomputers and other parallel computers (e.g. GPUs) for neuroscience. The intention is to highlight different aspects of data analysis and tool development for large-scale modeling as well as new biological insights resulting from the application of such tools.

 

Schedule (28th July 2011)

9:00   Introduction (Abigail Morrison)
9:15   Tobias Potjans: The role of data-based connectivity in large-scale spiking network simulations
10:00  Mikael Djurfeldt: Tools for modularity and connectivity in large-scale models
10:45  Coffee Break
11:00  Susanne Kunkel: Meeting the memory challenges of brain-scale network simulation
11:45  David Silverstein: Scaling up a biophysical neocortical attractor model with NEURON
12:30  Lunch Break
14:00  Marcus Kaiser: Towards clinical tools for diagnosis and treatment evaluation: Parallel processing of brain connectivity and activity data
14:45  Eugene Izhikevich: Simulating 10^11 neurons (in 2005)
15:30  Coffee Break
15:45  Ingo Bojak: Realism ate my model - Pitting computational power against biological complexity
16:30  James Kozloski: Brain Systems Computation with the Neural Tissue Simulator
17:15  Concluding remarks (Anders Lansner)

 

 

Abstracts

Tobias Potjans
Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Germany

The role of data-based connectivity in large-scale spiking network simulations

Recent developments in neural simulation technology allow the representation of spiking networks that comprise millions of neurons and billions of synapses. Networks of this size cover not only the local cortical network but large parts of the brain, such as multiple cortical areas. The capability for brain-scale spiking network simulations, however, has to be matched by the capability to describe multi-scale structural connectivity data at the level of single cell types. Here, I first summarize our efforts to integrate the major data sets on local microcircuit connectivity and use full-scale microcircuit simulations to show why it is necessary to rely on data from multiple anatomical and electrophysiological methods. Second, I describe the construction of connectivity maps for brain-scale networks that combine detailed microcircuitry with macroscopic long-range connectivity.
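A cell-type-resolved connectivity map of this kind is, at its core, a table of connection probabilities between presynaptic and postsynaptic populations. A minimal sketch of that data structure in Python, with hypothetical population names, sizes and probabilities (placeholders, not the integrated data set discussed in the talk):

import numpy as np

# Hypothetical cell-type populations; all numbers below are placeholders,
# not values from the integrated connectivity data set.
populations = ["L2/3e", "L2/3i", "L5e", "L5i"]
sizes = {"L2/3e": 20000, "L2/3i": 5000, "L5e": 5000, "L5i": 1000}

# conn_prob[post, pre]: probability that a neuron of the presynaptic
# population connects to a neuron of the postsynaptic population.
conn_prob = np.array([
    [0.10, 0.17, 0.03, 0.00],
    [0.14, 0.14, 0.03, 0.00],
    [0.08, 0.06, 0.08, 0.12],
    [0.05, 0.03, 0.06, 0.14],
])

def expected_synapses(pre, post):
    # Expected synapse count between two populations under
    # pairwise Bernoulli connectivity.
    p = conn_prob[populations.index(post), populations.index(pre)]
    return p * sizes[pre] * sizes[post]

# Total synapse count implied by the map: the quantity that grows rapidly
# once the local circuit is embedded into a brain-scale network.
total = sum(expected_synapses(a, b) for a in populations for b in populations)
print(f"{total:.2e} expected synapses")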

 


Mikael Djurfeldt
INCF, Stockholm, Sweden
PDC Center for High Performance Computing, KTH, Stockholm, Sweden

Tools for modularity and connectivity in large-scale models

In this talk I will give an overview of some challenges in building large-scale models of neuronal networks. In particular, I will focus on how to build large-scale models in a modular fashion and how to handle the specification and setup of connectivity. Two tools addressing these problems are MUSIC (The Multi-simulation Coordinator) and the CSA (Connection Set Algebra). MUSIC is an interface for on-line communication between parallel neuronal network simulators, supporting interoperability and the modular construction of large-scale models. Current developments include alternative communication algorithms, support for multi-scale modeling, and an interface (MEXI) for real-time communication with the external world. CSA is a novel and general formalism for the description of connectivity in neuronal network models, from their small-scale to their large-scale structure. Python and C++ implementations are under development.
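The core idea of a connection-set algebra is that connectivity patterns are values that can be combined with operators (union, difference, masks) before any connections are instantiated. A minimal sketch of that idea in plain Python; it assumes nothing about the actual CSA or MUSIC APIs and is purely illustrative:

import random

# A "connection set" here is simply a predicate over (source, target) pairs.
# The real CSA represents these as composable algebraic objects.

def one_to_one(i, j):
    return i == j

def random_set(p, seed=0):
    # Reproducible random connectivity with connection probability p.
    rng = random.Random(seed)
    cache = {}
    def pred(i, j):
        if (i, j) not in cache:
            cache[(i, j)] = rng.random() < p
        return cache[(i, j)]
    return pred

def minus(cs_a, cs_b):
    # Set difference: pairs in cs_a but not in cs_b.
    return lambda i, j: cs_a(i, j) and not cs_b(i, j)

def instantiate(cs, sources, targets):
    # Enumerate the concrete connections described by a connection set.
    return [(i, j) for i in sources for j in targets if cs(i, j)]

# Random connectivity at 10% probability, excluding self-connections.
cs = minus(random_set(0.1), one_to_one)
connections = instantiate(cs, range(100), range(100))
print(len(connections), "connections")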

 


Susanne Kunkel
Bernstein Center Freiburg, Germany

Meeting the memory challenges of brain-scale network simulation

In this talk I will demonstrate that as neuronal network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. I will introduce a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of the key components contributing to memory saturation and prediction of the effects of potential improvements to the code before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST) shows that memory consumption can be substantially reduced by using data structures that effectively exploit the sparseness of the local representation of the network. These adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond.
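A linear memory model of this kind can be written down in a few lines. The sketch below is illustrative only: the coefficients are placeholders, not measured NEST values, and the breakdown into terms is an assumption for the example.

# Per-core memory (bytes) for a network of N neurons with K synapses per
# neuron, distributed over M cores. All coefficients are placeholders.
def memory_per_core(N, K, M,
                    base=5.0e8,       # fixed simulator overhead per core
                    b_node=8.0,       # bytes per entry in a global node table
                    b_neuron=1.0e3,   # bytes per locally stored neuron
                    b_synapse=48.0):  # bytes per locally stored synapse
    local_neurons = N / M
    local_synapses = N * K / M
    return (base
            + b_node * N              # term that scales with total network size
            + b_neuron * local_neurons
            + b_synapse * local_synapses)

# The O(N) term (b_node * N) does not shrink with more cores and eventually
# saturates memory; sparse data structures are the kind of improvement whose
# effect such a model lets one predict before implementation.
for M in (1024, 4096, 16384):
    print(M, round(memory_per_core(N=1e8, K=1e4, M=M) / 2**30, 1), "GiB per core")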

 


David Silverstein
KTH, Stockholm, Sweden

Scaling up a biophysical neocortical attractor model with NEURON

This talk will describe scaling a biophysical model of the neocortex using parallel NEURON on Blue Gene supercomputers. Previous scaling experiments have been performed with the SPLIT simulator on the Blue Gene/L with a similar neocortical model. We chose a biophysical model of medium complexity based on the Hodgkin-Huxley formalism because it provides the capability to explore the effects of neuro-pharmacological drugs as well as the oscillatory effects of cortical microcircuits and globally correlated network activity. Neocortical simulations were performed to determine both strong (fixed network size, increasing number of cores) and weak (increasing network size, fixed load per core) scaling with two variations of a square neocortical patch of hypercolumns and internal minicolumns. Simulations were performed with both single patches of increasing area and cascades of multiple patches with feed-forward projections. Individual simulations to measure performance consisted of stimulation and completion of a single memory pattern during 1 second of cortical activity. Preliminary results show near-linear speedups of the computational part of the simulation, but degradation of file I/O performance as the number of cores increases. With performance analysis and validation, ongoing work includes scaling and measurement of associative memory storage capacity. Additional work includes linking biophysical and more abstract models of the neocortex and comparing behavioral results.
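Strong and weak scaling, as used above, are simple ratios over measured runtimes. A short sketch of how such measurements are typically summarized, with hypothetical timing numbers rather than results from the talk:

# Hypothetical wall-clock times (seconds) for 1 s of simulated activity.

# Strong scaling: fixed network size, increasing core count.
strong = {512: 1800.0, 1024: 930.0, 2048: 490.0, 4096: 270.0}

# Weak scaling: network size grows with the core count, so the load per core
# (and ideally the runtime) stays constant.
weak = {512: 610.0, 1024: 640.0, 2048: 700.0, 4096: 820.0}

base_cores = 512

for cores, t in strong.items():
    speedup = strong[base_cores] / t
    efficiency = speedup / (cores / base_cores)
    print(f"strong {cores:5d} cores: speedup {speedup:5.1f}, efficiency {efficiency:.2f}")

for cores, t in weak.items():
    efficiency = weak[base_cores] / t   # ideal weak scaling keeps runtime flat
    print(f"weak   {cores:5d} cores: efficiency {efficiency:.2f}")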

 


Marcus Kaiser
School of Computing Science, Newcastle University, United Kingdom
Institute of Neuroscience, Newcastle University, United Kingdom
Department of Brain and Cognitive Sciences, Seoul National University, Korea

Towards clinical tools for diagnosis and treatment evaluation: Parallel processing of brain connectivity and activity data

Large-scale datasets of electrophysiological or neuroimaging measurements pose challenges in both data storage and processing. Analysing motifs of functional correlation networks, for example, can take several weeks for large motif sizes, although we recently proposed a faster algorithm that reduces the computation time (1). Problems with processing time can also arise for multi-electrode array (MEA) recordings, which can now consist of 4,000 channels recording at 20 kHz for several hours (2). As part of the CARMEN e-Science project, we worked on some of these problems.
Here we present three approaches. First, we show the ADAPA platform (3) for easily distributing computing tasks over a cluster, the CONDOR grid, or the National Grid Service. As an example, we show how the extraction of correlation networks from MEA recordings can be performed in parallel. Second, we present network analysis routines that can help to identify characteristic features of correlation or anatomical networks (3, 4). These features form a fingerprint for each network, which can be used for comparing different groups, e.g. patients and controls, or different network conditions, e.g. before and after drug application (5). Third, we present simulations of cortical activity in which the local field potential of the simulation can be compared with experimental recordings by using a virtual electrode.
In conclusion, high-performance computing has become a necessity for dealing with electrophysiological and neuroimaging data as well as for simulation and modelling. The future therefore lies either in providing user-friendly tools that can be used by clinicians and experimentalists (as shown here) or, less likely, in introducing computational skills into the curricula of biomedical degrees.
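Extracting a functional correlation network from multi-channel recordings reduces to computing pairwise similarity and thresholding; the channel pairs are independent, which is why the step parallelizes well. A minimal sketch, assuming Pearson correlation as the similarity measure; it does not reproduce the actual CARMEN/ADAPA pipeline:

import numpy as np
from multiprocessing import Pool

def row_correlations(args):
    # Correlations of channel i against all later channels; each row is an
    # independent task, which is what makes the step embarrassingly parallel.
    i, data = args
    return i, [abs(np.corrcoef(data[i], data[j])[0, 1]) for j in range(i + 1, len(data))]

def correlation_network(data, threshold=0.3, processes=4):
    # Binary adjacency matrix of channels whose pairwise |correlation|
    # exceeds the threshold. `data` has shape (n_channels, n_samples).
    n = data.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    with Pool(processes) as pool:
        for i, row in pool.map(row_correlations, [(i, data) for i in range(n)]):
            for offset, r in enumerate(row):
                j = i + 1 + offset
                adj[i, j] = adj[j, i] = r > threshold
    return adj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mea = rng.standard_normal((64, 5000))   # toy stand-in for MEA traces
    print(correlation_network(mea).sum() // 2, "edges above threshold")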

(1) Ribeiro, Silva & Kaiser (2009) IEEE Intl. Conf. on e-Science DOI 10.1109/e-Science.2009.20
(2) Roopun et al. (2010) PNAS 107:338–343
(3) Ribeiro et al. (2009) J Neurosci Meth 184:357–364
(4) Kaiser (2011) Neuroimage 57:892–907
(5) Echtermeyer et al. (2011) Frontiers in Neuroinformatics 5:10

This work is supported by the EPSRC (EP/G03950X/1) and the EPSRC CARMEN e-Science project (EP/E002331/1). More information and downloadable tools are available at http://www.biological-networks.org/ and http://www.carmen.org.uk/

 


Eugene Izhikevich
Brain Corporation, San Diego, USA

Simulating 10^11 neurons (in 2005)

I will describe the details of the simulation of a spiking network that has the size of the human brain, i.e., 10^11 neurons and nearly one quadrillion synapses, which I performed in 2005 while at The Neurosciences Institute.

 


Ingo Bojak
School of Psychology (CN-CR), University of Birmingham, UK
Donders Centre for Neuroscience, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands

Realism ate my model - Pitting computational power against biological complexity

Computational modelling is increasingly expected to match experimental data quantitatively, not just qualitatively, and is making inroads into medical applications. This raises the stakes for implementing more anatomical and physiological realism. From the perspective of neural population models, which are again becoming popular for describing mesoscopic brain activity, I will consider some typical problems with increasing realism, and to what extent more computing power actually helps to solve them. In particular, I will look at implementing proper signal expression, connectivity and continuous signal streams, as well as the general issue of parameter fitting. I will argue that it is generally no longer raw computing power that limits progress.

 


James Kozloski
IBM Research Division, Thomas J. Watson Research Center, New York, USA

Brain Systems Computation with the Neural Tissue Simulator

Neural tissue simulation extends requirements and constraints of previous neuronal and neural circuit simulation methods, creating a tissue coordinate system. We have developed a novel tissue volume decomposition and a hybrid branched cable equation solver. The decomposition divides the simulation into regular tissue blocks and distributes them on a parallel multithreaded machine. The solver computes neurons that have been divided arbitrarily across blocks. We demonstrate thread, strong, and weak scaling of our approach on a machine of 4,096 nodes with 4 threads per node. Scaling synapses to physiological numbers had little effect on performance, since our decomposition approach generates synapses that are almost always computed locally. The largest simulation included in our scaling results comprised 1 million neurons, 1 billion compartments, and 10 billion conductance-based synapses and gap junctions. Based on results from the ultra-scalable Neural Tissue Simulator, we estimate requirements for a simulation at the scale of a human brain and derive an approach to scaling computational neuroscience together with expected supercomputational resources over the next decade.
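The volume decomposition described above amounts to tiling the tissue with regular blocks and mapping each block to a compute process, so that a neuron's compartments live with whatever blocks contain them. A minimal sketch of that mapping, using a simple round-robin block-to-rank assignment as a stand-in; it is not the actual Neural Tissue Simulator scheme:

import math

def block_index(position, block_size):
    # Regular tissue block (integer grid coordinates) containing a 3D point.
    return tuple(int(math.floor(c / block_size)) for c in position)

def assign_blocks(positions, block_size, n_ranks):
    # Map every compartment position to a rank via its tissue block.
    # Round-robin over blocks is a placeholder for a load-balanced scheme.
    blocks = sorted({block_index(p, block_size) for p in positions})
    rank_of_block = {b: r % n_ranks for r, b in enumerate(blocks)}
    return [rank_of_block[block_index(p, block_size)] for p in positions]

# Toy example: compartments of one neuron spread across several blocks,
# so the cell is computed by more than one rank, as in the hybrid solver.
compartments = [(0.1 * i, 5.0, 12.0 + 0.3 * i) for i in range(50)]
ranks = assign_blocks(compartments, block_size=10.0, n_ranks=8)
print(sorted(set(ranks)), "ranks share this neuron")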

 

 

Conference Website

 

 
