# CoreNEURON

Optimised simulator engine for NEURON
CoreNEURON is a compute engine for the NEURON simulator optimised for both memory usage and computational speed. Its goal is to simulate large cell networks with small memory footprint and optimal performance.
CoreNEURON is designed as a library within the NEURON simulator and can transparently handle all spiking network simulations, including gap-junction coupling, with the fixed time step method. In order to run a NEURON model with CoreNEURON:
In addition to this, you will need other NEURON dependencies such as Python, Flex, Bison, etc.
CoreNEURON is now integrated into the development version of the NEURON simulator. If you are a NEURON user, the preferred way to install CoreNEURON is to enable extra build options during NEURON installation as follows:
## Load software dependencies
Currently CoreNEURON relies on compiler auto-vectorisation; we therefore advise using one of the Intel, Cray, or PGI compilers to ensure vectorised code is generated. These compilers vectorise the code better than GCC or Clang, achieving the best possible performance gains. Note that the Intel compiler can be installed by downloading the oneAPI HPC Toolkit. CoreNEURON supports GPU execution using the OpenACC programming model. Currently the best-supported compiler for the OpenACC backend is PGI (available via the NVIDIA HPC SDK), and it is the recommended one for compilation.
HPC systems often use a module system to select software. For example, you can load the compiler, CMake, and Python dependencies using `module` as follows:
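A typical sequence on a cluster using Environment Modules or Lmod might look like this; the module names and versions below are purely illustrative and are site-specific:

```shell
# Illustrative only: actual module names/versions depend on your cluster.
module load intel/2021.4      # or: pgi, cray-cce, gcc
module load cmake/3.21.0
module load python/3.9
module load openmpi/4.1.1     # if building with MPI support
```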
Note that if you are building on a Cray system with the GNU toolchain, you have to set the following environment variable:
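On Cray systems the GNU toolchain defaults to static linking, which can break the build. A commonly needed setting (verify against your system's documentation) is:

```shell
# Request dynamic linking from the Cray programming environment (GNU toolchain).
export CRAYPE_LINK_TYPE=dynamic
```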
To build NEURON with CoreNEURON support, pass the `-DNRN_ENABLE_CORENEURON=ON` option to CMake. To enable GPU support, additionally pass the `-DCORENRN_ENABLE_GPU=ON` option and use the PGI/NVIDIA HPC SDK compilers with CUDA. By default the GPU code will be compiled for NVIDIA devices with compute capability 6.0 or 7.0. This can be steered by passing, for example, `-DCMAKE_CUDA_ARCHITECTURES=50;60;70` to CMake.
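A configure invocation combining these options might look like the following sketch; the source layout, install prefix, and compiler names are assumptions to adjust for your environment:

```shell
# Sketch: configure NEURON with CoreNEURON and GPU support using the
# NVIDIA HPC SDK compilers. Paths and compiler names are illustrative.
cmake .. \
  -DNRN_ENABLE_CORENEURON=ON \
  -DCORENRN_ENABLE_GPU=ON \
  -DCMAKE_C_COMPILER=nvc \
  -DCMAKE_CXX_COMPILER=nvc++ \
  -DCMAKE_CUDA_ARCHITECTURES="60;70" \
  -DCMAKE_INSTALL_PREFIX=$HOME/install
```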
NOTE: If the CMake command fails, please make sure to delete the temporary CMake cache file (`CMakeCache.txt`) before re-running CMake.
Build and Install: once the configure step is done, you can build and install the project as:

```bash
cmake --build . --parallel 8 --target install
```

You can control the number of parallel build jobs via the `--parallel` option.
Once NEURON is installed with CoreNEURON support, you need to set up the `PATH` and `PYTHONPATH` environment variables as:
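For example, assuming NEURON was installed under `$HOME/install` (the prefix and `lib/python` layout are assumptions; adjust them to match your install):

```shell
# Make the NEURON binaries and Python package visible.
# $HOME/install is an illustrative install prefix.
export PATH="$HOME/install/bin:$PATH"
export PYTHONPATH="$HOME/install/lib/python:${PYTHONPATH:-}"
```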
As in a typical NEURON workflow, you can use `nrnivmodl` to translate MOD files:
In order to enable CoreNEURON support, you must set the `-coreneuron` flag. Make sure the necessary modules (compilers, CUDA, MPI, etc.) are loaded before using `nrnivmodl`:
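For example, if your MOD files live in a directory called `mod` (the directory name is illustrative):

```shell
# Translate MOD files for both NEURON and CoreNEURON.
nrnivmodl -coreneuron mod
```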
If you see a compilation error, one of the MOD files might be incompatible with CoreNEURON. Please open an issue with an example and we can help fix it.
With CoreNEURON, existing NEURON models can be run with minimal changes. For a given NEURON model, we typically need to adjust as follows:

1. Enable cache efficiency:

   ```python
   h.cvode.cache_efficient(1)
   ```

2. Enable CoreNEURON:

   ```python
   from neuron import coreneuron
   coreneuron.enable = True
   ```

3. Use `psolve` to run the simulation after initialization:

   ```python
   h.stdinit()
   pc.psolve(h.tstop)
   ```
Here is a simple example model that runs first with NEURON, then with CoreNEURON, and compares the results of the two executions:
We can run this model as:
:warning: If you want to run this example with a GPU build, due to technical limitations you need to use the NEURON `special` executable:
You can find HOC example here.
At the end of the simulation, CoreNEURON by default transfers spikes, voltages, state variables, NetCon weights, all `Vector.record` recordings, and most GUI trajectories back to NEURON. These variables can be recorded using the regular NEURON API (e.g. `Vector.record` or `spike_record`).
One can specify compiler-specific C/C++ optimisation flags with the `-DCMAKE_CXX_FLAGS` and `-DCMAKE_C_FLAGS` options to the CMake command. For example:
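A configure invocation might look like the following sketch; the specific flag values are illustrative and should match your compiler:

```shell
# Sketch: pass compiler-specific optimisation flags at configure time.
cmake .. \
  -DNRN_ENABLE_CORENEURON=ON \
  -DCMAKE_CXX_FLAGS="-O3 -g" \
  -DCMAKE_C_FLAGS="-O3 -g"
```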
By default, OpenMP threading is enabled. You can disable it with the `-DCORENRN_ENABLE_OPENMP=OFF` CMake option.
For other errors, please open an issue.
As CoreNEURON is mostly used as a compute library of NEURON, it needs to be combined with NEURON to test most of its functionality. Consequently, its tests are included in the NEURON repository. To enable and run all CoreNEURON tests, add the `-DNRN_ENABLE_TESTS=ON` CMake flag when building NEURON. These tests include:
If you want to build the standalone CoreNEURON version, first download the repository as:
Once the appropriate modules for compiler, MPI, CMake are loaded, you can build CoreNEURON with:
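The two steps above might look like the following sketch; the repository URL and options shown are the commonly documented ones, but double-check them against the project page:

```shell
# Sketch: standalone CoreNEURON checkout and build.
git clone https://github.com/BlueBrain/CoreNeuron.git
cd CoreNeuron && mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/coreneuron-install
cmake --build . --parallel 8 --target install
```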
If you don't have MPI, you can disable the MPI dependency with the CMake option `-DCORENRN_ENABLE_MPI=OFF`.
In order to compile MOD files, one can use `nrnivmodl-core` as follows:

This will create a `special-core` executable under the `<arch>` directory.
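For example, with MOD files in an illustrative directory called `mod`:

```shell
# Compile MOD files with the standalone CoreNEURON workflow.
nrnivmodl-core mod
# Creates <arch>/special-core, e.g. x86_64/special-core
```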
CoreNEURON has support for GPUs using the OpenACC programming model when enabled with `-DCORENRN_ENABLE_GPU=ON`. Below are the steps to compile with the PGI compiler:
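A standalone GPU configure step might look like this sketch; the compiler names and install prefix are assumptions for a typical NVIDIA HPC SDK environment:

```shell
# Sketch: standalone CoreNEURON GPU build with NVIDIA/PGI compilers.
cmake .. \
  -DCORENRN_ENABLE_GPU=ON \
  -DCMAKE_C_COMPILER=nvc \
  -DCMAKE_CXX_COMPILER=nvc++ \
  -DCMAKE_INSTALL_PREFIX=$HOME/coreneuron-gpu
cmake --build . --parallel 8 --target install
```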
You have to run the GPU executable with the `--gpu` flag. Make sure to enable the cell re-ordering mechanism to improve GPU performance using the `--cell_permute` option (permutation types: 1 or 2):
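A run command might look like the sketch below. Only `--gpu` and `--cell_permute` are described above; the executable path, data directory, and the `--mpi`/`-d` flags are assumptions from common usage, so check `nrniv-core -h` for your build:

```shell
# Run on GPU with cell permutation enabled for better memory locality.
mpirun -n 1 ./x86_64/special-core --mpi --gpu --cell_permute 2 -d coredat
```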
Note: If your model uses the Random123 random number generator, you cannot use the same executable for CPU and GPU runs. We suggest installing separate NEURON+CoreNEURON builds for CPU and GPU simulations. This will be fixed in future releases.
If you have an MPI launcher other than `mpirun`, you can specify it during CMake configuration as:
You can disable the tests with the following options:
To see all CLI options for CoreNEURON, run `./bin/nrniv-core -h`.
In order to format code with the `cmake-format` and `clang-format` tools before creating a PR, enable the following CMake options:

You can then use the `cmake-format` or `clang-format` targets:
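Assuming the formatting targets are exposed as ordinary build targets (an assumption about this project's CMake setup), the invocation would be:

```shell
# Invoke the formatting targets after configuring with the options above.
cmake --build . --target cmake-format
cmake --build . --target clang-format
```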
CoreNEURON runs several CI pipelines:
See the README of the GitLab pipelines for build configuration.
If you would like to know more about CoreNEURON or would like to cite it, then use the following paper:
If you see any issues, feel free to raise a ticket. If you would like to improve this library, see the open issues.
You can see current contributors here.
CoreNEURON is developed in a joint collaboration between the Blue Brain Project and Yale University. This work is supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology, NIH grant number R01NS11613 (Yale University), the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement no. 604102 (HBP), and the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreements no. 720270 (Human Brain Project SGA1), no. 785907 (Human Brain Project SGA2), and no. 945539 (Human Brain Project SGA3).
Copyright (c) 2016 - 2022 Blue Brain Project/EPFL