Brian 2 documentation

Brian is a simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should not only save the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible and easily extensible.

To get an idea of what writing a simulation in Brian looks like, take a look at a simple example, or run our interactive demo.
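For instance, a complete simulation of a small population of leaky integrate-and-fire neurons only takes a few lines. The following is an illustrative sketch (the equation and all parameter values are arbitrary choices, not taken from a specific example):

from brian2 import *

tau = 10*ms
group = NeuronGroup(100, 'dv/dt = (1 - v) / tau : 1',
                    threshold='v > 0.8', reset='v = 0')
group.v = 'rand()'             # random initial conditions

monitor = SpikeMonitor(group)
run(100*ms)                    # simulate 100 ms of biological time
print(monitor.num_spikes)      # total number of spikes in the network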

You can actually edit and run the examples in the browser without having to install Brian, using the Binder service (note: sometimes this service is down or running slowly):

http://mybinder.org

Once you have a feel for what is involved in using Brian, we recommend you start by following the installation instructions, then going through the tutorials, and finally reading the User Guide.

While reading the documentation, you will see the names of certain functions and classes are highlighted links (e.g. PoissonGroup). Clicking on these will take you to the “reference documentation”. This section is automatically generated from the code, and includes complete and very detailed information, so for new users we recommend sticking to the User’s guide. However, there is one feature that may be useful for all users. If you click on, for example, PoissonGroup, and scroll down to the bottom, you’ll get a list of all the example code that uses PoissonGroup. This is available for each class or method, and can be helpful in understanding how a feature works.

Finally, if you’re having problems, please do let us know at our support page.

Contents:

Introduction

Installation

We recommend that users use the Anaconda distribution by Continuum Analytics. It makes the installation of Brian 2 and its dependencies simpler, since packages are provided in binary form, meaning that they don’t have to be built from source code on your machine. Furthermore, our automatic testing on the continuous integration services Travis and AppVeyor is based on Anaconda, so we are confident that Brian works under this configuration.

However, Brian 2 can also be installed independently of Anaconda, either with other Python distributions (Enthought Canopy, Python(x,y) for Windows, ...) or simply based on Python and pip (see Installation from source below).

Installation with Anaconda

Installing Anaconda

Download the Anaconda distribution for your operating system. For Windows users who want to use Python 3.x, we strongly recommend installing the 32-bit version even on 64-bit systems, since setting up the compilation environment (see Requirements for C++ code generation below) is less complicated in that case. Note that the choice between Python 2.7 and Python 3.x is not very important at this stage: Anaconda allows you to create a Python 3 environment from a Python 2 Anaconda installation and vice versa.

After the installation, make sure that your environment is configured to use the Anaconda distribution. You should have access to the conda command in a terminal and running python (e.g. from your IDE) should show a header like this, indicating that you are using Anaconda’s Python interpreter:

Python 2.7.10 |Anaconda 2.3.0 (64-bit)| (default, May 28 2015, 17:02:03)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org

Here’s some documentation on how to set up some popular IDEs for Anaconda: https://docs.continuum.io/anaconda/ide_integration

Installing Brian 2

You can either install Brian 2 in the Anaconda root environment, or create a new environment for Brian 2 (http://conda.pydata.org/docs/using/envs.html). The latter has the advantage that you can update (or not update) the dependencies of Brian 2 independently from the rest of your system.

Since Brian 2 is not part of the main Anaconda distribution, you have to install it from the brian-team channel. To do so, use:

conda install -c brian-team brian2

You can also permanently add the channel to your list of channels:

conda config --add channels brian-team

This only has to be done once. After that, you can install and update the brian2 package like any other Anaconda package:

conda install brian2

Installing other useful packages

There are various packages that are useful but not necessary for working with Brian. These include: matplotlib (for plotting), nose (for running the test suite), and ipython and jupyter-notebook (for an interactive console). To install them with Anaconda, simply run:

conda install matplotlib nose ipython jupyter-notebook

You should also have a look at the brian2tools package, which contains several useful functions to visualize Brian 2 simulations and recordings. You can install it with pip or Anaconda, in the same way as Brian 2 itself, e.g. with:

conda install -c brian-team brian2tools

Installation from source

If you decide not to use Anaconda, you can install Brian 2 from the Python package index: https://pypi.python.org/pypi/Brian2

To do so, use the pip utility:

pip install brian2

You might want to add the --user flag, to install Brian 2 for the local user only, which means that you don’t need administrator privileges for the installation.

In principle, the above command also installs Brian’s dependencies. Unfortunately, this does not work for numpy: if it is not already installed, it has to be installed in a separate step before all other dependencies (pip install numpy).
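In that case, the installation boils down to two commands:

pip install numpy    # numpy first, see the note above
pip install brian2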

If you have an older version of pip, first update pip itself:

# On Linux/MacOsX:
pip install -U pip

# On Windows
python -m pip install -U pip

If you don’t have pip but you have the easy_install utility, you can use it to install pip:

easy_install pip

If you have neither pip nor easy_install, use the approach described here to install pip: https://pip.pypa.io/en/latest/installing/

Alternatively, you can download the source package directly and uncompress it. You can then either run python setup.py install or python setup.py develop to install it, or simply add the source directory to your PYTHONPATH (the latter will only work for Python 2.x).
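For example, assuming the archive was uncompressed into a directory named brian2-master (the actual name depends on the downloaded release):

cd brian2-master
python setup.py install   # or: python setup.py develop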

Requirements for C++ code generation

C++ code generation is highly recommended since it can drastically increase the speed of simulations (see Computational methods and efficiency for details). To use it, you need a C++ compiler and either Cython or weave (only for Python 2.x). Cython/weave will be automatically installed if you perform the installation via Anaconda, as recommended. Otherwise you can install them in the usual way, e.g. using pip install cython or pip install weave.
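For example, once a compiler and Cython/weave are available, the code generation target can be selected explicitly through the preferences system. A minimal sketch (assuming Cython is installed; see Preferences for details):

from brian2 import prefs
prefs.codegen.target = 'cython'   # alternatives: 'weave' (Python 2 only), 'numpy'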

Linux and OS X

On Linux and Mac OS X, you will most likely already have a working C++ compiler installed (try calling g++ --version in a terminal). If not, use your distribution’s package manager to install a g++ package.

Windows

On Windows, the necessary steps to get Runtime code generation (i.e. Cython/weave) to work depend on the Python version you are using:

Python 2.7

Install the Microsoft Visual C++ Compiler for Python 2.7. This should be all you need.

Python 3.4

For 64 Bit Windows with Python 3.4, you have to additionally set up your environment correctly every time you run your Brian script (this is why we recommend against using this combination on Windows). To do this, run the following commands (assuming the default installation path) at the CMD prompt, or put them in a batch file:

setlocal EnableDelayedExpansion
CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64 /release
set DISTUTILS_USE_SDK=1

Python 3.5

  • Download and install Visual Studio Community 2015. Do not choose the default install; instead, customize it. The only necessary option is “Programming Languages / Visual C++ / Common Tools for Visual C++ 2015”.

For Standalone code generation, you can either use the compiler installed above or any other version of Visual Studio – in this case, the Python version does not matter.

Try running the test suite (see Testing Brian below) after the installation to make sure everything is working as expected.

Development version

To run the latest development code, you can install from brian-team’s “dev” channel with Anaconda. Note that if you previously added the brian-team channel to your list of channels, you have to first remove it:

conda config --remove channels brian-team -f

Also uninstall any version of Brian 2 that you might have previously installed:

conda remove brian2

Finally, install the brian2 package from the development channel:

conda install -c brian-team/channel/dev brian2

If this fails with an error message about the py-cpuinfo package (a dependency that we provide in the main brian-team channel), install it from the main channel:

conda install -c brian-team py-cpuinfo

Then repeat the command to install Brian 2 from the development channel.

You can also directly clone the git repository at github (https://github.com/brian-team/brian2) and then run python setup.py install or python setup.py develop or simply add the source directory to your PYTHONPATH (this will only work for Python 2.x).

Finally, another option is to use pip to directly install from github:

pip install https://github.com/brian-team/brian2/archive/master.zip

Testing Brian

If you have the nose testing utility installed, you can run Brian’s test suite:

import brian2
brian2.test()

It should end with “OK”, possibly showing a number of skipped tests but no warnings or errors. For more control over which tests are run, see the developer documentation on testing.

Release notes

Brian 2.0.1

This is a bug-fix release that fixes a number of important bugs (see below), but does not introduce any new features. We recommend that all users of Brian 2 upgrade.

As always, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

Improvements and bug fixes
  • Fix PopulationRateMonitor for recordings from subgroups (#772)
  • Fix SpikeMonitor for recordings from subgroups (#777)
  • Check that string expressions provided as the rates argument for PoissonGroup have correct units.
  • Fix compilation errors when multiple run statements with different report arguments are used in C++ standalone mode.
  • Several documentation updates and fixes
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to anyone we forgot...):

  • Myung Seok Shim
  • Pamela Hathway

Brian 2.0 (changes since 1.4)

Major new features
  • Much more flexible model definitions. The behaviour of all model elements can now be defined by arbitrary equations specified in standard mathematical notation.
  • Code generation as standard. Behind the scenes, Brian automatically generates and compiles C++ code to simulate your model, making it much faster.
  • “Standalone mode”. In this mode, Brian generates a complete C++ project tree that implements your model. This can be then be compiled and run entirely independently of Brian. This leads to both highly efficient code, as well as making it much easier to run simulations on non-standard computational hardware, for example on robotics platforms.
  • Multicompartmental modelling.
  • Python 2 and 3 support.
New features
  • Installation should now be much easier, especially if using the Anaconda Python distribution. See Installation.
  • Many improvements to Synapses which replaces the old Connection object in Brian 1. This includes: synapses that are triggered by non-spike events; synapses that target other synapses; huge speed improvements thanks to using code generation; new “generator syntax” when creating synapses is much more flexible and efficient. See Synapses.
  • New model definitions allow for much more flexible refractoriness. See Refractoriness.
  • SpikeMonitor and StateMonitor are now much more flexible, and cover a lot of what used to be covered by things like MultiStateMonitor, etc. See Recording during a simulation.
  • Multiple event types. In addition to the default spike event, you can create arbitrary events, and have these trigger code blocks (like reset) or synaptic events. See Custom events.
  • New units system allows arrays to have units. This eliminates the need for a lot of the special casing that was required in Brian 1. See Physical units.
  • Indexing variable by condition, e.g. you might write G.v['x>0'] to return all values of variable v in NeuronGroup G where the group’s variable x>0. See State variables.
  • Correct numerical integration of stochastic differential equations. See Numerical integration.
  • “Magic” run() system has been greatly simplified and is now much more transparent. In addition, if there is any ambiguity about what the user wants to run, an error will be raised rather than making a guess, which makes it much safer. There is also a new store()/restore() mechanism that simplifies restarting simulations and managing separate training/testing runs. See Running a simulation.
  • Changing an external variable between runs now works as expected, i.e. something like tau=1*ms; run(100*ms); tau=5*ms; run(100*ms). In Brian 1 this would have used tau=1*ms for both runs. More generally, in Brian 2 there is now better control over namespaces. See Namespaces.
  • New “shared” variables with a single value shared between all neurons. See Shared variables.
  • New Group.run_regularly() method for a codegen-compatible way of doing things that used to be done with network_operation() (which can still be used). See Regular operations.
  • New system for handling externally defined functions. They have to specify which units they accept in their arguments, and what they return. In addition, you can easily specify the implementation of user-defined functions in different languages for code generation. See Functions.
  • State variables can now be defined as integer or boolean values. See Equations.
  • State variables can now be exported directly to Pandas data frame. See Storing state variables.
  • New generalised “flags” system for giving additional information when defining models. See Flags.
  • TimedArray now allows for 2D arrays with arbitrary indexing. See Timed arrays.
  • Better support for using Brian in IPython/Jupyter. See, for example, start_scope().
  • New preferences system. See Preferences.
  • Random number generation can now be made reliably reproducible. See Random numbers.
  • New profiling option to see which parts of your simulation are taking the longest to run. See Profiling.
  • New logging system allows for more precise control. See Logging.
  • New ways of importing Brian for advanced Python users. See Importing Brian.
  • Improved control over the order in which objects are updated during a run. See Scheduling and custom progress reporting.
  • Users can now easily define their own numerical integration methods. See State update.
  • Support for parallel processing using the OpenMP version of standalone mode. Note that all Brian tests pass with this, but it is still considered to be experimental. See Multi-threading with OpenMP.
Backwards incompatible changes

See Detailed Brian 1 to Brian 2 conversion notes.

Behind the scenes changes
  • All user models are now passed through the code generation system. This allows us to be much more flexible about introducing new target languages for generated code to make use of non-standard computational hardware. See Code generation.
  • New standalone/device mode allows generation of a complete project tree that can be compiled and built independently of Brian and Python. This allows for even more flexible use of Brian on non-standard hardware. See Devices.
  • All objects now have a unique name, used in code generation. This can also be used to access the object through the Network object.
Contributions

Full list of all Brian 2 contributors, ordered by the time of their first contribution:

Brian 2.0 (changes since 2.0rc3)

New features
  • A new flag constant over dt can be applied to subexpressions to have them only evaluated once per timestep (see Models and neuron groups). This flag is mandatory for stateful subexpressions, e.g. expressions using rand() or randn(). (#720, #721)
Improvements and bug fixes
  • Fix EventMonitor.values() and SpikeMonitor.spike_trains() to always return sorted spike/event times (#725).
  • Respect the active attribute in C++ standalone mode (#718).
  • More consistent check of compatible time and dt values (#730).
  • Attempting to set a synaptic variable or to start a simulation with synapses without any preceding connect call now raises an error (#737).
  • Improve the performance of coordinate calculation for Morphology objects, which previously made plotting very slow for complex morphologies (#741).
  • Fix a bug in SpatialNeuron where it did not detect non-linear dependencies on v, introduced via point currents (#743).
Infrastructure and documentation improvements
  • An interactive demo, tutorials, and examples can now be run in an interactive jupyter notebook on the mybinder platform, without any need for a local Brian installation (#736). Thanks to Ben Evans for the idea and help with the implementation.
  • A new, extensive guide for users coming from Brian 1 on converting their simulations to Brian 2: Changes for Brian 1 users
  • A re-organized User’s guide, with clearer indications of which information is important for new Brian users.
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to anyone we forgot...):

  • Chaofei Hong
  • Daniel Bliss
  • Jacopo Bono
  • Ruben Tikidji-Hamburyan

Brian 2.0rc3

This is another “release candidate” for Brian 2.0 that fixes a range of bugs and introduces better support for random numbers (see below). We are getting close to the final Brian 2.0 release, the remaining work will focus on bug fixes, and better error messages and documentation.

As always, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

New features
  • Brian now comes with its own seed() function, allowing you to seed the random number generator and thereby make simulations reproducible (a short sketch follows this list). This function works for all code generation targets and in runtime and standalone mode. See Random numbers for details.
  • Brian can now export/import state variables of a group or a full network to/from a pandas DataFrame and comes with a mechanism to extend this to other formats. Thanks to Dominik Krzemiński for this contribution (see #306).
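
As a minimal sketch of the new seed() function (the group and its values are arbitrary toy choices):

from brian2 import *

seed(4321)                   # fix the state of the random number generator
G = NeuronGroup(5, 'v : 1')
G.v = 'rand()'               # now gives identical values on every script run
print(G.v[:])
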
Improvements and bug fixes
  • Use a Mersenne-Twister pseudorandom number generator in C++ standalone mode, replacing the previously used low-quality random number generator from the C standard library (see #222, #671 and #706).
  • Fix a memory leak in code running with the weave code generation target, and a smaller memory leak related to units stored repetitively in the UnitRegistry.
  • Fix a difference of one timestep in the number of simulated timesteps between runtime and standalone that could arise for very specific values of dt and t (see #695).
  • Fix standalone compilation failures with the most recent gcc version which defaults to C++14 mode (see #701)
  • Fix incorrect summation in synapses when using the (summed) flag and writing to pre-synaptic variables (see #704)
  • Make synaptic pathways work when connecting groups that define nested subexpressions, instead of failing with a cryptic error message (see #707).
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to anyone we forgot...):

  • Craig Henriquez
  • Daniel Bliss
  • David Higgins
  • Gordon Erlebacher
  • Max Gillett
  • Moritz Augustin
  • Sami Abdul-Wahid

Brian 2.0rc1

This is a bug-fix release, published only about two weeks after the previous release because that release introduced a bug that could lead to wrong integration of stochastic differential equations. Note that standard neuronal noise models were not affected by this bug; it only concerned differential equations implementing a “random walk”. The release also fixes a few other issues reported by users, see below for more information.

Improvements and bug fixes
  • Fix a regression from 2.0b4: stochastic differential equations without any non-stochastic part (e.g. dx/dt = xi/sqrt(ms)) were not integrated correctly (see #686).
  • Repeatedly calling restore() (or Network.restore()) no longer raises an error (see #681).
  • Fix an issue that made PoissonInput refuse to run after a change of dt (see #684).
  • If the rates argument of PoissonGroup is a string, it will now be evaluated at every time step instead of once at construction time. This makes time-dependent rate expressions work as expected (see #660).
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to anyone we forgot...):

  • Cian O’Donnell
  • Daniel Bliss
  • Ibrahim Ozturk
  • Olivia Gozel

Brian 2.0rc

This is a release candidate for the final Brian 2.0 release, meaning that from now on we will focus on bug fixes and documentation, without introducing new major features or changing the syntax for the user. This release candidate itself does however change a few important syntax elements, see “Backwards-incompatible changes” below.

As always, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

Major new features
  • New “generator syntax” to efficiently generate synapses (e.g. one-to-one connections), see Creating synapses for more details.
  • For synaptic connections with multiple synapses between a pair of neurons, the number of the synapse can now be stored in a variable, allowing its use in expressions and statements (see Creating synapses).
  • Synapses can now target other Synapses objects, useful for some models of synaptic modulation.
  • The Morphology object has been completely re-worked and several issues have been fixed. The new Section object allows modelling a section as a series of truncated cones (see Creating a neuron morphology).
  • Scripts with a single run() call no longer need an explicit device.build() call to run with the C++ standalone device. A set_device() at the beginning is enough and will trigger the build call after the run (see Standalone code generation).
  • All state variables within a Network can now be accessed by Network.get_states() and Network.set_states(), and the store()/restore() mechanism can now store the full state of a simulation to disk (a short sketch follows this list).
  • Stochastic differential equations with multiplicative noise can now be integrated using the Euler-Heun method (heun). Thanks to Jan-Hendrik Schleimer for this contribution.
  • Error messages have been significantly improved: errors for unit mismatches are now much clearer, and error messages triggered during the initialization phase point back to the line of code where the relevant object (e.g. a NeuronGroup) was created.
  • PopulationRateMonitor now provides a smooth_rate method for a filtered version of the stored rates.
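
As an illustration of the store()/restore() mechanism mentioned above, a simulation can be snapshotted and rewound for a second trial. A minimal sketch, with an arbitrary toy model:

from brian2 import *

group = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1')
group.v = 'rand()'
store('initialized')      # snapshot the complete network state
run(50*ms)
restore('initialized')    # rewind, e.g. to simulate a second trial
run(50*ms)
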
Improvements and bug fixes
  • In addition to the new synapse creation syntax, sparse probabilistic connections are now created much faster.
  • The time for the initialization phase at the beginning of a run() has been significantly reduced.
  • Multicompartmental simulations with a large number of compartments are now simulated more efficiently and are making better use of several processor cores when OpenMP is activated in C++ standalone mode. Thanks to Moritz Augustin for this contribution.
  • Simulations will use compiler settings that optimize performance by default.
  • Objects that have user-specified names are better supported for complex simulation scenarios (names no longer have to be unique at all times, but only across a network or across a standalone device).
  • Various fixes for compatibility with recent versions of numpy and sympy
Important backwards-incompatible changes
  • The argument names in Synapses.connect() have changed and the first argument can no longer be an array of indices. To connect based on indices, use Synapses.connect(i=source_indices, j=target_indices). See Creating synapses and the documentation of Synapses.connect() for more details.
  • The actions triggered by pre-synaptic and post-synaptic spikes are now described by the on_pre and on_post keyword arguments (instead of pre and post).
  • The Morphology object no longer allows changing attributes such as length and diameter after its creation. Complex morphologies should instead be created using the Section class, allowing for the specification of all details.
  • Morphology objects that are defined with coordinates need to provide the start point (relative to the end point of the parent compartment) as the first coordinate. See Creating a neuron morphology for more details.
  • For simulations using the C++ standalone mode, Device.build should no longer be called explicitly when using a single run() call; alternatively, use set_device() with build_on_run=False (see Standalone code generation).
Infrastructure improvements
  • Our test suite is now also run on Mac OS-X (on the Travis CI platform).
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to anyone we forgot...):

  • Chaofei Hong
  • Kees de Leeuw
  • Luke Y Prince
  • Myung Seok Shim
  • Owen Mackwood
  • Github users: @epaxon, @flinz, @mariomulansky, @martinosorb, @neuralyzer, @oleskiw, @prcastro, @sudoankit

Brian 2.0b4

This is the fourth (and probably last) beta release for Brian 2.0. This release adds a few important new features and fixes a number of bugs, so we recommend that all users of Brian 2 upgrade. If you are new to Brian, we also recommend starting directly with Brian 2 instead of using the stable release of Brian 1. Note that the new recommended way to install Brian 2 is to use the Anaconda distribution and to install the Brian 2 conda package (see Installation).

This is however still a Beta release, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

Major new features
  • In addition to the standard threshold/reset, groups can now define “custom events”. These can be recorded with the new EventMonitor (a generalization of SpikeMonitor) and Synapses can connect to these events instead of the standard spike event. See Custom events for more details.
  • SpikeMonitor and EventMonitor can now also record state variable values at the time of spikes (or custom events), thereby offering the functionality of StateSpikeMonitor from Brian 1. See Recording variables at spike time for more details.
  • The code generation modes that interact with C++ code (weave, Cython, and C++ standalone) can now be more easily configured to work with external libraries (compiler and linker options, header files, etc.). See the documentation of the cpp_prefs module for more details.
Improvements and bug fixes
  • Cython simulations no longer interfere with each other when run in parallel (thanks to Daniel Bliss for reporting and fixing this).
  • The C++ standalone now works with scalar delays and the spike queue implementation deals more efficiently with them in general.
  • Dynamic arrays are now resized more efficiently, leading to faster monitors in runtime mode.
  • The spikes generated by a SpikeGeneratorGroup can now be changed between runs using the set_spikes method.
  • Multi-step state updaters now work correctly for non-autonomous differential equations
  • PoissonInput now correctly works with multiple clocks (thanks to Daniel Bliss for reporting and fixing this)
  • The get_states method now works for StateMonitor. This method provides a convenient way to access all the data stored in the monitor, e.g. in order to store it on disk.
  • C++ compilation is now easier to get to work under Windows, see Installation for details.
Important backwards-incompatible changes
  • The custom_operation method has been renamed to run_regularly and can now be called without the need for storing its return value.
  • StateMonitor will now by default record at the beginning of a time step instead of at the end. See Recording variables continuously for details.
  • Scalar quantities now behave as python scalars with respect to in-place modifications (augmented assignments). This means that x = 3*mV; y = x; y += 1*mV will no longer increase the value of the variable x as well.
Infrastructure improvements
  • We now provide conda packages for Brian 2, making it very easy to install when using the Anaconda distribution (see Installation).
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to everyone we forgot...):

  • Daniel Bliss
  • Damien Drix
  • Rainer Engelken
  • Beatriz Herrera Figueredo
  • Owen Mackwood
  • Augustine Tan
  • Ot de Wiljes

Brian 2.0b3

This is the third beta release for Brian 2.0. This release does not add many new features, but it fixes a number of important bugs, so we recommend that all users of Brian 2 upgrade. If you are new to Brian, we also recommend starting directly with Brian 2 instead of using the stable release of Brian 1.

This is however still a Beta release, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

Major new features
  • A new PoissonInput class for efficient simulation of Poisson-distributed input events.
Improvements
  • The order of execution for pre and post statements happening in the same time step was not well defined (it fell back to the default alphabetical ordering, executing post before pre). Brian now explicitly specifies the order attribute so that pre gets executed before post (as in Brian 1). See the Synapses documentation for details.
  • The default schedule that is used can now be set via a preference (core.network.default_schedule). New automatically generated scheduling slots relative to the explicitly defined ones can be used, e.g. before_resets or after_synapses. See Scheduling for details.
  • The scipy package is no longer a dependency (note that weave for compiled C code under Python 2 is now available in a separate package). Note that multicompartmental models will still benefit from the scipy package if they are simulated in pure Python (i.e. with the numpy code generation target) – otherwise Brian 2 will fall back to a numpy-only solution which is significantly slower.
Important bug fixes
  • Fix SpikeGeneratorGroup which did not emit all the spikes under certain conditions for some code generation targets (#429)
  • Fix an incorrect update of pre-synaptic variables in synaptic statements for the numpy code generation target (#435).
  • Fix the possibility of an incorrect memory access when recording a subgroup with SpikeMonitor (#454).
  • Fix the storing of results on disk for C++ standalone on Windows – variables that had the same name when ignoring case (e.g. i and I) were overwriting each other (#455).
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to everyone we forgot...):

  • Daniel Bliss
  • Owen Mackwood
  • Ankur Sinha
  • Richard Tomsett

Brian 2.0b2

This is the second beta release for Brian 2.0; we recommend that all users of Brian 2 upgrade. If you are new to Brian, we also recommend starting directly with Brian 2 instead of using the stable release of Brian 1.

This is however still a Beta release, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

Major new features
  • Multi-compartmental simulations can now be run using the Standalone code generation mode (this is not yet well-tested, though).
  • The implementation of TimedArray now supports two-dimensional arrays, i.e. different input per neuron (or synapse, etc.), see Timed arrays for details.
  • Previously, not setting a code generation target (using the codegen.target preference) would mean that the numpy target was used. Now, the default target is auto, which means that a compiled language (weave or cython) will be used if possible. See Computational methods and efficiency for details.
  • The implementation of SpikeGeneratorGroup has been improved and it now supports a period argument to repeatedly generate a spike pattern.
Improvements
  • The selection of a numerical algorithm (if none has been specified by the user) has been simplified. See Numerical integration for details.
  • Expressions that are shared among neurons/synapses are now updated only once instead of for every neuron/synapse which can lead to performance improvements.
  • On Windows, the Microsoft Visual C++ compiler is now supported in the cpp_standalone mode, see the respective notes in the Installation and Computational methods and efficiency documents.
  • Simulation runs (using the standard “runtime” device) now collect profiling information. See Profiling for details.
Infrastructure and documentation improvements
  • Tutorials for beginners in the form of ipython notebooks (currently only covering the basics of neurons and synapses) are now available.
  • The Examples in the documentation now include the images they generated. Several examples have been adapted from Brian 1.
  • The code is now automatically tested on Windows machines, using the appveyor service. This complements the Linux testing on travis.
  • Using a version of a dependency (e.g. sympy) that we don’t support will now raise an error when you import brian2 – see Dependency checks for more details.
  • Test coverage for the cpp_standalone mode has been significantly increased.
Important bug fixes
  • The preparation time for complicated equations has been significantly reduced.
  • The string representation of small physical quantities has been corrected (#361)
  • Linking variables from a group of size 1 now works correctly (#383)
Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to everyone we forgot...):

  • Conor Cox
  • Gordon Erlebacher
  • Konstantin Mergenthaler

Brian 2.0beta

This is the first beta release for Brian 2.0 and the first version of Brian 2.0 we recommend for general use. From now on, we will try to keep changes that break existing code to a minimum. If you are new to Brian, we recommend starting with the Brian 2 beta instead of using the stable release of Brian 1.

This is however still a Beta release, please report bugs or suggestions to the github bug tracker (https://github.com/brian-team/brian2/issues) or to the brian-development mailing list (brian-development@googlegroups.com).

Major new features
  • New classes Morphology and SpatialNeuron for the simulation of Multicompartment models
  • A temporary “bridge” for brian.hears that allows using its Brian 1 version from Brian 2 (Brian Hears)
  • Cython is now a new code generation target, therefore the performance benefits of compiled code are now also available to users running simulations under Python 3.x (where scipy.weave is not available)
  • Networks can now store their current state and return to it at a later time, e.g. for simulating multiple trials starting from a fixed network state (Continuing/repeating simulations)
  • C++ standalone mode: multiple processors are now supported via OpenMP (Multi-threading with OpenMP), although this code has not yet been well tested so may be inaccurate.
  • C++ standalone mode: after a run, state variables and monitored values can be loaded from disk transparently. Most scripts therefore only need two additional lines to use standalone mode instead of Brian’s default runtime mode (Standalone code generation).
Syntax changes
  • The syntax and semantics of everything around simulation time steps, clocks, and multiple runs have been cleaned up, making reinit obsolete and also making it unnecessary for most users to explicitly generate Clock objects – instead, a dt keyword can be specified for objects such as NeuronGroup (Running a simulation)
  • The scalar flag for parameters/subexpressions has been renamed to shared
  • The “unit” for boolean variables has been renamed from bool to boolean
  • C++ standalone: several keywords of CPPStandaloneDevice.build have been renamed
  • The preferences are now accessible via prefs instead of brian_prefs
  • The runner method has been renamed to custom_operation
Bug fixes

57 github issues have been closed since the alpha release, of which 26 had been labeled as bugs. We recommend that all users of Brian 2 upgrade.

Contributions

Code and documentation contributions (ordered by the number of commits):

Testing, suggestions and bug reports (ordered alphabetically, apologies to everyone we forgot…):

  • Guillaume Bellec
  • Victor Benichoux
  • Laureline Logiaco
  • Konstantin Mergenthaler
  • Maurizio De Pitta
  • Jan-Hendrik Schleimer
  • Douglas Sterling
  • Katharina Wilmes

Changes for Brian 1 users

In most cases, Brian 2 works in a very similar way to Brian 1, but there are some important differences to be aware of. The major distinction is that in Brian 2 you need to be more explicit about the definition of your simulation in order to avoid inadvertent errors. In some cases you will now get a warning, in others even an error – often the error/warning message describes a way to resolve the issue.

Specific examples of how to convert code from Brian 1 can be found in the document Detailed Brian 1 to Brian 2 conversion notes.

Physical units

The unit system now extends to arrays, e.g. np.arange(5) * mV will retain the units of volts and not discard them as Brian 1 did. Brian 2 is therefore also more strict in checking the units. For example, if the state variable v uses the unit of volt, the statement G.v = np.random.rand(len(G)) / 1000. will now raise an error. For consistency, units are returned everywhere, e.g. in monitors. If mon records a state variable v, mon.t will return a time in seconds and mon.v the stored values of v in units of volts.

If you need a pure numpy array without units for further processing, there are several options: if it is a state variable or a recorded variable in a monitor, appending an underscore will refer to the variable values without units, e.g. mon.t_ returns pure floating point values. Alternatively, you can remove units by dividing by the unit (e.g. mon.t / second) or by explicitly converting it (np.asarray(mon.t)).
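For example (a short sketch, with arbitrary values):

import numpy as np
from brian2 import *

values = np.arange(5) * mV          # a quantity array carrying units of volt
print(values)                       # prints the values together with their unit

unitless = values / mV              # remove units by dividing by the unit
also_unitless = np.asarray(values)  # or by explicit conversion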

Here’s an overview showing a few expressions and their respective values in Brian 1 and Brian 2:

Expression | Brian 1 | Brian 2
1 * mV | 1.0 * mvolt | 1.0 * mvolt
np.array(1) * mV | 0.001 | 1.0 * mvolt
np.array([1]) * mV | array([ 0.001]) | array([1.]) * mvolt
np.mean(np.arange(5) * mV) | 0.002 | 2.0 * mvolt
np.arange(2) * mV | array([ 0. , 0.001]) | array([ 0., 1.]) * mvolt
(np.arange(2) * mV) >= 1 * mV | array([False, True], dtype=bool) | array([False, True], dtype=bool)
(np.arange(2) * mV)[0] >= 1 * mV | False | False
(np.arange(2) * mV)[1] >= 1 * mV | DimensionMismatchError | True

Unported packages

The following packages have not (yet) been ported to Brian 2. If your simulation critically depends on them, you should consider staying with Brian 1 for now.

  • brian.tools
  • brian.hears (the Brian 1 version can be used via brian2.hears, though, see Brian Hears)
  • brian.library.modelfitting
  • brian.library.electrophysiology

Removed classes/functions and their replacements

In Brian 2, we have tried to keep the number of classes/functions to a minimum, while making each of them flexible enough to encompass a large number of use cases. A lot of the classes and functions that existed in Brian 1 have therefore been removed. The following table lists (most of) the classes that existed in Brian 1 but no longer exist in Brian 2. You can consult it when you get a NameError while converting an existing script from Brian 1. The third column links to a document with further explanation and the second column gives either:

  1. the equivalent class in Brian 2 (e.g. StateMonitor can record multiple variables now and therefore replaces MultiStateMonitor);
  2. the name of a Brian 2 class in square brackets (e.g. [Synapses] for STDP), this means that the class can be used as a replacement but needs some additional code (e.g. explicitly specified STDP equations). The “More details” document should help you in making the necessary changes;
  3. “string expression”, if the functionality of a previously existing class can be expressed using the general string expression framework (e.g. threshold=VariableThreshold('Vt', 'V') can be replaced by threshold='V > Vt');
  4. a link to the relevant github issue if no equivalent class/function exists so far in Brian 2;
  5. a remark such as “obsolete” if the particular class/function is no longer needed.

Brian 1 | Brian 2 | More details
AdEx | [Equations] | Library models (Brian 1 –> 2 conversion)
aEIF | [Equations] | Library models (Brian 1 –> 2 conversion)
AERSpikeMonitor | #298 | Monitors (Brian 1 –> 2 conversion)
alpha_conductance | [Equations] | Library models (Brian 1 –> 2 conversion)
alpha_current | [Equations] | Library models (Brian 1 –> 2 conversion)
alpha_synapse | [Equations] | Library models (Brian 1 –> 2 conversion)
AutoCorrelogram | [SpikeMonitor] | Monitors (Brian 1 –> 2 conversion)
biexpr_conductance | [Equations] | Library models (Brian 1 –> 2 conversion)
biexpr_current | [Equations] | Library models (Brian 1 –> 2 conversion)
biexpr_synapse | [Equations] | Library models (Brian 1 –> 2 conversion)
Brette_Gerstner | [Equations] | Library models (Brian 1 –> 2 conversion)
CoincidenceCounter | [SpikeMonitor] | Monitors (Brian 1 –> 2 conversion)
CoincidenceMatrixCounter | [SpikeMonitor] | Monitors (Brian 1 –> 2 conversion)
Compartments | #443 | Multicompartmental models (Brian 1 –> 2 conversion)
Connection | Synapses | Synapses (Brian 1 –> 2 conversion)
Current | #443 | Multicompartmental models (Brian 1 –> 2 conversion)
CustomRefractoriness | [string expression] | Neural models (Brian 1 –> 2 conversion)
DefaultClock | Clock | Networks and clocks (Brian 1 –> 2 conversion)
EmpiricalThreshold | string expression | Neural models (Brian 1 –> 2 conversion)
EventClock | Clock | Networks and clocks (Brian 1 –> 2 conversion)
exp_conductance | [Equations] | Library models (Brian 1 –> 2 conversion)
exp_current | [Equations] | Library models (Brian 1 –> 2 conversion)
exp_IF | [Equations] | Library models (Brian 1 –> 2 conversion)
exp_synapse | [Equations] | Library models (Brian 1 –> 2 conversion)
FileSpikeMonitor | #298 | Monitors (Brian 1 –> 2 conversion)
FloatClock | Clock | Networks and clocks (Brian 1 –> 2 conversion)
FunReset | [string expression] | Neural models (Brian 1 –> 2 conversion)
FunThreshold | [string expression] | Neural models (Brian 1 –> 2 conversion)
hist_plot | no equivalent | –
HomogeneousPoissonThreshold | string expression | Neural models (Brian 1 –> 2 conversion)
IdentityConnection | Synapses | Synapses (Brian 1 –> 2 conversion)
IonicCurrent | #443 | Multicompartmental models (Brian 1 –> 2 conversion)
ISIHistogramMonitor | [SpikeMonitor] | Monitors (Brian 1 –> 2 conversion)
Izhikevich | [Equations] | Library models (Brian 1 –> 2 conversion)
K_current_HH | [Equations] | Library models (Brian 1 –> 2 conversion)
leak_current | [Equations] | Library models (Brian 1 –> 2 conversion)
leaky_IF | [Equations] | Library models (Brian 1 –> 2 conversion)
MembraneEquation | #443 | Multicompartmental models (Brian 1 –> 2 conversion)
MultiStateMonitor | StateMonitor | Monitors (Brian 1 –> 2 conversion)
Na_current_HH | [Equations] | Library models (Brian 1 –> 2 conversion)
NaiveClock | Clock | Networks and clocks (Brian 1 –> 2 conversion)
NoReset | obsolete | Neural models (Brian 1 –> 2 conversion)
NoThreshold | obsolete | Neural models (Brian 1 –> 2 conversion)
OfflinePoissonGroup | [SpikeGeneratorGroup] | Inputs (Brian 1 –> 2 conversion)
OrnsteinUhlenbeck | [Equations] | Library models (Brian 1 –> 2 conversion)
perfect_IF | [Equations] | Library models (Brian 1 –> 2 conversion)
PoissonThreshold | string expression | Neural models (Brian 1 –> 2 conversion)
PopulationSpikeCounter | SpikeMonitor | Monitors (Brian 1 –> 2 conversion)
PulsePacket | [SpikeGeneratorGroup] | Inputs (Brian 1 –> 2 conversion)
quadratic_IF | [Equations] | Library models (Brian 1 –> 2 conversion)
raster_plot | plot_raster (brian2tools) | brian2tools documentation
RecentStateMonitor | no direct equivalent | Monitors (Brian 1 –> 2 conversion)
Refractoriness | string expression | Neural models (Brian 1 –> 2 conversion)
RegularClock | Clock | Networks and clocks (Brian 1 –> 2 conversion)
Reset | string expression | Neural models (Brian 1 –> 2 conversion)
SimpleCustomRefractoriness | [string expression] | Neural models (Brian 1 –> 2 conversion)
SimpleFunThreshold | [string expression] | Neural models (Brian 1 –> 2 conversion)
SpikeCounter | SpikeMonitor | Monitors (Brian 1 –> 2 conversion)
StateHistogramMonitor | [StateMonitor] | Monitors (Brian 1 –> 2 conversion)
StateSpikeMonitor | SpikeMonitor | Monitors (Brian 1 –> 2 conversion)
STDP | [Synapses] | Synapses (Brian 1 –> 2 conversion)
STP | [Synapses] | Synapses (Brian 1 –> 2 conversion)
StringReset | string expression | Neural models (Brian 1 –> 2 conversion)
StringThreshold | string expression | Neural models (Brian 1 –> 2 conversion)
Threshold | string expression | Neural models (Brian 1 –> 2 conversion)
VanRossumMetric | [SpikeMonitor] | Monitors (Brian 1 –> 2 conversion)
VariableReset | string expression | Neural models (Brian 1 –> 2 conversion)
VariableThreshold | string expression | Neural models (Brian 1 –> 2 conversion)

List of detailed instructions

Detailed Brian 1 to Brian 2 conversion notes

These documents are only relevant for former users of Brian 1. If you do not have any Brian 1 code to convert, go directly to the main User’s guide.

Neural models (Brian 1 –> 2 conversion)

The syntax for specifying neuron models in a NeuronGroup has changed in several details. In general, a string-based syntax (which was already optional in Brian 1) consistently replaces the use of classes (e.g. VariableThreshold) or guessing (e.g. which variable does threshold=50*mV check?).

Threshold and Reset

String-based thresholds are now the only possible option and replace all the methods of defining threshold/reset in Brian 1:

Brian 1:

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold=-50*mV,
                    reset=-70*mV)

Brian 2:

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold='v > -50*mV',
                    reset='v = -70*mV')

Brian 1:

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold=Threshold(-50*mV, state='v'),
                    reset=Reset(-70*mV, state='v'))

Brian 2:

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold='v > -50*mV',
                    reset='v = -70*mV')

Brian 1:

group = NeuronGroup(N, '''dv/dt = -v / tau : volt
                          dvt/dt = -vt / tau : volt
                          vr : volt''',
                    threshold=VariableThreshold(state='v',
                                                threshold_state='vt'),
                    reset=VariableReset(state='v',
                                        resetvaluestate='vr'))

Brian 2:

group = NeuronGroup(N, '''dv/dt = -v / tau : volt
                          dvt/dt = -vt / tau : volt
                          vr : volt''',
                    threshold='v > vt',
                    reset='v = vr')

Brian 1:

group = NeuronGroup(N, 'rate : Hz',
                    threshold=PoissonThreshold(state='rate'))

Brian 2:

group = NeuronGroup(N, 'rate : Hz',
                    threshold='rand() < rate*dt')

There’s no direct equivalent to the “functional threshold/reset” mechanism from Brian 1. In simple cases, it can be implemented using the general string expression/statement mechanism (note that in Brian 1, reset=myreset is equivalent to reset=FunReset(myreset)):

Brian 1:

def myreset(P, spikes):
    P.v_[spikes] = -70*mV + rand(len(spikes))*5*mV

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold=-50*mV,
                    reset=myreset)

Brian 2:

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold='v > -50*mV',
                    reset='v = -70*mV + rand()*5*mV')

Brian 1:

def mythreshold(v):
    return (v > -50*mV) & (rand(N) > 0.5)

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold=SimpleFunThreshold(mythreshold,
                                                 state='v'),
                    reset=-70*mV)

Brian 2:

group = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                    threshold='v > -50*mV and rand() > 0.5',
                    reset='v = -70*mV')

For more complicated cases, you can use the general mechanism for User-provided functions that Brian 2 provides. The only caveat is that you’d have to provide an implementation of the function in the code generation target language, which is by default C++ or Cython. However, in the default Runtime code generation mode, you can choose different code generation targets for different parts of your simulation. You can thus switch the code generation target for the threshold/reset mechanism to numpy while leaving the default target for the rest of the simulation in place. The details of this process and the correct definition of the functions (e.g. global_reset needs a “dummy” return value) are somewhat cumbersome at the moment, and we plan to make them more straightforward in the future. Also note that if you use this kind of mechanism extensively, you’ll lose all the performance advantage that Brian 2’s code generation mechanism provides (in addition to not being able to use the Standalone code generation mode at all).

Brian 1:

def single_threshold(v):
    # Only let a single neuron spike
    crossed_threshold = np.nonzero(v > -50*mV)[0]
    should_spike = np.zeros(len(P), dtype=np.bool)
    if len(crossed_threshold):
        choose = np.random.randint(len(crossed_threshold))
        should_spike[crossed_threshold[choose]] = True
    return should_spike

def global_reset(P, spikes):
    # Reset everything
    if len(spikes):
        P.v_[:] = -70*mV

neurons = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                      threshold=SimpleFunThreshold(single_threshold,
                                                   state='v'),
                      reset=global_reset)

Brian 2:

@check_units(v=volt, result=bool)
def single_threshold(v):
    pass  # ... (identical to Brian 1)

@check_units(spikes=1, result=1)
def global_reset(spikes):
    # Reset everything
    if len(spikes):
        neurons.v_[:] = -0.070

neurons = NeuronGroup(N, 'dv/dt = -v / tau : volt',
                      threshold='single_threshold(v)',
                      reset='dummy = global_reset(i)')
# Set the code generation target for threshold/reset only:
neurons.thresholder['spike'].codeobj_class = NumpyCodeObject
neurons.resetter['spike'].codeobj_class = NumpyCodeObject

For an example of how to translate EmpiricalThreshold, see the section on “Refractoriness” below.

Refractoriness

For a detailed description of Brian 2’s refractoriness mechanism see Refractoriness.

In Brian 1, refractoriness was tightly linked with the reset mechanism and some combinations of refractoriness and reset were not allowed. The standard refractory mechanism had two effects during the refractoriness: it prevented the refractory cell from spiking and it clamped a state variable (normally the membrane potential of the cell). In Brian 2, refractoriness is independent of reset and the two effects are specified separately: the refractory keyword specifies the time (or an expression evaluating to a time) during which the cell does not spike, and the (unless refractory) flag marks one or more variables to be clamped during the refractory period. To correctly translate the standard refractory mechanism from Brian 1, you’ll therefore need to specify both:

Brian 1:

group = NeuronGroup(N, 'dv/dt = (I - v)/tau : volt',
                    threshold=-50*mV,
                    reset=-70*mV,
                    refractory=3*ms)

Brian 2:

group = NeuronGroup(N, 'dv/dt = (I - v)/tau : volt (unless refractory)',
                    threshold='v > -50*mV',
                    reset='v = -70*mV',
                    refractory=3*ms)

More complex refractoriness mechanisms based on SimpleCustomRefractoriness and CustomRefractoriness can be translated using string expressions or user-defined functions, see the remarks in the preceding section on “Threshold and Reset”.

Brian 2 no longer has an equivalent to the EmpiricalThreshold class (which detects the first threshold crossing but then ignores all following threshold crossings for a certain time). However, the standard refractoriness mechanism can be used to implement the same behaviour, since it does not reset/clamp any value unless explicitly asked to (which would be fatal for Hodgkin-Huxley type models):

Brian 1:

group = NeuronGroup(N, '''
                    dv/dt = (I_L - I_Na - I_K + I)/Cm : volt
                    ...''',
                    threshold=EmpiricalThreshold(threshold=20*mV,
                                                 refractory=1*ms,
                                                 state='v'))

Brian 2:

group = NeuronGroup(N, '''
                    dv/dt = (I_L - I_Na - I_K + I)/Cm : volt
                    ...''',
                    threshold='v > -20*mV',
                    refractory=1*ms)

Subgroups

The NeuronGroup class in Brian 2 no longer provides a subgroup method; the only way to construct subgroups is therefore the slicing syntax (which works in the same way as in Brian 1):

Brian 1:

group = NeuronGroup(4000, ...)
group_exc = group.subgroup(3200)
group_inh = group.subgroup(800)

Brian 2:

group = NeuronGroup(4000, ...)
group_exc = group[:3200]
group_inh = group[3200:]

Linked Variables

For a description of Brian 2’s mechanism to link variables between groups, see Linked variables.

Linked variables need to be explicitly annotated with the (linked) flag in Brian 2:

Brian 1:

group1 = NeuronGroup(N,
                     'dv/dt = -v / tau : volt')
group2 = NeuronGroup(N,
                     '''dv/dt = (-v + w) / tau : volt
                        w : volt''')
group2.w = linked_var(group1, 'v')

Brian 2:

group1 = NeuronGroup(N,
                     'dv/dt = -v / tau : volt')
group2 = NeuronGroup(N,
                     '''dv/dt = (-v + w) / tau : volt
                        w : volt (linked)''')
group2.w = linked_var(group1, 'v')

Synapses (Brian 1 –> 2 conversion)
Converting Brian 1’s Connection class

In Brian 2, the Synapses class is the only class to model synaptic connections; you will therefore have to convert all uses of Brian 1’s Connection class. The Connection class increases a post-synaptic variable by a certain amount (the “synaptic weight”) each time a pre-synaptic spike arrives. This has to be explicitly specified when using the Synapses class; the equivalent to the basic Connection usage is:

Brian 1:

conn = Connection(source, target, 'ge')

Brian 2:

conn = Synapses(source, target, 'w : siemens',
                on_pre='ge += w')

Note that the variable w, which stores the synaptic weight, has to have the same units as the post-synaptic variable (in this case: ge) that it increases.

Creating synapses and setting weights

With the Connection class, creating a synapse and setting its weight is a single process whereas with the Synapses class those two steps are separate. There is no direct equivalent to the convenience functions connect_full, connect_random and connect_one_to_one, but you can easily implement the same functionality with the general mechanism of Synapses.connect():

Brian 1 Brian 2
conn1 = Connection(source, target, 'ge')
conn1[3, 5] = 3*nS
conn1 = Synapses(source, target, 'w: siemens',
                 on_pre='ge += w')
conn1.connect(i=3, j=5)
conn1.w[3, 5] = 3*nS  # (or conn1.w = 3*nS)
conn2 = Connection(source, target, 'ge')
conn2.connect_full(source, target, 5*nS)
conn2 = ... # see above
conn2.connect()
conn2.w = 5*nS
conn3 = Connection(source, target, 'ge')
conn3.connect_random(source, target,
                     sparseness=0.02,
                     weight=2*nS)
conn3 = ... # see above
conn3.connect(p=0.02)
conn3.w = 2*nS
conn4 = Connection(source, target, 'ge')
conn4.connect_one_to_one(source, target,
                         weight=4*nS)
conn4 = ... # see above
conn4.connect(j='i')
conn4.w = 4*nS
conn5 = IdentityConnection(source, target,
                           weight=3*nS)
conn5 = Synapses(source, target,
                 'w : siemens (shared)')
conn5.w = 3*nS
Weight matrices

Brian 2’s Synapses class does not support setting the weights of a neuron with a weight matrix. However, Synapses.connect() creates the synapses in a predictable order (first all synapses for the first pre-synaptic cell, then all synapses for the second pre-synaptic cell, etc.), so a reshaped “flat” weight matrix can be used:

Brian 1 Brian 2
# len(source) == 20, len(target) == 30
conn6 = Connection(source, target, 'ge')
W = rand(20, 30)*nS
conn6.connect(source, target, weight=W)
# len(source) == 20, len(target) == 30
conn6 = Synapses(source, target, 'w : siemens',
                 on_pre='ge += w')
W = rand(20, 30)*nS
conn6.connect()
conn6.w = W.flatten()

However, note that if your weight matrix can be described mathematically (e.g. random as in the example above), then you should not create a weight matrix in the first place but use Brian 2’s mechanism to set variables based on mathematical expressions (in the above case: conn6.w = 'rand()*nS'). Especially for big connection matrices this will have better performance, since it will be executed in generated code. You should only resort to explicit weight matrices when there is no alternative (e.g. to load weights from previous simulations).
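
If you do need to carry explicit weights over between simulations, a minimal sketch of saving and restoring them might look as follows (the file name is illustrative; the stored values are in SI units, i.e. siemens):

from brian2 import siemens
from numpy import save, load

# at the end of the first simulation: store the flat weight array
save('weights.npy', conn6.w[:])

# in a later script, after re-creating the same connectivity
# (conn6.connect() as above), load the weights back:
conn6.w = load('weights.npy')*siemens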

In Brian 1, you can restrict the functions connect, connect_random, etc. to subgroups. Again, there is no direct equivalent to this in Brian 2, but the general string syntax allows you to make connections conditional on logical statements that refer to the pre-/post-synaptic indices and can therefore also be used to restrict the connection to a subgroup of cells. When you set the synaptic weights, you can however use subgroups to restrict the subset of weights you want to set.

Brian 1 Brian 2
conn7 = Connection(source, target, 'ge')
conn7.connect_full(source[:5], target[5:10], 5*nS)
conn7 = Synapses(source, target, 'w : siemens',
                 on_pre='ge += w')
conn7.connect('i < 5 and j >= 5 and j < 10')
# Alternative (more efficient):
# conn7.connect(j='k for k in range(5, 10) if i < 5')
conn7.w[source[:5], target[5:10]] = 5*nS
Connections defined by functions

Brian 1 allowed you to pass in a function as the value for the weight argument in a connect call (and also for the sparseness argument in connect_random). You should be able to replace such use cases by the general, string-expression based method:

Brian 1 Brian 2
conn8 = Connection(source, target, 'ge')
conn8.connect_full(source, target,
                   weight=lambda i,j:(1+cos(i-j))*2*nS)
conn8 = Synapses(source, target, 'w : siemens',
                 on_pre='ge += w')
conn8.connect()
conn8.w = '(1 + cos(i - j))*2*nS'
conn9 = Connection(source, target, 'ge')
conn9.connect_random(source, target,
                     sparseness=0.02,
                     weight=lambda:rand()*nS)
conn9 = ... # see above
conn9.connect(p=0.02)
conn9.w = 'rand()*nS'
conn10 = Connection(source, target, 'ge')
conn10.connect_random(source, target,
                      sparseness=lambda i,j:exp(-abs(i-j)*.1),
                      weight=2*nS)
conn10 = ... # see above
conn10.connect(p='exp(-abs(i - j)*.1)')
conn10.w = 2*nS
Delays

The specification of delays changed in several aspects from Brian 1 to Brian 2: In Brian 1, delays were homogeneous by default, and heterogeneous delays had to be marked by delay=True, together with the specification of the maximum delay. In Brian 2, heterogeneous delays are the default and you do not have to state the maximum delay. Brian 1’s syntax of specifying a pair of values to get randomly distributed delays in that range is no longer supported; instead, use Brian 2’s standard string syntax:

Brian 1 Brian 2
conn11 = Connection(source, target, 'ge', delay=True,
                    max_delay=5*ms)
conn11.connect_full(source, target, weight=3*nS,
                    delay=(0*ms, 5*ms))
conn11 = Synapses(source, target, 'w : siemens',
                  on_pre='ge += w')
conn11.connect()
conn11.w = 3*nS
conn11.delay = 'rand()*5*ms'
Modulation

In Brian 2, there’s no need for the modulation keyword that Brian 1 offered; you can describe the modulation as part of the on_pre action:

Brian 1 Brian 2
conn12 = Connection(source, target, 'ge',
                    modulation='u')
conn12 = Synapses(source, target, 'w : siemens',
                  on_pre='ge += w * u_pre')
Structure

There’s no equivalent of Brian 1’s structure keyword in Brian 2; synapses are always stored in a sparse data structure. There is currently no support for changing synapses at run time (i.e. the “dynamic” structure of Brian 1).

Converting Brian 1’s Synapses class

Brian 2’s Synapses class works for the most part like the class of the same name in Brian 1. There are however some differences in details, listed below:

Synaptic models

The basic syntax to define a synaptic model is unchanged, but the keywords pre and post have been renamed to on_pre and on_post, respectively.

Brian 1 Brian 2
stdp_syn = Synapses(inputs, neurons, model='''
                    w:1
                    dApre/dt = -Apre/taupre : 1 (event-driven)
                    dApost/dt = -Apost/taupost : 1 (event-driven)''',
                    pre='''ge += w
                           Apre += delta_Apre
                           w = clip(w + Apost, 0, gmax)''',
                    post='''Apost += delta_Apost
                            w = clip(w + Apre, 0, gmax)''')
stdp_syn = Synapses(inputs, neurons, model='''
                    w:1
                    dApre/dt = -Apre/taupre : 1 (event-driven)
                    dApost/dt = -Apost/taupost : 1 (event-driven)''',
                    on_pre='''ge += w
                              Apre += delta_Apre
                              w = clip(w + Apost, 0, gmax)''',
                    on_post='''Apost += delta_Apost
                               w = clip(w + Apre, 0, gmax)''')
Lumped variables (summed variables)

The syntax to define lumped variables (we use the term “summed variables” in Brian 2) has been changed: instead of assigning the synaptic variable to the neuronal variable you’ll have to include the summed variable in the synaptic equations with the flag (summed):

Brian 1 Brian 2
# a non-linear synapse (e.g. NMDA)
neurons = NeuronGroup(1, model='''
                      dv/dt = (gtot - v)/(10*ms) : 1
                      gtot : 1''')
syn = Synapses(inputs, neurons,
               model='''
               dg/dt = -a*g+b*x*(1-g) : 1
               dx/dt = -c*x : 1
               w : 1 # synaptic weight''',
               pre='x += w')
neurons.gtot = syn.g
# a non-linear synapse (e.g. NMDA)
neurons = NeuronGroup(1, model='''
                      dv/dt = (gtot - v)/(10*ms) : 1
                      gtot : 1''')
syn = Synapses(inputs, neurons,
               model='''
               dg/dt = -a*g+b*x*(1-g) : 1
               dx/dt = -c*x : 1
               w : 1 # synaptic weight
               gtot_post = g : 1 (summed)''',
               on_pre='x += w')
Creating synapses

In Brian 1, synapses were created by assigning True or an integer (the number of synapses) to an indexed Synapses object. In Brian 2, all synapse creation goes through the Synapses.connect() function. For examples of how to create more complex connection patterns, see the section on translating Connection objects above.

Brian 1 Brian 2
syn = Synapses(...)
# single synapse
syn[3, 5] = True
syn = Synapses(...)
# single synapse
syn.connect(i=3, j=5)
# all-to-all connections
syn[:, :] = True
# all-to-all connections
syn.connect()
# all to neuron number 1
syn[:, 1] = True
# all to neuron number 1
syn.connect(j='1')
# multiple synapses
syn[4, 7] = 3
# multiple synapses
syn.connect(i=4, j=7, n=3)
# connection probability 2%
syn[:, :] = 0.02
# connection probability 2%
syn.connect(p=0.02)
Multiple pathways

Like Brian 1, Brian 2 supports multiple pre- or post-synaptic pathways, with separate pre-/post-codes and delays. In Brian 1, you had to specify the pathways as tuples and could later access them individually by their index. In Brian 2, you specify the pathways as a dictionary, i.e. by giving them individual names which you can then later use to access them (the default pathways are called pre and post):

Brian 1 Brian 2
S = Synapses(...,
             pre=('ge += w',
                  '''w = clip(w + Apost, 0, inf)
                     Apre += delta_Apre'''),
             post='''Apost += delta_Apost
                     w = clip(w + Apre, 0, inf)''')

S[:, :] = True
S.delay[1][:, :] = 3*ms # delayed trace
S = Synapses(...,
             on_pre={'pre_transmission':
                     'ge += w',
                     'pre_plasticity':
                     '''w = clip(w + Apost, 0, inf)
                        Apre += delta_Apre'''},
             on_post='''Apost += delta_Apost
                        w = clip(w + Apre, 0, inf)''')

S.connect()
S.pre_plasticity.delay[:, :] = 3*ms # delayed trace
Monitoring synaptic variables

Both in Brian 1 and Brian 2, you can record the values of synaptic variables with a StateMonitor. You no longer have to call an explicit indexing function, but you can directly provide an appropriately indexed Synapses object. You can now also use the same technique to index the StateMonitor object to get the recorded values, see the respective section in the Synapses documentation for details.

Brian 1 Brian 2
syn = Synapses(...)
# record all synapses targeting neuron 3
indices = syn.synapse_index((slice(None), 3))
mon = StateMonitor(syn, 'w', record=indices)
syn = Synapses(...)
# record all synapses targeting neuron 3
mon = StateMonitor(syn, 'w', record=syn[:, 3])
Inputs (Brian 1 –> 2 conversion)
Poisson Input

Brian 2 provides the same two groups that Brian 1 provided: PoissonGroup and PoissonInput. The mechanism for inhomogeneous Poisson processes has changed: instead of providing a Python function of time, you’ll now have to provide a string expression that is evaluated at every time step. For most use cases, this should allow a direct translation:

Brian 1 Brian 2
rates = lambda t:(1+cos(2*pi*t*1*Hz))*10*Hz
group = PoissonGroup(100, rates=rates)
rates = '(1 + cos(2*pi*t*1*Hz))*10*Hz'
group = PoissonGroup(100, rates=rates)

For more complex rate modulations, the expression can refer to User-provided functions and/or you can replace the PoissonGroup by a general NeuronGroup with a threshold condition rand()<rates*dt (which allows you to store per-neuron attributes).
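
For instance, a minimal sketch of such a NeuronGroup-based replacement with a per-neuron rate attribute (the rate values are illustrative):

from brian2 import *

# each neuron spikes as a Poisson process with its own rate
group = NeuronGroup(100, 'rates : Hz', threshold='rand() < rates*dt')
group.rates = linspace(1, 100, 100)*Hz  # per-neuron attribute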

There is currently no direct replacement for the more advanced features of PoissonInput (record, freeze, copies, jitter, and reliability keywords), but various workarounds are possible, e.g. by directly using a BinomialFunction in the equations. For example, you can get the functionality of the freeze keyword (identical Poisson events for all neurons) by storing the input in a shared variable and then distributing it to all neurons:

Brian 1 Brian 2
group = NeuronGroup(10,
                    'dv/dt = -v/(10*ms) : 1')
input = PoissonInput(group, N=1000, rate=1*Hz,
                     weight=0.1, state='v',
                     freeze=True)
group = NeuronGroup(10, '''dv/dt = -v / (10*ms) : 1
                           shared_input : 1 (shared)''')
poisson_input = BinomialFunction(n=1000, p=1*Hz*group.dt)
group.run_regularly('''shared_input = poisson_input()*0.1
                       v += shared_input''')
Spike generation

SpikeGeneratorGroup provides mostly the same functionality as in Brian 1. In contrast to Brian 1, there is only one way to specify which neurons spike and when – you have to provide the index array and the times array as separate arguments:

Brian 1 Brian 2
gen1 = SpikeGeneratorGroup(2, [(0, 0*ms), (1, 1*ms)])
gen2 = SpikeGeneratorGroup(2, [(array([0, 1]), 0*ms),
                               (array([0, 1]), 1*ms)])
gen3 = SpikeGeneratorGroup(2, (array([0, 1]),
                               array([0, 1])*ms))
gen4 = SpikeGeneratorGroup(2, array([[0, 0.0],
                                     [1, 0.001]]))
gen1 = SpikeGeneratorGroup(2, [0, 1], [0, 1]*ms)
gen2 = SpikeGeneratorGroup(2, [0, 1, 0, 1],
                           [0, 0, 1, 1]*ms)
gen3 = SpikeGeneratorGroup(2, [0, 1], [0, 1]*ms)

gen4 = SpikeGeneratorGroup(2, [0, 1], [0, 1]*ms)

Note

For large arrays, make sure to provide a Quantity array (e.g. [0, 1, 2]*ms) and not a list of Quantity values (e.g. [0*ms, 1*ms, 2*ms]). A list first has to be converted into an array, which can take a considerable amount of time for a list with many elements.

There is no direct equivalent of the Brian 1 option to use a generator that updates spike times online. The easiest alternative in Brian 2 is to pre-calculate the spikes and then use a standard SpikeGeneratorGroup. If this is not possible (e.g. there are too many spikes to fit in memory), then you can work around the restriction by using custom code (see User-provided functions and Arbitrary Python code (network operations)).
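
For instance, a sketch of pre-calculating homogeneous Poisson spikes with numpy and passing them to a SpikeGeneratorGroup (all numbers are illustrative):

from brian2 import *
from numpy import sort
from numpy.random import rand, randint

N = 10; rate = 50*Hz; duration = 1*second
n_spikes = int(rate*duration*N)        # expected total number of spikes
times = sort(rand(n_spikes))*duration  # random times in [0, duration)
indices = randint(0, N, n_spikes)      # random source neurons
# note: two spikes of the same neuron must not fall into the same time step
gen = SpikeGeneratorGroup(N, indices, times)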

Arbitrary time-dependent input (TimedArray)

For a detailed description of the TimedArray mechanism in Brian 2, see Timed arrays.

In Brian 1, timed arrays were special objects that could be assigned to a state variable and would then be used to update this state variable at every time step. In Brian 2, a timed array is implemented using the standard Functions mechanism, which has the advantage that more complex access patterns can be implemented (e.g. by not using t as an argument, but something like t - delay). This syntax was possible in Brian 1 as well, but was disadvantageous for performance and had other limits (e.g. no unit support, no linear integration). In Brian 2, these disadvantages no longer apply and the function syntax is therefore the only available syntax. You can convert the old-style Brian 1 syntax to Brian 2 as follows:

Warning

The example below does not correctly translate the changed semantics of TimedArray related to the time. In Brian 1, TimedArray([0, 1, 2], dt=10*ms) will return 0 for t<5*ms, 1 for 5*ms<=t<15*ms, and 2 for t>=15*ms. Brian 2 will return 0 for t<10*ms, 1 for 10*ms<=t<20*ms, and 2 for t>=20*ms.

Brian 1 Brian 2
# same input for all neurons
eqs = '''
      dv/dt = (I - v)/tau : volt
      I : volt
      '''
group = NeuronGroup(1, model=eqs,
                    reset=0*mV, threshold=15*mV)
group.I = TimedArray(linspace(0*mV, 20*mV, 100),
                     dt=10*ms)
# same input for all neurons
I = TimedArray(linspace(0*mV, 20*mV, 100),
               dt=10*ms)
eqs = '''
      dv/dt = (I(t) - v)/tau : volt
      '''
group = NeuronGroup(1, model=eqs,
                    reset='v = 0*mV',
                    threshold='v > 15*mV')
# neuron-specific input
eqs = '''
      dv/dt = (I - v)/tau : volt
      I : volt
      '''
group = NeuronGroup(5, model=eqs,
                    reset=0*mV, threshold=15*mV)
values = (linspace(0*mV, 20*mV, 100)[:, None] *
          linspace(0, 1, 5))
group.I = TimedArray(values, dt=10*ms)
# neuron-specific input
values = (linspace(0*mV, 20*mV, 100)[:, None] *
          linspace(0, 1, 5))
I = TimedArray(values, dt=10*ms)
eqs = '''
      dv/dt = (I(t, i) - v)/tau : volt
      '''
group = NeuronGroup(5, model=eqs,
                    reset='v = 0*mV',
                    threshold='v > 15*mV')
Monitors (Brian 1 –> 2 conversion)
Monitoring spiking activity

The main class to record spiking activity is SpikeMonitor, which is created in the same way as in Brian 1. However, the internal storage and retrieval of spikes is different. In Brian 1, spikes were stored as a list of pairs (i, t), the index and time of each spike. In Brian 2, spikes are stored as two arrays i and t, storing the indices and times. You can access these arrays as attributes of the monitor; there’s also a convenience attribute it that returns both at the same time. The following table shows how the spike indices and times can be retrieved in various forms in Brian 1 and Brian 2:

Brian 1 Brian 2
mon = SpikeMonitor(group)
#... do the run
list_of_pairs = mon.spikes
index_list, time_list = zip(*list_of_pairs)
index_array = array(index_list)
time_array = array(time_list)
# time_array is unitless in Brian 1
mon = SpikeMonitor(group)
#... do the run
list_of_pairs = zip(*mon.it)
index_list = list(mon.i)
time_list = list(mon.t)
index_array, time_array = mon.i, mon.t
# time_array has units in Brian 2

You can also access the spike times for individual neurons. In Brian 1, you could directly index the monitor, which is no longer allowed in Brian 2. Instead, ask for a dictionary of spike times and index the returned dictionary:

Brian 1 Brian 2
# dictionary of spike times for each neuron:
spike_dict = mon.spiketimes
# all spikes for neuron 3:
spikes_3 = spike_dict[3] #  (no units)
spikes_3 = mon[3] #  alternative (no units)
# dictionary of spike times for each neuron:
spike_dict = mon.spike_trains()
# all spikes for neuron 3:
spikes_3 = spike_dict[3]  # with units

In Brian 2, SpikeMonitor also provides the functionality of the Brian 1 classes SpikeCounter and PopulationSpikeCounter. If you are only interested in the counts and not in the individual spike events, use record=False to save the memory of storing them:

Brian 1 Brian 2
counter = SpikeCounter(group)
pop_counter = PopulationSpikeCounter(group)
#... do the run
# Number of spikes for neuron 3:
count_3 = counter[3]
# Total number of spikes:
total_spikes = pop_counter.nspikes
counter = SpikeMonitor(group, record=False)

#... do the run
# Number of spikes for neuron 3
count_3 = counter.count[3]
# Total number of spikes:
total_spikes = counter.num_spikes

Currently Brian 2 provides no functionality to calculate statistics such as correlations or histograms online; there is no equivalent to the following classes that existed in Brian 1: AutoCorrelogram, CoincidenceCounter, CoincidenceMatrixCounter, ISIHistogramMonitor, VanRossumMetric. You will therefore have to calculate the corresponding statistics manually after the simulation, based on the information stored in the SpikeMonitor. If you use the default Runtime code generation, you can also create a new Python class that calculates the statistic online (see this example from a Brian 2 tutorial).
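
As an illustration, a sketch of calculating an ISI histogram (the functionality of Brian 1’s ISIHistogramMonitor) from the data in a SpikeMonitor, assuming an existing group:

from brian2 import *
from numpy import diff, hstack, histogram

mon = SpikeMonitor(group)
run(1*second)

trains = mon.spike_trains()  # dictionary: neuron index -> spike times
# pool the inter-spike intervals (in ms) of all neurons
isis = hstack([diff(train/ms) for train in trains.values()])
counts, edges = histogram(isis, bins=50)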

Monitoring variables

Single variables are recorded with a StateMonitor in the same way as in Brian 1, but the times and variable values are accessed differently:

Brian 1 Brian 2
mon = StateMonitor(group, 'v',
                   record=True)
# ... do the run
# plot the trace of neuron 3:
plot(mon.times/ms, mon[3]/mV)
# plot the traces of all neurons:
plot(mon.times/ms, mon.values.T/mV)
mon = StateMonitor(group, 'v',
                   record=True)
# ... do the run
# plot the trace of neuron 3:
plot(mon.t/ms, mon[3].v/mV)
# plot the traces of all neurons:
plot(mon.t/ms, mon.v.T/mV)

Further differences:

  • StateMonitor now records in the 'start' scheduling slot by default. This leads to a more intuitive correspondence between the recorded times and the values: in Brian 1 (where StateMonitor recorded in the 'end' slot) the recorded value at 0ms was not the initial value of the variable but the value after integrating it for a single time step. The disadvantage of this new default is that the very last value at the end of the last time step of a simulation is not recorded anymore. However, this value can be manually added to the monitor by calling StateMonitor.record_single_timestep().
  • To avoid recording at every time step, use the dt argument (as for all other classes) instead of specifying a number of timesteps (see the sketch after this list).
  • Using record=False no longer provides the mean and variance of the recorded variable.
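
A short sketch of the dt argument and of record_single_timestep() mentioned above (group is assumed to already exist):

mon = StateMonitor(group, 'v', record=True, dt=1*ms)  # record every 1 ms
run(100*ms)
mon.record_single_timestep()  # manually add the values at the very end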

In contrast to Brian 1, StateMonitor can now record multiple variables and therefore replaces Brian 1’s MultiStateMonitor:

Brian 1 Brian 2
mon = MultiStateMonitor(group, ['v', 'w'],
                        record=True)
# ... do the run
# plot the traces of v and w for neuron 3:
plot(mon['v'].times/ms, mon['v'][3]/mV)
plot(mon['w'].times/ms, mon['w'][3]/mV)
mon = StateMonitor(group, ['v', 'w'],
                   record=True)
# ... do the run
# plot the traces of v and w for neuron 3:
plot(mon.t/ms, mon[3].v/mV)
plot(mon.t/ms, mon[3].w/mV)

To record variable values at the times of spikes, Brian 2 no longer provides a separate class as Brian 1 did (StateSpikeMonitor). Instead, you can use SpikeMonitor to record additional variables (in addition to the neuron index and the spike time):

Brian 1 Brian 2
# We assume that "group" has a varying threshold
mon = StateSpikeMonitor(group, 'v')
# ... do the run
# plot the mean v at spike time for each neuron
mean_values = [mean(mon.values('v', idx))
                for idx in range(len(group))]

plot(mean_values/mV, 'o')
# We assume that "group" has a varying threshold
mon = SpikeMonitor(group, variables='v')
# ... do the run
# plot the mean v at spike time for each neuron
values = mon.values('v')
mean_values = [mean(values[idx])
               for idx in range(len(group))]
plot(mean_values/mV, 'o')

Note that there is no equivalent to StateHistogramMonitor, you will have to calculate the histogram from the recorded values or write your own custom monitor class.

Networks and clocks (Brian 1 –> 2 conversion)
Clocks and timesteps

Brian’s system of handling clocks has substantially changed. For details about the new system, see Setting the simulation time step. The main differences to Brian 1 are:

  • There is no more “clock guessing” – objects either use the defaultclock or a dt/clock value that was explicitly specified during their construction.
  • In Brian 2, the time step is allowed to change after the creation of an object and between runs – the relevant value is the value in place at the point of the run() call.
  • It is rarely necessary to create an explicit Clock object, most of the time you should use the defaultclock or provide a dt argument during the construction of the object.
  • There’s only one Clock class, the (deprecated) FloatClock, RegularClock, etc. classes that Brian 1 provided no longer exist.
  • It is no longer possible to (re-)set the time of a clock explicitly; there is no direct equivalent of Clock.reinit and reinit_default_clock. To start a completely new simulation after you have finished a previous one, either create a new Network or use the start_scope() mechanism. To “rewind” a simulation to a previous point, use the new store()/restore() mechanism (a minimal sketch follows this list). For more details, see below and Running a simulation.
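
A minimal sketch of the store()/restore() mechanism:

from brian2 import *

group = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
group.v = 1
store()    # snapshot the state of the (magic) network at t = 0*ms
run(5*ms)
restore()  # rewind to the snapshot; the clock is back at 0*ms
run(5*ms)  # repeats the same 5 ms of simulation
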
Networks

Both Brian 1 and Brian 2 offer two ways to run a simulation: either by explicitly creating a Network object, or by using a MagicNetwork, i.e. a simple run() statement.

Explicit network

The mechanism to create explicit Network objects has not changed significantly from Brian 1 to Brian 2. However, creating a new Network will now also automatically reset the clock back to 0s, and stricter checks no longer allow the inclusion of the same object in multiple networks.

Brian 1 Brian 2
group = ...
mon = ...
net = Network(group, mon)
net.run(1*ms)

reinit()
group = ...
mon = ...
net = Network(group, mon)
net.run(1*ms)
group = ...
mon = ...
net = Network(group, mon)
net.run(1*ms)

# new network starts at 0s
group = ...
mon = ...
net = Network(group, mon)
net.run(1*ms)
“Magic” network

For most simple, “flat”, scripts (see e.g. the Examples), the run() statement in Brian 2 automatically collects all the Brian objects (NeuronGroup, etc.) into a “magic” network in the same way as Brian 1 did. The logic behind this collection has changed, though, with important consequences for more complex simulation scripts: in Brian 1, the magic network included all Brian objects that had been created in the same execution frame as the run() call. Objects created in other functions could be added using magic_return and magic_register. In Brian 2, the magic network contains all Brian objects that are visible in the same execution frame as the run() call. The advantage of the new system is that it is clearer what will be included in the network and there is no danger of including previously created, but no longer needed, objects in a simulation. E.g. in the following example, a common mistake in Brian 1 was to forget the clear(), which meant that each run not only simulated the current objects, but also all objects from previous loop iterations. Also, without the reinit_default_clock(), each run would start at the end time of the previous run. In Brian 2, this loop does not need any explicit clean-up; each run() will only simulate the objects that it “sees” (group1, group2, syn, and mon) and start each simulation at 0s:

Brian 1 Brian 2
for r in range(100):
    reinit_default_clock()
    clear()
    group1 = NeuronGroup(...)
    group2 = NeuronGroup(...)
    syn = Synapses(group1, group2, ...)
    mon = SpikeMonitor(group2)
    run(1*second)
for r in range(100):


    group1 = NeuronGroup(...)
    group2 = NeuronGroup(...)
    syn = Synapses(group1, group2, ...)
    mon = SpikeMonitor(group2)
    run(1*second)

There is no replacement for the magic_return and magic_register functions. If the returned object is stored in a variable at the level of the run() call, then it is no longer necessary to use magic_return, as the returned object is “visible” at the level of the run() call:

Brian 1 Brian 2
@magic_return
def f():
    return PoissonGroup(100, rates=100*Hz)

pg = f() # needs magic_return
mon = SpikeMonitor(pg)
run(100*ms)
def f():
    return PoissonGroup(100, rates=100*Hz)

pg = f() # is "visible" and will be included
mon = SpikeMonitor(pg)
run(100*ms)

The general recommendation is however: if your script is complex (multiple functions/files/classes) and you are not sure whether some objects will be included in the magic network, use an explicit Network object.

Note that one consequence of the “is visible” approach is that objects stored in containers (lists, dictionaries, ...) will not be automatically included in Brian 2. Use an explicit Network object to get around this restriction:

Brian 1 Brian 2
groups = {'exc': NeuronGroup(...),
          'inh': NeuronGroup(...)}
...

run(5*ms)
groups = {'exc': NeuronGroup(...),
          'inh': NeuronGroup(...)}
...
net = Network(groups.values())
net.run(5*ms)
External constants

In Brian 2, external constants are taken from the surrounding namespace at the point of the run() call and not when the object is defined (for other ways to define the namespace, see External variables and functions). This makes it easy to change external constants between runs, in contrast to Brian 1, where whether this worked or not depended on details of the model (e.g. whether linear integration was used):

Brian 1 Brian 2
tau = 10*ms
# to be sure that changes between runs are taken into
# account, define "I" as a neuronal parameter
group = NeuronGroup(10, '''dv/dt = (-v + I) / tau : 1
                           I : 1''')
group.v = linspace(0, 1, 10)
group.I = 0.0
mon = StateMonitor(group, 'v', record=True)
run(5*ms)
group.I = 0.5
run(5*ms)
group.I = 0.0
run(5*ms)
tau = 10*ms

# The value for I will be updated at each run
group = NeuronGroup(10, 'dv/dt = (-v + I) / tau : 1')

group.v = linspace(0, 1, 10)
I = 0.0
mon = StateMonitor(group, 'v', record=True)
run(5*ms)
I = 0.5
run(5*ms)
I = 0.0
run(5*ms)
Multicompartmental models (Brian 1 –> 2 conversion)

Brian 1 offered support for simple multi-compartmental models in the compartments module. This module allowed you to combine the equations for several compartments into a single Equations object. This is only a suitable solution for simple morphologies (e.g. “ball-and-stick” models), but it has the advantage over using SpatialNeuron that you can have several such neurons in a NeuronGroup.

If you already have a definition of a model using Brian 1’s compartments module, then you can simply print out the equations and use them directly in Brian 2. For simple models, writing the equations without that help is rather straightforward anyway:

Brian 1 Brian 2
V0 = 10*mV
C = 200*pF
Ra = 150*kohm
R = 50*Mohm
soma_eqs = (MembraneEquation(C) +
            IonicCurrent('I=(vm-V0)/R : amp'))
dend_eqs = MembraneEquation(C)
neuron_eqs = Compartments({'soma': soma_eqs,
                           'dend': dend_eqs})

neuron = NeuronGroup(N, neuron_eqs)
V0 = 10*mV
C = 200*pF
Ra = 150*kohm
R = 50*Mohm
neuron_eqs = '''
dvm_soma/dt = (I_soma + I_soma_dend)/C : volt
I_soma = (V0 - vm_soma)/R : amp
I_soma_dend = (vm_dend - vm_soma)/Ra : amp
dvm_dend/dt = -I_soma_dend/C : volt'''

neuron = NeuronGroup(N, neuron_eqs)
Library models (Brian 1 –> 2 conversion)
Neuron models

The neuron models in Brian 1’s brian.library.IF package are nothing more than shorthands for equations. The following table shows how the models from Brian 1 can be converted to explicit equations (and reset statements in the case of the adaptive exponential integrate-and-fire model) for use in Brian 2. The examples include a “current” I (depending on the model, not necessarily in units of ampere) and could e.g. be used to plot the f-I curve of the neuron, as sketched below.

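As a sketch of the f-I curve idea, using the leaky integrate-and-fire equations from the table below (all values are illustrative):

from brian2 import *

N = 50; tau = 10*ms; El = -70*mV
eqs = '''dvm/dt = ((El - vm) + I)/tau : volt
         I : volt'''
group = NeuronGroup(N, eqs, threshold='vm > -50*mV',
                    reset='vm = -70*mV', method='euler')
group.vm = El
group.I = linspace(0*mV, 100*mV, N)  # one current value per neuron
mon = SpikeMonitor(group, record=False)
run(1*second)
plot(group.I/mV, mon.count/second)   # firing rate as a function of I
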
Perfect integrator
Brian 1 Brian 2
eqs = (perfect_IF(tau=10*ms) +
       Current('I : volt'))
group = NeuronGroup(N, eqs,
                    threshold='vm > -50*mV',
                    reset='vm = -70*mV')
tau = 10*ms
eqs = '''dvm/dt = I/tau : volt
         I : volt'''
group = NeuronGroup(N, eqs,
                    threshold='vm > -50*mV',
                    reset='vm = -70*mV')
Leaky integrate-and-fire neuron
Brian 1 Brian 2
eqs = (leaky_IF(tau=10*ms, El=-70*mV) +
       Current('I : volt'))
group = ... # see above
tau = 10*ms; El = -70*mV
eqs = '''dvm/dt = ((El - vm) + I)/tau : volt
         I : volt'''
group = ... # see above
Exponential integrate-and-fire neuron
Brian 1 Brian 2
eqs = (exp_IF(C=1*nF, gL=30*nS, EL=-70*mV,
              VT=-50*mV, DeltaT=2*mV) +
       Current('I : amp'))
group = ... # see above
C = 1*nF; gL = 30*nS; EL = -70*mV; VT = -50*mV; DeltaT = 2*mV
eqs = '''dvm/dt = (gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT) + I)/C : volt
         I : amp'''
group = ... # see above
Quadratic integrate-and-fire neuron
Brian 1 Brian 2
eqs = (quadratic_IF(C=1*nF, a=5*nS/mV,
       EL=-70*mV, VT=-50*mV) +
       Current('I : amp'))
group = ... # see above
C = 1*nF; a_q = 5*nS/mV; EL = -70*mV; VT = -50*mV
eqs = '''dvm/dt = (a_q*(vm-EL)*(vm-VT) + I)/C : volt
         I : amp'''
group = ... # see above
Izhikevich neuron
Brian 1 Brian 2
eqs = (Izhikevich(a=0.02/ms, b=0.2/ms) +
       Current('I : volt/second'))
group = ... # see above
a_I = 0.02/ms; b_I = 0.2/ms
eqs = '''dvm/dt = (0.04/ms/mV)*vm**2+(5/ms)*vm+140*mV/ms-w + I : volt
         dw/dt = a_I*(b_I*vm-w) : volt/second
         I : volt/second'''
group = ... # see above
Adaptive exponential integrate-and-fire neuron (“Brette-Gerstner model”)
Brian 1 Brian 2
# AdEx, aEIF, and Brette_Gerstner all refer to the same model
eqs = (aEIF(C=1*nF, gL=30*nS, EL=-70*mV,
            VT=-50*mV, DeltaT=2*mV, tauw=150*ms, a=4*nS) +
       Current('I:amp'))
group = NeuronGroup(N, eqs,
                    threshold='vm > -20*mV',
                    reset=AdaptiveReset(Vr=-70*mV, b=0.08*nA))
C = 1*nF; gL = 30*nS; EL = -70*mV; VT = -50*mV; DeltaT = 2*mV; tauw = 150*ms; a_BG = 4*nS
eqs = '''dvm/dt = (gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT) -w + I)/C : volt
         dw/dt=(a_BG*(vm-EL)-w)/tauw : amp
         I : amp'''
group = NeuronGroup(N, eqs,
                    threshold='vm > -20*mV',
                    reset='vm = -70*mV; w += 0.08*nA')
Ionic currents

Brian 1’s functions for ionic currents, provided in brian.library.ionic_currents, correspond to the following equations (note that the currents follow the convention of using a shifted membrane potential, i.e. the membrane potential at rest is 0mV):

Brian 1 Brian 2
from brian.library.ionic_currents import *
defaultclock.dt = 0.01*ms
eqs_leak = leak_current(gl=60*nS, El=10.6*mV, current_name='I_leak')

eqs_K = K_current_HH(gmax=7.2*uS, EK=-12*mV, current_name='I_K')

eqs_Na = Na_current_HH(gmax=24*uS, ENa=115*mV, current_name='I_Na')

eqs = (MembraneEquation(C=200*pF) +
       eqs_leak + eqs_K + eqs_Na +
       Current('I_inj : amp'))
defaultclock.dt = 0.01*ms
gl = 60*nS; El = 10.6*mV
eqs_leak = Equations('I_leak = gl*(El - vm) : amp')
g_K = 7.2*uS; EK = -12*mV
eqs_K = Equations('''I_K = g_K*n**4*(EK-vm) : amp
                     dn/dt = alphan*(1-n)-betan*n : 1
                     alphan = .01*(10*mV-vm)/(exp(1-.1*vm/mV)-1)/mV/ms : Hz
                     betan = .125*exp(-.0125*vm/mV)/ms : Hz''')
g_Na = 24*uS; ENa = 115*mV
eqs_Na = Equations('''I_Na = g_Na*m**3*h*(ENa-vm) : amp
                      dm/dt=alpham*(1-m)-betam*m : 1
                      dh/dt=alphah*(1-h)-betah*h : 1
                      alpham=.1*(25*mV-vm)/(exp(2.5-.1*vm/mV)-1)/mV/ms : Hz
                      betam=4*exp(-.0556*vm/mV)/ms : Hz
                      alphah=.07*exp(-.05*vm/mV)/ms : Hz
                      betah=1./(1+exp(3.-.1*vm/mV))/ms : Hz''')
C = 200*pF
eqs = Equations('''dvm/dt = (I_leak + I_K + I_Na + I_inj)/C : volt
                   I_inj : amp''') + eqs_leak + eqs_K + eqs_Na
Synapses

Brian 1’s synaptic models, provided in brian.library.synapses, can be converted to the equivalent Brian 2 equations as follows:

Current-based synapses
Brian 1 Brian 2
syn_eqs = exp_current('s', tau=5*ms, current_name='I_syn')
eqs = (MembraneEquation(C=1*nF) + Current('Im = gl*(El-vm) : amp') +
       syn_eqs)
group = NeuronGroup(N, eqs, threshold='vm>-50*mV', reset='vm=-70*mV')
syn = Synapses(source, group, pre='s += 1*nA')
# ... connect synapses, etc.
tau = 5*ms
syn_eqs = Equations('dI_syn/dt = -I_syn/tau : amp')
eqs = (Equations('dvm/dt = (gl*(El - vm) + I_syn)/C : volt') +
       syn_eqs)
group = NeuronGroup(N, eqs, threshold='vm>-50*mV', reset='vm=-70*mV')
syn = Synapses(source, group, on_pre='I_syn += 1*nA')
# ... connect synapses, etc.
syn_eqs = alpha_current('s', tau=2.5*ms, current_name='I_syn')
eqs = ... # remaining code as above
tau = 2.5*ms
syn_eqs = Equations('''dI_syn/dt = (s - I_syn)/tau : amp
                       ds/dt = -s/tau : amp''')
group = NeuronGroup(N, eqs, threshold='vm>-50*mV', reset='vm=-70*mV')
syn = Synapses(source, group, on_pre='s += 1*nA')
# ... connect synapses, etc.
syn_eqs = biexp_current('s', tau1=2.5*ms, tau2=10*ms, current_name='I_syn')
eqs = ... # remaining code as above
tau1 = 2.5*ms; tau2 = 10*ms; invpeak = (tau2 / tau1) ** (tau1 / (tau2 - tau1))
syn_eqs = Equations('''dI_syn/dt = (invpeak*s - I_syn)/tau1 : amp
                       ds/dt = -s/tau2 : amp''')
eqs = ... # remaining code as above
Conductance-based synapses
Brian 1 Brian 2
syn_eqs = exp_conductance('s', tau=5*ms, E=0*mV, conductance_name='g_syn')
eqs = (MembraneEquation(C=1*nF) + Current('Im = gl*(El-vm) : amp') +
       syn_eqs)
group = NeuronGroup(N, eqs, threshold='vm>-50*mV', reset='vm=-70*mV')
syn = Synapses(source, group, pre='s += 10*nS')
# ... connect synapses, etc.
tau = 5*ms; E = 0*mV
syn_eqs = Equations('dg_syn/dt = -g_syn/tau : siemens')
eqs = (Equations('dvm/dt = (gl*(El - vm) + g_syn*(E - vm))/C : volt') +
       syn_eqs)
group = NeuronGroup(N, eqs, threshold='vm>-50*mV', reset='vm=-70*mV')
syn = Synapses(source, group, on_pre='g_syn += 10*nS')
# ... connect synapses, etc.
syn_eqs = alpha_conductance('s', tau=2.5*ms, E=0*mV, conductance_name='g_syn')
eqs = ... # remaining code as above
tau = 2.5*ms; E = 0*mV
syn_eqs = Equations('''dg_syn/dt = (s - g_syn)/tau : siemens
                       ds/dt = -s/tau : siemens''')
group = NeuronGroup(N, eqs, threshold='vm>-50*mV', reset='vm=-70*mV')
syn = Synapses(source, group, on_pre='s += 10*nS')
# ... connect synapses, etc.
syn_eqs = biexp_conductance('s', tau1=2.5*ms, tau2=10*ms, E=0*mV,
                            conductance_name='g_syn')
eqs = ... # remaining code as above
tau1 = 2.5*ms; tau2 = 10*ms; E = 0*mV
invpeak = (tau2 / tau1) ** (tau1 / (tau2 - tau1))
syn_eqs = Equations('''dg_syn/dt = (invpeak*s - g_syn)/tau1 : siemens
                       ds/dt = -s/tau2 : siemens''')
eqs = ... # remaining code as above
Brian Hears

This module is designed for users of the Brian 1 library “Brian Hears”. It allows you to use Brian Hears with Brian 2 with only a few modifications (although it’s not compatible with the “standalone” mode of Brian 2). The way it works is by acting as a “bridge” to the version in Brian 1. To make this work, you must have a copy of Brian 1 installed (preferably the latest version), and import Brian Hears using:

from brian2.hears import *

Many scripts will run without any changes, but there are a few caveats to be aware of. Mostly, the problems are due to the fact that the units system in Brian 2 is not 100% compatible with the units system of Brian 1.

FilterbankGroup now follows the rules for NeuronGroup in Brian 2, which means some changes may be necessary to match the syntax of Brian 2. For example, the following would work in Brian 1 Hears:

# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset=0, threshold=1, refractory=5*ms)

However, in Brian 2 Hears you would need to do:

# Leaky integrate-and-fire model with noise and refractoriness
eqs = '''
dv/dt = (I-v)/(1*ms)+0.2*xi*(2/(1*ms))**.5 : 1 (unless refractory)
I : 1
'''
anf = FilterbankGroup(ihc, 'I', eqs, reset='v=0', threshold='v>1', refractory=5*ms)

Slicing sounds no longer works. Previously you could do, e.g. sound[:20*ms] but with Brian 2 you would need to do sound.slice(0*ms, 20*ms).

In addition, some functions may not work correctly with Brian 2 units. In most circumstances, Brian 2 units can be used interchangeably with Brian 1 units in the bridge, but in some cases it may be necessary to convert units from one format to another, and to do that you can use the functions convert_unit_b1_to_b2 and convert_unit_b2_to_b1.
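
A minimal sketch (assuming these functions take a single quantity and can be imported by name from brian2.hears):

from brian2 import ms
from brian2.hears import convert_unit_b2_to_b1, convert_unit_b1_to_b2

duration_b1 = convert_unit_b2_to_b1(20*ms)        # Brian 2 -> Brian 1 units
duration_b2 = convert_unit_b1_to_b2(duration_b1)  # and back again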

Known issues

In addition to the issues noted below, you can refer to our bug tracker on GitHub.

Cannot find msvcr90d.dll

If you see this message coming up, find the file PythonDir\Lib\site-packages\numpy\distutils\mingw32ccompiler.py and modify the line msvcr_dbg_success = build_msvcr_library(debug=True) to read msvcr_dbg_success = False (you can comment out the existing line and add the new line immediately after).

“Missing compiler_cxx fix for MSVCCompiler”

If you keep seeing this message, do not worry. It’s not possible for us to hide it, but it doesn’t indicate any problems.

Problems with numerical integration

In some cases, the automatic choice of numerical integration method will not be appropriate, because of a choice of parameters that couldn’t be determined in advance. Typically, you will then get nan (not a number) values in the results, or large oscillations. Brian will generate a warning to let you know, but will not raise an error.
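
If this happens, explicitly choosing a more robust integration method (and/or a smaller dt) is usually the fix; a sketch with an illustrative equation:

from brian2 import *

# force a fixed-step Runge-Kutta method instead of the automatic choice
group = NeuronGroup(1, 'dv/dt = (1 - v**3)/(10*ms) : 1', method='rk4')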

Jupyter notebooks and C++ standalone mode progress reporting

When you run simulations in C++ standalone mode and enable progress reporting (e.g. by using report='text' as a keyword argument), the progress will not be displayed in the Jupyter notebook. If you started the notebook from a terminal, you will find the output there. Unfortunately, this is a tricky problem to solve at the moment, due to the details of how the Jupyter notebook handles output.

Parallel Brian simulations with the weave code generation target

When using the weave code generation target (the default runtime target on Python 2.x, see Runtime code generation for details), you should avoid running multiple Brian simulations in parallel. The weave package caches compiled files, but this cache is not prepared for multiple concurrent updates. If two Python scripts (or two processes started from the same Python script, e.g. via the multiprocessing package) try to store compilation results at the same time, weave will crash with an error message. The numpy and cython targets are not affected by this problem.
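
If you do need to run simulations in parallel, a one-line sketch of switching each script to one of the unaffected targets:

from brian2 import prefs

prefs.codegen.target = 'numpy'  # or 'cython'; both avoid the weave cache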

Support

If you are stuck with a problem using Brian, please do get in touch at our email support list.

You can save time by following this procedure when reporting a problem:

  1. Do try to solve the problem on your own first. Read the documentation, including using the search feature, index and reference documentation.
  2. Search the mailing list archives to see if someone else already had the same problem.
  3. Before writing, try to create a minimal example that reproduces the problem. You’ll get the fastest response if you can send just a handful of lines of code that show what isn’t working.

Tutorials

The tutorial consists of a series of Jupyter Notebooks [1]. You can quickly view these using the first links below. To use them interactively - allowing you to edit and run the code - there are two options. The easiest option is to click on the “Launch Binder” link, which will open up an interactive version in the browser without having to install Brian locally. This uses the Binder service provided by the Freeman lab. Occasionally, this service will be down or running slowly. The other option is to download the notebook file and run it locally, which requires you to have Brian installed.

For more information about how to use Jupyter Notebooks, see the Jupyter Notebook documentation.

Introduction to Brian part 1: Neurons

Note

This tutorial is a static non-editable version. You can launch an interactive, editable version without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Alternatively, you can download a copy of the notebook file to use locally: 1-intro-to-brian-neurons.ipynb

See the tutorial overview page for more details.

All Brian scripts start with the following. If you’re trying this notebook out in IPython, you should start by running this cell.

from brian2 import *

Later we’ll do some plotting in the notebook, so we activate inline plotting in the IPython notebook by doing this:

%matplotlib inline

Units system

Brian has a system for using quantities with physical dimensions:

20*volt
\[20.0\,\mathrm{V}\]

All of the basic SI units can be used (volt, amp, etc.) along with all the standard prefixes (m=milli, p=pico, etc.), as well as a few special abbreviations like mV for millivolt, pF for picofarad, etc.

1000*amp
\[1.0\,\mathrm{k}\,\mathrm{A}\]
1e6*volt
\[1.0\,\mathrm{M}\,\mathrm{V}\]
1000*namp
\[1.0\,\mathrm{\mu}\,\mathrm{A}\]

Also note that combinations of units work as expected:

10*nA*5*Mohm
\[50.0\,\mathrm{m}\,\mathrm{V}\]

And if you try to do something wrong like adding amps and volts, what happens?

5*amp+10*volt
---------------------------------------------------------------------------

DimensionMismatchError                    Traceback (most recent call last)

<ipython-input-8-ad1fc5691a4b> in <module>()
----> 1 5*amp+10*volt


/home/marcel/programming/brian2/brian2/units/fundamentalunits.pyc in __add__(self, other)
   1412         return self._binary_operation(other, operator.add,
   1413                                       fail_for_mismatch=True,
-> 1414                                       operator_str='+')
   1415
   1416     def __radd__(self, other):


/home/marcel/programming/brian2/brian2/units/fundamentalunits.pyc in _binary_operation(self, other, operation, dim_operation, fail_for_mismatch, operator_str, inplace)
   1352                 _, other_dim = fail_for_dimension_mismatch(self, other, message,
   1353                                                            value1=self,
-> 1354                                                            value2=other)
   1355
   1356         if other_dim is None:


/home/marcel/programming/brian2/brian2/units/fundamentalunits.pyc in fail_for_dimension_mismatch(obj1, obj2, error_message, **error_quantities)
    183             raise DimensionMismatchError(error_message, dim1)
    184         else:
--> 185             raise DimensionMismatchError(error_message, dim1, dim2)
    186     else:
    187         return dim1, dim2


DimensionMismatchError: Cannot calculate 5. A + 10. V, units do not match (units are amp and volt).

If you haven’t seen an error message in Python before, this can look a bit overwhelming, but it’s actually quite simple and it’s important to know how to read these because you’ll probably see them quite often.

You should start at the bottom and work up. The last line gives the error type DimensionMismatchError along with a more specific message (in this case, you were trying to add together two quantities with different SI units, which is impossible).

Working upwards, each of the sections starts with a filename (e.g. C:\Users\Dan\...) with possibly the name of a function, and then a few lines surrounding the line where the error occurred (which is identified with an arrow).

The last of these sections shows the place in the function where the error actually happened. The section above it shows the function that called that function, and so on until the first section will be the script that you actually run. This sequence of sections is called a traceback, and is helpful in debugging.

If you see a traceback, what you want to do is start at the bottom and scan up the sections until you find your own file because that’s most likely where the problem is. (Of course, your code might be correct and Brian may have a bug in which case, please let us know on the email support list.)

A simple model

Let’s start by defining a simple neuron model. In Brian, all models are defined by systems of differential equations. Here’s a simple example of what that looks like:

tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''

In Python, the notation ''' is used to begin and end a multi-line string. So the equations are just a string with one line per equation. The equations are formatted with standard mathematical notation, with one addition. At the end of a line you write : unit where unit is the SI unit of that variable.

Now let’s use this definition to create a neuron.

G = NeuronGroup(1, eqs)

In Brian, you only create groups of neurons, using the class NeuronGroup. The first two arguments when you create one of these objects are the number of neurons (in this case, 1) and the defining differential equations.

Let’s see what happens if we didn’t put the variable tau in the equation:

eqs = '''
dv/dt = 1-v : 1
'''
G = NeuronGroup(1, eqs)
run(100*ms)
---------------------------------------------------------------------------

BrianObjectException                      Traceback (most recent call last)

<ipython-input-11-d086eea0b2de> in <module>()
      3 '''
      4 G = NeuronGroup(1, eqs)
----> 5 run(100*ms)


/home/marcel/programming/brian2/brian2/units/fundamentalunits.pyc in new_f(*args, **kwds)
   2426                         raise DimensionMismatchError(error_message,
   2427                                                      newkeyset[k])
-> 2428             result = f(*args, **kwds)
   2429             if 'result' in au:
   2430                 if au['result'] == bool:


/home/marcel/programming/brian2/brian2/core/magic.pyc in run(duration, report, report_period, namespace, profile, level)
    369     '''
    370     return magic_network.run(duration, report=report, report_period=report_period,
--> 371                              namespace=namespace, profile=profile, level=2+level)
    372 run.__module__ = __name__
    373


/home/marcel/programming/brian2/brian2/core/magic.pyc in run(self, duration, report, report_period, namespace, profile, level)
    229         self._update_magic_objects(level=level+1)
    230         Network.run(self, duration, report=report, report_period=report_period,
--> 231                     namespace=namespace, profile=profile, level=level+1)
    232
    233     def store(self, name='default', filename=None, level=0):


/home/marcel/programming/brian2/brian2/core/base.pyc in device_override_decorated_function(*args, **kwds)
    276                 return getattr(curdev, name)(*args, **kwds)
    277             else:
--> 278                 return func(*args, **kwds)
    279
    280         device_override_decorated_function.__doc__ = func.__doc__


/home/marcel/programming/brian2/brian2/units/fundamentalunits.pyc in new_f(*args, **kwds)
   2426                         raise DimensionMismatchError(error_message,
   2427                                                      newkeyset[k])
-> 2428             result = f(*args, **kwds)
   2429             if 'result' in au:
   2430                 if au['result'] == bool:


/home/marcel/programming/brian2/brian2/core/network.pyc in run(self, duration, report, report_period, namespace, profile, level)
    787             namespace = get_local_namespace(level=level+3)
    788
--> 789         self.before_run(namespace)
    790
    791         if len(self.objects)==0:


/home/marcel/programming/brian2/brian2/core/base.pyc in device_override_decorated_function(*args, **kwds)
    276                 return getattr(curdev, name)(*args, **kwds)
    277             else:
--> 278                 return func(*args, **kwds)
    279
    280         device_override_decorated_function.__doc__ = func.__doc__


/home/marcel/programming/brian2/brian2/core/network.pyc in before_run(self, run_namespace)
    687                     obj.before_run(run_namespace)
    688                 except Exception as ex:
--> 689                     raise brian_object_exception("An error occurred when preparing an object.", obj, ex)
    690
    691         # Check that no object has been run as part of another network before


BrianObjectException: Original error and traceback:
Traceback (most recent call last):
  File "/home/marcel/programming/brian2/brian2/core/network.py", line 687, in before_run
    obj.before_run(run_namespace)
  File "/home/marcel/programming/brian2/brian2/groups/neurongroup.py", line 778, in before_run
    self.equations.check_units(self, run_namespace=run_namespace)
  File "/home/marcel/programming/brian2/brian2/equations/equations.py", line 867, in check_units
    *ex.dims)
DimensionMismatchError: Inconsistent units in differential equation defining variable v:
Expression 1-v does not have the expected unit Unit(1) / second (unit is 1).

Error encountered with object named "neurongroup_1".
Object was created here (most recent call only, full details in debug log):
  File "<ipython-input-11-d086eea0b2de>", line 4, in <module>
    G = NeuronGroup(1, eqs)

An error occurred when preparing an object. DimensionMismatchError: Inconsistent units in differential equation defining variable v:
Expression 1-v does not have the expected unit Unit(1) / second (unit is 1).
(See above for original error message and traceback.)

An error is raised, but why? The reason is that the differential equation is now dimensionally inconsistent. The left hand side dv/dt has units of 1/second but the right hand side 1-v is dimensionless. People often find this behaviour of Brian confusing because this sort of equation is very common in mathematics. However, for quantities with physical dimensions it is incorrect because the results would change depending on the unit you measured it in. For time, if you measured it in seconds the same equation would behave differently to how it would if you measured time in milliseconds. To avoid this, we insist that you always specify dimensionally consistent equations.

Now let’s go back to the good equations and actually run the simulation.

start_scope()

tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''

G = NeuronGroup(1, eqs)
run(100*ms)
INFO       No numerical integration method specified for group 'neurongroup', using method 'linear' (took 0.06s). [brian2.stateupdaters.base.method_choice]

First off, ignore that start_scope() at the top of the cell. You’ll see that in each cell in this tutorial where we run a simulation. All it does is make sure that any Brian objects created before the function is called aren’t included in the next run of the simulation.

Secondly, you’ll see that there is an “INFO” message about not specifying the numerical integration method. This is harmless and just to let you know what method we chose, but we’ll fix it in the next cell by specifying the method explicitly.

So, what has happened here? Well, the command run(100*ms) runs the simulation for 100 ms. We can see that this has worked by printing the value of the variable v before and after the simulation.

start_scope()

G = NeuronGroup(1, eqs, method='linear')
print('Before v = %s' % G.v[0])
run(100*ms)
print('After v = %s' % G.v[0])
Before v = 0.0
After v = 0.99995460007

By default, all variables start with the value 0. Since the differential equation is dv/dt=(1-v)/tau we would expect after a while that v would tend towards the value 1, which is just what we see. Specifically, we’d expect v to have the value 1-exp(-t/tau). Let’s see if that’s right.

print('Expected value of v = %s' % (1-exp(-100*ms/tau)))
Expected value of v = 0.99995460007

Good news, the simulation gives the value we’d expect!

Now let’s take a look at a graph of how the variable v evolves over time.

start_scope()

G = NeuronGroup(1, eqs, method='linear')
M = StateMonitor(G, 'v', record=True)

run(30*ms)

plot(M.t/ms, M.v[0])
xlabel('Time (ms)')
ylabel('v');
[figure: 1-intro-to-brian-neurons_image_30_0.png]

This time we only ran the simulation for 30 ms so that we can see the behaviour better. It looks like it’s behaving as expected, but let’s just check that analytically by plotting the expected behaviour on top.

start_scope()

G = NeuronGroup(1, eqs, method='linear')
M = StateMonitor(G, 'v', record=0)

run(30*ms)

plot(M.t/ms, M.v[0], '-b', lw=2, label='Brian')
plot(M.t/ms, 1-exp(-M.t/tau), '--r', lw=2, label='Analytic')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
[figure: 1-intro-to-brian-neurons_image_32_0.png]

As you can see, the blue (Brian) and dashed red (analytic solution) lines coincide.

In this example, we used the StateMonitor object. This is used to record the values of a neuron variable while the simulation runs. The first two arguments are the group to record from and the variable you want to record. We also specify record=0. This means that we record all values for neuron 0. We have to specify which neurons we want to record because in large simulations with many neurons it usually uses up too much RAM to record the values of all neurons.

Now try modifying the equations and parameters and see what happens in the cell below.

start_scope()

tau = 10*ms
eqs = '''
dv/dt = (sin(2*pi*100*Hz*t)-v)/tau : 1
'''

# Change to Euler method because exact integrator doesn't work here
G = NeuronGroup(1, eqs, method='euler')
M = StateMonitor(G, 'v', record=0)

G.v = 5 # initial value

run(60*ms)

plot(M.t/ms, M.v[0])
xlabel('Time (ms)')
ylabel('v');
[figure: 1-intro-to-brian-neurons_image_34_0.png]

Adding spikes

So far we haven’t done anything neuronal, just played around with differential equations. Now let’s start adding spiking behaviour.

start_scope()

tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''

G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', method='linear')

M = StateMonitor(G, 'v', record=0)
run(50*ms)
plot(M.t/ms, M.v[0])
xlabel('Time (ms)')
ylabel('v');
[figure: 1-intro-to-brian-neurons_image_36_0.png]

We’ve added two new keywords to the NeuronGroup declaration: threshold='v>0.8' and reset='v = 0'. What this means is that when v>0.8 we fire a spike, and immediately reset v = 0 after the spike. We can put any expression and series of statements as these strings.

As you can see, at the beginning the behaviour is the same as before until v crosses the threshold v>0.8 at which point you see it reset to 0. You can’t see it in this figure, but internally Brian has registered this event as a spike. Let’s have a look at that.

start_scope()

G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', method='linear')

spikemon = SpikeMonitor(G)

run(50*ms)

print('Spike times: %s' % spikemon.t[:])
Spike times: [ 16.   32.1  48.2] ms

The SpikeMonitor object takes the group whose spikes you want to record as its argument and stores the spike times in the variable t. Let’s plot those spikes on top of the other figure to see that it’s getting it right.

start_scope()

G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', method='linear')

statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)

run(50*ms)

plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='r', lw=3)
xlabel('Time (ms)')
ylabel('v');
[Figure: _images/1-intro-to-brian-neurons_image_40_0.png]

Here we’ve used the axvline command from matplotlib to draw a red, dashed vertical line at the time of each spike recorded by the SpikeMonitor.

Now try changing the strings for threshold and reset in the cell above to see what happens.

Refractoriness

A common feature of neuron models is refractoriness. This means that after the neuron fires a spike it becomes refractory for a certain duration and cannot fire another spike until this period is over. Here’s how we do that in Brian.

start_scope()

tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1 (unless refractory)
'''

G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', refractory=5*ms, method='linear')

statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)

run(50*ms)

plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='r', lw=3)
xlabel('Time (ms)')
ylabel('v');
[Figure: _images/1-intro-to-brian-neurons_image_43_0.png]

As you can see in this figure, after the first spike, v stays at 0 for around 5 ms before it resumes its normal behaviour. To do this, we’ve done two things. Firstly, we’ve added the keyword refractory=5*ms to the NeuronGroup declaration. On its own, this only means that the neuron cannot spike in this period (see below), but doesn’t change how v behaves. In order to make v stay constant during the refractory period, we have to add (unless refractory) to the end of the definition of v in the differential equations. What this means is that the differential equation determines the behaviour of v unless it’s refractory in which case it is switched off.

Here’s what would happen if we didn’t include (unless refractory). Note that we’ve also decreased the value of tau and increased the length of the refractory period to make the behaviour clearer.

start_scope()

tau = 5*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''

G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', refractory=15*ms, method='linear')

statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)

run(50*ms)

plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='r', lw=3)
axhline(0.8, ls=':', c='g', lw=3)
xlabel('Time (ms)')
ylabel('v')
print("Spike times: %s" % spikemon.t[:])
Spike times: [  8.   23.1  38.2] ms
[Figure: _images/1-intro-to-brian-neurons_image_45_1.png]

So what’s going on here? The behaviour for the first spike is the same: v rises to 0.8 and then the neuron fires a spike at time 8 ms before immediately resetting to 0. Since the refractory period is now 15 ms, the neuron won’t be able to spike again until time 8 + 15 = 23 ms. Immediately after the first spike, the value of v starts to rise again because we didn’t specify (unless refractory) in the definition of dv/dt. However, once it reaches the value 0.8 (the dotted green line) again, roughly 8 ms after the reset, it doesn’t fire a spike even though the threshold is v>0.8. This is because the neuron is still refractory until time 23 ms, at which point it fires a spike.

Note that you can do more complicated and interesting things with refractoriness. See the full documentation for more details about how it works.

Multiple neurons

So far we’ve only been working with a single neuron. Let’s do something interesting with multiple neurons.

start_scope()

N = 100
tau = 10*ms
eqs = '''
dv/dt = (2-v)/tau : 1
'''

G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='linear')
G.v = 'rand()'

spikemon = SpikeMonitor(G)

run(50*ms)

plot(spikemon.t/ms, spikemon.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index');
[Figure: _images/1-intro-to-brian-neurons_image_48_0.png]

This shows a few changes. Firstly, we’ve got a new variable N determining the number of neurons. Secondly, we added the statement G.v = 'rand()' before the run. What this does is initialise each neuron with a different uniform random value between 0 and 1. We’ve done this just so each neuron will do something a bit different. The other big change is how we plot the data in the end.

In addition to the variable spikemon.t with the times of all the spikes, we’ve also used the variable spikemon.i, which gives the corresponding neuron index for each spike, and plotted a single black dot with time on the x-axis and neuron index on the y-axis. This is the standard “raster plot” used in neuroscience.

Parameters

To make these multiple neurons do something more interesting, let’s introduce per-neuron parameters that don’t have a differential equation attached to them.

start_scope()

N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms

eqs = '''
dv/dt = (v0-v)/tau : 1 (unless refractory)
v0 : 1
'''

G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', refractory=5*ms, method='linear')
M = SpikeMonitor(G)

G.v0 = 'i*v0_max/(N-1)'

run(duration)

figure(figsize=(12,4))
subplot(121)
plot(M.t/ms, M.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
plot(G.v0, M.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)');
[Figure: _images/1-intro-to-brian-neurons_image_51_0.png]

The line v0 : 1 declares a new per-neuron parameter v0 with units 1 (i.e. dimensionless).

The line G.v0 = 'i*v0_max/(N-1)' initialises the value of v0 for each neuron varying from 0 up to v0_max. The symbol i when it appears in strings like this refers to the neuron index.

So in this example, we’re driving the neuron towards the value v0 exponentially, and whenever v crosses the threshold v>1 the neuron fires a spike and resets. The effect is that the rate at which it fires spikes will be related to the value of v0. For v0<1 it will never fire a spike, and as v0 gets larger it will fire spikes at a higher rate. The right hand plot shows the firing rate as a function of the value of v0. This is the f-I curve of this neuron model.

Note that in the plot we’ve used the count variable of the SpikeMonitor: this is an array of the number of spikes each neuron in the group fired. Dividing this by the duration of the run gives the firing rate.
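As a quick sketch of the idea (reusing the M and duration objects defined above):

print(M.count[:10])            # spikes fired by the first ten neurons
print(M.count[:10]/duration)   # the corresponding firing rates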

Stochastic neurons

Often when making models of neurons, we include a random element to model the effect of various forms of neural noise. In Brian, we can do this by using the symbol xi in differential equations. Strictly speaking, this symbol is a “stochastic differential”, but you can roughly think of it as a Gaussian random variable with mean 0 and standard deviation 1. We do have to take into account the way stochastic differentials scale with time, which is why we multiply it by tau**-0.5 in the equations below (see a textbook on stochastic differential equations for more details).

start_scope()

N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms
sigma = 0.2

eqs = '''
dv/dt = (v0-v)/tau+sigma*xi*tau**-0.5 : 1 (unless refractory)
v0 : 1
'''

G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', refractory=5*ms, method='euler')
M = SpikeMonitor(G)

G.v0 = 'i*v0_max/(N-1)'

run(duration)

figure(figsize=(12,4))
subplot(121)
plot(M.t/ms, M.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
plot(G.v0, M.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)');
[Figure: _images/1-intro-to-brian-neurons_image_54_0.png]

That’s the same figure as in the previous section but with some noise added. Note how the curve has changed shape: instead of a sharp jump from firing at rate 0 to firing at a positive rate, it now increases in a sigmoidal fashion. This is because, no matter how small the driving force, the randomness may cause the neuron to fire a spike.

End of tutorial

That’s the end of this part of the tutorial. The cell below has another example. See if you can work out what it is doing and why. Try adding a StateMonitor to record the values of the variables for one of the neurons to help you understand it.

You could also try out the things you’ve learned in this cell.

Once you’re done with that you can move on to the next tutorial on Synapses.

start_scope()

N = 1000
tau = 10*ms
vr = -70*mV
vt0 = -50*mV
delta_vt0 = 5*mV
tau_t = 100*ms
sigma = 0.5*(vt0-vr)
v_drive = 2*(vt0-vr)
duration = 100*ms

eqs = '''
dv/dt = (v_drive+vr-v)/tau + sigma*xi*tau**-0.5 : volt
dvt/dt = (vt0-vt)/tau_t : volt
'''

reset = '''
v = vr
vt += delta_vt0
'''

G = NeuronGroup(N, eqs, threshold='v>vt', reset=reset, refractory=5*ms, method='euler')
spikemon = SpikeMonitor(G)

G.v = 'rand()*(vt0-vr)+vr'
G.vt = vt0

run(duration)

_ = hist(spikemon.t/ms, 100, histtype='stepfilled', facecolor='k', weights=ones(len(spikemon))/(N*defaultclock.dt))
xlabel('Time (ms)')
ylabel('Instantaneous firing rate (sp/s)');
[Figure: _images/1-intro-to-brian-neurons_image_57_0.png]

Introduction to Brian part 2: Synapses

Note

This tutorial is a static non-editable version. You can launch an interactive, editable version without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Alternatively, you can download a copy of the notebook file to use locally: 2-intro-to-brian-synapses.ipynb

See the tutorial overview page for more details.

If you haven’t yet read part 1: Neurons, go read that now.

As before we start by importing the Brian package and setting up matplotlib for IPython:

from brian2 import *
%matplotlib inline

The simplest Synapse

Once you have some neurons, the next step is to connect them up via synapses. We’ll start with the simplest possible type of synapse: one that causes an instantaneous change in a variable after a spike.

start_scope()

eqs = '''
dv/dt = (I-v)/tau : 1
I : 1
tau : second
'''
G = NeuronGroup(2, eqs, threshold='v>1', reset='v = 0', method='linear')
G.I = [2, 0]
G.tau = [10, 100]*ms

# Comment these two lines out to see what happens without Synapses
S = Synapses(G, G, on_pre='v_post += 0.2')
S.connect(i=0, j=1)

M = StateMonitor(G, 'v', record=True)

run(100*ms)

plot(M.t/ms, M.v[0], '-b', label='Neuron 0')
plot(M.t/ms, M.v[1], '-g', lw=2, label='Neuron 1')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
[Figure: _images/2-intro-to-brian-synapses_image_5_0.png]

There are a few things going on here. First of all, let’s recap what is going on with the NeuronGroup. We’ve created two neurons, each of which has the same differential equation but different values for parameters I and tau. Neuron 0 has I=2 and tau=10*ms, which means that it is driven to spike repeatedly at a fairly high rate. Neuron 1 has I=0 and tau=100*ms, which means that on its own - without the synapses - it won’t spike at all (the driving current I is 0). You can prove this to yourself by commenting out the two lines that define the synapse.

Next we define the synapses: Synapses(source, target, ...) means that we are defining a synaptic model that goes from source to target. In this case, the source and target are both the same, the group G. The syntax on_pre='v_post += 0.2' means that when a spike occurs in the presynaptic neuron (hence on_pre) it causes an instantaneous change to happen v_post += 0.2. The _post means that the value of v referred to is the post-synaptic value, and it is increased by 0.2. So in total, what this model says is that whenever two neurons in G are connected by a synapse, when the source neuron fires a spike the target neuron will have its value of v increased by 0.2.

However, at this point we have only defined the synapse model, we haven’t actually created any synapses. The next line S.connect(i=0, j=1) creates a synapse from neuron 0 to neuron 1.

Adding a weight

In the previous section, we hard coded the weight of the synapse to be the value 0.2, but often we would like to allow this to be different for different synapses. We do that by introducing synapse equations.

start_scope()

eqs = '''
dv/dt = (I-v)/tau : 1
I : 1
tau : second
'''
G = NeuronGroup(3, eqs, threshold='v>1', reset='v = 0', method='linear')
G.I = [2, 0, 0]
G.tau = [10, 100, 100]*ms

# Comment these two lines out to see what happens without Synapses
S = Synapses(G, G, 'w : 1', on_pre='v_post += w')
S.connect(i=0, j=[1, 2])
S.w = 'j*0.2'

M = StateMonitor(G, 'v', record=True)

run(50*ms)

plot(M.t/ms, M.v[0], '-b', label='Neuron 0')
plot(M.t/ms, M.v[1], '-g', lw=2, label='Neuron 1')
plot(M.t/ms, M.v[2], '-r', lw=2, label='Neuron 2')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
[Figure: _images/2-intro-to-brian-synapses_image_8_0.png]

This example behaves very similarly to the previous example, but now there’s a synaptic weight variable w. The string 'w : 1' is an equation string, precisely the same as for neurons, that defines a single dimensionless parameter w. We changed the behaviour on a spike to on_pre='v_post += w' now, so that each synapse can behave differently depending on the value of w. To illustrate this, we’ve made a third neuron which behaves precisely the same as the second neuron, and connected neuron 0 to both neurons 1 and 2. We’ve also set the weights via S.w = 'j*0.2'. When i and j occur in the context of synapses, i refers to the source neuron index, and j to the target neuron index. So this will give a synaptic connection from 0 to 1 with weight 0.2=0.2*1 and from 0 to 2 with weight 0.4=0.2*2.

Introducing a delay

So far, the synapses have been instantaneous, but we can also make them act with a certain delay.

start_scope()

eqs = '''
dv/dt = (I-v)/tau : 1
I : 1
tau : second
'''
G = NeuronGroup(3, eqs, threshold='v>1', reset='v = 0', method='linear')
G.I = [2, 0, 0]
G.tau = [10, 100, 100]*ms

S = Synapses(G, G, 'w : 1', on_pre='v_post += w')
S.connect(i=0, j=[1, 2])
S.w = 'j*0.2'
S.delay = 'j*2*ms'

M = StateMonitor(G, 'v', record=True)

run(50*ms)

plot(M.t/ms, M.v[0], '-b', label='Neuron 0')
plot(M.t/ms, M.v[1], '-g', lw=2, label='Neuron 1')
plot(M.t/ms, M.v[2], '-r', lw=2, label='Neuron 2')
xlabel('Time (ms)')
ylabel('v')
legend(loc='best');
[Figure: _images/2-intro-to-brian-synapses_image_11_0.png]

As you can see, that’s as simple as adding a line S.delay = 'j*2*ms' so that the synapse from 0 to 1 has a delay of 2 ms, and from 0 to 2 has a delay of 4 ms.

More complex connectivity

So far, we specified the synaptic connectivity explicitly, but for larger networks this isn’t usually possible. For that, we usually want to specify some condition.

start_scope()

N = 10
G = NeuronGroup(N, 'v:1')
S = Synapses(G, G)
S.connect(condition='i!=j', p=0.2)

Here we’ve created a dummy neuron group of N neurons and a dummy synapse model that doesn’t actually do anything, just to demonstrate the connectivity. The line S.connect(condition='i!=j', p=0.2) will connect all pairs of neurons i and j with probability 0.2 as long as the condition i!=j holds. So, how can we see that connectivity? Here’s a little function that will let us visualise it.

def visualise_connectivity(S):
    Ns = len(S.source)
    Nt = len(S.target)
    figure(figsize=(10, 4))
    subplot(121)
    plot(zeros(Ns), arange(Ns), 'ok', ms=10)
    plot(ones(Nt), arange(Nt), 'ok', ms=10)
    for i, j in zip(S.i, S.j):
        plot([0, 1], [i, j], '-k')
    xticks([0, 1], ['Source', 'Target'])
    ylabel('Neuron index')
    xlim(-0.1, 1.1)
    ylim(-1, max(Ns, Nt))
    subplot(122)
    plot(S.i, S.j, 'ok')
    xlim(-1, Ns)
    ylim(-1, Nt)
    xlabel('Source neuron index')
    ylabel('Target neuron index')

visualise_connectivity(S)
[Figure: _images/2-intro-to-brian-synapses_image_16_0.png]

There are two plots here. On the left hand side, you see a vertical line of circles indicating source neurons on the left, a vertical line of circles indicating target neurons on the right, and a line between a source and a target neuron if there is a synapse between them. On the right hand side is another way of visualising the same thing. Here each black dot is a synapse, with x value the source neuron index, and y value the target neuron index.

Let’s see how these figures change as we change the probability of a connection:

start_scope()

N = 10
G = NeuronGroup(N, 'v:1')

for p in [0.1, 0.5, 1.0]:
    S = Synapses(G, G)
    S.connect(condition='i!=j', p=p)
    visualise_connectivity(S)
    suptitle('p = '+str(p))
[Figures: _images/2-intro-to-brian-synapses_image_18_0.png, _images/2-intro-to-brian-synapses_image_18_1.png, _images/2-intro-to-brian-synapses_image_18_2.png]

And let’s see what another connectivity condition looks like. This one will only connect nearby neurons, i.e. those whose indices differ by less than 4.

start_scope()

N = 10
G = NeuronGroup(N, 'v:1')

S = Synapses(G, G)
S.connect(condition='abs(i-j)<4 and i!=j')
visualise_connectivity(S)
[Figure: _images/2-intro-to-brian-synapses_image_20_0.png]

Try using that cell to see what other connectivity conditions look like.

You can also use the generator syntax to create connections like this more efficiently. In small examples like this, it doesn’t matter, but for large numbers of neurons it can be much more efficient to specify directly which neurons should be connected than to specify just a condition. Note that the following example uses skip_if_invalid to avoid errors at the boundaries (e.g. do not try to connect the neuron with index 1 to a neuron with index -2).

start_scope()

N = 10
G = NeuronGroup(N, 'v:1')

S = Synapses(G, G)
S.connect(j='k for k in range(i-3, i+4) if i!=k', skip_if_invalid=True)
visualise_connectivity(S)
[Figure: _images/2-intro-to-brian-synapses_image_23_0.png]

If each source neuron is connected to precisely one target neuron, there is a special syntax that is extremely efficient. For example, 1-to-1 connectivity looks like this:

start_scope()

N = 10
G = NeuronGroup(N, 'v:1')

S = Synapses(G, G)
S.connect(j='i')
visualise_connectivity(S)
[Figure: _images/2-intro-to-brian-synapses_image_25_0.png]

You can also do things like specifying the value of weights with a string. Let’s see an example where we assign each neuron a spatial location and have a distance-dependent connectivity function. We visualise the weight of a synapse by the size of the marker.

start_scope()

N = 30
neuron_spacing = 50*umetre
width = N/4.0*neuron_spacing

# Neuron has one variable x, its position
G = NeuronGroup(N, 'x : metre')
G.x = 'i*neuron_spacing'

# All synapses are connected (excluding self-connections)
S = Synapses(G, G, 'w : 1')
S.connect(condition='i!=j')
# Weight varies with distance
S.w = 'exp(-(x_pre-x_post)**2/(2*width**2))'

scatter(G.x[S.i]/um, G.x[S.j]/um, S.w*20)
xlabel('Source neuron position (um)')
ylabel('Target neuron position (um)');
[Figure: _images/2-intro-to-brian-synapses_image_27_0.png]

Now try changing that function and seeing how the plot changes.

More complex synapse models: STDP

Brian’s synapse framework is very general and can do things like short-term plasticity (STP) or spike-timing dependent plasticity (STDP). Let’s see how that works for STDP.

STDP is normally defined by an equation something like this:

\[\Delta w = \sum_{t_{pre}} \sum_{t_{post}} W(t_{post}-t_{pre})\]

That is, the change in synaptic weight w is the sum over all presynaptic spike times \(t_{pre}\) and postsynaptic spike times \(t_{post}\) of some function \(W\) of the difference in these spike times. A commonly used function \(W\) is:

\[\begin{split}W(\Delta t) = \begin{cases} A_{pre} e^{-\Delta t/\tau_{pre}} & \Delta t>0 \\ A_{post} e^{\Delta t/\tau_{post}} & \Delta t<0 \end{cases}\end{split}\]

This function looks like this:

tau_pre = tau_post = 20*ms
A_pre = 0.01
A_post = -A_pre*1.05
delta_t = linspace(-50, 50, 100)*ms
W = where(delta_t>0, A_pre*exp(-delta_t/tau_pre), A_post*exp(delta_t/tau_post))
plot(delta_t/ms, W)
xlabel(r'$\Delta t$ (ms)')
ylabel('W')
ylim(-A_post, A_post)
axhline(0, ls='-', c='k');
[Figure: _images/2-intro-to-brian-synapses_image_29_0.png]

Simulating it directly using this equation though would be very inefficient, because we would have to sum over all pairs of spikes. That would also be physiologically unrealistic because the neuron cannot remember all its previous spike times. It turns out there is a more efficient and physiologically more plausible way to get the same effect.

We define two new variables \(a_{pre}\) and \(a_{post}\) which are “traces” of pre- and post-synaptic activity, governed by the differential equations:

\[\begin{split}\begin{eqnarray} \tau_{pre}\frac{\mathrm{d}}{\mathrm{d}t} a_{pre} &=& -a_{pre}\\ \tau_{post}\frac{\mathrm{d}}{\mathrm{d}t} a_{post} &=& -a_{post}\\ \end{eqnarray}\end{split}\]

When a presynaptic spike occurs, the presynaptic trace is updated and the weight is modified according to the rule:

\[\begin{split}\begin{eqnarray} a_{pre} &\rightarrow& a_{pre}+A_{pre}\\ w &\rightarrow& w+a_{post} \end{eqnarray}\end{split}\]

When a postsynaptic spike occurs:

\[\begin{split}\begin{eqnarray} a_{post} &\rightarrow& a_{post}+A_{post}\\ w &\rightarrow& w+a_{pre} \end{eqnarray}\end{split}\]

To see that this formulation is equivalent, you just have to check that the equations sum linearly, and consider two cases: what happens if the presynaptic spike occurs before the postsynaptic spike, and vice versa. Try drawing a picture of it.

Now that we have a formulation that relies only on differential equations and spike events, we can turn that into Brian code.

start_scope()

taupre = taupost = 20*ms
wmax = 0.01
Apre = 0.01
Apost = -Apre*taupre/taupost*1.05

G = NeuronGroup(1, 'v:1', threshold='v>1')

S = Synapses(G, G,
             '''
             w : 1
             dapre/dt = -apre/taupre : 1 (event-driven)
             dapost/dt = -apost/taupost : 1 (event-driven)
             ''',
             on_pre='''
             v_post += w
             apre += Apre
             w = clip(w+apost, 0, wmax)
             ''',
             on_post='''
             apost += Apost
             w = clip(w+apre, 0, wmax)
             ''')

There are a few things to see there. Firstly, when defining the synapses we’ve given a more complicated multi-line string defining three synaptic variables (w, apre and apost). We’ve also got a new bit of syntax there, (event-driven) after the definitions of apre and apost. What this means is that although these two variables evolve continuously over time, Brian should only update them at the time of an event (a spike). This is because we don’t need the values of apre and apost except at spike times, and it is more efficient to only update them when needed.

Next we have an on_pre=... argument. The first line is v_post += w: this is the line that actually applies the synaptic weight to the target neuron. The second line is apre += Apre, which encodes the rule above. In the third line, we’re also encoding the rule above, but we’ve added one extra feature: we’ve clamped the synaptic weights between a minimum of 0 and a maximum of wmax so that the weights can’t get too large or negative. The function clip(x, low, high) does this.

Finally, we have an on_post=... argument. This gives the statements to execute when a post-synaptic neuron fires. Note that we do not modify v in this case, only the synaptic variables.

Now let’s see how all the variables behave when a presynaptic spike arrives some time before a postsynaptic spike.

start_scope()

taupre = taupost = 20*ms
wmax = 0.01
Apre = 0.01
Apost = -Apre*taupre/taupost*1.05

G = NeuronGroup(2, 'v:1', threshold='t>(1+i)*10*ms', refractory=100*ms)

S = Synapses(G, G,
             '''
             w : 1
             dapre/dt = -apre/taupre : 1 (clock-driven)
             dapost/dt = -apost/taupost : 1 (clock-driven)
             ''',
             on_pre='''
             v_post += w
             apre += Apre
             w = clip(w+apost, 0, wmax)
             ''',
             on_post='''
             apost += Apost
             w = clip(w+apre, 0, wmax)
             ''', method='linear')
S.connect(i=0, j=1)
M = StateMonitor(S, ['w', 'apre', 'apost'], record=True)

run(30*ms)

figure(figsize=(4, 8))
subplot(211)
plot(M.t/ms, M.apre[0], label='apre')
plot(M.t/ms, M.apost[0], label='apost')
legend(loc='best')
subplot(212)
plot(M.t/ms, M.w[0], label='w')
legend(loc='best')
xlabel('Time (ms)');
[Figure: _images/2-intro-to-brian-synapses_image_33_0.png]

A couple of things to note here. First of all, we’ve used a trick to make neuron 0 fire a spike at time 10 ms, and neuron 1 at time 20 ms. Can you see how that works?

Secondly, we’ve replaced the (event-driven) by (clock-driven) so you can see how apre and apost evolve over time. Try reverting this change and see what happens.

Try changing the times of the spikes to see what happens.

Finally, let’s verify that this formulation is equivalent to the original one.

start_scope()

taupre = taupost = 20*ms
Apre = 0.01
Apost = -Apre*taupre/taupost*1.05
tmax = 50*ms
N = 100

# Presynaptic neurons G spike at times from 0 to tmax
# Postsynaptic neurons H spike at times from tmax to 0
# So difference in spike times will vary from -tmax to +tmax
G = NeuronGroup(N, 'tspike:second', threshold='t>tspike', refractory=100*ms)
H = NeuronGroup(N, 'tspike:second', threshold='t>tspike', refractory=100*ms)
G.tspike = 'i*tmax/(N-1)'
H.tspike = '(N-1-i)*tmax/(N-1)'

S = Synapses(G, H,
             '''
             w : 1
             dapre/dt = -apre/taupre : 1 (event-driven)
             dapost/dt = -apost/taupost : 1 (event-driven)
             ''',
             on_pre='''
             apre += Apre
             w = w+apost
             ''',
             on_post='''
             apost += Apost
             w = w+apre
             ''')
S.connect(j='i')

run(tmax+1*ms)

plot((H.tspike-G.tspike)/ms, S.w)
xlabel(r'$\Delta t$ (ms)')
ylabel(r'$\Delta w$')
ylim(-Apost, Apost)
axhline(0, ls='-', c='k');
[Figure: _images/2-intro-to-brian-synapses_image_35_0.png]

Can you see how this works?

End of tutorial


User’s guide

Importing Brian

After installation, Brian is available in the brian2 package. By doing a wildcard import from this package, i.e.:

from brian2 import *

you will not only get access to the brian2 classes and functions, but also to everything in the pylab package, which includes the plotting functions from matplotlib and everything included in numpy/scipy (e.g. functions such as arange, linspace, etc.).

The following topics are not essential for beginners.


Precise control over importing

If you want to use a wildcard import from Brian, but don’t want to import all the additional symbols provided by pylab, you can use:

from brian2.only import *

Note that whenever you use something different from the most general from brian2 import * statement, you should be aware that Brian overwrites some numpy functions with their unit-aware equivalents (see Units). If you combine multiple wildcard imports, the Brian import should therefore be the last import. Similarly, you should not import and call overwritten numpy functions directly, e.g. by using import numpy as np followed by np.sin since this will not use the unit-aware versions. To make this easier, Brian provides a brian2.numpy_ package that provides access to everything in numpy but overwrites certain functions. If you prefer to use prefixed names, the recommended way of doing the imports is therefore:

import brian2.numpy_ as np
import brian2.only as br2

Note that it is safe to use e.g. np.sin and numpy.sin after a from brian2 import *.

Dependency checks

Brian will check the dependency versions during import and raise an error for an outdated dependency. An outdated dependency does not necessarily mean that Brian cannot be run with it, it only means that Brian is untested on that version. If you want to force Brian to run despite the outdated dependency, set the core.outdated_dependency_error preference to False. Note that this cannot be done in a script, since you do not have access to the preferences before importing brian2. See Preferences for instructions on how to set preferences in a file.
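As a rough sketch of what such a setting looks like (see the Preferences documentation for the exact file names and locations that Brian searches):

# In a Brian preference file (not in your simulation script):
core.outdated_dependency_error = False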

Physical units

Brian includes a system for defining physical units. These are defined by their standard SI unit names: amp, kilogram, second, metre/meter, mole and the derived units coulomb, farad, gram/gramme, hertz, joule, pascal, ohm, siemens, volt, watt, together with prefixed versions (e.g. msiemens = 0.001*siemens) using the prefixes p, n, u, m, k, M, G, T (two exceptions: kilogram is not imported with any prefixes; metre and meter are additionally defined with the “centi” prefix, i.e. cmetre/cmeter). In addition, a couple of useful standard abbreviations like “cm” (instead of cmetre/cmeter), “nS” (instead of nsiemens), “ms” (instead of msecond), “Hz” (instead of hertz), etc. are included.

Using units

You can generate a physical quantity by multiplying a scalar or vector value with its physical unit:

>>> tau = 20*ms
>>> print(tau)
20. ms
>>> rates = [10, 20, 30] * Hz
>>> print(rates)
[ 10.  20.  30.] Hz

Brian will check the consistency of operations on units and raise an error for dimensionality mismatches:

>>> tau += 1  # ms? second?  
Traceback (most recent call last):
...
DimensionMismatchError: Cannot calculate ... += 1, units do not match (units are second and 1).
>>> 3*kgram + 3*amp   
Traceback (most recent call last):
...
DimensionMismatchError: Cannot calculate 3. kg + 3. A, units do not match (units are kgramme and amp).

Most Brian functions will also complain about non-specified or incorrect units:

>>> G = NeuronGroup(10, 'dv/dt = -v/tau: volt', dt=0.5)   
Traceback (most recent call last):
...
DimensionMismatchError: Function "__init__" expected a quantity with unit second for argument "dt" but got 0.5 (unit is 1).

Numpy functions have been overwritten to correctly work with units (see the developer documentation for more details):

>>> print(mean(rates))
20. Hz
>>> print(rates.repeat(2))
[ 10.  10.  20.  20.  30.  30.] Hz

Removing units

There are various options to remove the units from a value (e.g. to use it with analysis functions that do not correctly work with units)

  • Divide the value by its unit (most of the time the recommended option because it is clear about the scale)
  • Transform it to a pure numpy array in the base unit by calling asarray() (no copy) or array (copy)
  • Directly get the unitless value of a state variable by appending an underscore to the name
>>> tau/ms
20.0
>>> asarray(rates)
array([ 10.,  20.,  30.])
>>> G = NeuronGroup(5, 'dv/dt = -v/tau: volt')
>>> print(G.v_[:])
[ 0.,  0.,  0.,  0.,  0.]

The following topics are not essential for beginners.


Importing units

Brian generates standard names for units, combining the unit name (e.g. “siemens”) with a prefix (e.g. “m”), and also generates squared and cubed versions by appending a number. For example, the units “msiemens”, “siemens2”, “usiemens3” are all predefined. You can import these units from the package brian2.units.allunits – accordingly, a from brian2.units.allunits import * will result in everything from Ylumen3 (cubed yotta lumen) to ymol (yocto mole) being imported.

A better choice is normally to do from brian2.units import * or to import everything with from brian2 import *; this imports only the base units amp, kilogram, second, metre/meter, mole and the derived units coulomb, farad, gram/gramme, hertz, joule, pascal, ohm, siemens, volt, watt, together with the prefixes p, n, u, m, k, M, G, T (two exceptions: kilogram is not imported with any prefixes; metre and meter are additionally defined with the “centi” prefix, i.e. cmetre/cmeter).

In addition a couple of useful standard abbreviations like “cm” (instead of cmetre/cmeter), “nS” (instead of nsiemens), “ms” (instead of msecond), “Hz” (instead of hertz), etc. are added (they can be individually imported from brian2.units.stdunits).

In-place operations on quantities

In-place operations on quantity arrays change the underlying array, in the same way as for standard numpy arrays. This means that any other variables referencing the same object will be affected as well:

>>> q = [1, 2] * mV
>>> r = q
>>> q += 1*mV
>>> q
array([ 2.,  3.]) * mvolt
>>> r
array([ 2.,  3.]) * mvolt

In contrast, scalar quantities will never change the underlying value but instead return a new value (in the same way as standard Python scalars):

>>> x = 1*mV
>>> y = x
>>> x *= 2
>>> x
2. * mvolt
>>> y
1. * mvolt

Models and neuron groups

Model equations

The core of every simulation is a NeuronGroup, a group of neurons that share the same equations defining their properties. The minimum NeuronGroup specification contains the number of neurons and the model description in the form of equations:

G = NeuronGroup(10, 'dv/dt = -v/(10*ms) : volt')

This defines a group of 10 leaky integrators. The model description can be directly given as a (possibly multi-line) string as above, or as an Equations object. For more details on the form of equations, see Equations. Note that model descriptions can make reference to physical units, but also to scalar variables declared outside of the model description itself:

tau = 10*ms
G = NeuronGroup(10, 'dv/dt = -v/tau : volt')

If a variable should be taken as a parameter of the neurons, i.e. if it should be possible to vary its value across neurons, it has to be declared as part of the model description:

G = NeuronGroup(10, '''dv/dt = -v/tau : volt
                       tau : second''')

To make complex model descriptions more readable, named subexpressions can be used:

G = NeuronGroup(10, '''dv/dt = I_leak / Cm : volt
                       I_leak = g_L*(E_L - v) : amp''')

Noise

In addition to ordinary differential equations, Brian allows you to introduce random noise by specifying a stochastic differential equation. Brian uses the physicists’ notation from the Langevin equation, representing the “noise” as a term \(\xi(t)\), rather than the mathematicians’ stochastic differential \(\mathrm{d}W_t\). The following is an example of the Ornstein-Uhlenbeck process that is often used to model a leaky integrate-and-fire neuron with a stochastic current:

G = NeuronGroup(10, 'dv/dt = -v/tau + sigma*xi*tau**-0.5 : volt')

You can start by thinking of xi as just a Gaussian random variable with mean 0 and standard deviation 1. However, it scales in an unusual way with time and this gives it units of 1/sqrt(second). You don’t necessarily need to understand why this is, but it is possible to get a reasonably simple intuition for it by thinking about numerical integration: see below.

Threshold and reset

To emit spikes, neurons need a threshold. Threshold and reset are given as strings in the NeuronGroup constructor:

tau = 10*ms
G = NeuronGroup(10, 'dv/dt = -v/tau : volt', threshold='v > -50*mV',
                reset='v = -70*mV')

Whenever the threshold condition is fulfilled, the reset statements will be executed. Again, both threshold and reset can refer to physical units, external variables and parameters, in the same way as model descriptions:

v_r = -70*mV  # reset potential
G = NeuronGroup(10, '''dv/dt = -v/tau : volt
                       v_th : volt  # neuron-specific threshold''',
                threshold='v > v_th', reset='v = v_r')

You can also create non-spike events. See Custom events for more details.

Refractoriness

To make a neuron non-excitable for a certain time period after a spike, the refractory keyword can be used:

G = NeuronGroup(10, 'dv/dt = -v/tau : volt', threshold='v > -50*mV',
                reset='v = -70*mV', refractory=5*ms)

This will not allow any threshold crossing for a neuron for 5ms after a spike. The refractory keyword allows for more flexible refractoriness specifications, see Refractoriness for details.

State variables

Differential equations and parameters in model descriptions are stored as state variables of the NeuronGroup. They can be accessed and set as an attribute of the group. To get the values without physical units (e.g. for analysing data with external tools), use an underscore after the name:

>>> G = NeuronGroup(10, '''dv/dt = -v/tau : volt
...                        tau : second''')
>>> G.v = -70*mV
>>> G.v
<neurongroup.v: array([-70., -70., -70., -70., -70., -70., -70., -70., -70., -70.]) * mvolt>
>>> G.v_  # values without units
<neurongroup.v_: array([-0.07, -0.07, -0.07, -0.07, -0.07, -0.07, -0.07, -0.07, -0.07, -0.07])>

The value of state variables can also be set using string expressions that can refer to units and external variables, other state variables, mathematical functions, and a special variable i, the index of the neuron:

>>> G.tau = '5*ms + (1.0*i/N)*5*ms'
>>> G.tau
<neurongroup.tau: array([ 5. ,  5.5,  6. ,  6.5,  7. ,  7.5,  8. ,  8.5,  9. ,  9.5]) * msecond>

You can also set the value only if a condition holds, for example:

>>> G.v['tau>7.25*ms'] = -60*mV
>>> G.v
<neurongroup.v: array([-70., -70., -70., -70., -70., -60., -60., -60., -60., -60.]) * mvolt>

Subgroups

It is often useful to refer to a subset of neurons; this can be achieved using Python’s slicing syntax:

G = NeuronGroup(10, '''dv/dt = -v/tau : volt
                       tau : second''',
                threshold='v > -50*mV',
                reset='v = -70*mV')
# Create subgroups
G1 = G[:5]
G2 = G[5:]

# This will set the values in the main group, subgroups are just "views"
G1.tau = 10*ms
G2.tau = 20*ms

Here G1 refers to the first 5 neurons in G, and G2 to the second 5 neurons. In general, G[i:j] refers to the neurons with indices from i to j-1, as is standard in Python. Subgroups can be used in most places where regular groups are used, e.g. their state variables or spiking activity can be recorded using monitors, they can be connected via Synapses, etc. In such situations, indices (e.g. the indices of the neurons to record from in a StateMonitor) are relative to the subgroup, not to the main group.
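For example (a small sketch, reusing G2 from above), a monitor attached to a subgroup uses subgroup-relative indices:

# record=0 refers to the first neuron of the subgroup G2,
# i.e. the neuron with index 5 in the full group G
M = StateMonitor(G2, 'v', record=0)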

The following topics are not essential for beginners.


Shared variables

Sometimes it can also be useful to introduce shared variables or subexpressions, i.e. variables that have a common value for all neurons. In contrast to external variables (such as Cm above), such variables can change during a run, e.g. by using run_regularly(). This can, for example, be used for an external stimulus that changes in the course of a run:

G = NeuronGroup(10, '''shared_input : volt (shared)
                       dv/dt = (-v + shared_input)/tau : volt
                       tau : second''')

Note that there are several restrictions around the use of shared variables: they cannot be written to in contexts where statements apply only to a subset of neurons (e.g. reset statements, see below). If a code block mixes statements writing to shared and vector variables, then the shared statements have to come first.
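As a minimal sketch (reusing the group G defined above), such a shared stimulus could be updated during the run with run_regularly():

# A single value is computed per time step and used by all neurons,
# since shared_input is a shared variable
G.run_regularly('shared_input = 2.5*mV*sin(2*pi*10*Hz*t)', dt=1*ms)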

By default, subexpressions are re-evaluated whenever they are used, i.e. using a subexpression is completely equivalent to substituting it. Sometimes it is useful to instead only evaluate a subexpression once and then use this value for the rest of the time step. This can be achieved by using the (constant over dt) flag. This flag is mandatory for subexpressions that refer to stateful functions like rand(); notably, it allows them to be recorded with a StateMonitor – otherwise the monitor would record a different instance of the random number than the one that was used in the equations.
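For example (a sketch), here the random input is drawn once per time step and the monitor records exactly the values that enter the integration:

tau = 10*ms
G = NeuronGroup(10, '''noisy_input = rand() : 1 (constant over dt)
                       dv/dt = (noisy_input - v)/tau : 1''')
# Records the same per-time-step values that are used in the equations
M = StateMonitor(G, 'noisy_input', record=True)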

For shared variables, setting by string expressions can only refer to shared values:

>>> G.shared_input = '(4.0/N)*mV'
>>> G.shared_input
<neurongroup.shared_input: 0.4 * mvolt>

Storing state variables

Sometimes it can be convenient to access multiple state variables at once, e.g. to set initial values from a dictionary of values or to store all the values of a group on disk. This can be done with the get_states() and set_states() methods:

>>> group = NeuronGroup(5, '''dv/dt = -v/tau : 1
...                           tau : second''')
>>> initial_values = {'v': [0, 1, 2, 3, 4],
...                   'tau': [10, 20, 10, 20, 10]*ms}
>>> group.set_states(initial_values)
>>> group.v[:]
array([ 0.,  1.,  2.,  3.,  4.])
>>> group.tau[:]
array([ 10.,  20.,  10.,  20.,  10.]) * msecond
>>> states = group.get_states()
>>> states['v']
array([ 0.,  1.,  2.,  3.,  4.])

The data (without physical units) can also be exported/imported to/from Pandas data frames (needs an installation of pandas):

>>> df = group.get_states(units=False, format='pandas')
>>> df
   N      dt  i    t   tau    v
0  5  0.0001  0  0.0  0.01  0.0
1  5  0.0001  1  0.0  0.02  1.0
2  5  0.0001  2  0.0  0.01  2.0
3  5  0.0001  3  0.0  0.02  3.0
4  5  0.0001  4  0.0  0.01  4.0
>>> df['tau']
0    0.01
1    0.02
2    0.01
3    0.02
4    0.01
Name: tau, dtype: float64
>>> df['tau'] *= 2
>>> group.set_states(df[['tau']], units=False, format='pandas')
>>> group.tau
<neurongroup.tau: array([ 20.,  40.,  20.,  40.,  20.]) * msecond>

Linked variables

A NeuronGroup can define parameters that are not stored in this group, but are instead a reference to a state variable in another group. For this, a group defines a parameter as linked and then uses linked_var() to specify the linking. This can for example be useful to model shared noise between cells:

inp = NeuronGroup(1, 'dnoise/dt = -noise/tau + tau**-0.5*xi : 1')

neurons = NeuronGroup(100, '''noise : 1 (linked)
                              dv/dt = (-v + noise_strength*noise)/tau : volt''')
neurons.noise = linked_var(inp, 'noise')

If the two groups have the same size, the linking will be done in a 1-to-1 fashion. If the source group has a size of one (as in the above example) or if the source parameter is a shared variable, then the linking will be done as 1-to-all. In all other cases, you have to specify the indices to use for the linking explicitly:

# two inputs with different phases
inp = NeuronGroup(2, '''phase : 1
                        dx/dt = 1*mV/ms*sin(2*pi*100*Hz*t-phase) : volt''')
inp.phase = [0, pi/2]

neurons = NeuronGroup(100, '''inp : volt (linked)
                              dv/dt = (-v + inp) / tau : volt''')
# Half of the cells get the first input, other half gets the second
neurons.inp = linked_var(inp, 'x', index=repeat([0, 1], 50))

Time scaling of noise

Suppose we just had the differential equation

\(dx/dt=\xi\)

To solve this numerically, we could compute

\(x(t+\mathrm{d}t)=x(t)+\xi_1\)

where \(\xi_1\) is a normally distributed random number with mean 0 and standard deviation 1. However, what happens if we change the time step? Suppose we used a value of \(\mathrm{d}t/2\) instead of \(\mathrm{d}t\). Now, we compute

\(x(t+\mathrm{d}t)=x(t+\mathrm{d}t/2)+\xi_1=x(t)+\xi_2+\xi_1\)

The mean value of \(x(t+\mathrm{d}t)\) is 0 in both cases, but the standard deviations are different. The first method \(x(t+\mathrm{d}t)=x(t)+\xi_1\) gives \(x(t+\mathrm{d}t)\) a standard deviation of 1, whereas the second method \(x(t+\mathrm{d}t)=x(t+\mathrm{d}t/2)+\xi_1=x(t)+\xi_2+\xi_1\) gives \(x(t+\mathrm{d}t)\) a variance of 1+1=2 and therefore a standard deviation of \(\sqrt{2}\).

In order to solve this problem, we use the rule \(x(t+\mathrm{d}t)=x(t)+\sqrt{\mathrm{d}t}\xi_1\), which makes the mean and standard deviation of the value at time \(t\) independent of \(\mathrm{d}t\). For this to make sense dimensionally, \(\xi\) must have units of 1/sqrt(second).

For further details, refer to a textbook on stochastic differential equations.
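The effect of the \(\sqrt{\mathrm{d}t}\) rule can also be checked numerically with plain numpy (a quick sketch, independent of Brian):

import numpy as np

def integrate(dt, T=1.0, trials=10000):
    # Euler-Maruyama for dx/dt = xi, using x <- x + sqrt(dt)*xi
    x = np.zeros(trials)
    for _ in range(int(round(T/dt))):
        x += np.sqrt(dt)*np.random.randn(trials)
    return x

for dt in [0.1, 0.01, 0.001]:
    print('dt = %g: std of x(T) = %.3f' % (dt, integrate(dt).std()))
# All three values are close to sqrt(T) = 1, independent of dt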

Numerical integration

By default, Brian chooses an integration method automatically, trying to solve the equations exactly first (for linear equations) and then resorting to numerical algorithms. It will also take care of integrating stochastic differential equations appropriately.

Note that in some cases, the automatic choice of integration method will not be appropriate, because of a choice of parameters that couldn’t be determined in advance. Typically, you will then get nan (not a number) values in the results, or large oscillations. When this happens, Brian will generate a warning to let you know, but will not raise an error.

Method choice

You will get an INFO message telling you which integration method Brian decided to use, together with information about how much time it took to apply the integration method to your equations. If other methods have been tried but were not applicable, you will also see the time it took to try out those other methods. In some cases, checking other methods (in particular the 'linear' method which attempts to solve the equations analytically) can take a considerable amount of time – to avoid wasting this time, you can always choose the integration method manually (see below). You can also suppress the message by raising the log level or by explicitly suppressing 'method_choice' log messages – for details, see Logging.

If you prefer to choose an integration algorithm yourself, you can do so using the method keyword for NeuronGroup, Synapses, or SpatialNeuron (see the usage example after the list below). The complete list of available methods is the following:

  • 'linear': exact integration for linear equations
  • 'independent': exact integration for a system of independent equations, where all the equations can be analytically solved independently
  • 'exponential_euler': exponential Euler integration for conditionally linear equations
  • 'euler': forward Euler integration (for additive stochastic differential equations using the Euler-Maruyama method)
  • 'rk2': second order Runge-Kutta method (midpoint method)
  • 'rk4': classical Runge-Kutta method (RK4)
  • 'heun': stochastic Heun method for solving Stratonovich stochastic differential equations with non-diagonal multiplicative noise.
  • 'milstein': derivative-free Milstein method for solving stochastic differential equations with diagonal multiplicative noise
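For example, to force a specific method instead of relying on the automatic choice (a minimal sketch with an arbitrary nonlinear example equation):

tau = 10*ms
G = NeuronGroup(10, 'dv/dt = -v**3/(tau*volt**2) : volt', method='rk4')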

The following topics are not essential for beginners.


Technical notes

Each class defines its own list of algorithms it tries to apply: NeuronGroup and Synapses will use the first suitable method out of the methods 'linear', 'euler' and 'heun', while SpatialNeuron objects will use 'linear', 'exponential_euler', 'rk2' or 'heun'.

You can also define your own numerical integrators, see State update for details.

Equations

Equation strings

Equations are used both in NeuronGroup and Synapses to:

  • define state variables
  • define continuous-updates on these variables, through differential equations

Equations are defined by multiline strings.

An equation string is a set of single-line equations, each taking one of the following forms:

  1. dx/dt = f : unit (differential equation)
  2. x = f : unit (subexpression)
  3. x : unit (parameter)

Each equation may be spread out over multiple lines to improve formatting. Comments using # may also be included. Subunits are not allowed, i.e., one must write volt, not mV. This is to make it clear that the values are internally always saved in the basic units, so no confusion can arise when getting the values out of a NeuronGroup and discarding the units. Compound units are of course allowed as well (e.g. farad/meter**2). There are also three special “units” that can be used: 1 denotes a dimensionless floating point variable, boolean and integer denote dimensionless variables of the respective kind.
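For example, a single model description can combine all three forms (a sketch; g_L, E_L, Cm and I_input are arbitrary illustrative names, and note that the units are given in base units):

eqs = '''dv/dt = (I_leak + I_input)/Cm : volt   # differential equation
         I_leak = g_L*(E_L - v) : amp           # subexpression
         I_input : amp                          # parameter'''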

Some special variables are defined: t, dt (time) and xi (white noise). Variable names starting with an underscore and a couple of other names that have special meanings under certain circumstances (e.g. names ending in _pre or _post) are forbidden.

For stochastic equations with several xi values it is necessary to make clear whether they correspond to the same or different noise instantiations. To make this distinction, an arbitrary suffix can be used, e.g. using xi_1 several times refers to the same variable, while xi_2 (or xi_inh, xi_alpha, etc.) refers to another. An error will be raised if you use more than one plain xi. Note that noise is always independent across neurons; you can only work around this restriction by defining your noise variable as a shared parameter and updating it using a user-defined function (e.g. with run_regularly), or by creating a group that models the noise and linking to its variable (see Linked variables).
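For example (a sketch): distinct suffixes give independent noise sources, while reusing a suffix refers to one and the same noise instantiation:

# v and w are driven by two independent noise sources
eqs_independent = '''dv/dt = -v/tau + sigma*xi_1*tau**-0.5 : volt
                     dw/dt = -w/tau + sigma*xi_2*tau**-0.5 : volt'''
# v and w are driven by the same noise source
eqs_shared = '''dv/dt = -v/tau + sigma*xi_1*tau**-0.5 : volt
                dw/dt = -w/tau + sigma*xi_1*tau**-0.5 : volt'''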

External variables and functions

Equations defining neuronal or synaptic models can contain references to external parameters or functions. These references are looked up at the time that the simulation is run. If you don’t specify where to look them up, it will look in the Python local/global namespace (i.e. the block of code where you call run()). If you want to override this, you can specify an explicit “namespace”. This is a Python dictionary with keys being variable names as they appear in the equations, and values being the desired value of that variable. This namespace can be specified either at the creation of the group or when you call the run() function, using the namespace keyword argument.

The following three examples show the different ways of providing external variable values, all having the same effect in this case:

# Explicit argument to the NeuronGroup
G = NeuronGroup(1, 'dv/dt = -v / tau : 1', namespace={'tau': 10*ms})
net = Network(G)
net.run(10*ms)

# Explicit argument to the run function
G = NeuronGroup(1, 'dv/dt = -v / tau : 1')
net = Network(G)
net.run(10*ms, namespace={'tau': 10*ms})

# Implicit namespace from the context
G = NeuronGroup(1, 'dv/dt = -v / tau : 1')
net = Network(G)
tau = 10*ms
net.run(10*ms)

See Namespaces for more details.

The following topics are not essential for beginners.


Flags

A flag is a keyword in parentheses at the end of the line, which qualifies the equations. There are several keywords:

event-driven
this is only used in Synapses, and means that the differential equation should be updated only at the times of events. This implies that the equation is taken out of the continuous state update; instead, an event-based state update statement is generated and inserted into the event code (pre and post). This can only qualify differential equations of synapses. Currently, only one-dimensional linear equations can be handled (see below).
unless refractory
this means the variable is not updated during the refractory period. This can only qualify differential equations of neuron groups.
constant
this means the parameter will not be changed during a run. This allows optimizations in state updaters. This can only qualify parameters.
constant over dt
this means that the subexpression will be only evaluated once at the beginning of the time step. This can be useful to e.g. approximate a non-linear term as constant over a time step in order to use the linear numerical integration algorithm. It is also mandatory for subexpressions that refer to stateful functions like rand() to make sure that they are only evaluated once (otherwise e.g. recording the value with a StateMonitor would re-evaluate it and therefore not record the same values that are used in other places). This can only qualify subexpressions.
shared
this means that a parameter or subexpression is not neuron-/synapse-specific but rather a single value for the whole NeuronGroup or Synapses. A shared subexpression can only refer to other shared variables.
linked
this means that a parameter refers to a parameter in another NeuronGroup. See Linked variables for more details.

Multiple flags may be specified as follows:

dx/dt = f : unit (flag1,flag2)

List of special symbols

The following lists all of the special symbols that Brian uses in equations and code blocks, and their meanings.

dt
Time step width
i
Index of a neuron (NeuronGroup) or the pre-synaptic neuron of a synapse (Synapses)
j
Index of a post-synaptic neuron of a synapse
lastspike
Last time that the neuron spiked (for refractoriness)
lastupdate
Time of the last update of synaptic variables in event-driven equations.
N
Number of neurons (NeuronGroup) or synapses (Synapses). Use N_pre or N_post for the number of presynaptic or postsynaptic neurons in the context of Synapses.
not_refractory
Boolean variable that is normally true, and false if the neuron is currently in a refractory state
t
Current time
xi, xi_*
Stochastic differential in equations

Event-driven equations

Equations defined as event-driven are completely ignored in the state update. They are only defined as variables that can be externally accessed. There are additional constraints:

  • An event-driven variable cannot be used by any other equation that is not also event-driven.
  • An event-driven equation cannot depend on a differential equation that is not event-driven (directly, or indirectly through subexpressions). It can depend on a constant parameter.

Currently, automatic event-driven updates are only possible for one-dimensional linear equations, but this may be extended in the future.

Equation objects

The model definitions for NeuronGroup and Synapses can be simple strings or Equations objects. Such objects can be combined using the add operator:

eqs = Equations('dx/dt = (y-x)/tau : volt')
eqs += Equations('dy/dt = -y/tau: volt')

Equations objects allow for the specification of values in the strings, but do this by simple string replacement, e.g. you can do:

eqs = Equations('dx/dt = x/tau : volt', tau=10*ms)

but this is exactly equivalent to:

eqs = Equations('dx/dt = x/(10*ms) : volt')

The Equations object does some basic syntax checking and will raise an error if two equations defining the same variable are combined. It does not however do unit checking, checking for unknown identifiers or incorrect flags – all this will be done during the instantiation of a NeuronGroup or Synapses object.

Examples of Equation objects

Concatenating equations

>>> membrane_eqs = Equations('dv/dt = -(v + I)/ tau : volt')
>>> eqs1 = membrane_eqs + Equations('''I = sin(2*pi*freq*t) : volt
...                                    freq : Hz''')
>>> eqs2 = membrane_eqs + Equations('''I : volt''')
>>> print(eqs1)
I = sin(2*pi*freq*t)  : V
dv/dt = -(v + I)/ tau  : V
freq : Hz
>>> print(eqs2)
dv/dt = -(v + I)/ tau  : V
I : V

Substituting variable names

>>> general_equation = 'dg/dt = -g / tau : siemens'
>>> eqs_exc = Equations(general_equation, g='g_e', tau='tau_e')
>>> eqs_inh = Equations(general_equation, g='g_i', tau='tau_i')
>>> print(eqs_exc)
dg_e/dt = -g_e / tau_e  : S
>>> print(eqs_inh)
dg_i/dt = -g_i / tau_i  : S

Inserting values

>>> eqs = Equations('dv/dt = mu/tau + sigma/tau**.5*xi : volt',
...                  mu=-65*mV, sigma=3*mV, tau=10*ms)
>>> print(eqs)
dv/dt = (-65. * mvolt)/(10. * msecond) + (3. * mvolt)/(10. * msecond)**.5*xi  : V

Refractoriness

Brian allows you to model the absolute refractory period of a neuron in a flexible way. The definition of refractoriness consists of two components: the amount of time after a spike that a neuron is considered to be refractory, and what changes in the neuron during the refractoriness.

Defining the refractory period

The refractory period is specified by the refractory keyword in the NeuronGroup initializer. In the simplest case, this is simply a fixed time, valid for all neurons:

G = NeuronGroup(N, model='...', threshold='...', reset='...',
                refractory=2*ms)

Alternatively, it can be a string expression that evaluates to a time. This expression will be evaluated after every spike and allows for a changing refractory period. For example, the following will set the refractory period to a random duration between 1ms and 3ms after every spike:

G = NeuronGroup(N, model='...', threshold='...', reset='...',
                refractory='(1 + 2*rand())*ms')

In general, modelling a refractory period that varies across neurons involves declaring a state variable that stores the refractory period per neuron as a model parameter. The refractory expression can then refer to this parameter:

G = NeuronGroup(N, model='''...
                            refractory : second''', threshold='...',
                reset='...', refractory='refractory')
# Set the refractory period for each cell
G.refractory = ...

This state variable can also be a dynamic variable itself. For example, it can serve as an adaptation mechanism by increasing it after every spike and letting it relax back to a steady-state value between spikes:

refractory_0 = 2*ms
tau_refractory = 50*ms
G = NeuronGroup(N, model='''...
                            drefractory/dt = (refractory_0 - refractory) / tau_refractory : second''',
                threshold='...', refractory='refractory',
                reset='''...
                         refractory += 1*ms''')
G.refractory = refractory_0

In some cases, the condition for leaving the refractory period is not easily expressed as a certain time span. For example, in a Hodgkin-Huxley type model the threshold is only used for counting spikes and the refractoriness is used to prevent counting multiple spikes for a single threshold crossing (the threshold condition would evaluate to True for several time points). Here, when the neuron should leave the refractory period is more naturally expressed as a condition: the neuron should remain refractory for as long as it stays above the threshold. This can be achieved by using a string expression for the refractory keyword that evaluates to a boolean condition:

G = NeuronGroup(N, model='...', threshold='v > -20*mV',
                refractory='v >= -20*mV')

The refractory keyword should be read as “stay refractory as long as the condition remains true”. In fact, specifying a time span for the refractoriness will be automatically transformed into a logical expression using the current time t and the time of the last spike lastspike. Specifying refractory=2*ms is equivalent to specifying refractory='(t - lastspike) <= 2*ms'.

Defining model behaviour during refractoriness

The refractoriness definition as described above only has a single effect by itself: threshold crossings during the refractory period are ignored. In the following model, the variable v continues to update during the refractory period but it does not elicit a spike if it crosses the threshold:

G = NeuronGroup(N, 'dv/dt = -v / tau : 1',
                threshold='v > 1', reset='v=0',
                refractory=2*ms)

Brian also supports a second form of refractoriness: one or several state variables can be clamped during the refractory period. To model this kind of behaviour, mark the variables that should stop being updated during refractoriness with the (unless refractory) flag:

G = NeuronGroup(N, '''dv/dt = -(v + w)/ tau_v : 1 (unless refractory)
                      dw/dt = -w / tau_w : 1''',
                threshold='v > 1', reset='v=0; w+=0.1', refractory=2*ms)

In the above model, the v variable is clamped at 0 for 2ms after a spike but the adaptation variable w continues to update during this time. In addition, a variable of a neuron that is in its refractory period is read-only: incoming synapses or other code will have no effect on the value of v until it leaves its refractory period.
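
To see the clamping in action, here is a minimal runnable sketch (the constant drive of 2 and the time constants are arbitrary choices for illustration):

from brian2 import *

tau_v = 10*ms
tau_w = 100*ms
G = NeuronGroup(1, '''dv/dt = (2 - v - w)/tau_v : 1 (unless refractory)
                      dw/dt = -w/tau_w : 1''',
                threshold='v > 1', reset='v = 0; w += 0.1',
                refractory=2*ms, method='euler')
mon = StateMonitor(G, ['v', 'w'], record=0)
run(50*ms)
# mon.v[0] stays at its reset value for 2 ms after each spike,
# while mon.w[0] continues to evolve during that time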

The following topics are not essential for beginners.


Arbitrary refractoriness

In fact, arbitrary behaviours can be defined using Brian’s refractoriness mechanism.

Internally, a NeuronGroup with refractoriness has a boolean variable not_refractory added to the equations, and this is used to implement the refractoriness behaviour. Specifically, the threshold condition is replaced by threshold and not_refractory and differential equations that are marked as (unless refractory) are multiplied by int(not_refractory) (so that they have the value 0 when the neuron is refractory).

This not_refractory variable is also available to the user to define more sophisticated refractoriness behaviour. For example, the following code updates the w variable with a different time constant during refractoriness:

G = NeuronGroup(N, '''dv/dt = -(v + w)/ tau_v : 1 (unless refractory)
                      dw/dt = (-w / tau_active)*int(not_refractory) + (-w / tau_ref)*(1 - int(not_refractory)) : 1''',
                threshold='v > 1', reset='v=0; w+=0.1', refractory=2*ms)

Synapses

Defining synaptic models

The simplest synapse (adding a fixed amount to the target membrane potential on every spike) is described as follows:

w = 1*mV
S = Synapses(P, Q, on_pre='v += w')

This defines a set of synapses between NeuronGroup P and NeuronGroup Q. If the target group is not specified, it is identical to the source group by default. The on_pre keyword defines what happens when a presynaptic spike arrives at a synapse. In this case, the constant w is added to variable v. Because v is not defined as a synaptic variable, it is assumed by default to be a postsynaptic variable, defined in the target NeuronGroup Q. Note that this does not create any synapses yet (see Creating synapses), it only defines the synaptic model.

To define more complex models, models can be described as string equations, similar to the models specified in NeuronGroup:

S = Synapses(P, Q, model='w : volt', on_pre='v += w')

The above specifies a parameter w, i.e. a synapse-specific weight.

Synapses can also specify code that should be executed whenever a postsynaptic spike occurs (keyword on_post) and a fixed (pre-synaptic) delay for all synapses (keyword delay).

When specifying equations or code for Synapses, there is a possible ambiguity about what a variable name refers to. For example, if both the Synapses object and the target NeuronGroup have a variable w, what would the code w += 1 do? The answer is that it will modify the synapse’s variable w. In general, it will first check if there is a synaptic variable of that name, then a variable of the post-synaptic neurons, and otherwise it will look for an external constant. To explicitly specify that a variable should be from a pre- or post-synaptic neuron, append the suffix _pre or _post, so in the situation above w_post += 1 would increase the post-synaptic neuron’s copy of w by 1, not the synapse’s variable w.
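
A minimal sketch illustrating this resolution order and the explicit suffixes (both the neurons and the synapses define a variable w here):

G = NeuronGroup(5, '''v : 1
                      w : 1''', threshold='v > 1', reset='v = 0')
S = Synapses(G, G, model='w : 1',
             on_pre='''w += 1       # refers to the synaptic w
                       w_post += 1  # refers to the target neuron's w''')
S.connect(j='i')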

Model syntax

The model follows exactly the same syntax as for NeuronGroup. There can be parameters (e.g. synaptic variable w above), but there can also be named subexpressions and differential equations, describing the dynamics of synaptic variables. In all cases, synaptic variables are created, one value per synapse.

Event-driven updates

By default, differential equations are integrated in a clock-driven fashion, as for a NeuronGroup. This is potentially very time consuming, because all synapses are updated at every timestep, and Brian will therefore emit a warning. If you really intend to integrate the equations at every timestep (e.g. because you want to record the values continuously), you can make this explicit with the (clock-driven) flag. To ask Brian 2 to simulate differential equations in an event-driven fashion, use the (event-driven) flag. A typical example is pre- and postsynaptic traces in STDP:

model='''w:1
         dApre/dt=-Apre/taupre : 1 (event-driven)
         dApost/dt=-Apost/taupost : 1 (event-driven)'''

Here, Brian updates the value of Apre for a given synapse only when this synapse receives a spike, whether it is presynaptic or postsynaptic. More precisely, the variables are updated every time either the on_pre or on_post code is called for the synapse, so that the values are always up to date when these codes are executed.

Automatic event-driven updates are only possible for a subset of equations, in particular for one-dimensional linear equations. These equations must also be independent of the other ones, that is, a differential equation that is not event-driven cannot depend on an event-driven equation (since the values are not continuously updated). In other cases, the user can write event-driven code explicitly in the update codes (see below).
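
As an illustration, a typical STDP rule combines the event-driven traces above with explicit on_pre/on_post code. The following is a sketch, assuming a presynaptic group P and a postsynaptic group G with a variable v; the trace variables are named apre/apost to distinguish them from the constants Apre/Apost:

taupre = taupost = 20*ms
wmax = 0.01
Apre = 0.01
Apost = -Apre*taupre/taupost*1.05

S = Synapses(P, G,
             model='''w : 1
                      dapre/dt = -apre/taupre : 1 (event-driven)
                      dapost/dt = -apost/taupost : 1 (event-driven)''',
             on_pre='''v_post += w
                       apre += Apre
                       w = clip(w + apost, 0, wmax)''',
             on_post='''apost += Apost
                        w = clip(w + apre, 0, wmax)''')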

Pre and post codes

The on_pre code is executed at each synapse receiving a presynaptic spike. For example:

on_pre='v+=w'

adds the value of synaptic variable w to postsynaptic variable v. Any sort of code can be executed. For example, the following code defines stochastic synapses, with a synaptic weight w and transmission probability p:

S = Synapses(input, neurons,
             model="""w : 1
                      p : 1""",
             on_pre="v += w*(rand() < p)")

The code means that w is added to v with probability p. The code may also include multiple lines.

Similarly, the on_post code is executed at each synapse where the postsynaptic neuron has fired a spike.

Creating synapses

Creating a Synapses instance does not create synapses; it only specifies their dynamics. The following command creates a synapse between neuron 5 in the source group and neuron 10 in the target group:

S.connect(i=5, j=10)

Multiple synaptic connections can be created in a single statement:

S.connect()
S.connect(i=[1, 2], j=[3, 4])
S.connect(i=numpy.arange(10), j=1)

The first statement connects all neuron pairs. The second statement creates synapses between neurons 1 and 3, and between neurons 2 and 4. The third statement creates synapses between the first ten neurons in the source group and neuron 1 in the target group.

Conditional

One can also create synapses by giving (as a string) the condition for a pair of neurons i and j to be connected by a synapse, e.g. you could connect neurons that are not very far apart with:

S.connect(condition='abs(i-j)<=5')

The string expressions can also refer to pre- or postsynaptic variables. This can be useful for example for spatial connectivity: assuming that the pre- and postsynaptic groups have parameters x and y, storing their location, the following statement connects all cells in a 250 um radius:

S.connect(condition='sqrt((x_pre-x_post)**2 + (y_pre-y_post)**2) < 250*umeter')

Probabilistic

Synapse creation can also be probabilistic by providing a p argument, giving the connection probability for each pair of neurons:

S.connect(p=0.1)

This connects all neuron pairs with a probability of 10%. Probabilities can also be given as expressions, for example to implement a connection probability that depends on distance:

S.connect(condition='i != j',
          p='p_max*exp(-((x_pre-x_post)**2 + (y_pre-y_post)**2) / (2*(125*umeter)**2))')

If this statement is applied to a Synapses object that connects a group to itself, it prevents self-connections (i != j) and connects cells with a probability that is modulated according to a 2-dimensional Gaussian of the distance between the cells.

One-to-one

You can specify a mapping from i to any function f(i); the simplest case is a 1-to-1 connectivity:

S.connect(j='i')

Accessing synaptic variables

Synaptic variables can be accessed in a similar way as NeuronGroup variables. They can be indexed with two indexes, corresponding to the indexes of pre and postsynaptic neurons, or with string expressions (referring to i and j as the pre-/post-synaptic indices, or to other state variables of the synapse or the connected neurons). Note that setting a synaptic variable always refers to the synapses that currently exist, i.e. you have to set them after the relevant Synapses.connect() call.

Here are a few examples:

S.w[2, 5] = 1*nS
S.w[1, :] = 2*nS
S.w = 1*nS # all synapses assigned
S.w[2, 3] = (1*nS, 2*nS)
S.w[group1, group2] = "(1+cos(i-j))*2*nS"
S.w[:, :] = 'rand()*nS'
S.w['abs(x_pre-x_post) < 250*umetre'] = 1*nS

Note that it is also possible to index synaptic variables with a single index (integer, slice, or array); in this case, however, the index refers to the synapses directly, i.e. synaptic indices have to be provided.

Delays

There is a special synaptic variable that is automatically created: delay. It is the propagation delay from the presynaptic neuron to the synapse, i.e., the presynaptic delay. This is just a convenience syntax for accessing the delay stored in the presynaptic pathway: pre.delay. When there is postsynaptic code (keyword on_post), the delay of the postsynaptic pathway can be accessed as post.delay.

The delay variable(s) can be set and accessed in the same way as other synaptic variables. The same semantics as for other synaptic variables apply, which means in particular that the delay is only set for the synapses that have been already created with Synapses.connect(). If you want to set a global delay for all synapses of a Synapses object, you can directly specify that delay as part of the Synapses initializer:

synapses = Synapses(sources, targets, '...', on_pre='...', delay=1*ms)

When you use this syntax, you can still change the delay afterwards by setting synapses.delay, but you can only set it to another scalar value. If you need different delays across synapses, do not use this syntax but instead set the delay variable as any other synaptic variable (see above).
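
For example, to give every synapse its own random delay (a sketch, assuming groups sources and targets exist and the target group defines a variable v in volt):

S = Synapses(sources, targets, model='w : volt', on_pre='v += w')
S.connect(p=0.1)
S.delay = '1*ms + 2*ms*rand()'  # per-synapse delays between 1 and 3 ms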

Monitoring synaptic variables

A StateMonitor object can be used to monitor synaptic variables. For example, the following statement creates a monitor for variable w for the synapses 0 and 1:

M = StateMonitor(S,'w',record=[0,1])

Note that these are synapse indices, not neuron indices. It is more convenient to index the Synapses object directly; Brian will then automatically calculate the indices for you:

M = StateMonitor(S,'w',record=S[0, :])  # all synapses originating from neuron 0
M = StateMonitor(S,'w',record=S['i!=j'])  # all synapses excluding autapses
M = StateMonitor(S,'w',record=S['w>0'])  # all synapses with non-zero weights (at this time)

You can also record a synaptic variable for all synapses by passing record=True.

The recorded traces can then be accessed in the usual way, again with the possibility to index the Synapses object:

plot(M.t / ms, M[0].w / nS)  # first synapse
plot(M.t / ms, M[0, :].w / nS)  # all synapses originating from neuron 0
plot(M.t / ms, M['w>0'].w / nS)  # all synapses with non-zero weights (at this time)

Note (for users of Brian’s advanced standalone mode only): the use of the Synapses object for indexing and record=True only work in the default runtime modes. In standalone mode (see Standalone code generation), the synapses have not yet been created at this point, so Brian cannot calculate the indices.

The following topics are not essential for beginners.


Creating synapses with the generator syntax

The most general way of specifying a connection is using the generator syntax, e.g. to connect neuron i to all neurons j with 0<=j<=i:

S.connect(j='k for k in range(0, i+1)')

There are several parts to this syntax. The general form is:

j='EXPR for VAR in RANGE if COND'

Here EXPR can be any integer-valued expression. VAR is the name of the iteration variable (any name you like can be specified here). The if COND part is optional and lets you give an additional condition that has to be true for the synapse to be created. Finally, RANGE can be either:

  1. a Python range, e.g. range(N) is the integers from 0 to N-1, range(A, B) is the integers from A to B-1, range(low, high, step) is the integers from low to high-1 with steps of size step, or
  2. a random sample, e.g. sample(N, p=0.1) gives a random sample of the integers from 0 to N-1, with each integer appearing in the sample with 10% probability. sample accepts the same arguments as range, e.g. sample(low, high, step, p=0.1) will include each integer in range(low, high, step) with probability 10%.

If you try to create an invalid synapse (i.e. one that connects to a neuron index outside the valid range), you will get an error. For example, you might try to connect each neuron to its neighbours like this:

S.connect(j='i+(-1)**k for k in range(2)')

However, this won’t work: for i=0 it gives j=-1, which is invalid. There is an option to simply skip any synapses that fall outside the valid range:

S.connect(j='i+(-1)**k for k in range(2)', skip_if_invalid=True)

Summed variables

In many cases, the postsynaptic neuron has a variable that represents a sum of variables over all its synapses. This is called a “summed variable”. An example is nonlinear synapses (e.g. NMDA):

neurons = NeuronGroup(1, model="""dv/dt = (gtot - v)/(10*ms) : 1
                                  gtot : 1""")
S = Synapses(input, neurons,
             model='''dg/dt = -a*g + b*x*(1 - g) : 1
                      gtot_post = g : 1 (summed)
                      dx/dt = -c*x : 1
                      w : 1  # synaptic weight''',
             on_pre='x += w')

Here, each synapse has a conductance g with nonlinear dynamics. The neuron’s total conductance is gtot. The line gtot_post = g : 1 (summed) specifies the link between the two: gtot in the postsynaptic group is the sum over the variables g of all corresponding synapses. What happens during the simulation is that at each time step, the synaptic conductances are summed for each neuron and the result is copied to the variable gtot. Another example is gap junctions:

neurons = NeuronGroup(N, model='''dv/dt = (v0 - v + Igap)/tau : 1
                                  Igap : 1''')
S = Synapses(neurons, model='''w : 1  # gap junction conductance
                               Igap_post = w*(v_pre - v_post) : 1 (summed)''')

Here, Igap is the total gap junction current received by the postsynaptic neuron.
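
To make this sketch complete, the gap junctions still need to be created and the conductance set, e.g. with all-to-all coupling and an arbitrary illustrative value:

S.connect()  # couple all pairs of neurons (i == j contributes zero current)
S.w = 0.02   # gap junction conductance, an arbitrary value for illustration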

Creating multi-synapses

It is also possible to create several synapses for a given pair of neurons:

S.connect(i=numpy.arange(10), j=1, n=3)

This is useful for example if one wants to have multiple synapses with different delays. To distinguish between multiple synapses connecting the same pair of neurons in synaptic expressions and statements, you can create a variable storing the synapse index with the multisynaptic_index keyword:

syn = Synapses(source_group, target_group, model='w : 1', on_pre='v += w',
               multisynaptic_index='synapse_number')
syn.connect(i=numpy.arange(10), j=1, n=10)
syn.delay = '1*ms + synapse_number*2*ms'

This index can then be used to set/get synapse-specific values:

S.delay = '(synapse_number + 1)*ms'  # Set delays between 1 and 10ms
S.w['synapse_number<5'] = 0.5
S.w['synapse_number>=5'] = 1

It also enables three-dimensional indexing; the following statement has the same effect as the last one above:

S.w[:, :, 5:] = 1

Multiple pathways

It is possible to have multiple pathways with different update codes from the same presynaptic neuron group. This may be interesting in cases when different operations must be applied at different times for the same presynaptic spike. To do this, specify a dictionary of pathway names and codes:

on_pre={'pre_transmission': 'ge+=w',
        'pre_plasticity': '''w=clip(w+Apost,0,inf)
                             Apre+=dApre'''}

This creates two pathways with the given names (in fact, specifying on_pre=code is just a shorter syntax for on_pre={'pre': code}) through which the delay variables can be accessed. The following statement, for example, sets the delay of the synapse between the first neurons of the source and target groups in the pre_plasticity pathway:

S.pre_plasticity.delay[0,0] = 3*ms

Note that pre pathways are generally executed before post pathways (see Explicit event-driven updates below for details). The order of execution of several pre (or post) pathways is however arbitrary, and simply based on the alphabetical ordering of their names (i.e. pre_plasticity will be executed before pre_transmission). To explicitly specify the order, set the order attribute of the pathway, e.g.:

S.pre_transmission.order = -2

will make sure that the pre_transmission code is executed before the pre_plasticity code in each time step.

Numerical integration

Differential equation flags

For the integration of differential equations, one can use the same keywords as for NeuronGroup.

Note

Declaring a subexpression as (constant over dt) means that it will be evaluated each timestep for all synapses, potentially a very costly operation.

Explicit event-driven updates

As mentioned above, it is possible to write event-driven update code for the synaptic variables. For this, two special variables are provided: t is the current time when the code is executed, and lastupdate is the last time when the synapse was updated (either through on_pre or on_post code). An example is short-term plasticity (in fact this could be done automatically with the use of the (event-driven) keyword mentioned above):

S = Synapses(input, neuron,
             model='''x : 1
                      u : 1
                      w : 1''',
             on_pre='''u = U + (u - U)*exp(-(t - lastupdate)/tauf)
                       x = 1 + (x - 1)*exp(-(t - lastupdate)/taud)
                       g_post += w*u*x  # assumes the target group defines a conductance g
                       x *= (1 - u)
                       u += U*(1 - u)''')

By default, the pre pathway is executed before the post pathway (both are executed in the 'synapses' scheduling slot, but the pre pathway has the order attribute -1, whereas the post pathway has order 1; see Scheduling for more details).

Technical notes

How connection arguments are interpreted

If conditions for connecting neurons are combined with both the n (number of synapses to create) and the p (probability of a synapse) keywords, they are interpreted in the following way:

For every pair i, j:
    if condition(i, j) is fulfilled:
        evaluate p(i, j)
        if a uniform random number between 0 and 1 is smaller than p(i, j):
            create n(i, j) synapses for (i, j)

With the generator syntax j='EXPR for VAR in RANGE if COND', the interpretation is:

For every i:
    for every VAR in RANGE:
        j = EXPR
        if COND:
            create n(i, j) synapses for (i, j)

Note that the arguments in RANGE can only depend on i and the values of presynaptic variables. Similarly, the expression for j, EXPR can depend on i, presynaptic variables, and on the iteration variable VAR. The condition COND can depend on anything (presynaptic and postsynaptic variables).

With the 1-to-1 mapping syntax j='EXPR' the interpretation is:

For every i:
    j = EXPR
    create n(i, j) synapses for (i, j)

Efficiency considerations

If you are connecting a single pair of neurons, the direct form connect(i=5, j=10) is the most efficient. However, if you are connecting a number of neurons, it will usually be more efficient to construct an array of i and j values and have a single connect(i=i, j=j) call.

For large connections, you should use one of the string based syntaxes where possible as this will generate compiled low-level code that will be typically much faster than equivalent Python code.

If you are expecting a majority of pairs of neurons to be connected, then using the condition-based syntax is optimal, e.g. connect(condition='i!=j'). However, if relatively few neurons are being connected then the 1-to-1 mapping or generator syntax will be better. For 1-to-1, connect(j='i') will always be faster than connect(condition='i==j') because the latter has to evaluate all N**2 pairs (i, j) and check if the condition is true, whereas the former only has to do O(N) operations.

One tricky problem is how to efficiently generate connectivity with a probability p(i, j) that depends on both i and j, since this requires N*N computations even if the expected number of synapses is proportional to N. Some tricks for getting around this are shown in Example: efficient_gaussian_connectivity.
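 
One such trick, sketched below with plain numpy (all names and values are illustrative): if the probability is negligible beyond some neighbourhood, generate only the candidate pairs within that neighbourhood, evaluate p there, and pass the surviving pairs to a single connect(i=..., j=...) call on an existing Synapses object S:

import numpy as np

N, width, p_max = 1000, 20, 0.5  # illustrative values
offsets = np.arange(-width, width + 1)
i = np.repeat(np.arange(N), len(offsets))  # each i paired with...
j = i + np.tile(offsets, N)                # ...its neighbourhood
valid = (j >= 0) & (j < N) & (i != j)
i, j = i[valid], j[valid]
p = p_max * np.exp(-(i - j)**2 / (2.0 * 10**2))  # Gaussian in the index distance
keep = np.random.rand(len(p)) < p
S.connect(i=i[keep], j=j[keep])  # a single vectorized call, O(N*width) work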

Input stimuli

There are various ways of providing “external” input to a network.

Poisson inputs

For generating spikes according to a Poisson point process, PoissonGroup can be used, e.g.:

P = PoissonGroup(100, np.arange(100)*Hz + 10*Hz)
G = NeuronGroup(100, 'dv/dt = -v / (10*ms) : 1')
S = Synapses(P, G, on_pre='v+=0.1')
S.connect(j='i')

See More on Poisson inputs below for further information.

For simulations where the individually generated spikes are just used as a source of input to a neuron, the PoissonInput class provides a more efficient alternative: see Efficient Poisson inputs via PoissonInput below for details.

Spike generation

You can also generate an explicit list of spikes given via arrays using SpikeGeneratorGroup. This object behaves just like a NeuronGroup in that you can connect it to other groups via a Synapses object, but you specify three bits of information: N the number of neurons in the group; indices an array of the indices of the neurons that will fire; and times an array of the same length as indices with the times that the neurons will fire a spike. The indices and times arrays are matching, so for example indices=[0,2,1] and times=[1*ms,2*ms,3*ms] means that neuron 0 fires at time 1 ms, neuron 2 fires at 2 ms and neuron 1 fires at 3 ms. Example use:

indices = array([0, 2, 1])
times = array([1, 2, 3])*ms
G = SpikeGeneratorGroup(3, indices, times)

The spikes that will be generated by SpikeGeneratorGroup can be changed between runs with the set_spikes method. This can be useful if the input to a system should depend on its previous output or when running multiple trials with different input:

inp = SpikeGeneratorGroup(N, indices, times)
G = NeuronGroup(N, '...')
feedforward = Synapses(inp, G, '...', on_pre='...')
feedforward.connect(j='i')
recurrent = Synapses(G, G, '...', on_pre='...')
recurrent.connect('i!=j')
spike_mon = SpikeMonitor(G)
# ...
run(runtime)
# Replay the previous output of group G as input into the group
inp.set_spikes(spike_mon.i, spike_mon.t + runtime)
run(runtime)

Explicit equations

If the input can be explicitly expressed as a function of time (e.g. a sinusoidal input current), then its description can be directly included in the equations of the respective group:

G = NeuronGroup(100, '''dv/dt = (-v + I)/(10*ms) : 1
                        rates : Hz  # each neuron's input has a different rate
                        size : 1  # and a different amplitude
                        I = size*sin(2*pi*rates*t) : 1''')
G.rates = '10*Hz + i*Hz'
G.size = '(100-i)/100. + 0.1'

Timed arrays

If the time dependence of the input cannot be expressed in the equations in the way shown above, it is possible to create a TimedArray. This acts as a function of time where the values at given time points are given explicitly. This can be especially useful to describe non-continuous stimulation. For example, the following code defines a TimedArray where stimulus blocks consist of a constant current of random strength for 30ms, followed by no stimulus for 20ms. Note that in this particular example, numerical integration can use exact methods, since it can assume that the TimedArray is a constant function of time during a single integration time step.

Note

The semantics of TimedArray changed slightly compared to Brian 1: for TimedArray([x1, x2, ...], dt=my_dt), the value x1 will be returned for all 0<=t<my_dt, x2 for my_dt<=t<2*my_dt etc., whereas Brian 1 returned x1 for 0<=t<0.5*my_dt, x2 for 0.5*my_dt<=t<1.5*my_dt, etc.

stimulus = TimedArray(np.hstack([[c, c, c, 0, 0]
                                 for c in np.random.rand(1000)]),
                      dt=10*ms)
G = NeuronGroup(100, 'dv/dt = (-v + stimulus(t))/(10*ms) : 1',
                threshold='v>1', reset='v=0')
G.v = '0.5*rand()'  # different initial values for the neurons

TimedArray can take a one-dimensional value array (as above) and therefore return the same value for all neurons or it can take a two-dimensional array with time as the first and (neuron/synapse/...-)index as the second dimension.

In the following, this is used to implement shared noise between neurons: all the “even neurons” get the first noise instantiation, and all the “odd neurons” get the second:

runtime = 1*second
stimulus = TimedArray(np.random.rand(int(runtime/defaultclock.dt), 2),
                      dt=defaultclock.dt)
G = NeuronGroup(100, 'dv/dt = (-v + stimulus(t, i % 2))/(10*ms) : 1',
                threshold='v>1', reset='v=0')

Regular operations

An alternative to specifying a stimulus in advance is to run explicitly specified code at certain points during a simulation. This can be achieved with run_regularly(). One can think of these statements as equivalent to reset statements but executed unconditionally (i.e. for all neurons) and possibly on a different clock than the rest of the group. The following code changes the stimulus strength of half of the neurons (randomly chosen) to a new random value every 50ms. Note that the statement uses logical expressions to have the values only updated for the chosen subset of neurons (where the newly introduced auxiliary variable change equals 1):

G = NeuronGroup(100, '''dv/dt = (-v + I)/(10*ms) : 1
                        I : 1  # one stimulus per neuron''')
G.run_regularly('''change = int(rand() < 0.5)
                   I = change*(rand()*2) + (1-change)*I''',
                dt=50*ms)

The following topics are not essential for beginners.


More on Poisson inputs

Setting rates for Poisson inputs

PoissonGroup takes either a constant rate, an array of rates (one rate per neuron, as in the example above), or a string expression evaluating to a rate as an argument.

If the given value for rates is a constant, then using PoissonGroup(N, rates) is equivalent to:

NeuronGroup(N, 'rates : Hz', threshold='rand()<rates*dt')

and setting the group’s rates attribute.

If rates is a string, then this is equivalent to:

NeuronGroup(N, 'rates = ... : Hz', threshold='rand()<rates*dt')

with the respective expression for the rates. This expression will be evaluated at every time step and therefore allows the use of time-dependent rates, i.e. inhomogeneous Poisson processes. For example, the following code (see also Timed arrays) uses a TimedArray to define the rates of a PoissonGroup as a function of time, resulting in five 100ms blocks of 100 Hz stimulation, followed by 100ms of silence:

stimulus = TimedArray(np.tile([100., 0.], 5)*Hz, dt=100.*ms)
P = PoissonGroup(1, rates='stimulus(t)')

Note that, as can be seen in its equivalent NeuronGroup formulation, a PoissonGroup does not work for high rates where more than one spike might fall into a single timestep. Use several units with lower rates in this case (e.g. use PoissonGroup(10, 1000*Hz) instead of PoissonGroup(1, 10000*Hz)).
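
For example (a sketch, assuming a target group with a variable v and an externally defined weight w):

# One unit at 10 kHz would miss spikes at coarse dt; instead use
# ten units at 1 kHz, all converging onto the same target:
P = PoissonGroup(10, rates=1000*Hz)
S = Synapses(P, target, on_pre='v += w')
S.connect()  # every Poisson unit projects to every target neuron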

Efficient Poisson inputs via PoissonInput

For simulations where the PoissonGroup is just used as a source of input to a neuron (i.e., the individually generated spikes are not important, just their impact on the target cell), the PoissonInput class provides a more efficient alternative: instead of generating spikes, PoissonInput directly updates a target variable based on the sum of independent Poisson processes:

G = NeuronGroup(100, 'dv/dt = -v / (10*ms) : 1')
P = PoissonInput(G, 'v', 100, 100*Hz, weight=0.1)

The PoissonInput class is, however, more restrictive than PoissonGroup: it only allows for a constant rate across all neurons (but you can create several PoissonInput objects, targeting different subgroups). It internally uses BinomialFunction, which will draw a random number each time step, either from a binomial distribution or from a normal distribution as an approximation to the binomial distribution if \(n p > 5 \wedge n (1 - p) > 5\), where \(n\) is the number of inputs and \(p = dt \cdot rate\) is the spiking probability for a single input.
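
For example, different constant rates for two halves of a group can be modelled with two PoissonInput objects targeting subgroups (a minimal sketch):

G = NeuronGroup(100, 'dv/dt = -v / (10*ms) : 1')
# 100 independent 100 Hz inputs to the first half...
P1 = PoissonInput(G[:50], 'v', N=100, rate=100*Hz, weight=0.1)
# ...and 100 independent 10 Hz inputs to the second half
P2 = PoissonInput(G[50:], 'v', N=100, rate=10*Hz, weight=0.1)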

Arbitrary Python code (network operations)

If none of the above techniques is general enough to fulfill the requirements of a simulation, Brian allows you to write a NetworkOperation, an arbitrary Python function that is executed every time step (possibly on a different clock than the rest of the simulation). This function can do arbitrary operations, use conditional statements etc. and it will be executed as it is (i.e. as pure Python code even if weave code generation is active). Note that network operations cannot be used in combination with the C++ standalone mode. Network operations are particularly useful when some condition or calculation depends on operations across neurons, which is currently not possible to express in abstract code. The following code switches input on for a randomly chosen single neuron every 50 ms:

G = NeuronGroup(10, '''dv/dt = (-v + active*I)/(10*ms) : 1
                       I = sin(2*pi*100*Hz*t) : 1 (shared)  # single input
                       active : 1  # will be set in the network operation''')
@network_operation(dt=50*ms)
def update_active():
    index = np.random.randint(10)  # index for the active neuron
    G.active_ = 0  # the underscore switches off unit checking
    G.active_[index] = 1

Note that the network operation (in the above example: update_active) has to be included in the Network object if one is constructed explicitly.

Only functions with zero or one argument can be used as a NetworkOperation. If the function has one argument, it will be passed the current time t:

@network_operation(dt=1*ms)
def update_input(t):
    if t>50*ms and t<100*ms:
        pass # do something

Note that this is preferable to accessing defaultclock.t from within the function – if the network operation is not running on the defaultclock itself, then that value is not guaranteed to be correct.

Instance methods can be used as network operations as well; in this case, however, the NetworkOperation object has to be constructed explicitly, since the network_operation() decorator cannot be used:

class Simulation(object):
    def __init__(self, data):
        self.data = data
        self.group = NeuronGroup(...)
        self.network_op = NetworkOperation(self.update_func, dt=10*ms)
        self.network = Network(self.group, self.network_op)

    def update_func(self):
        pass # do something

    def run(self, runtime):
        self.network.run(runtime)

Recording during a simulation

Recording variables during a simulation is done with “monitor” objects. Specifically, spikes are recorded with SpikeMonitor, the time evolution of variables with StateMonitor and the firing rate of a population of neurons with PopulationRateMonitor.

Recording spikes

To record spikes from a group G simply create a SpikeMonitor via SpikeMonitor(G). After the simulation, you can access the attributes i, t, num_spikes and count of the monitor. The i and t attributes give the array of neuron indices and times of the spikes. For example, if M.i==[0, 2, 1] and M.t==[1*ms, 2*ms, 3*ms] it means that neuron 0 fired a spike at 1 ms, neuron 2 fired a spike at 2 ms, and neuron 1 fired a spike at 3 ms. Alternatively, you can also call the spike_trains method to get a dictionary mapping neuron indices to arrays of spike times, i.e. in the above example, spike_trains = M.spike_trains(); spike_trains[1] would return array([  3.]) * msecond. The num_spikes attribute gives the total number of spikes recorded, and count is an array of the length of the recorded group giving the total number of spikes recorded from each neuron.

Example:

G = NeuronGroup(N, model='...')
M = SpikeMonitor(G)
run(runtime)
plot(M.t/ms, M.i, '.')

If you are only interested in summary statistics but not the individual spikes, you can set the record argument to False. You will then not have access to i and t but you can still get the count and the total number of spikes (num_spikes).
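
For example:

M = SpikeMonitor(G, record=False)
run(runtime)
print(M.num_spikes)  # total number of spikes
print(M.count)       # number of spikes per neuron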

Recording variables at spike time

By default, a SpikeMonitor only records the time of the spike and the index of the neuron that spiked. Sometimes it can be useful to additionally record other variables, e.g. the membrane potential for models where the threshold is not at a fixed value. This can be done by providing an extra variables argument; the recorded variable can then be accessed as an attribute of the SpikeMonitor, e.g.:

G = NeuronGroup(10, 'v : 1', threshold='rand()<100*Hz*dt')
G.run_regularly('v = rand()')
M = SpikeMonitor(G, variables=['v'])
run(100*ms)
plot(M.t/ms, M.v, '.')

To conveniently access the values of a recorded variable for a single neuron, use the SpikeMonitor.values() method, which returns a dictionary with the values for each neuron:

G = NeuronGroup(N, '''dv/dt = (1-v)/(10*ms) : 1
                      v_th : 1''',
                threshold='v > v_th',
                # randomly change the threshold after a spike:
                reset='''v=0
                         v_th = clip(v_th + rand()*0.2 - 0.1, 0.1, 0.9)''')
G.v_th = 0.5
spike_mon = SpikeMonitor(G, variables='v')
run(1*second)
v_values = spike_mon.values('v')
print('Threshold crossing values for neuron 0: {}'.format(v_values[0]))
hist(spike_mon.v, np.arange(0, 1, .1))
show()

Note

Spikes are not the only events that can trigger recordings, see Custom events.

Recording variables continuously

To record how a variable evolves over time, use a StateMonitor, e.g. to record the variable v at every time step and plot it for neuron 0:

G = NeuronGroup(...)
M = StateMonitor(G, 'v', record=True)
run(...)
plot(M.t/ms, M.v[0]/mV)

In general, you specify the group, variables and indices you want to record from. You specify the variables with a string or list of strings, and the indices either as an array of indices or True to record all indices (but beware because this may take a lot of memory).

After the simulation, you can access these variables as attributes of the monitor. They are 2D arrays with shape (num_indices, num_times). The special attribute t is an array of length num_times with the corresponding times at which the values were recorded.

Note that you can also use StateMonitor to record from Synapses where the indices are the synapse indices rather than neuron indices.

In this example, we record two variables v and u, and record from indices 0, 10 and 100. Afterwards, we plot the recorded values of v and u from neuron 0:

G = NeuronGroup(...)
M = StateMonitor(G, ('v', 'u'), record=[0, 10, 100])
run(...)
plot(M.t/ms, M.v[0]/mV, label='v')
plot(M.t/ms, M.u[0]/mV, label='u')

There are two subtly different ways to get the values for specific neurons: you can either index the 2D array stored in the attribute with the variable name (as in the example above) or you can index the monitor itself. The former will use an index relative to the recorded neurons (e.g. M.v[1] will return the values for the second recorded neuron which is the neuron with the index 10 whereas M.v[10] would raise an error because only three neurons have been recorded), whereas the latter will use an absolute index corresponding to the recorded group (e.g. M[1].v will raise an error because the neuron with the index 1 has not been recorded and M[10].v will return the values for the neuron with the index 10). If all neurons have been recorded (e.g. with record=True) then both forms give the same result.
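
A short sketch making the difference concrete (using the recording indices from the example above, after the run has completed):

M = StateMonitor(G, 'v', record=[0, 10, 100])
# after run(...):
M.v[1]   # second *recorded* neuron, i.e. the neuron with index 10
M[10].v  # neuron with *absolute* index 10 (the same data)
# M.v[10] and M[1].v would both raise an error here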

Note that for plotting all recorded values at once, you have to transpose the variable values:

plot(M.t/ms, M.v.T/mV)

Note

In contrast to Brian 1, the values are recorded at the beginning of a time step and not at the end (you can set the when argument when creating a StateMonitor, details about scheduling can be found here: Scheduling and custom progress reporting).

Recording population rates

To record the time-varying firing rate of a population of neurons use PopulationRateMonitor. After the simulation the monitor will have two attributes t and rate, the latter giving the firing rate at each time step corresponding to the time in t. For example:

G = NeuronGroup(...)
M = PopulationRateMonitor(G)
run(...)
plot(M.t/ms, M.rate/Hz)

To get a smoother version of the rate, use PopulationRateMonitor.smooth_rate().
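
For example, to plot a version of the rate smoothed with a 10 ms flat (rectangular) window:

plot(M.t/ms, M.smooth_rate(window='flat', width=10*ms)/Hz)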

The following topics are not essential for beginners.


Getting all data

Note that all monitors are implemented as “groups”, so you can get all the stored values in a monitor with the Group.get_states() method, which can be useful to dump all recorded data to disk, for example.
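
A sketch of such a dump for a StateMonitor M (the file name is arbitrary):

import pickle

data = M.get_states(units=False, format='dict')  # all recorded values, incl. 't'
with open('recorded_data.pkl', 'wb') as f:
    pickle.dump(data, f)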

Running a simulation

To run a simulation, one either constructs a new Network object and calls its Network.run() method, or uses the “magic” system and a plain run() call, collecting all the objects in the current namespace.

Note that Brian has several different ways of running the actual computations, and choosing the right one can make orders of magnitude of difference in terms of simplicity and efficiency. See Computational methods and efficiency for more details.

Networks

In most straightforward simulations, you do not have to explicitly create a Network object but instead can simply call run() to run a simulation. This is what is called the “magic” system, because Brian figures out automatically what you want to do.

When calling run(), Brian runs the collect() function to gather all the objects in the current context. It will include all the objects that are “visible”, i.e. that you could refer to with an explicit name:

G = NeuronGroup(10, 'dv/dt = -v / tau : volt')
S = Synapses(G, G, model='w:1', on_pre='v+=w')
S.connect('i!=j')
mon = SpikeMonitor(G)

run(10*ms)  # will include G, S, mon

Note that it will not automatically include objects that are “hidden” in containers, e.g. if you store several monitors in a list. Use an explicit Network object in this case. It might be convenient to use the collect() function when creating the Network object in that case:

G = NeuronGroup(10, 'dv/dt = -v / tau : volt')
S = Synapses(G, G, model='w:1', on_pre='v+=w')
S.connect('i!=j')
monitors = [SpikeMonitor(G), StateMonitor(G, 'v', record=True)]

# a simple run would not include the monitors
net = Network(collect())  # automatically include G and S
net.add(monitors)  # manually add the monitors

net.run(10*ms)

Setting the simulation time step

To set the simulation time step for every simulated object, set the dt attribute of the defaultclock which is used by all objects that do not explicitly specify a clock or dt value during construction:

defaultclock.dt = 0.05*ms

If some objects should use a different clock (e.g. to record values with a StateMonitor not at every time step in a long running simulation), you can provide a dt argument to the respective object:

s_mon = StateMonitor(group, 'v', record=True, dt=1*ms)

To sum up:

  • Set defaultclock.dt to the time step that should be used by most (or all) of your objects.
  • Set dt explicitly when creating objects that should use a different time step.

Behind the scenes, a new Clock object will be created for each object that defines its own dt value.

Progress reporting

Especially for long simulations it is useful to get some feedback about the progress of the simulation. Brian offers a few built-in options and an extensible system to report the progress of the simulation. In the Network.run() or run() call, two arguments determine the output: report and report_period. When report is set to 'text' or 'stdout', the progress will be printed to the standard output, when it is set to 'stderr', it will be printed to “standard error”. There will be output at the start and the end of the run, and during the run in report_period intervals. It is also possible to do custom progress reporting.

Continuing/repeating simulations

To store the current state of the simulation, call store() (use the Network.store() method for a Network). You can store more than one snapshot of a system by providing a name for the snapshot; if store() is called without a specified name, 'default' is used as the name. To restore the state, use restore().

The following simple example shows how this system can be used to run several trials of an experiment:

# set up the network
G = NeuronGroup(...)
...
spike_monitor = SpikeMonitor(G)

# Snapshot the state
store()

# Run the trials
spike_counts = []
for trial in range(3):
    restore()  # Restore the initial state
    run(...)
    # store the results
    spike_counts.append(spike_monitor.count)

The following schematic shows how multiple snapshots can be used to run a network with a separate “train” and “test” phase. After training, the test is run several times based on the trained network. The whole process of training and testing is repeated several times as well:

# set up the network
G = NeuronGroup(..., '''...
                     test_input : amp
                     ...''')
S = Synapses(..., '''...
                     plastic : boolean (shared)
                     ...''')
G.v = ...
S.connect(...)
S.w = ...

# First snapshot at t=0
store('initialized')

# Run 3 complete trials
for trial in range(3):
    # Simulate training phase
    restore('initialized')
    S.plastic = True
    run(...)

    # Snapshot after learning
    store('after_learning')

    # Run 5 tests after the training
    for test_number in range(5):
        restore('after_learning')
        S.plastic = False  # switch plasticity off
        G.test_input = test_inputs[test_number]
        # monitor the activity now
        spike_mon = SpikeMonitor(G)
        run(...)
        # Do something with the result
        # ...

The following topics are not essential for beginners.


Multiple magic runs

When you use more than a single run() statement, the magic system tries to detect which of the following two situations applies:

  1. You want to continue a previous simulation
  2. You want to start a new simulation

For this, it uses the following heuristic: if a simulation consists only of objects that have not been run, it will start a new simulation starting at time 0 (corresponding to the creation of a new Network object). If a simulation only consists of objects that have been simulated in the previous run() call, it will continue that simulation at the previous time.

If neither of these two situations apply, i.e., the network consists of a mix of previously run objects and new objects, an error will be raised. If this is not a mistake but intended (e.g. when a new input source and synapses should be added to a network at a later stage), use an explicit Network object.

In these checks, “non-invalidating” objects (i.e. objects that have BrianObject.invalidates_magic_network set to False) are ignored, e.g. creating new monitors is always possible.

Changing the simulation time step

You can change the simulation time step after objects have been created or even after a simulation has been run:

defaultclock.dt = 0.1*ms
# Set the network
# ...
run(initial_time)
defaultclock.dt = 0.01*ms
run(full_time - initial_time)

To change the time step between runs for objects that do not use the defaultclock, you cannot directly change their dt attribute (which is read-only); instead, change the dt of their clock attribute. If you want to change the dt value of several objects at the same time (but not of all of them, i.e. when you cannot simply use defaultclock.dt), consider creating a Clock object explicitly and passing it to each object with the clock keyword argument (instead of dt). You can then later change the dt for all of these objects at once by assigning a new value to Clock.dt.
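
A sketch of this approach, assuming a group G with variables v and w (two monitors share one explicitly created clock):

slow_clock = Clock(dt=1*ms)
mon_v = StateMonitor(G, 'v', record=True, clock=slow_clock)
mon_w = StateMonitor(G, 'w', record=True, clock=slow_clock)
run(100*ms)
slow_clock.dt = 5*ms  # both monitors now record every 5 ms
run(100*ms)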

Profiling

To get an idea which parts of a simulation take the most time, Brian offers a basic profiling mechanism. If a simulation is run with the profile=True keyword argument, it will collect information about the total simulation time for each CodeObject. This information can then be retrieved from Network.profiling_info, which contains a list of (name, time) tuples or a string summary can be obtained by calling profiling_summary(). The following example shows profiling output after running the CUBA example (where the neuronal state updates take up the most time):

>>> profiling_summary(show=5)  # show the 5 objects that took the longest
Profiling summary
=================
neurongroup_stateupdater    5.54 s    61.32 %
synapses_pre                1.39 s    15.39 %
synapses_1_pre              1.03 s    11.37 %
spikemonitor                0.59 s     6.55 %
neurongroup_thresholder     0.33 s     3.66 %

Scheduling

Every simulated object in Brian has three attributes that can be specified at object creation time: dt, when, and order. The time step of the simulation is determined by dt, if it is specified, or otherwise by defaultclock.dt. Changing this will therefore change the dt of all objects that don’t specify one.

During a single time step, objects are updated in an order determined first by their when argument’s position in the schedule. This schedule is given by Network.schedule, a list of strings defining “execution slots” and their order. It defaults to: ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']. In addition to the names provided in the schedule, names such as before_thresholds or after_synapses can be used, which are understood as slots in the respective positions. The default for the when attribute is a sensible value for most objects (resets will happen in the reset slot, etc.) but sometimes it makes sense to change it, e.g. if one would like a StateMonitor, which by default records in the end slot, to record the membrane potential before a reset is applied (otherwise no threshold crossings will be observed in the membrane potential traces).

Finally, if during a time step two objects fall in the same execution slot, they will be updated in ascending order according to their order attribute, an integer number defaulting to 0. If two objects have the same when and order attribute then they will be updated in an arbitrary but reproducible order (based on the lexicographical order of their names).
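
For example, to make a StateMonitor record the membrane potential after thresholding but before the reset is applied (so that threshold crossings remain visible in the trace), one could use (a sketch):

M = StateMonitor(G, 'v', record=True, when='after_thresholds')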

Every new Network starts a simulation at time 0. Network.t is a read-only attribute; to go back to a previous moment in time (e.g. to run another trial of a simulation with a new noise instantiation), use the store/restore mechanism described below.

For more details, including finer control over the scheduling of operations and changing the value of dt between runs, see Scheduling and custom progress reporting.

Store/restore

Note that Network.run(), Network.store() and Network.restore() (or run(), store(), restore()) are the only way of affecting the time of the clocks. In contrast to Brian 1, it is no longer necessary (nor possible) to directly set the time of the clocks or call a reinit function.

The state of a network can also be stored on disk with the optional filename argument of Network.store()/store(). This way, you can run the initial part of a simulation once, store it to disk, and then continue from this state later. Note that the store()/restore() mechanism does not re-create the network as such, you still need to construct all the NeuronGroup, Synapses, StateMonitor, ... objects, restoring will only restore all the state variable values (membrane potential, conductances, synaptic connections/weights/delays, ...). This restoration does however restore the internal state of the objects as well, e.g. spikes that have not been delivered yet because of synaptic delays will be delivered correctly.
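
A sketch of storing to and restoring from disk (the file and snapshot names are arbitrary):

# First run: simulate the initial phase once and store it
run(10*second)
store('after_init', filename='network_state.dat')

# Later (after re-constructing the same objects): continue from there
restore('after_init', filename='network_state.dat')
run(1*second)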

Multicompartment models

It is possible to create neuron models with a spatially extended morphology, using the SpatialNeuron class. A SpatialNeuron is a single neuron with many compartments. Essentially, it works as a NeuronGroup where elements are compartments instead of neurons.

A SpatialNeuron is specified by a morphology (see Creating a neuron morphology) and a set of equations for transmembrane currents (see Creating a spatially extended neuron).

Creating a neuron morphology

Schematic morphologies

Morphologies can be created by combining geometrical objects:

soma = Soma(diameter=30*um)
cylinder = Cylinder(diameter=1*um, length=100*um, n=10)

The first statement creates a single iso-potential compartment (i.e. with no axial resistance within the compartment), with its area calculated as the area of a sphere with the given diameter. The second one specifies a cylinder consisting of 10 compartments with identical diameter and the given total length.

For more precise control over the geometry, you can specify the length and diameter of each individual compartment, including the diameter at the start of the section (i.e. for n compartments: n length and n+1 diameter values) in a Section object:

section = Section(diameter=[6, 5, 4, 3, 2, 1]*um, length=[10, 10, 10, 5, 5]*um, n=5)

The individual compartments are modeled as truncated cones, changing the diameter linearly between the given diameters over the length of the compartment. Note that the diameter argument specifies the values at the nodes between the compartments, but accessing the diameter attribute of a Morphology object will return the diameter at the center of the compartment (see the note below).

The following overview summarizes the different options for creating schematic morphologies (in the schematics, the black compartment before the start of the section represents the parent compartment with diameter 15 μm, not specified in the code below):

Soma

# Soma always has a single compartment
Soma(diameter=30*um)

(schematic: _images/soma.svg)

Cylinder

# Each compartment has fixed length and diameter
Cylinder(5, diameter=10*um, length=50*um)

(schematic: _images/cylinder.svg)

Section

# Length and diameter individually defined for each compartment (at start
# and end)
Section(5, diameter=[15, 5, 10, 5, 10, 5]*um,
        length=[10, 20, 5, 5, 10]*um)

(schematic: _images/section.svg)

Note

For a Section, the diameter argument specifies the diameter between the compartments (and at the beginning/end of the first/last compartment). The corresponding values can therefore later be retrieved from the Morphology via the start_diameter and end_diameter attributes. The diameter attribute of a Morphology corresponds to the diameter at the midpoint of the compartment. For a Cylinder, start_diameter, diameter, and end_diameter are of course all identical.

The tree structure of a morphology is created by attaching Morphology objects together:

morpho = Soma(diameter=30*um)
morpho.axon = Cylinder(length=100*um, diameter=1*um, n=10)
morpho.dendrite = Cylinder(length=50*um, diameter=2*um, n=5)

These statements create a morphology consisting of a cylindrical axon and a dendrite attached to a spherical soma. Note that the names axon and dendrite are arbitrary and chosen by the user. For example, the same morphology can be created as follows:

morpho = Soma(diameter=30*um)
morpho.output_process = Cylinder(length=100*um, diameter=1*um, n=10)
morpho.input_process = Cylinder(length=50*um, diameter=2*um, n=5)

The syntax is recursive; for example, two sections can be added at the end of the dendrite as follows:

morpho.dendrite.branch1 = Cylinder(length=50*um, diameter=1*um, n=3)
morpho.dendrite.branch2 = Cylinder(length=50*um, diameter=1*um, n=3)

Equivalently, one can use an indexing syntax:

morpho['dendrite']['branch1'] = Cylinder(length=50*um, diameter=1*um, n=3)
morpho['dendrite']['branch2'] = Cylinder(length=50*um, diameter=1*um, n=3)

The names given to sections are completely up to the user. However, names that consist of a single digit (1 to 9) or the letters L (for left) and R (for right) allow for a special short syntax: they can be joined together directly, without the need for dots (or dictionary syntax), and therefore allow you to quickly navigate through the morphology tree (e.g. morpho.LRLLR is equivalent to morpho.L.R.L.L.R). This short syntax can also be used to create trees:

morpho = Soma(diameter=30*um)
morpho.L = Cylinder(length=10*um, diameter=1*um, n=3)
morpho.L1 = Cylinder(length=5*um, diameter=1*um, n=3)
morpho.L2 = Cylinder(length=5*um, diameter=1*um, n=3)
morpho.L3 = Cylinder(length=5*um, diameter=1*um, n=3)
morpho.R = Cylinder(length=10*um, diameter=1*um, n=3)
morpho.RL = Cylinder(length=5*um, diameter=1*um, n=3)
morpho.RR = Cylinder(length=5*um, diameter=1*um, n=3)

The above instructions create a dendritic tree with two main sections, three sections attached to the first section and two to the second. This can be verified with the Morphology.topology() method:

>>> morpho.topology()
( )  [root]
   `---|  .L
        `---|  .L.1
        `---|  .L.2
        `---|  .L.3
   `---|  .R
        `---|  .R.L
        `---|  .R.R

Note that an expression such as morpho.L will always refer to the entire subtree. However, accessing the attributes (e.g. diameter) will only return the values for the given section.

Note

To avoid ambiguities, do not use names for sections that can be interpreted in the abbreviated way detailed above. For example, do not name a child section L1 (which will be interpreted as the first child of the child L).

The number of compartments in a section can be accessed with morpho.n (or morpho.L.n, etc.), the number of total sections and compartments in a subtree can be accessed with morpho.total_sections and morpho.total_compartments respectively.

Adding coordinates

For plotting purposes, it can be useful to add coordinates to a Morphology that was created using the “schematic” approach described above. This can be done by calling the generate_coordinates method on a morphology, which will return an identical morphology but with additional 2D or 3D coordinates. By default, this method creates a morphology according to a deterministic algorithm in 2D:

new_morpho = morpho.generate_coordinates()
(figure: _images/morphology_deterministic_coords.png)

To get more “realistic” morphologies, this function can also be used to create morphologies in 3D where the orientation of each section differs from the orientation of the parent section by a random amount:

new_morpho = morpho.generate_coordinates(section_randomness=25)
(figures: _images/morphology_random_section_1.png, _images/morphology_random_section_2.png, _images/morphology_random_section_3.png)

This algorithm will base the orientation of each section on the orientation of the parent section and then randomly perturb this orientation. More precisely, the algorithm first chooses a random vector orthogonal to the orientation of the parent section. Then, the section will be rotated around this orthogonal vector by a random angle, drawn from an exponential distribution with the \(\beta\) parameter (in degrees) given by section_randomness. This \(\beta\) parameter specifies both the mean and the standard deviation of the rotation angle. Note that no maximum rotation angle is enforced, values for section_randomness should therefore be reasonably small (e.g. using a section_randomness of 45 would already lead to a probability of ~14% that the section will be rotated by more than 90 degrees, therefore making the section go “backwards”).

In addition, also the orientation of each compartment within a section can be randomly varied:

new_morpho = morpho.generate_coordinates(section_randomness=25,
                                         compartment_randomness=15)
(figures: _images/morphology_random_section_compartment_1.png, _images/morphology_random_section_compartment_2.png, _images/morphology_random_section_compartment_3.png)

The algorithm is the same as the one presented above, but applied individually to each compartment within a section (still based on the orientation of the parent section, not on the orientation of the previous compartment).

Complex morphologies

Morphologies can also be created from information about the compartment coordinates in 3D space. Such morphologies can be loaded from a .swc file (a standard format for neuronal morphologies; for a large database of morphologies in this format see http://neuromorpho.org):

morpho = Morphology.from_file('corticalcell.swc')

To manually create a morphology from a list of points in a similar format to SWC files, see Morphology.from_points.

Morphologies that are created in such a way will use standard names for the sections that allow for the short syntax shown in the previous sections: if a section has one or two child sections, then they will be called L and R, otherwise they will be numbered starting at 1.

Morphologies with coordinates can also be created section by section, following the same syntax as for “schematic” morphologies:

soma = Soma(diameter=30*um, x=50*um, y=20*um)
cylinder = Cylinder(10, x=[0, 100]*um, diameter=1*um)
section = Section(5,
                  x=[0, 10, 20, 30, 40, 50]*um,
                  y=[0, 10, 20, 30, 40, 50]*um,
                  z=[0, 10, 10, 10, 10, 10]*um,
                  diameter=[6, 5, 4, 3, 2, 1]*um)

Note that the x, y, z attributes of Morphology and SpatialNeuron return the coordinates at the midpoint of each compartment (as for all other attributes that vary over the length of a compartment, e.g. diameter or distance). During construction, however, the coordinates refer to the start and end points of the section (Cylinder), or to the coordinates of the nodes between the compartments (Section), respectively.

A few additional remarks:

  1. In the majority of simulations, coordinates are not used in the neuronal equations, therefore the coordinates are purely for visualization purposes and do not affect the simulation results in any way.
  2. Coordinate specification cannot be combined with length specification – lengths are automatically calculated from the coordinates.
  3. The coordinate specification can also be 1- or 2-dimensional (as in the first two examples above), the unspecified coordinate will use 0 μm.
  4. All coordinates are interpreted relative to the parent compartment, i.e. the point (0 μm, 0 μm, 0 μm) refers to the end point of the previous compartment. Most of the time, the first element of the coordinate specification is therefore 0 μm, to continue a section where the previous one ended. However, it can be convenient to use a value different from 0 μm for sections connecting to the Soma, to make them (visually) connect to a point on the sphere surface instead of the center of the sphere (see the sketch below).
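
As a small sketch of the last point (all values are made up for illustration): a soma of 30 μm diameter has a radius of 15 μm, so starting a child section at x = 15 μm makes it emerge visually from the surface of the sphere rather than from its center:

soma = Soma(diameter=30*um)
# start at x=15um (the radius of the soma) instead of 0um, so that the
# dendrite visually connects to the surface of the sphere
soma.dendrite = Cylinder(n=5, x=[15, 115]*um, diameter=2*um)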

Creating a spatially extended neuron

A SpatialNeuron is a spatially extended neuron. It is created by specifying the morphology as a Morphology object, the equations for transmembrane currents, and optionally the specific membrane capacitance Cm and intracellular resistivity Ri:

gL = 1e-4*siemens/cm**2
EL = -70*mV
eqs = '''
Im = gL * (EL - v) : amp/meter**2
I : amp (point current)
'''
neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=100*ohm*cm)
neuron.v = EL + 10*mV

Several state variables are created automatically: the SpatialNeuron inherits all the geometrical variables of the compartments (length, diameter, area, volume), as well as the distance variable that gives the distance to the soma. For morphologies that use coordinates, the x, y and z variables are provided as well. Additionally, a state variable Cm is created. It is initialized with the value given at construction, but it can be modified on a compartment per compartment basis (which is useful to model myelinated axons). The membrane potential is stored in state variable v.

Note that for all variable values that vary across a compartment (e.g. distance, x, y, z, v), the value that is reported is the value at the midpoint of the compartment.

The key state variable, which must be specified at construction, is Im. It is the total transmembrane current, expressed in units of current per area. This is a mandatory line in the definition of the model. The rest of the string description may include other state variables (differential equations or subexpressions) or parameters, exactly as in NeuronGroup. At every timestep, Brian integrates the state variables, calculates the transmembrane current at every point on the neuronal morphology, and updates v using the transmembrane current and the diffusion current, which is calculated based on the morphology and the intracellular resistivity. Note that the transmembrane current is a current per unit of membrane area, not the total current in the compartment. This choice means that the model equations are independent of the number of compartments chosen for the simulation. The space and time constants can be obtained for any point of the neuron with the space_constant and time_constant attributes, respectively:

l = neuron.space_constant[0]
tau = neuron.time_constant[0]

The calculation is based on the local total conductance (not just the leak conductance) and can therefore potentially vary during a simulation (e.g. decrease during an action potential). Note, however, that the reported value is only correct for compartments with a cylindrical geometry; it does not give reasonable values for compartments with strongly varying diameter.

To inject a current I at a particular point (e.g. through an electrode or a synapse), this current must be divided by the area of the compartment when inserted in the transmembrane current equation. This is done automatically when the point current flag is specified, as in the example above. This flag can be applied only to subexpressions or parameters with amp units. Internally, the expression of the transmembrane current Im is simply augmented with +I/area. A current can then be injected in the first compartment of the neuron (generally the soma) as follows:

neuron.I[0] = 1*nA

State variables of the SpatialNeuron include all the compartments of that neuron (including subtrees). Therefore, the statement neuron.v = EL + 10*mV sets the membrane potential of the entire neuron to -60 mV.

Subtrees can be accessed by attribute (in the same way as in Morphology objects):

neuron.axon.gNa = 10*gL

Note that the state variables correspond to the entire subtree, not just the main section. That is, if the axon had branches, then the above statement would change gNa on the main section and all the sections in the subtree. To access the main section only, use the attribute main:

neuron.axon.main.gNa = 10*gL

A typical use case is when one wants to change parameter values at the soma only. For example, inserting an electrode current at the soma is done as follows:

neuron.main.I = 1*nA

A part of a section can be accessed as follows:

initial_segment = neuron.axon[10*um:50*um]

Synaptic inputs

There are two methods to have synapses on a SpatialNeuron. The first one is to insert synaptic equations directly into the neuron equations:

eqs='''
Im = gL * (EL - v) : amp/meter**2
Is = gs * (Es - v) : amp (point current)
dgs/dt = -gs/taus : siemens
'''
neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=100*ohm*cm)

Note that, as for electrode stimulation, the synaptic current must be defined as a point current. Then we use a Synapses object to connect a spike source to the neuron:

S = Synapses(stimulation, neuron, on_pre='gs += w')
S.connect(i=0, j=50)
S.connect(i=1, j=100)

This creates two synapses, on compartments 50 and 100. One can also specify the compartment by its spatial position, by indexing the morphology:

S.connect(i=0, j=morpho[25*um])
S.connect(i=1, j=morpho.axon[30*um])

In this method for creating synapses, there is a single value for the synaptic conductance in any compartment. This means that it will fail if there are several synapses onto the same compartment and the synaptic equations are nonlinear. The second method, which works in such cases, is to have the synaptic equations in the Synapses object:

eqs='''
Im = gL * (EL - v) : amp/meter**2
Is = gs * (Es - v) : amp (point current)
gs : siemens
'''
neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2, Ri=100*ohm*cm)
S = Synapses(stimulation, neuron, model='''dg/dt = -g/taus : siemens
                                           gs_post = g : siemens (summed)''',
             on_pre='g += w')

Here each synapse (instead of each compartment) has an associated value g, and all values of g for each compartment (i.e., all synapses targeting that compartment) are collected into the compartmental variable gs.

Detecting spikes

To detect and record spikes, we must specify a threshold condition, essentially in the same way as for a NeuronGroup:

neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='v > 0*mV', refractory='v > -10*mV')

Here spikes are detected when the membrane potential v reaches 0 mV. Because there is generally no explicit reset in this type of model (although it is possible to specify one), v remains above 0 mV for some time. To avoid detecting spikes during this entire time, we specify a refractory period. In this case no spike is detected as long as v is greater than -10 mV. Another possibility could be:

neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='m > 0.5', refractory='m > 0.4')

where m is the state variable for sodium channel activation (assuming this has been defined in the model). Here a spike is detected when half of the sodium channels are open.

With the syntax above, spikes are detected in all compartments of the neuron. To detect them in a single compartment, use the threshold_location keyword:

neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='m > 0.5', threshold_location=30,
                       refractory='m > 0.4')

In this case, spikes are only detected in compartment number 30. The reset then applies locally to that compartment (if a reset statement is defined). Again, the location of the threshold can be specified with a spatial position:

neuron = SpatialNeuron(morphology=morpho, model=eqs, threshold='m > 0.5',
                       threshold_location=morpho.axon[30*um],
                       refractory='m > 0.4')

Computational methods and efficiency

Brian has several different methods for running the computations in a simulation. The default mode is Runtime code generation, which runs the simulation loop in Python but compiles and executes the modules doing the actual simulation work (numerical integration, synaptic propagation, etc.) in a defined target language. Brian will select the best available target language automatically. On Windows, to ensure that you get the advantages of compiled code, read the instructions on installing a suitable compiler in Windows. Runtime mode has the advantage that you can combine the computations performed by Brian with arbitrary Python code specified as a NetworkOperation.

The fact that the simulation is run in Python means that there is a (potentially big) overhead for each simulated time step. An alternative is to run Brian with Standalone code generation – this is generally faster (for certain types of simulations much faster) but cannot be used for all kinds of simulations. To enable this mode, add the following line after your Brian import, but before your simulation code:

set_device('cpp_standalone')

For detailed control over the compilation process (both for runtime and standalone code generation), you can change the Compiler settings that are used.

The following topics are not essential for beginners.


Runtime code generation

Code generation means that Brian takes the Python code and strings in your model, generates code in one of several possible target languages, and actually executes that code. The target language for this code generation process is set in the codegen.target preference. By default, this preference is set to 'auto', meaning that it will choose a compiled language target if possible and fall back to Python otherwise (it will also raise a warning in this case; set codegen.target to 'numpy' explicitly to avoid this warning). There are two compiled language targets for Python 2.x: 'weave' (needing a working installation of a C++ compiler) and 'cython' (needing the Cython package in addition); for Python 3.x, only 'cython' is available. If you want to choose a code generation target explicitly (e.g. because you want to get rid of the warning that only the Python fallback is available), set the preference to 'numpy', 'weave' or 'cython' at the beginning of your script:

from brian2 import *
prefs.codegen.target = 'numpy'  # use the Python fallback

See Preferences for different ways of setting preferences.

Warning

Do not use the weave code generation targets when running multiple simulations in parallel. See Known issues for more details.

You might find that running simulations with the weave or Cython targets does not work or is not as efficient as you were expecting. This is probably because you are using Python functions that are not compatible with weave or Cython. For example, code like the following would not be efficient:

from brian2 import *
prefs.codegen.target = 'cython'
def f(x):
    return abs(x)
G = NeuronGroup(10000, 'dv/dt = -v*f(v)/ms : 1')

The reason is that the function f(x) is a Python function and so cannot be called from C++ directly. To solve this problem, you need to provide an implementation of the function in the target language. See Functions.

Standalone code generation

Brian supports generating standalone code for multiple devices. In this mode, running a Brian script generates source code in a project tree for the target device/language. This code can then be compiled and run on the device, and modified if needed. At the moment, the only “device” supported is standalone C++ code. In some cases, the speed gains can be impressive, in particular for smaller networks with complicated spike propagation rules (such as STDP).

To use the C++ standalone mode, you only have to make very small changes to your script. The exact change depends on whether your script has only a single run() (or Network.run()) call, or several of them:

Single run call

At the beginning of the script, i.e. after the import statements, add:

set_device('cpp_standalone')

The CPPStandaloneDevice.build function will be automatically called with default arguments right after the run() call. If you need non-standard arguments then you can specify them as part of the set_device() call:

set_device('cpp_standalone', directory='my_directory', debug=True)

Multiple run calls

At the beginning of the script, i.e. after the import statements, add:

set_device('cpp_standalone', build_on_run=False)

After the last run() call, call device.build() explicitly:

device.build(directory='output', compile=True, run=True, debug=False)

The build function has several arguments to specify the output directory, whether or not to compile and run the project after creating it, and whether to compile it with debugging support.

Multiple builds

To run multiple full simulations (i.e. multiple device.build calls, not just multiple run() calls as discussed above), you have to reinitialize the device again:

device.reinit()
device.activate()

Note that the device “forgets” about all build options previously provided to set_device() (most importantly the build_on_run option, but also e.g. the directory); you’ll have to specify them as part of the Device.activate call. Also, Device.activate will reset the defaultclock; you’ll therefore have to set its dt after the activate call if you want to use a non-default value.
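
A minimal sketch of a second build within the same script (assuming the build options from the example above):

device.reinit()
device.activate(build_on_run=False)  # build options have to be given again
defaultclock.dt = 0.05*ms            # the defaultclock was reset as well
# ... set up the second simulation and call run(), then:
device.build(directory='output_second_run')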

Limitations

Not all features of Brian will work with C++ standalone, in particular Python based network operations and some array based syntax such as S.w[0, :] = ... will not work. If possible, rewrite these using string based syntax and they should work. Also note that since the Python code actually runs as normal, code that does something like this may not behave as you would like:

results = []
for val in vals:
    # set up a network
    run()
    results.append(result)

The current C++ standalone code generation only works for a fixed number of run statements, not with loops. If you need to do loops or other features not supported automatically, you can do so by inspecting the generated C++ source code and modifying it, or by inserting code directly into the main loop as follows:

device.insert_code('main', '''
cout << "Testing direct insertion of code." << endl;
''')

Variables

After a simulation has been run (after the run() call if set_device() has been called with build_on_run set to True or after the Device.build call with run set to True), state variables and monitored variables can be accessed using standard syntax, with a few exceptions (e.g. string expressions for indexing).

Multi-threading with OpenMP

Warning

OpenMP code has not yet been well tested and so may be inaccurate.

When using the C++ standalone mode, you have the opportunity to turn on multi-threading, if your C++ compiler is compatible with OpenMP. By default, this option is turned off and only one thread is used. However, you can turn it on by changing the devices.cpp_standalone preferences. To do so, just add the following line to your Python script:

prefs.devices.cpp_standalone.openmp_threads = XX

XX should be a positive value representing the number of threads that will be used during the simulation. Note that the speedup strongly depends on the network, so there is no guarantee that it will scale linearly with the number of threads. However, this works well in practice for networks with a not-too-small time step (dt > 0.1ms), and results do not depend on the number of threads used in the simulation.

Compiler settings

If using C++ code generation (either via weave, cython or standalone), the compiler settings can make a big difference for the speed of the simulation. By default, Brian uses a set of compiler settings that switches on various optimizations and compiles for running on the same architecture where the code is compiled. This allows the compiler to make use of as many advanced instructions as possible, but reduces portability of the generated executable (which is not usually an issue).

If there are any issues with these compiler settings, for example because you are using an older version of the C++ compiler or because you want to run the generated code on a different architecture, you can change the settings by manually specifying the codegen.cpp.extra_compile_args preference (or by using codegen.cpp.extra_compile_args_gcc or codegen.cpp.extra_compile_args_msvc if you want to specify the settings for either compiler only).
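
For example, to replace the default gcc flags with a more conservative set (the flags shown here are just an illustration):

from brian2 import *

prefs.codegen.cpp.extra_compile_args_gcc = ['-w', '-O2']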

Advanced guide

This section has additional information on details not covered in the User’s guide.

Functions

All equations, expressions and statements in Brian can make use of mathematical functions. However, functions have to be prepared for use with Brian for two reasons: 1) Brian is strict about checking the consistency of units, therefore every function has to specify how it deals with units; 2) functions need to be implemented differently for different code generation targets.

Brian provides a number of default functions that are already prepared for use with numpy and C++ and also provides a mechanism for preparing new functions for use (see below).

Default functions

The following functions (stored in the DEFAULT_FUNCTIONS dictionary) are ready for use:

  • Random numbers: rand(), randn() (Note that these functions should be called without arguments, the code generation process will take care of generating an array of numbers for numpy).
  • Elementary functions: sqrt, exp, log, log10, abs, sign
  • Trigonometric functions: sin, cos, tan, sinh, cosh, tanh, arcsin, arccos, arctan
  • General utility functions: clip, floor, ceil

Brian also provides a special purpose function int, which can be used to convert an expression or variable into an integer value. This is especially useful for boolean values (which will be converted into 0 or 1), for example to have a conditional evaluation as part of an equation or statement; this sometimes makes it possible to circumvent the lack of an if statement. For example, the following reset statement resets the variable v to either v_r1 or v_r2, depending on the value of w: 'v = v_r1 * int(w <= 0.5) + v_r2 * int(w > 0.5)'
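
As a runnable sketch of this reset (the threshold and the two reset values are arbitrary choices for illustration):

from brian2 import *

group = NeuronGroup(10, '''dv/dt = (1 - v) / (10*ms) : 1
                           w : 1''',
                    threshold='v > 0.8',
                    reset='v = v_r1 * int(w <= 0.5) + v_r2 * int(w > 0.5)',
                    namespace={'v_r1': 0.0, 'v_r2': 0.4})
group.w = 'rand()'  # half of the neurons reset to v_r1, the others to v_r2
run(100*ms)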

User-provided functions

Python code generation

If a function is only used in contexts that use Python code generation, preparing a function for use with Brian only means specifying its units. The simplest way to do this is to use the check_units() decorator:

@check_units(x1=meter, y1=meter, x2=meter, y2=meter, result=meter)
def distance(x1, y1, x2, y2):
    return sqrt((x1 - x2)**2 + (y1 - y2)**2)

Another option is to wrap the function in a Function object:

def distance(x1, y1, x2, y2):
    return sqrt((x1 - x2)**2 + (y1 - y2)**2)
# wrap the distance function
distance = Function(distance, arg_units=[meter, meter, meter, meter],
                    return_unit=meter)

The use of Brian’s unit system has the benefit of checking the consistency of units for every operation but at the expense of performance. Consider the following function, for example:

@check_units(I=amp, result=Hz)
def piecewise_linear(I):
    return clip((I-1*nA) * 50*Hz/nA, 0*Hz, 100*Hz)

When Brian runs a simulation, the state variables are stored and passed around without units for performance reasons. If the above function is used, however, Brian adds units to its input argument so that the operations inside the function do not fail with dimension mismatches. Accordingly, units are removed from the return value so that the function output can be used with the rest of the code. For better performance, Brian can alter the namespace of the function when it is executed as part of the simulation and remove all the units, then pass values without units to the function. In the above example, this means making the symbol nA refer to 1e-9 and Hz to 1. To use this mechanism, add the decorator implementation() with the discard_units keyword:

@implementation('numpy', discard_units=True)
@check_units(I=amp, result=Hz)
def piecewise_linear(I):
    return clip((I-1*nA) * 50*Hz/nA, 0*Hz, 100*Hz)

Note that the use of the function outside of simulation runs is not affected, i.e. using piecewise_linear still requires a current in Ampere and returns a rate in Hertz. The discard_units mechanism does not work in all cases, e.g. it does not work if the function refers to units as brian2.nA instead of nA, if it uses imports inside the function (e.g. from brian2 import nA), etc. The discard_units mechanism can also be switched on for all functions without having to use the implementation() decorator, by setting the codegen.runtime.numpy.discard_units preference.

Other code generation targets

To make a function available for other code generation targets (e.g. C++), implementations for these targets have to be added. This can be achieved using the implementation() decorator. The necessary form of the code (e.g. a simple string or a dictionary of strings) is target-dependent; for C++, both options are allowed, and a simple string will be interpreted as filling the 'support_code' block. Note that both 'cpp' and 'weave' can be used to provide C++ implementations: the former should be used for generic C++ implementations, and the latter if weave-specific code is necessary. An implementation for the C++ target could look like this:

@implementation('cpp', '''
     double piecewise_linear(double I) {
        if (I < 1e-9)
            return 0;
        if (I > 3e-9)
            return 100;
        return (I/1e-9 - 1) * 50;
     }
     ''')
@check_units(I=amp, result=Hz)
def piecewise_linear(I):
    return clip((I-1*nA) * 50*Hz/nA, 0*Hz, 100*Hz)

Alternatively, FunctionImplementation objects can be added to the Function object.

The same sort of approach as for C++ works for Cython using the 'cython' target. The example above would look like this:

@implementation('cython', '''
    cdef double piecewise_linear(double I):
        if I<1e-9:
            return 0.0
        elif I>3e-9:
            return 100.0
        return (I/1e-9-1)*50
    ''')
@check_units(I=amp, result=Hz)
def piecewise_linear(I):
    return clip((I-1*nA) * 50*Hz/nA, 0*Hz, 100*Hz)

Arrays vs. scalar values in user-provided functions

Equations, expressions and abstract code statements are always implicitly referring to all the neurons in a NeuronGroup, all the synapses in a Synapses object, etc. Therefore, function calls also apply to more than a single value. The way in which this is handled differs between code generation targets that support vectorized expressions (e.g. the numpy target) and targets that don’t (e.g. the weave target or the cpp_standalone mode). If the code generation target supports vectorized expressions, it will receive an array of values. For example, in the piecewise_linear example above, the argument I will be an array of values and the function returns an array of values. For code generation without support for vectorized expressions, all code will be executed in a loop (over neurons, over synapses, ...), the function will therefore be called several times with a single value each time.

In both cases, the function will only receive the “relevant” values, meaning that if for example a function is evaluated as part of a reset statement, it will only receive values for the neurons that just spiked.

Additional namespace

Some functions need additional data to compute a result, e.g. a TimedArray needs access to the underlying array. For the numpy target, a function can simply use a reference to an object defined outside the function; there is no need to explicitly pass values in a namespace. For the other code generation targets, values can be passed in the namespace argument of the implementation() decorator or the add_implementation method. The namespace values are then accessible in the function code under the given name, prefixed with _namespace. Note that this mechanism should only be used for numpy arrays or general objects (e.g. function references to call Python functions from weave or Cython code). Scalar values should be directly included in the function code, by using a “dynamic implementation” (see add_dynamic_implementation).

See TimedArray and BinomialFunction for examples that use this mechanism.

Data types

By default, functions are assumed to take any type of argument, and return a floating point value. If you want to put a restriction on the type of an argument, or specify that the return type should be something other than float, either declare it as a Function (and see its documentation on specifying types) or use the declare_types() decorator, e.g.:

@check_units(a=1, b=1, result=1)
@declare_types(a='integer', result='highest')
def f(a, b):
    return a*b

This is potentially important if you have functions that return integer or boolean values, because Brian’s code generation optimisation step will make some potentially incorrect simplifications if it assumes that the return type is floating point.

Preferences

Brian has a system of global preferences that affect how certain objects behave. These can be set either in scripts, by using the prefs object, or in a file. Each preference has a dotted name such as codegen.c.compiler.

Accessing and setting preferences

Preferences can be accessed and set either keyword-based or attribute-based. The following are equivalent:

prefs['codegen.c.compiler'] = 'gcc'
prefs.codegen.c.compiler = 'gcc'

Using the attribute-based form can be particularly useful for interactive work, e.g. in ipython, as it offers autocompletion and documentation. In ipython, prefs.codegen.c? would display a docstring with all the preferences available in the codegen.c category.

Preference files

Preferences are stored in a hierarchy of files, with the following order (each step overrides the values in the previous step but no error is raised if one is missing):

  • The global defaults are stored in the installation directory.
  • The user defaults are stored in ~/.brian/user_preferences (which works on Windows as well as Linux). The ~ symbol refers to the user directory.
  • The file brian_preferences in the current directory.

The preference files are of the following form:

a.b.c = 1
# Comment line
[a]
b.d = 2
[a.b]
e = 3

This would set preferences a.b.c=1, a.b.d=2 and a.b.e=3.

List of preferences

Brian itself defines the following preferences (including their default values):

codegen

Code generation preferences

codegen.loop_invariant_optimisations = True

Whether to pull scalar expressions out of the statements, so that they are only evaluated once instead of once for every neuron/synapse/... Can be switched off, e.g. because it complicates the code (and the same optimisation is already performed by the compiler) or because the code generation target does not deal well with it. Defaults to True.

codegen.string_expression_target = 'numpy'

Default target for the evaluation of string expressions (e.g. when indexing state variables). Should normally not be changed from the default numpy target, because the overhead of compiling code is not worth the speed gain for simple expressions.

Accepts the same arguments as codegen.target, except for 'auto'.

codegen.target = 'auto'

Default target for code generation.

Can be a string, in which case it should be one of:

  • 'auto': the default; automatically chooses the best code generation target available.
  • 'weave': uses scipy.weave to generate and compile C++ code; should work anywhere where gcc is installed and available at the command line.
  • 'cython': uses the Cython package to generate C++ code; needs a working installation of Cython and a C++ compiler.
  • 'numpy': works on all platforms and doesn’t need a C compiler, but is often less efficient.

Or it can be a CodeObject class.

codegen.cpp

C++ compilation preferences

codegen.cpp.compiler = ''

Compiler to use (uses default if empty)

Should be gcc or msvc.

codegen.cpp.define_macros = []

List of macros to define; each macro is defined using a 2-tuple (name, value), where value is either the string to define it to or None to define it without a particular value (the equivalent of “#define FOO” in source code or -DFOO on the Unix C compiler command line).

codegen.cpp.extra_compile_args = None

Extra arguments to pass to compiler (if None, use either extra_compile_args_gcc or extra_compile_args_msvc).

codegen.cpp.extra_compile_args_gcc = ['-w', '-O3', '-ffast-math', '-fno-finite-math-only', '-march=native']

Extra compile arguments to pass to GCC compiler

codegen.cpp.extra_compile_args_msvc = ['/Ox', '/w', '/arch:SSE2']

Extra compile arguments to pass to the MSVC compiler (the default /arch: flag is determined based on the processor architecture).

codegen.cpp.extra_link_args = []

Any extra platform- and compiler-specific information to use when linking object files together.

codegen.cpp.headers = []

A list of strings specifying header files to use when compiling the code. The list might look like ['<vector>', '"my_header"']. Note that the header strings need to be in a form that can be pasted at the end of a #include statement in the C++ code.

codegen.cpp.include_dirs = []

Include directories to use. Note that $prefix/include will be appended to the end automatically, where $prefix is Python’s site-specific directory prefix as returned by sys.prefix.

codegen.cpp.libraries = []

List of library names (not filenames or paths) to link against.

codegen.cpp.library_dirs = []

List of directories to search for C/C++ libraries at link time. Note that $prefix/lib will be appended to the end automatically, where $prefix is Python’s site-specific directory prefix as returned by sys.prefix.

codegen.cpp.msvc_architecture = ''

MSVC architecture name (or use the system architecture by default).

Could take values such as x86, amd64, etc.

codegen.cpp.msvc_vars_location = ''

Location of the MSVC command line tool (or search for best by default).

codegen.cpp.runtime_library_dirs = []

List of directories to search for C/C++ libraries at run time.

codegen.generators

Codegen generator preferences (see subcategories for individual languages)

codegen.generators.cpp

C++ codegen preferences

codegen.generators.cpp.flush_denormals = False

Adds code to flush denormals to zero.

The code is gcc- and architecture-specific, so it may not compile on all platforms. The code, for reference, is:

#define CSR_FLUSH_TO_ZERO         (1 << 15)
unsigned csr = __builtin_ia32_stmxcsr();
csr |= CSR_FLUSH_TO_ZERO;
__builtin_ia32_ldmxcsr(csr);

Found at http://stackoverflow.com/questions/2487653/avoiding-denormal-values-in-c.

codegen.generators.cpp.restrict_keyword = '__restrict'

The keyword used for the given compiler to declare pointers as restricted.

This keyword differs between compilers; the default works for gcc and MSVC.

codegen.runtime

Runtime codegen preferences (see subcategories for individual targets)

codegen.runtime.cython

Cython runtime codegen preferences

codegen.runtime.cython.cache_dir = None

Location of the cache directory for Cython files. By default, it will be stored in a brian_extensions subdirectory where Cython inline stores its temporary files (the result of get_cython_cache_dir()).

codegen.runtime.cython.multiprocess_safe = True

Whether to use a lock file to prevent simultaneous write access to cython .pyx and .so files.

codegen.runtime.numpy

Numpy runtime codegen preferences

codegen.runtime.numpy.discard_units = False

Whether to change the namespace of user-specified functions to remove units.

core

Core Brian preferences

core.default_float_dtype = float64

Default dtype for all arrays of scalars (state variables, weights, etc.).

Currently, this is not supported (only float64 can be used).

core.default_integer_dtype = int32

Default dtype for all arrays of integer scalars.

core.outdated_dependency_error = True

Whether to raise an error for outdated dependencies (True) or just a warning (False).

core.network

Network preferences

core.network.default_schedule = ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']

Default schedule used for networks that don’t specify a schedule.

devices

Device preferences

devices.cpp_standalone

C++ standalone preferences

devices.cpp_standalone.openmp_spatialneuron_strategy = None

Which strategy to choose for solving the three tridiagonal systems with OpenMP: 'branches' means solving the three systems sequentially, but all the branches in parallel; 'systems' means solving the three systems in parallel, but all the branches within each system sequentially. The 'branches' approach is usually better for morphologies with many branches and a large number of threads, while the 'systems' strategy should be better for morphologies with few branches (e.g. cables) and/or simulations with no more than three threads. If not specified (the default), the 'systems' strategy will be used when using no more than three threads or when the morphology has fewer than three branches in total.

devices.cpp_standalone.openmp_threads = 0

The number of threads to use if OpenMP is turned on. By default, this value is set to 0 and the C++ code is generated without any reference to OpenMP. If greater than 0, the corresponding number of threads will be used to launch the simulation.

logging

Logging system preferences

logging.console_log_level = 'INFO'

What log level to use for the log written to the console.

Has to be one of CRITICAL, ERROR, WARNING, INFO, DEBUG or DIAGNOSTIC.

logging.delete_log_on_exit = True

Whether to delete the log and script file on exit.

If set to True (the default), log files (and the copy of the main script) will be deleted after the brian process has exited, unless an uncaught exception occurred. If set to False, all log files will be kept.

logging.file_log = True

Whether to log to a file or not.

If set to True (the default), logging information will be written to a file. The log level can be set via the logging.file_log_level preference.

logging.file_log_level = 'DIAGNOSTIC'

What log level to use for the log written to the log file.

If file logging is activated (see logging.file_log), this determines the log level used for the log file. Has to be one of CRITICAL, ERROR, WARNING, INFO, DEBUG or DIAGNOSTIC.

logging.save_script = True

Whether to save a copy of the script that is run.

If set to True (the default), a copy of the currently run script is saved to a temporary location. It is deleted after a successful run (unless logging.delete_log_on_exit is False) but is kept if an uncaught exception occurs. This can be helpful for debugging, in particular when several simulations are running in parallel.

logging.std_redirection = True

Whether or not to redirect stdout/stderr to null at certain places.

This silences a lot of annoying compiler output, but will also hide error messages, making it harder to debug problems. You can always temporarily switch it off when debugging. If logging.std_redirection_to_file is set to True as well, the output is saved to a file and, if an error occurs, the name of this file will be printed.

logging.std_redirection_to_file = True

Whether to redirect stdout/stderr to a file.

If both logging.std_redirection and this preference are set to True, all standard output/error (most importantly output from the compiler) will be stored in files and, if an error occurs, the name of this file will be printed. If logging.std_redirection is True and this preference is False, all standard output/error will be completely suppressed, i.e. neither displayed nor stored in a file.

The value of this preference is ignored if logging.std_redirection is set to False.

Logging

Brian uses a logging system to display warnings and general information messages to the user, as well as writing them to a file with more detailed information, useful for debugging. Each log message has one of the following “log levels”:

ERROR
Only used when an exception is raised, i.e. an error occurs and the current operation is interrupted. Example: You use a variable name in an equation that Brian does not recognize.
WARNING
Brian thinks that something is most likely a bug, but it cannot be sure. Example: You use a Synapses object without any synapses in your simulation.
INFO
Brian wants to make the user aware of some automatic choice it made. Example: You did not specify an integration method for a NeuronGroup and therefore Brian chose an appropriate method for you.
DEBUG
Additional information that might be useful when a simulation is not working as expected. Example: The integration timestep used during the simulation.
DIAGNOSTIC
Additional information useful when tracking down bugs in Brian itself. Example: The generated code for a CodeObject.

By default, all messages are written to the log file and all messages of level INFO and above are displayed on the console. To change what messages are displayed, see below.

Note

By default, the log file is deleted after a successful simulation run, i.e. when the simulation exited without an error. To keep the log around, set the logging.delete_log_on_exit preference to False.

Showing/hiding log messages

If you want to change what messages are displayed on the console, you can call one of the log-level methods of BrianLogger:

BrianLogger.log_level_debug() # now also display debug messages

It is also possible to suppress messages for certain sub-hierarchies by using BrianLogger.suppress_hierarchy:

# Suppress code generation messages on the console
BrianLogger.suppress_hierarchy('brian2.codegen')
# Suppress preference messages even in the log file
BrianLogger.suppress_hierarchy('brian2.core.preferences',
                               filter_log_file=True)

Similarly, messages ending in a certain name can be suppressed with BrianLogger.suppress_name:

# Suppress resolution conflict warnings
BrianLogger.suppress_name('resolution_conflict')

These functions should be used with care, as they suppress messages independently of their level, i.e. they also suppress warning and error messages.

Preferences

You can also change details of the logging system via Brian’s Preferences system. With this mechanism, you can switch the logging to a file off completely (by setting logging.file_log to False) or have it log less messages (by setting logging.file_log_level to a level higher than DIAGNOSTIC) – this can be important for long-running simulations where the log might otherwise take up a lot of disk space. For a list of all preferences related to logging, see the documentation of the brian2.utils.logger module.

Warning

Most of the logging preferences are only taken into account during the initialization of the logging system which takes place as soon as brian2 is imported. Therefore, if you use e.g. prefs.logging.file_log = False in your script, this will not have the intended effect! Instead, set these preferences in a file (see Preferences).

Namespaces

Equations can contain references to external parameters or functions. During the initialisation of a NeuronGroup or a Synapses object, this namespace can be provided as an argument. This is a group-specific namespace that will only be used for names in the context of the respective group. Note that units and a set of standard functions are always provided and should not be given explicitly. This namespace does not necessarily need to be exhaustive at the time of the creation of the NeuronGroup/Synapses, entries can be added (or modified) at a later stage via the namespace attribute (e.g. G.namespace['tau'] = 10*ms).

When Network.run() is called, any group-specific namespace will be augmented by the “run namespace”. This namespace can either be given explicitly as an argument to the run method, or it will be taken from the locals and globals surrounding the call. A warning will be emitted if a name is defined in more than one namespace.

To summarize: an external identifier will be looked up in the context of an object such as a NeuronGroup or Synapses, following this resolution hierarchy:

  1. Default unit and function names.
  2. Names defined in the explicit group-specific namespace.
  3. Names in the run namespace which is either explicitly given or the implicit namespace surrounding the run call.

Note that if you completely specify your namespaces at the Group level, you should probably pass an empty dictionary as the namespace argument to the run call – this will completely switch off the “implicit namespace” mechanism.

The following three examples show the different ways of providing external variable values, all having the same effect in this case:

# Explicit argument to the NeuronGroup
G = NeuronGroup(1, 'dv/dt = -v / tau : 1', namespace={'tau': 10*ms})
net = Network(G)
net.run(10*ms)

# Explicit argument to the run function
G = NeuronGroup(1, 'dv/dt = -v / tau : 1')
net = Network(G)
net.run(10*ms, namespace={'tau': 10*ms})

# Implicit namespace from the context
G = NeuronGroup(1, 'dv/dt = -v / tau : 1')
net = Network(G)
tau = 10*ms
net.run(10*ms)

External variables are free to change between runs (but not during one run), the value at the time of the run() call is used in the simulation.

Scheduling and custom progress reporting

Scheduling

Every simulated object in Brian has three attributes that can be specified at object creation time: dt, when, and order. The time step of the simulation is determined by dt: if it is specified, a new Clock with the given dt will be created for the object. Alternatively, a clock object can be specified directly; this can be useful if a clock should be shared between several objects – under most circumstances, however, a user should not have to deal with the creation of Clock objects and should just define dt. If neither a dt nor a clock argument is specified, the object will use the defaultclock. Setting defaultclock.dt will therefore change the dt of all objects that use the defaultclock.

Note that directly changing the dt attribute of an object is not allowed, nor is it possible to assign to dt in abstract code statements. To change dt between runs, change the dt attribute of the respective Clock object (which is also accessible as the clock attribute of each BrianObject). The when and order attributes can be changed by setting the respective attributes of a BrianObject.

During a single time step, objects are updated according to their when argument’s position in the schedule. This schedule is determined by Network.schedule, which is a list of strings determining “execution slots” and their order. It defaults to: ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']. In addition to the names provided in the schedule, names such as before_thresholds or after_synapses can be used; they are understood as slots in the respective positions. The default for the when attribute is a sensible value for most objects (resets will happen in the resets slot, etc.), but sometimes it makes sense to change it, e.g. if one would like a StateMonitor, which by default records in the end slot, to record the membrane potential before a reset is applied (otherwise no threshold crossings will be observed in the membrane potential traces). Note that you can also add new slots to the schedule and refer to them in the when argument of an object.
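
For example, to record the membrane potential just before the reset is applied (a minimal sketch, assuming a group G with a threshold and reset):

# record v before the 'resets' slot, so that threshold crossings
# remain visible in the recorded traces
mon = StateMonitor(G, 'v', record=True, when='before_resets')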

Finally, if during a time step two objects fall in the same execution slot, they will be updated in ascending order according to their order attribute, an integer number defaulting to 0. If two objects have the same when and order attribute then they will be updated in an arbitrary but reproducible order (based on the lexicographical order of their names).

Note that objects that don’t do any computation by themselves but only act as a container for other objects (e.g. a NeuronGroup which contains a StateUpdater, a Resetter and a Thresholder), don’t have any value for when, but pass on the given values for dt and order to their containing objects.

Every new Network starts a simulation at time 0; Network.t is a read-only attribute. To go back to a previous moment in time (e.g. to do another trial of a simulation with a new noise instantiation), use the mechanism described below.

Note that while it is allowed to change the dt of an object between runs (e.g. to simulate/monitor an initial phase with a bigger time step than a later phase), this change has to be compatible with the internal representation of clocks as an integer value (the number of elapsed time steps). For example, you can simulate an object for 100ms with a time step of 0.1ms (i.e. for 1000 steps) and then switch to a dt of 0.5ms; the time will then be internally represented as 200 steps. You cannot, however, switch to a dt of 0.3ms, because 100ms is not an integer multiple of 0.3ms.

Progress reporting

For custom progress reporting (e.g. graphical output, writing to a file, etc.), the report keyword accepts a callable (i.e. a function or an object with a __call__ method) that will be called with four parameters:

  • elapsed: the total (real) time since the start of the run
  • completed: the fraction of the total simulation that is completed, i.e. a value between 0 and 1
  • start: the start of the simulation (in biological time)
  • duration: the total duration (in biological time) of the simulation

The function will be called every report_period during the simulation, but also at the beginning and end with completed equal to 0.0 and 1.0, respectively.
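
A minimal example of such a callable, printing the percentage of the simulation that has been completed:

def report_progress(elapsed, completed, start, duration):
    print('%d%% completed in %s' % (int(completed * 100.), elapsed))

net.run(duration, report=report_progress, report_period=10*second)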

For the C++ standalone mode, the same standard options are available. It is also possible to implement custom progress reporting by directly passing the code (as a multi-line string) to the report argument. This code will be filled into a progress report function template; it should therefore only contain a function body. The simplest use of this might look like:

net.run(duration, report='std::cout << (int)(completed*100.) << "% completed" << std::endl;')

Examples of custom reporting

Progress printed to a file

from brian2.core.network import TextReport
report_file = open('report.txt', 'w')
file_reporter = TextReport(report_file)
net.run(duration, report=file_reporter)
report_file.close()

“Graphical” output on the console

This needs a “normal” Linux console, i.e. it might not work in an integrated console in an IDE.

Adapted from http://stackoverflow.com/questions/3160699/python-progress-bar

import sys

class ProgressBar(object):
    def __init__(self, toolbar_width=40):
        self.toolbar_width = toolbar_width
        self.ticks = 0

    def __call__(self, elapsed, completed, start, duration):
        if completed == 0.0:
            # set up the toolbar
            sys.stdout.write("[%s]" % (" " * self.toolbar_width))
            sys.stdout.flush()
            # return to the start of the line, after '['
            sys.stdout.write("\b" * (self.toolbar_width + 1))
        else:
            ticks_needed = int(round(completed * self.toolbar_width))
            if self.ticks < ticks_needed:
                sys.stdout.write("-" * (ticks_needed - self.ticks))
                sys.stdout.flush()
                self.ticks = ticks_needed
        if completed == 1.0:
            sys.stdout.write("\n")

net.run(duration, report=ProgressBar(), report_period=1*second)

Random numbers

Brian provides two basic functions to generate random numbers that can be used in model code and equations: rand(), to generate uniformly distributed random numbers between 0 and 1, and randn(), to generate random numbers from a standard normal distribution (i.e. normally distributed numbers with a mean of 0 and a standard deviation of 1). All other stochastic elements of a simulation (probabilistic connections, Poisson-distributed input generated by PoissonGroup or PoissonInput, differential equations using the noise term xi, ...) will internally make use of these two basic functions.

For Runtime code generation, random numbers are generated by numpy.random.rand and numpy.random.randn respectively, which use a Mersenne-Twister pseudorandom number generator. When the numpy code generation target is used, these functions are called directly, but for weave and cython, Brian uses internal buffers for uniformly and normally distributed random numbers and calls the numpy functions whenever all numbers from the buffer have been used. This avoids the overhead of switching between C code and Python code for each random number. For Standalone code generation, the random number generation is based on “randomkit”, the same Mersenne-Twister implementation that is used by numpy. The source code of this implementation will be included in every generated standalone project.

Seeding and reproducibility

Runtime mode

As explained above, Runtime code generation makes use of numpy’s random number generator. In principle, using numpy.random.seed therefore permits reproducing a stream of random numbers. However, for weave and cython, Brian’s buffer complicates the matter a bit: if a simulation sets numpy’s seed, uses 10000 random numbers, and then resets the seed, the following 10000 random numbers (assuming the current size of the buffer) will come out of the pre-generated buffer before numpy’s random number generation functions are called again and take into account the seed set by the user. Instead, users should use the seed() function provided by Brian 2 itself; this will take care of setting numpy’s random seed and emptying Brian’s internal buffers. This function also has the advantage that it will continue to work when the simulation is switched to standalone code generation (see below). Note that random numbers are not guaranteed to be reproducible across different code generation targets or different versions of Brian, especially if several sources of randomness are used in the same CodeObject (e.g. two noise variables in the equations of a NeuronGroup). This is because Brian does not guarantee the order of certain operations (e.g. should it first generate all random numbers for the first noise variable for all neurons, followed by the random numbers for the second noise variable for all neurons, or rather first the random numbers for all noise variables of the first neuron, then for the second neuron, etc.?). Since all random numbers come from the same stream, the order of getting the numbers out of this stream matters.
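
A minimal sketch of a reproducible runtime simulation using Brian's seed() function:

from brian2 import *

seed(12345)  # seeds numpy's generator and empties Brian's internal buffers
G = NeuronGroup(100, 'dv/dt = -v/(10*ms) + 0.1*xi/sqrt(ms) : 1')
G.v = 'rand()'
run(10*ms)   # repeated runs of this script give identical results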

Standalone mode

For Standalone code generation, Brian’s seed() function will insert code to set the random number generator seed into the generated code. The code will be generated at the position where the seed() call was made, allowing detailed control over the seeding. For example the following code would generate identical initial conditions every time it is run, but the noise generated by the xi variable would differ:

G = NeuronGroup(10, 'dv/dt = -v/(10*ms) + 0.1*xi/sqrt(ms) : 1')
seed(4321)
G.v = 'rand()'
seed()
run(100*ms)

Note

In standalone mode, seed() will not set numpy’s random number generator. If you use random numbers in the Python script itself (e.g. to generate a list of synaptic connections that will be passed to the standalone code as a pre-calculated array), then you have to explicitly call numpy.random.seed yourself to make these random numbers reproducible.

Note

Seeding should lead to reproducible random numbers even when using OpenMP with multiple threads (for repeated simulations with the same number of threads), but this has not been rigorously tested. Use at your own risk.

Custom events

In most simulations, a NeuronGroup defines a threshold on its membrane potential that triggers a spike event. This event can be monitored by a SpikeMonitor, it is used in synaptic interactions, and in integrate-and-fire models it also leads to the execution of one or more reset statements.

Sometimes, it can be useful to define additional events, e.g. when an ion concentration in the cell crosses a certain threshold. This can be done with the events keyword in the NeuronGroup initializer:

group = NeuronGroup(N, '...', threshold='...', reset='...',
                    events={'custom_event': 'x > x_th'})

In this example, we define an event with the name custom_event that is triggered when the x variable crosses the threshold x_th. Such events can be recorded with an EventMonitor:

event_mon = EventMonitor(group, 'custom_event')

Such an EventMonitor can be used in the same way as a SpikeMonitor – in fact, creating a SpikeMonitor is basically identical to recording the spike event with an EventMonitor. An EventMonitor is not limited to recording the event time/neuron index; it can also record other variables of the model:

event_mon = EventMonitor(group, 'custom_event', variables=['var1', 'var2'])

If the event should trigger a series of statements (i.e. the equivalent of reset statements), this can be added by calling run_on_event:

group.run_on_event('custom_event', 'x=0')

When neurons are connected by Synapses, the pre and post pathways are triggered by spike events by default. It is possible to change this by providing an on_event keyword that either specifies which event to use for all pathways, or a specific event for each pathway (where non-specified pathways use the default spike event):

synapse_1 = Synapses(group, another_group, '...', on_pre='...', on_event='custom_event')
synapse_2 = Synapses(group, another_group, '...', on_pre='...', on_post='...',
                     on_event={'pre': 'custom_event'})

Scheduling

By default, custom events are checked after the spiking threshold (in the after_thresholds slots) and statements are executed after the reset (in the after_resets slots). The slot for the execution of custom event-triggered statements can be changed when it is added with the usual when and order keyword arguments (see Scheduling and custom progress reporting for details). To change the time when the condition is checked, use NeuronGroup.set_event_schedule().
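
For example (a small sketch, reusing the event defined above):

# check the event condition in its default slot, but execute the
# triggered statement in the 'resets' slot instead of 'after_resets'
group.run_on_event('custom_event', 'x = 0', when='resets', order=1)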

State update

In Brian, a state updater transforms a set of equations into an abstract state update code (and therefore is automatically target-independent). In general, any function (or callable object) that takes an Equations object and returns abstract code (as a string) can be used as a state updater and passed to the NeuronGroup constructor as a method argument.

The more common use case is to specify no state updater at all, or to choose one by name; see Choice of state updaters below.

Explicit state update

Explicit state update schemes can be specified in mathematical notation, using the ExplicitStateUpdater class. A state updater scheme consists of a series of statements, defining temporary variables, and a final line (starting with x_new =) giving the updated value for the state variable. The description can make reference to t (the current time), dt (the size of the time step), x (the value of the state variable), and f(x, t) (the definition of the state variable x, assuming dx/dt = f(x, t)). State updaters supporting stochastic equations additionally make use of dW (a normally distributed random variable with variance dt) and g(x, t), the factor multiplied with the noise variable, assuming dx/dt = f(x, t) + g(x, t) * xi.

Using this notation, simple forward Euler integration is specified as:

x_new = x + dt * f(x, t)

A Runge-Kutta 2 (midpoint) method is specified as:

k = dt * f(x,t)
x_new = x + dt * f(x +  k/2, t + dt/2)
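
Such a description can be passed directly to ExplicitStateUpdater to obtain a usable integration method (a minimal sketch; the name rk2_midpoint is arbitrary):

from brian2 import ExplicitStateUpdater

rk2_midpoint = ExplicitStateUpdater('''
    k = dt * f(x, t)
    x_new = x + dt * f(x + k/2, t + dt/2)
    ''')

The resulting object can then be passed to the method argument of a NeuronGroup like any other state updater.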

When creating a new state updater using ExplicitStateUpdater, you can specify the stochastic keyword argument, determining whether the state updater supports no stochastic equations (None, the default), stochastic equations with additive noise only ('additive'), or arbitrary stochastic equations ('multiplicative'). The provided state updaters use the Stratonovich interpretation for stochastic equations (which is the correct interpretation if the white noise source is seen as the limit of a coloured noise source with a short time constant). As a consequence, the simple Euler-Maruyama scheme (x_new = x + dt*f(x, t) + dW*g(x, t)) will only be used for additive noise.

An example of a general state updater that handles arbitrary multiplicative noise (under the Stratonovich interpretation) is the derivative-free Milstein method:

x_support = x + dt*f(x, t) + dt**.5 * g(x, t)
g_support = g(x_support, t)
k = 1/(2*dt**.5)*(g_support - g(x, t))*(dW**2)
x_new = x + dt*f(x,t) + g(x, t) * dW + k

Note that a single line in these descriptions may mention each of f(x, t) and g(x, t) only once (and you are not allowed to write, for example, g(f(x, t), t)). You can work around these restrictions by using intermediate steps that define temporary variables, as in the above examples for the Milstein and RK2 methods.

Choice of state updaters

As mentioned at the beginning, you can pass arbitrary callables to the method argument of a NeuronGroup, as long as the callable converts an Equations object into abstract code. The best way to add a new state updater, however, is to register it with Brian and provide a way to determine whether it is appropriate for a given set of equations. This way, it can be chosen automatically when no method is specified, and it can be referred to by name (i.e. you can pass a string like 'euler' to the method argument instead of importing euler and passing a reference to the object itself).

If you create a new state updater using the ExplicitStateUpdater class, you have to specify what kind of stochastic equations it supports. The keyword argument stochastic takes the values None (no stochastic equation support, the default), 'additive' (support for stochastic equations with additive noise), 'multiplicative' (support for arbitrary stochastic equations).

After creating the state updater, it has to be registered with StateUpdateMethod:

new_state_updater = ExplicitStateUpdater('...', stochastic='additive')
StateUpdateMethod.register('mymethod', new_state_updater)
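
Once registered, the new method can be selected by its name like any built-in method (a minimal usage sketch):

group = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1', method='mymethod')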

The preferred way to write new general state updaters (i.e. state updaters that cannot be described using the explicit syntax above) is to extend the StateUpdateMethod class. This is not strictly necessary, though: all that is needed is an object that implements a __call__ method operating on an Equations object and a dictionary of variables. Optionally, the state updater can be registered with StateUpdateMethod as shown above.

Implicit state updates

Note

All of the following is just here for future reference; it is not implemented yet.

Implicit schemes often use Newton-Raphson or fixed point iterations. These can also be defined by mathematical statements, but the number of iterations is dynamic and therefore not easily vectorised. However, this might not be a big issue in C++, on a GPU, or even with Numba.

Backward Euler

Backward Euler is defined as follows:

x(t+dt)=x(t)+dt*f(x(t+dt),t+dt)

This is not an executable statement because the right-hand side depends on the future value x(t+dt). A simple approach is to perform fixed point iterations:

x(t+dt) = x(t)
x(t+dt) = x(t) + dt*f(x(t+dt), t+dt)    (iterate until the increment is smaller than a tolerance)

This includes a loop with a different number of iterations depending on the neuron.
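
As a concrete illustration, here is a plain NumPy sketch of a single backward Euler step solved by fixed point iteration; it is independent of Brian, since this scheme is not implemented yet:

import numpy as np

def backward_euler_step(f, x, t, dt, tol=1e-10, max_iter=100):
    # Solve x_new = x + dt*f(x_new, t+dt) by fixed point iteration
    x_new = x
    for _ in range(max_iter):
        x_next = x + dt * f(x_new, t + dt)
        # The number of iterations differs from state to state, which is
        # what makes this scheme hard to vectorise over neurons
        if np.max(np.abs(x_next - x_new)) < tol:
            return x_next
        x_new = x_next
    return x_new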

How Brian works

In this section we will briefly cover some of the internals of how Brian works. This is included here to understand the general process that Brian goes through in running a simulation, but it will not be sufficient to understand the source code of Brian itself or to extend it to do new things. For a more detailed view of this, see the documentation in the Developer’s guide.

Clock-driven versus event-driven

Brian is a clock-driven simulator. This means that the simulation time is broken into an equally spaced time grid, 0, dt, 2*dt, 3*dt, .... At each time step t, the differential equations specifying the models are first integrated giving the values at time t+dt. Spikes are generated when a condition such as v>vt is satisfied, and spikes can only occur on the time grid.

The advantage of clock-driven simulation is that it is very flexible (arbitrary differential equations can be used) and computationally efficient. However, the time grid approximation can lead to an overestimate of the amount of synchrony present in a network. This is usually not a problem, and it can be managed by reducing the time step dt, but it can be an issue for some models.

Note that the inaccuracy introduced by the spike time approximation is of order O(dt), so the total accuracy of the simulation is of order O(dt) per time step. This means that in many cases, there is no need to use a higher order numerical integration method than forward Euler, as it will not improve the order of the error beyond O(dt). See State update for more details of numerical integration methods.

Some simulators use an event-driven method instead. With this method, spikes can occur at arbitrary times rather than only on the grid. Event-driven simulation can be more accurate than clock-driven simulation, but it is usually substantially more computationally expensive (especially for larger networks), and it is more restrictive in terms of the class of differential equations that can be solved.

For a review of some of the simulation strategies that have been used, see Brette et al. 2007.

Code overview

The user-visible part of Brian consists of a number of objects such as NeuronGroup, Synapses, Network, etc. These are all written in pure Python and essentially work to translate the user-specified model into the computational engine. The end state of this translation is a collection of short blocks of code operating on a namespace, which are called in sequence by the Network. Examples of these short blocks of code are the “state updaters” which perform numerical integration, or the synaptic propagation step. The namespaces consist of mappings from names to values, where the possible values can be scalar values, fixed-length or dynamically sized arrays, and functions.

Syntax layer

The syntax layer consists of everything that is independent of the way the final simulation is computed (i.e. the language and device it is running on). This includes things like NeuronGroup, Synapses, Network, Equations, etc.

The user-visible part of this is documented fully in the User’s guide and the Advanced guide; it includes, in particular, the analysis of equations and the assignment of numerical integrators. The end result of this process, which is passed to the computational engine, is a specification of the simulation consisting of the following data:

  • A collection of variables which are scalar values, fixed-length arrays, dynamically sized arrays, and functions. These are handled by Variable objects detailed in Variables and indices. Examples: each state variable of a NeuronGroup is assigned an ArrayVariable; the list of spike indices stored by a SpikeMonitor is assigned a DynamicArrayVariable; etc.
  • A collection of code blocks specified via an “abstract code block” and a template name. The “abstract code block” is a sequence of statements such as v = vr which are to be executed. In the case that say, v and vr are arrays, then the statement is to be executed for each element of the array. These abstract code blocks are either given directly by the user (in the case of neuron threshold and reset, and synaptic pre and post codes), or generated from differential equations combined with a numerical integrator. The template name is one of a small set (around 20 total) which give additional context. For example, the code block a = b when considered as part of a “state update” means execute that for each neuron index. In the context of a reset statement, it means execute it for each neuron index of a neuron that has spiked. Internally, these templates need to be implemented for each target language/device, but there are relatively few of them.
  • The order of execution of these code blocks, as defined by the Network.

Computational engine

The computational engine covers everything from generating to running code in a particular language or on a particular device. It starts with the abstract definition of the simulation resulting from the syntax layer described above.

The computational engine is described by a Device object. This is used for allocating memory, generating and running code. There are two types of device, “runtime” and “standalone”. In runtime mode, everything is managed by Python, even if individual code blocks are in a different language. Memory is managed using numpy arrays (which can be passed as pointers to use in other languages). In standalone mode, the output of the process (after calling Device.build) is a complete source code project that handles everything, including memory management, and is independent of Python.

For both types of device, one of the key steps that works in the same way is code generation, the creation of a compilable and runnable block of code from an abstract code block and a collection of variables. This happens in two stages: first, the abstract code block is converted into a code snippet, which is a syntactically correct block of code in the target language, but not one that can run on its own (it doesn’t handle accessing the variables from memory, etc.). This code snippet typically represents the inner loop code. This step is handled by a CodeGenerator object and may involve a syntax translation (e.g. the Python syntax x**y becomes pow(x, y) in C++). The next step is to insert this code snippet into a template to form a compilable code block. This code block is then passed to a CodeObject: in standalone mode, this does not do anything at this point, but for runtime devices it handles compiling the code and then running the compiled code block in the given namespace.

Interfacing with external code

Some neural simulations benefit from a direct connection to external libraries, e.g. to support real-time input from a sensor (note, however, that Brian currently does not offer facilities to guarantee real-time processing) or to perform complex calculations during a simulation run.

If the external library is written in Python (or is a library with Python bindings), then the connection can be made either using the mechanism for User-provided functions, or using a network operation.

In the case of C/C++ libraries, only the User-provided functions mechanism can be used. On the other hand, such simulations can use the same user-provided C++ code to run both with the runtime weave target and with the Standalone code generation mode. In addition to that code, one generally needs to include additional header files and to set compiler/linker options to interface with the external code. For this, several preferences are available that will be taken into account for weave, cython and the cpp_standalone device. These preferences are mostly equivalent to the respective keyword arguments of Python’s distutils.core.Extension class; see the documentation of the cpp_prefs module for more details.
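
For example, making a hypothetical external library available to the generated code could look like the following sketch (the header name, library name and paths are placeholders, not real settings):

from brian2 import prefs

# Include the library's header in all generated code files
prefs.codegen.cpp.headers += ['"mylib.h"']
# Tell the compiler and linker where to find the library and link to it
prefs.codegen.cpp.include_dirs += ['/opt/mylib/include']
prefs.codegen.cpp.library_dirs += ['/opt/mylib/lib']
prefs.codegen.cpp.libraries += ['mylib']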

Examples

Example: COBAHH

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

This is an implementation of a benchmark described in the following review paper:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschläger, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-398

Benchmark 3: random network of HH neurons with exponential synaptic conductances

Clock-driven implementation (no spike time interpolation)

R. Brette - Dec 2007

from brian2 import *

# Parameters
area = 20000*umetre**2
Cm = (1*ufarad*cm**-2) * area
gl = (5e-5*siemens*cm**-2) * area

El = -60*mV
EK = -90*mV
ENa = 50*mV
g_na = (100*msiemens*cm**-2) * area
g_kd = (30*msiemens*cm**-2) * area
VT = -63*mV
# Time constants
taue = 5*ms
taui = 10*ms
# Reversal potentials
Ee = 0*mV
Ei = -80*mV
we = 6*nS  # excitatory synaptic weight
wi = 67*nS  # inhibitory synaptic weight

# The model
eqs = Equations('''
dv/dt = (gl*(El-v)+ge*(Ee-v)+gi*(Ei-v)-
         g_na*(m*m*m)*h*(v-ENa)-
         g_kd*(n*n*n*n)*(v-EK))/Cm : volt
dm/dt = alpha_m*(1-m)-beta_m*m : 1
dn/dt = alpha_n*(1-n)-beta_n*n : 1
dh/dt = alpha_h*(1-h)-beta_h*h : 1
dge/dt = -ge*(1./taue) : siemens
dgi/dt = -gi*(1./taui) : siemens
alpha_m = 0.32*(mV**-1)*(13*mV-v+VT)/
         (exp((13*mV-v+VT)/(4*mV))-1.)/ms : Hz
beta_m = 0.28*(mV**-1)*(v-VT-40*mV)/
        (exp((v-VT-40*mV)/(5*mV))-1)/ms : Hz
alpha_h = 0.128*exp((17*mV-v+VT)/(18*mV))/ms : Hz
beta_h = 4./(1+exp((40*mV-v+VT)/(5*mV)))/ms : Hz
alpha_n = 0.032*(mV**-1)*(15*mV-v+VT)/
         (exp((15*mV-v+VT)/(5*mV))-1.)/ms : Hz
beta_n = .5*exp((10*mV-v+VT)/(40*mV))/ms : Hz
''')

P = NeuronGroup(4000, model=eqs, threshold='v>-20*mV', refractory=3*ms,
                method='exponential_euler')
Pe = P[:3200]
Pi = P[3200:]
Ce = Synapses(Pe, P, on_pre='ge+=we')
Ci = Synapses(Pi, P, on_pre='gi+=wi')
Ce.connect(p=0.02)
Ci.connect(p=0.02)

# Initialization
P.v = 'El + (randn() * 5 - 5)*mV'
P.ge = '(randn() * 1.5 + 4) * 10.*nS'
P.gi = '(randn() * 12 + 20) * 10.*nS'

# Record a few traces
trace = StateMonitor(P, 'v', record=[1, 10, 100])
run(1 * second, report='text')
plot(trace.t/ms, trace[1].v/mV)
plot(trace.t/ms, trace[10].v/mV)
plot(trace.t/ms, trace[100].v/mV)
xlabel('t (ms)')
ylabel('v (mV)')
show()

Example: CUBA

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

This is a Brian script implementing a benchmark described in the following review paper:

Simulation of networks of spiking neurons: A review of tools and strategies (2007). Brette, Rudolph, Carnevale, Hines, Beeman, Bower, Diesmann, Goodman, Harris, Zirpe, Natschläger, Pecevski, Ermentrout, Djurfeldt, Lansner, Rochel, Vibert, Alvarez, Muller, Davison, El Boustani and Destexhe. Journal of Computational Neuroscience 23(3):349-398

Benchmark 2: random network of integrate-and-fire neurons with exponential synaptic currents.

Clock-driven implementation with exact subthreshold integration (but spike times are aligned to the grid).

from brian2 import *

taum = 20*ms
taue = 5*ms
taui = 10*ms
Vt = -50*mV
Vr = -60*mV
El = -49*mV

eqs = '''
dv/dt  = (ge+gi-(v-El))/taum : volt (unless refractory)
dge/dt = -ge/taue : volt
dgi/dt = -gi/taui : volt
'''

P = NeuronGroup(4000, eqs, threshold='v>Vt', reset='v = Vr', refractory=5*ms,
                method='linear')
P.v = 'Vr + rand() * (Vt - Vr)'
P.ge = 0*mV
P.gi = 0*mV

we = (60*0.27/10)*mV # excitatory synaptic weight (voltage)
wi = (-20*4.5/10)*mV # inhibitory synaptic weight
Ce = Synapses(P, P, on_pre='ge += we')
Ci = Synapses(P, P, on_pre='gi += wi')
Ce.connect('i<3200', p=0.02)
Ci.connect('i>=3200', p=0.02)

s_mon = SpikeMonitor(P)

run(1 * second)

plot(s_mon.t/ms, s_mon.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
show()

Example: IF_curve_Hodgkin_Huxley

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Input-Frequency curve of a HH model. Network: 100 unconnected Hodgkin-Huxley neurons with an input current I. The input is set differently for each neuron.

This simulation should use exponential Euler integration.

from brian2 import *

num_neurons = 100
duration = 2*second

# Parameters
area = 20000*umetre**2
Cm = 1*ufarad*cm**-2 * area
gl = 5e-5*siemens*cm**-2 * area
El = -65*mV
EK = -90*mV
ENa = 50*mV
g_na = 100*msiemens*cm**-2 * area
g_kd = 30*msiemens*cm**-2 * area
VT = -63*mV

# The model
eqs = Equations('''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/Cm : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
    (exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
    (exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
    (exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp
''')
# Threshold and refractoriness are only used for spike counting
group = NeuronGroup(num_neurons, eqs,
                    threshold='v > -40*mV',
                    refractory='v > -40*mV',
                    method='exponential_euler')
group.v = El
group.I = '0.7*nA * i / num_neurons'

monitor = SpikeMonitor(group)

run(duration)

plot(group.I/nA, monitor.count / duration)
xlabel('I (nA)')
ylabel('Firing rate (sp/s)')
show()

Example: IF_curve_LIF

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Input-Frequency curve of an IF model. Network: 1000 unconnected integrate-and-fire neurons (leaky IF) with an input parameter v0. The input is set differently for each neuron.

from brian2 import *

n = 1000
duration = 1*second
tau = 10*ms
eqs = '''
dv/dt = (v0 - v) / tau : volt (unless refractory)
v0 : volt
'''
group = NeuronGroup(n, eqs, threshold='v > 10*mV', reset='v = 0*mV',
                    refractory=5*ms, method='linear')
group.v = 0*mV
group.v0 = '20*mV * i / (n-1)'

monitor = SpikeMonitor(group)

run(duration)
plot(group.v0/mV, monitor.count / duration)
xlabel('v0 (mV)')
ylabel('Firing rate (sp/s)')
show()

Example: adaptive_threshold

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

A model with adaptive threshold (increases with each spike)

from brian2 import *

eqs = '''
dv/dt = -v/(10*ms) : volt
dvt/dt = (10*mV-vt)/(15*ms) : volt
'''

reset = '''
v = 0*mV
vt += 3*mV
'''

IF = NeuronGroup(1, model=eqs, reset=reset, threshold='v>vt',
                 method='linear')
IF.vt = 10*mV
PG = PoissonGroup(1, 500 * Hz)

C = Synapses(PG, IF, on_pre='v += 3*mV')
C.connect()

Mv = StateMonitor(IF, 'v', record=True)
Mvt = StateMonitor(IF, 'vt', record=True)
# Record the value of v when the threshold is crossed
M_crossings = SpikeMonitor(IF, variables='v')
run(2*second, report='text')

subplot(1, 2, 1)
plot(Mv.t / ms, Mv[0].v / mV)
plot(Mvt.t / ms, Mvt[0].vt / mV)
ylabel('v (mV)')
xlabel('t (ms)')
# zoom in on the first 100ms
xlim(0, 100)
subplot(1, 2, 2)
hist(M_crossings.v / mV, bins=np.arange(10, 20, 0.5))
xlabel('v at threshold crossing (mV)')
show()

Example: non_reliability

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Reliability of spike timing. See e.g. Mainen & Sejnowski (1995) for experimental results in vitro.

Here: a constant current is injected in all trials.

from brian2 import *

N = 25
tau = 20*ms
sigma = .015
eqs_neurons = '''
dx/dt = (1.1 - x) / tau + sigma * (2 / tau)**.5 * xi : 1 (unless refractory)
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold='x > 1', reset='x = 0',
                      refractory=5*ms, method='euler')
spikes = SpikeMonitor(neurons)

run(500*ms)
plot(spikes.t/ms, spikes.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
show()

Example: phase_locking

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Phase locking of IF neurons to a periodic input.

from brian2 import *

tau = 20*ms
n = 100
b = 1.2 # constant current mean, the modulation varies
freq = 10*Hz

eqs = '''
dv/dt = (-v + a * sin(2 * pi * freq * t) + b) / tau : 1
a : 1
'''
neurons = NeuronGroup(n, model=eqs, threshold='v > 1', reset='v = 0',
                      method='euler')
neurons.v = 'rand()'
neurons.a = '0.05 + 0.7*i/n'
S = SpikeMonitor(neurons)
trace = StateMonitor(neurons, 'v', record=50)

run(1000*ms)
subplot(211)
plot(S.t/ms, S.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(212)
plot(trace.t/ms, trace.v.T)
xlabel('Time (ms)')
ylabel('v')
show()

Example: reliability

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Reliability of spike timing. See e.g. Mainen & Sejnowski (1995) for experimental results in vitro.

from brian2 import *

# The common noisy input
N = 25
tau_input = 5*ms
input = NeuronGroup(1, 'dx/dt = -x / tau_input + (2 /tau_input)**.5 * xi : 1')

# The noisy neurons receiving the same input
tau = 10*ms
sigma = .015
eqs_neurons = '''
dx/dt = (0.9 + .5 * I - x) / tau + sigma * (2 / tau)**.5 * xi : 1
I : 1 (linked)
'''
neurons = NeuronGroup(N, model=eqs_neurons, threshold='x > 1',
                      reset='x = 0', refractory=5*ms, method='euler')
neurons.x = 'rand()'
neurons.I = linked_var(input, 'x') # input.x is continuously fed into neurons.I
spikes = SpikeMonitor(neurons)

run(500*ms)
plot(spikes.t/ms, spikes.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
show()

advanced

Example: opencv_movie

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

An example that uses a function from an external C library (OpenCV in this case). Works for all C-based code generation targets (i.e. for the weave target and the cpp_standalone device) and for numpy (using the Python bindings).

This example needs a working installation of OpenCV2 and its Python bindings. It has been tested on Ubuntu 14.04 with OpenCV 2.4.8 (libopencv-dev and python-opencv packages).

import os
import urllib2
import cv2  # Import OpenCV2
import cv2.cv as cv  # Import the cv subpackage, needed for some constants

from brian2 import *

defaultclock.dt = 1*ms
prefs.codegen.target = 'weave'
prefs.logging.std_redirection = False
set_device('cpp_standalone')
filename = os.path.abspath('Megamind.avi')

if not os.path.exists(filename):
    print('Downloading the example video file')
    response = urllib2.urlopen('http://docs.opencv.org/2.4/_downloads/Megamind.avi')
    data = response.read()
    with open(filename, 'wb') as f:
        f.write(data)

video = cv2.VideoCapture(filename)
width, height, frame_count = (int(video.get(cv.CV_CAP_PROP_FRAME_WIDTH)),
                              int(video.get(cv.CV_CAP_PROP_FRAME_HEIGHT)),
                              int(video.get(cv.CV_CAP_PROP_FRAME_COUNT)))
fps = 24
time_between_frames = 1*second/fps

# Links the necessary libraries
prefs.codegen.cpp.libraries += ['opencv_core',
                                'opencv_highgui']

# Includes the header files in all generated files
prefs.codegen.cpp.headers += ['<opencv2/core/core.hpp>',
                              '<opencv2/highgui/highgui.hpp>']

# Pass in values as macros
# Note that in general we could also pass in the filename this way, but to get
# the string quoting right is unfortunately quite difficult
prefs.codegen.cpp.define_macros += [('VIDEO_WIDTH', width),
                                    ('VIDEO_HEIGHT', height)]
@implementation('cpp', '''
double* get_frame(bool new_frame)
{
    // The following initializations will only be executed once
    static cv::VideoCapture source("VIDEO_FILENAME");
    static cv::Mat frame;
    static double* grayscale_frame = (double*)malloc(VIDEO_WIDTH*VIDEO_HEIGHT*sizeof(double));
    if (new_frame)
    {
        source >> frame;
        double mean_value = 0;
        for (int row=0; row<VIDEO_HEIGHT; row++)
            for (int col=0; col<VIDEO_WIDTH; col++)
            {
                const double grayscale_value = (frame.at<cv::Vec3b>(row, col)[0] +
                                                frame.at<cv::Vec3b>(row, col)[1] +
                                                frame.at<cv::Vec3b>(row, col)[2])/(3.0*128);
                mean_value += grayscale_value / (VIDEO_WIDTH * VIDEO_HEIGHT);
                grayscale_frame[row*VIDEO_WIDTH + col] = grayscale_value;
            }
        // subtract the mean
        for (int i=0; i<VIDEO_HEIGHT*VIDEO_WIDTH; i++)
            grayscale_frame[i] -= mean_value;
    }
    return grayscale_frame;
}

double video_input(const int x, const int y)
{
    // Get the current frame (or a new frame in case we are asked for the
    // first element)
    double *frame = get_frame(x==0 && y==0);
    return frame[y*VIDEO_WIDTH + x];
}
'''.replace('VIDEO_FILENAME', filename))
@check_units(x=1, y=1, result=1)
def video_input(x, y):
    # we assume this will only be called in the custom operation (and not for
    # example in a reset or synaptic statement), so we don't need to do indexing
    # but we can directly return the full result
    _, frame = video.read()
    grayscale = frame.mean(axis=2)
    grayscale /= 128.  # scale everything between 0 and 2
    return grayscale.ravel() - grayscale.ravel().mean()


N = width * height
tau, tau_th = 10*ms, time_between_frames
G = NeuronGroup(N, '''dv/dt = (-v + I)/tau : 1
                      dv_th/dt = -v_th/tau_th : 1
                      row : integer (constant)
                      column : integer (constant)
                      I : 1 # input current''',
                threshold='v>v_th', reset='v=0; v_th = 3*v_th + 1.0',
                method='linear')
G.v_th = 1
G.row = 'i/width'
G.column = 'i%width'

G.run_regularly('I = video_input(column, row)',
                dt=time_between_frames)
mon = SpikeMonitor(G)
runtime = frame_count*time_between_frames
run(runtime, report='text')
device.build(compile=True, run=True)

# Avoid going through the whole Brian2 indexing machinery too much
i, t, row, column = mon.i[:], mon.t[:], G.row[:], G.column[:]

import matplotlib.animation as animation

# TODO: Use overlapping windows
stepsize = 100*ms
def next_spikes():
    step = next_spikes.step
    if step*stepsize > runtime:
        next_spikes.step=0
        raise StopIteration()
    spikes = i[(t>=step*stepsize) & (t<(step+1)*stepsize)]
    next_spikes.step += 1
    yield column[spikes], row[spikes]
next_spikes.step = 0

fig, ax = plt.subplots()
dots, = ax.plot([], [], 'k.', markersize=2, alpha=.25)
ax.set_xlim(0, width)
ax.set_ylim(0, height)
ax.invert_yaxis()
def run(data):
    x, y = data
    dots.set_data(x, y)

ani = animation.FuncAnimation(fig, run, next_spikes, blit=False, repeat=True,
                              repeat_delay=1000)
plt.show()

Example: stochastic_odes

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Demonstrate the correctness of the “derivative-free Milstein method” for multiplicative noise.

from brian2 import *
# We only get exactly the same random numbers for the exact solution and the
# simulation if we use the numpy code generation target
prefs.codegen.target = 'numpy'

# setting a random seed makes all variants use exactly the same Wiener process
seed = 12347

X0 = 1
mu = 0.5/second # drift
sigma = 0.1/second  # diffusion

runtime = 1*second


def simulate(method, dt):
    '''
    simulate geometric Brownian motion with the given method
    '''
    np.random.seed(seed)
    G = NeuronGroup(1, 'dX/dt = (mu - 0.5*second*sigma**2)*X + X*sigma*xi*second**.5: 1',
                    dt=dt, method=method)
    G.X = X0
    mon = StateMonitor(G, 'X', record=True)
    net = Network(G, mon)
    net.run(runtime)
    return mon.t_[:], mon.X.flatten()


def exact_solution(t, dt):
    '''
    Return the exact solution for geometric Brownian motion at the given
    time points
    '''
    # Remove units for simplicity
    my_mu = float(mu)
    my_sigma = float(sigma)
    dt = float(dt)
    t = asarray(t)

    np.random.seed(seed)
    # We are calculating the values at the *start* of a time step, as when using
    # a StateMonitor. Therefore the Brownian motion starts with zero
    brownian = np.hstack([0, cumsum(sqrt(dt) * np.random.randn(len(t)-1))])

    return (X0 * exp((my_mu - 0.5*my_sigma**2)*(t+dt) + my_sigma*brownian))

figure(1, figsize=(16, 7))
figure(2, figsize=(16, 7))

methods = ['milstein', 'heun']
dts = [1*ms, 0.5*ms, 0.2*ms, 0.1*ms, 0.05*ms, 0.025*ms, 0.01*ms, 0.005*ms]

rows = int(floor(sqrt(len(dts))))
cols = int(ceil(1.0 * len(dts) / rows))
errors = dict([(method, zeros(len(dts))) for method in methods])
for dt_idx, dt in enumerate(dts):
    print('dt: %s' % dt)
    trajectories = {}
    # Test the numerical methods
    for method in methods:
        t, trajectories[method] = simulate(method, dt)
    # Calculate the exact solution
    exact = exact_solution(t, dt)

    for method in methods:
        # plot the trajectories
        figure(1)
        subplot(rows, cols, dt_idx+1)
        plot(t, trajectories[method], label=method, alpha=0.75)

        # determine the mean absolute error
        errors[method][dt_idx] = mean(abs(trajectories[method] - exact))
        # plot the difference to the real trajectory
        figure(2)
        subplot(rows, cols, dt_idx+1)
        plot(t, trajectories[method] - exact, label=method, alpha=0.75)

    figure(1)
    plot(t, exact, color='gray', lw=2, label='exact', alpha=0.75)
    title('dt = %s' % str(dt))
    xticks([])

figure(1)
legend(frameon=False, loc='best')
tight_layout()

figure(2)
legend(frameon=False, loc='best')
tight_layout()

figure(3)
for method in methods:
    plot(array(dts) / ms, errors[method], 'o', label=method)
legend(frameon=False, loc='best')
xscale('log')
yscale('log')
xlabel('dt (ms)')
ylabel('Mean absolute error')
tight_layout()

show()

compartmental

Example: bipolar_cell

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

A pseudo MSO neuron, with two dendrites and one axon (fake geometry).

from brian2 import *

# Morphology
morpho = Soma(30*um)
morpho.axon = Cylinder(diameter=1*um, length=300*um, n=100)
morpho.L = Cylinder(diameter=1*um, length=100*um, n=50)
morpho.R = Cylinder(diameter=1*um, length=150*um, n=50)

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
eqs = '''
Im = gL * (EL - v) : amp/meter**2
I : amp (point current)
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs,
                       Cm=1*uF/cm**2, Ri=100*ohm*cm, method='exponential_euler')
neuron.v = EL
neuron.I = 0*amp

# Monitors
mon_soma = StateMonitor(neuron, 'v', record=[0])
mon_L = StateMonitor(neuron.L, 'v', record=True)
mon_R = StateMonitor(neuron, 'v', record=morpho.R[75*um])

run(1*ms)
neuron.I[morpho.L[50*um]] = 0.2*nA  # injecting in the left dendrite
run(5*ms)
neuron.I = 0*amp
run(50*ms, report='text')

subplot(211)
plot(mon_L.t/ms, mon_soma[0].v/mV, 'k')
plot(mon_L.t/ms, mon_L[morpho.L[50*um]].v/mV, 'r')
plot(mon_L.t/ms, mon_R[morpho.R[75*um]].v/mV, 'b')
ylabel('v (mV)')
subplot(212)
for x in linspace(0*um, 100*um, 10, endpoint=False):
    plot(mon_L.t/ms, mon_L[morpho.L[x]].v/mV)
xlabel('Time (ms)')
ylabel('v (mV)')
show()

Example: bipolar_with_inputs

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

A pseudo MSO neuron, with two dendrites (fake geometry). There are synaptic inputs.

from brian2 import *

# Morphology
morpho = Soma(30*um)
morpho.L = Cylinder(diameter=1*um, length=100*um, n=50)
morpho.R = Cylinder(diameter=1*um, length=100*um, n=50)

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
Es = 0*mV
eqs = '''
Im = gL*(EL-v) : amp/meter**2
Is = gs*(Es-v) : amp (point current)
gs : siemens
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs,
                       Cm=1*uF/cm**2, Ri=100*ohm*cm, method='exponential_euler')
neuron.v = EL

# Regular inputs
stimulation = NeuronGroup(2, 'dx/dt = 300*Hz : 1', threshold='x>1', reset='x=0',
                          method='euler')
stimulation.x = [0, 0.5]  # Asynchronous

# Synapses
taus = 1*ms
w = 20*nS
S = Synapses(stimulation, neuron, model='''dg/dt = -g/taus : siemens (clock-driven)
                                           gs_post = g : siemens (summed)''',
             on_pre='g += w', method='linear')

S.connect(i=0, j=morpho.L[-1])
S.connect(i=1, j=morpho.R[-1])

# Monitors
mon_soma = StateMonitor(neuron, 'v', record=[0])
mon_L = StateMonitor(neuron.L, 'v', record=True)
mon_R = StateMonitor(neuron.R, 'v',
                     record=morpho.R[-1])

run(50*ms, report='text')

subplot(211)
plot(mon_L.t/ms, mon_soma[0].v/mV, 'k')
plot(mon_L.t/ms, mon_L[morpho.L[-1]].v/mV, 'r')
plot(mon_L.t/ms, mon_R[morpho.R[-1]].v/mV, 'b')
ylabel('v (mV)')
subplot(212)
for x in linspace(0*um, 100*um, 10, endpoint=False):
    plot(mon_L.t/ms, mon_L[morpho.L[x]].v/mV)
xlabel('Time (ms)')
ylabel('v (mV)')
show()

Example: bipolar_with_inputs2

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

A pseudo MSO neuron, with two dendrites (fake geometry). There are synaptic inputs. Second method.

from brian2 import *

# Morphology
morpho = Soma(30*um)
morpho.L = Cylinder(diameter=1*um, length=100*um, n=50)
morpho.R = Cylinder(diameter=1*um, length=100*um, n=50)

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
Es = 0*mV
taus = 1*ms
eqs = '''
Im = gL*(EL-v) : amp/meter**2
Is = gs*(Es-v) : amp (point current)
dgs/dt = -gs/taus : siemens
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs,
                       Cm=1*uF/cm**2, Ri=100*ohm*cm, method='exponential_euler')
neuron.v = EL

# Regular inputs
stimulation = NeuronGroup(2, 'dx/dt = 300*Hz : 1', threshold='x>1', reset='x=0',
                          method='euler')
stimulation.x = [0, 0.5] # Asynchronous

# Synapses
w = 20*nS
S = Synapses(stimulation, neuron, on_pre='gs += w')
S.connect(i=0, j=morpho.L[99.9*um])
S.connect(i=1, j=morpho.R[99.9*um])

# Monitors
mon_soma = StateMonitor(neuron, 'v', record=[0])
mon_L = StateMonitor(neuron.L, 'v', record=True)
mon_R = StateMonitor(neuron, 'v', record=morpho.R[99.9*um])

run(50*ms, report='text')

subplot(211)
plot(mon_L.t/ms, mon_soma[0].v/mV, 'k')
plot(mon_L.t/ms, mon_L[morpho.L[99.9*um]].v/mV, 'r')
plot(mon_L.t/ms, mon_R[morpho.R[99.9*um]].v/mV, 'b')
ylabel('v (mV)')
subplot(212)
for i in [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]:
    plot(mon_L.t/ms, mon_L.v[i, :]/mV)
xlabel('Time (ms)')
ylabel('v (mV)')
show()

Example: cylinder

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

A short cylinder with constant injection at one end.

from brian2 import *

defaultclock.dt = 0.01*ms

# Morphology
diameter = 1*um
length = 300*um
Cm = 1*uF/cm**2
Ri = 150*ohm*cm
N = 200
morpho = Cylinder(diameter=diameter, length=length, n=N)

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
eqs = '''
Im = gL * (EL - v) : amp/meter**2
I : amp (point current)
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       method='exponential_euler')
neuron.v = EL

la = neuron.space_constant[0]
print("Electrotonic length: %s" % la)

neuron.I[0] = 0.02*nA # injecting at the left end
run(100*ms, report='text')

plot(neuron.distance/um, neuron.v/mV, 'kx')
# Theory
x = neuron.distance
ra = la * 4 * Ri / (pi * diameter**2)
theory = EL + ra * neuron.I[0] * cosh((length - x) / la) / sinh(length / la)
plot(x/um, theory/mV, 'r')
xlabel('x (um)')
ylabel('v (mV)')
show()

Example: hh_with_spikes

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Hodgkin-Huxley equations (1952). Spikes are recorded along the axon, and then velocity is calculated.

from brian2 import *
from scipy import stats

defaultclock.dt = 0.01*ms

morpho = Cylinder(length=10*cm, diameter=2*238*um, n=1000, type='axon')

El = 10.613*mV
ENa = 115*mV
EK = -12*mV
gl = 0.3*msiemens/cm**2
gNa0 = 120*msiemens/cm**2
gK = 36*msiemens/cm**2

# Typical equations
eqs = '''
# The same equations for the whole neuron, but possibly different parameter values
# distributed transmembrane current
Im = gl * (El-v) + gNa * m**3 * h * (ENa-v) + gK * n**4 * (EK-v) : amp/meter**2
I : amp (point current) # applied current
dm/dt = alpham * (1-m) - betam * m : 1
dn/dt = alphan * (1-n) - betan * n : 1
dh/dt = alphah * (1-h) - betah * h : 1
alpham = (0.1/mV) * (-v+25*mV) / (exp((-v+25*mV) / (10*mV)) - 1)/ms : Hz
betam = 4 * exp(-v/(18*mV))/ms : Hz
alphah = 0.07 * exp(-v/(20*mV))/ms : Hz
betah = 1/(exp((-v+30*mV) / (10*mV)) + 1)/ms : Hz
alphan = (0.01/mV) * (-v+10*mV) / (exp((-v+10*mV) / (10*mV)) - 1)/ms : Hz
betan = 0.125*exp(-v/(80*mV))/ms : Hz
gNa : siemens/meter**2
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, method="exponential_euler",
                       refractory="m > 0.4", threshold="m > 0.5",
                       Cm=1*uF/cm**2, Ri=35.4*ohm*cm)
neuron.v = 0*mV
neuron.h = 1
neuron.m = 0
neuron.n = .5
neuron.I = 0*amp
neuron.gNa = gNa0
M = StateMonitor(neuron, 'v', record=True)
spikes = SpikeMonitor(neuron)

run(50*ms, report='text')
neuron.I[0] = 1*uA # current injection at one end
run(3*ms)
neuron.I = 0*amp
run(50*ms, report='text')

# Calculation of velocity
slope, intercept, r_value, p_value, std_err = stats.linregress(spikes.t/second,
                                                neuron.distance[spikes.i]/meter)
print("Velocity = %.2f m/s" % slope)

subplot(211)
for i in range(10):
    plot(M.t/ms, M.v.T[:, i*100]/mV)
ylabel('v')
subplot(212)
plot(spikes.t/ms, spikes.i*neuron.length[0]/cm, '.k')
plot(spikes.t/ms, (intercept+slope*(spikes.t/second))/cm, 'r')
xlabel('Time (ms)')
ylabel('Position (cm)')
show()

Example: hodgkin_huxley_1952

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Hodgkin-Huxley equations (1952).

from brian2 import *

morpho = Cylinder(length=10*cm, diameter=2*238*um, n=1000, type='axon')

El = 10.613*mV
ENa = 115*mV
EK = -12*mV
gl = 0.3*msiemens/cm**2
gNa0 = 120*msiemens/cm**2
gK = 36*msiemens/cm**2

# Typical equations
eqs = '''
# The same equations for the whole neuron, but possibly different parameter values
# distributed transmembrane current
Im = gl * (El-v) + gNa * m**3 * h * (ENa-v) + gK * n**4 * (EK-v) : amp/meter**2
I : amp (point current) # applied current
dm/dt = alpham * (1-m) - betam * m : 1
dn/dt = alphan * (1-n) - betan * n : 1
dh/dt = alphah * (1-h) - betah * h : 1
alpham = (0.1/mV) * (-v+25*mV) / (exp((-v+25*mV) / (10*mV)) - 1)/ms : Hz
betam = 4 * exp(-v/(18*mV))/ms : Hz
alphah = 0.07 * exp(-v/(20*mV))/ms : Hz
betah = 1/(exp((-v+30*mV) / (10*mV)) + 1)/ms : Hz
alphan = (0.01/mV) * (-v+10*mV) / (exp((-v+10*mV) / (10*mV)) - 1)/ms : Hz
betan = 0.125*exp(-v/(80*mV))/ms : Hz
gNa : siemens/meter**2
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2,
                       Ri=35.4*ohm*cm, method="exponential_euler")
neuron.v = 0*mV
neuron.h = 1
neuron.m = 0
neuron.n = .5
neuron.I = 0
neuron.gNa = gNa0
neuron[5*cm:10*cm].gNa = 0*siemens/cm**2
M = StateMonitor(neuron, 'v', record=True)

run(50*ms, report='text')
neuron.I[0] = 1*uA  # current injection at one end
run(3*ms)
neuron.I = 0*amp
run(100*ms, report='text')
for i in range(75, 125, 1):
    plot(cumsum(neuron.length)/cm, i+(1./60)*M.v[:, i*5]/mV, 'k')
yticks([])
ylabel('Time [major] v (mV) [minor]')
xlabel('Position (cm)')
axis('tight')
show()

Example: infinite_cable

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

An (almost) infinite cable with pulse injection in the middle.

from brian2 import *

defaultclock.dt = 0.001*ms

# Morphology
diameter = 1*um
Cm = 1*uF/cm**2
Ri = 100*ohm*cm
N = 500
morpho = Cylinder(diameter=diameter, length=3*mm, n=N)

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
eqs = '''
Im = gL * (EL-v) : amp/meter**2
I : amp (point current)
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       method='exponential_euler')
neuron.v = EL

taum = Cm / gL  # membrane time constant
print("Time constant: %s" % taum)
la = neuron.space_constant[0]
print("Characteristic length: %s" % la)

# Monitors
mon = StateMonitor(neuron, 'v', record=range(0, N//2, 20))

neuron.I[len(neuron) // 2] = 1*nA  # injecting in the middle
run(0.02*ms)
neuron.I = 0*amp
run(10*ms, report='text')

t = mon.t
plot(t/ms, mon.v.T/mV, 'k')
# Theory (incorrect near cable ends)
for i in range(0, len(neuron)//2, 20):
    x = (len(neuron)/2 - i) * morpho.length[0]
    theory = (1/(la*Cm*pi*diameter) * sqrt(taum / (4*pi*(t + defaultclock.dt))) *
              exp(-(t+defaultclock.dt)/taum -
                  taum / (4*(t+defaultclock.dt))*(x/la)**2))
    theory = EL + theory * 1*nA * 0.02*ms
    plot(t/ms, theory/mV, 'r')
xlabel('Time (ms)')
ylabel('v (mV)')
show()

Example: lfp

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Hodgkin-Huxley equations (1952)

We calculate the extracellular field potential at various places.

from brian2 import *
defaultclock.dt = 0.01*ms
morpho = Cylinder(x=[0, 10]*cm, diameter=2*238*um, n=1000, type='axon')

El = 10.613*mV
ENa = 115*mV
EK = -12*mV
gl = 0.3*msiemens/cm**2
gNa0 = 120*msiemens/cm**2
gK = 36*msiemens/cm**2

# Typical equations
eqs = '''
# The same equations for the whole neuron, but possibly different parameter values
# distributed transmembrane current
Im = gl * (El-v) + gNa * m**3 * h * (ENa-v) + gK * n**4 * (EK-v) : amp/meter**2
I : amp (point current) # applied current
dm/dt = alpham * (1-m) - betam * m : 1
dn/dt = alphan * (1-n) - betan * n : 1
dh/dt = alphah * (1-h) - betah * h : 1
alpham = (0.1/mV) * (-v+25*mV) / (exp((-v+25*mV) / (10*mV)) - 1)/ms : Hz
betam = 4 * exp(-v/(18*mV))/ms : Hz
alphah = 0.07 * exp(-v/(20*mV))/ms : Hz
betah = 1/(exp((-v+30*mV) / (10*mV)) + 1)/ms : Hz
alphan = (0.01/mV) * (-v+10*mV) / (exp((-v+10*mV) / (10*mV)) - 1)/ms : Hz
betan = 0.125*exp(-v/(80*mV))/ms : Hz
gNa : siemens/meter**2
previous_v : volt
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=1*uF/cm**2,
                       Ri=35.4*ohm*cm, method="exponential_euler")
neuron.v = 0*mV
neuron.h = 1
neuron.m = 0
neuron.n = .5
neuron.I = 0
neuron.gNa = gNa0
neuron[5*cm:10*cm].gNa = 0*siemens/cm**2
M = StateMonitor(neuron, 'v', record=True)

neuron.run_regularly('previous_v = v', when='start')

# LFP recorder
Ne = 5 # Number of electrodes
sigma = 0.3*siemens/meter # Conductivity of the extracellular medium (0.3-0.4 S/m)
lfp = NeuronGroup(Ne, model='''v : volt
                               x : meter
                               y : meter
                               z : meter''')
lfp.x = 7*cm # Off center (to be far from stimulating electrode)
lfp.y = [1*mm, 2*mm, 4*mm, 8*mm, 16*mm]
# Synapses are normally executed after state update, so v-previous_v = dv
S = Synapses(neuron, lfp, model='''w : ohm*meter**2 (constant) # Weight in the LFP calculation
                                   v_post = w*(Cm_pre*(v_pre-previous_v_pre)/dt-Im_pre) : volt (summed)''')
S.summed_updaters['v_post'].when = 'after_groups'  # otherwise v and previous_v would be identical
S.connect()
S.w = 'area_pre/(4*pi*sigma)/((x_pre-x_post)**2+(y_pre-y_post)**2+(z_pre-z_post)**2)**.5'

Mlfp = StateMonitor(lfp, 'v', record=True)

run(50*ms, report='text')
neuron.I[0] = 1*uA  # current injection at one end
run(3*ms)
neuron.I = 0*amp
run(100*ms, report='text')

subplot(211)
for i in range(10):
    plot(M.t/ms, M.v[i*100]/mV)
ylabel('$V_m$ (mV)')
subplot(212)
for i in range(5):
    plot(M.t/ms, Mlfp.v[i]/mV)
ylabel('LFP (mV)')
xlabel('Time (ms)')
show()

Example: morphotest

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

from brian2 import *

# Morphology
morpho = Soma(30*um)
morpho.L = Cylinder(diameter=1*um, length=100*um, n=5)
morpho.LL = Cylinder(diameter=1*um, length=20*um, n=2)
morpho.R = Cylinder(diameter=1*um, length=100*um, n=5)

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
eqs = '''
Im = gL * (EL-v) : amp/meter**2
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs,
                       Cm=1*uF/cm**2, Ri=100*ohm*cm, method='exponential_euler')
neuron.v = arange(0, 13)*volt

print(neuron.v)
print(neuron.L.v)
print(neuron.LL.v)
print(neuron.L.main.v)

Example: rall

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

A cylinder plus two branches, with diameters according to Rall’s formula

from brian2 import *

defaultclock.dt = 0.01*ms

# Passive channels
gL = 1e-4*siemens/cm**2
EL = -70*mV

# Morphology
diameter = 1*um
length = 300*um
Cm = 1*uF/cm**2
Ri = 150*ohm*cm
N = 500
rm = 1 / (gL * pi * diameter)  # membrane resistance per unit length
ra = (4 * Ri)/(pi * diameter**2)  # axial resistance per unit length
la = sqrt(rm / ra) # space length
morpho = Cylinder(diameter=diameter, length=length, n=N)
d1 = 0.5*um
L1 = 200*um
rm = 1 / (gL * pi * d1) # membrane resistance per unit length
ra = (4 * Ri) / (pi * d1**2) # axial resistance per unit length
l1 = sqrt(rm / ra) # space length
morpho.L = Cylinder(diameter=d1, length=L1, n=N)
d2 = (diameter**1.5 - d1**1.5)**(1. / 1.5)
rm = 1/(gL * pi * d2) # membrane resistance per unit length
ra = (4 * Ri) / (pi * d2**2) # axial resistance per unit length
l2 = sqrt(rm / ra) # space length
L2 = (L1 / l1) * l2
morpho.R = Cylinder(diameter=d2, length=L2, n=N)

eqs = '''
Im = gL * (EL-v) : amp/meter**2
I : amp (point current)
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       method='exponential_euler')
neuron.v = EL

neuron.I[0] = 0.02*nA # injecting at the left end
run(100*ms, report='text')

plot(neuron.main.distance/um, neuron.main.v/mV, 'k')
plot(neuron.L.distance/um, neuron.L.v/mV, 'k')
plot(neuron.R.distance/um, neuron.R.v/mV, 'k')
# Theory
x = neuron.main.distance
ra = la * 4 * Ri/(pi * diameter**2)
l = length/la + L1/l1
theory = EL + ra*neuron.I[0]*cosh(l - x/la)/sinh(l)
plot(x/um, theory/mV, 'r')
x = neuron.L.distance
theory = (EL+ra*neuron.I[0]*cosh(l - neuron.main.distance[-1]/la -
                                 (x - neuron.main.distance[-1])/l1)/sinh(l))
plot(x/um, theory/mV, 'r')
x = neuron.R.distance
theory = (EL+ra*neuron.I[0]*cosh(l - neuron.main.distance[-1]/la -
                                 (x - neuron.main.distance[-1])/l2)/sinh(l))
plot(x/um, theory/mV, 'r')
xlabel('x (um)')
ylabel('v (mV)')
show()

Example: spike_initiation

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Ball and stick with Na and K channels

from brian2 import *

defaultclock.dt = 0.025*ms

# Morphology
morpho = Soma(30*um)
morpho.axon = Cylinder(diameter=1*um, length=300*um, n=100)

# Channels
gL = 1e-4*siemens/cm**2
EL = -70*mV
ENa = 50*mV
ka = 6*mV
ki = 6*mV
va = -30*mV
vi = -50*mV
EK = -90*mV
vk = -20*mV
kk = 8*mV
eqs = '''
Im = gL*(EL-v)+gNa*m*h*(ENa-v)+gK*n*(EK-v) : amp/meter**2
dm/dt = (minf-m)/(0.3*ms) : 1 # simplified Na channel
dh/dt = (hinf-h)/(3*ms) : 1 # inactivation
dn/dt = (ninf-n)/(5*ms) : 1 # K+
minf = 1/(1+exp((va-v)/ka)) : 1
hinf = 1/(1+exp((v-vi)/ki)) : 1
ninf = 1/(1+exp((vk-v)/kk)) : 1
I : amp (point current)
gNa : siemens/meter**2
gK : siemens/meter**2
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs,
                       Cm=1*uF/cm**2, Ri=100*ohm*cm, method='exponential_euler')
neuron.v = -65*mV
neuron.I = 0*amp
neuron.axon[30*um:60*um].gNa = 700*gL
neuron.axon[30*um:60*um].gK = 700*gL

# Monitors
mon = StateMonitor(neuron, 'v', record=True)

run(1*ms)
neuron.main.I = 0.15*nA
run(50*ms)
neuron.I = 0*amp
run(95*ms, report='text')

plot(mon.t/ms, mon.v[0]/mV, 'r')
plot(mon.t/ms, mon.v[20]/mV, 'g')
plot(mon.t/ms, mon.v[40]/mV, 'b')
plot(mon.t/ms, mon.v[60]/mV, 'k')
plot(mon.t/ms, mon.v[80]/mV, 'y')
xlabel('Time (ms)')
ylabel('v (mV)')
show()

frompapers

Example: Brette_2004

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Phase locking in leaky integrate-and-fire model

Fig. 2A from: Brette R (2004). Dynamics of one-dimensional spiking neuron models. J Math Biol 48(1): 38-56.

This shows the phase-locking structure of a LIF driven by a sinusoidal current. When the current crosses the threshold (a<3), the model almost always phase locks (in a measure-theoretical sense).

from brian2 import *

# defaultclock.dt = 0.01*ms  # for a more precise picture
N = 2000
tau = 100*ms
freq = 1/tau

eqs = '''
dv/dt = (-v + a + 2*sin(2*pi*t/tau))/tau : 1
a : 1
'''

neurons = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='euler')
neurons.a = linspace(2, 4, N)

run(5*second, report='text')  # discard the first spikes (wait for convergence)
S = SpikeMonitor(neurons)
run(5*second, report='text')

i, t = S.it
plot((t % tau)/tau, neurons.a[i], '.')
xlabel('Spike phase')
ylabel('Parameter a')
show()

Example: Brette_Gerstner_2005

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Adaptive exponential integrate-and-fire model. http://www.scholarpedia.org/article/Adaptive_exponential_integrate-and-fire_model

Introduced in Brette R. and Gerstner W. (2005), Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity, J. Neurophysiol. 94: 3637 - 3642.

from brian2 import *

# Parameters
C = 281 * pF
gL = 30 * nS
taum = C / gL
EL = -70.6 * mV
VT = -50.4 * mV
DeltaT = 2 * mV
Vcut = VT + 5 * DeltaT

# Pick an electrophysiological behaviour
tauw, a, b, Vr = 144*ms, 4*nS, 0.0805*nA, -70.6*mV # Regular spiking (as in the paper)
#tauw,a,b,Vr=20*ms,4*nS,0.5*nA,VT+5*mV # Bursting
#tauw,a,b,Vr=144*ms,2*C/(144*ms),0*nA,-70.6*mV # Fast spiking

eqs = """
dvm/dt = (gL*(EL - vm) + gL*DeltaT*exp((vm - VT)/DeltaT) + I - w)/C : volt
dw/dt = (a*(vm - EL) - w)/tauw : amp
I : amp
"""

neuron = NeuronGroup(1, model=eqs, threshold='vm>Vcut',
                     reset="vm=Vr; w+=b", method='euler')
neuron.vm = EL
trace = StateMonitor(neuron, 'vm', record=0)
spikes = SpikeMonitor(neuron)

run(20 * ms)
neuron.I = 1*nA
run(100 * ms)
neuron.I = 0*nA
run(20 * ms)

# We draw nicer spikes
vm = trace[0].vm[:]
for t in spikes.t:
    i = int(t / defaultclock.dt)
    vm[i] = 20*mV

plot(trace.t / ms, vm / mV)
xlabel('time (ms)')
ylabel('membrane potential (mV)')
show()

Example: Brette_Guigon_2003

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Reliability of spike timing

Adapted from Fig. 10D,E of Brette R and E Guigon (2003). Reliability of Spike Timing Is a General Property of Spiking Model Neurons. Neural Computation 15, 279-308.

This shows that reliability of spike timing is a generic property of spiking neurons, even those that are not leaky. This is a non-physiological model which can be leaky or anti-leaky depending on the sign of the input I.

All neurons receive the same fluctuating input, scaled by a parameter p that varies across neurons. This shows:

  1. reproducibility of spike timing
  2. robustness with respect to deterministic changes (parameter)
  3. increased reproducibility in the fluctuation-driven regime (input crosses the threshold)

from brian2 import *

N = 500
tau = 33*ms
taux = 20*ms
sigma = 0.02

eqs_input = '''
dx/dt = -x/taux + (2/taux)**.5*xi : 1
'''

eqs = '''
dv/dt = (v*I + 1)/tau + sigma*(2/tau)**.5*xi : 1
I = 0.5 + 3*p*B : 1
B = 2./(1 + exp(-2*x)) - 1 : 1 (shared)
p : 1
x : 1 (linked)
'''

input = NeuronGroup(1, eqs_input, method='euler')
neurons = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='euler')
neurons.p = '1.0*i/N'
neurons.v = 'rand()'
neurons.x = linked_var(input, 'x')

M = StateMonitor(neurons, 'B', record=0)
S = SpikeMonitor(neurons)

run(1000*ms, report='text')

subplot(211)  # The input
plot(M.t/ms, M[0].B)
xticks([])
title('shared input')
subplot(212)
plot(S.t/ms, neurons.p[S.i], '.')
plot([0, 1000], [.5, .5], 'r')
xlabel('time (ms)')
ylabel('p')
title('spiking activity')
show()

Example: Brunel_Hakim_1999

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open).

Dynamics of a network of sparsely connected inhibitory current-based integrate-and-fire neurons. Individual neurons fire irregularly at low rate but the network is in an oscillatory global activity regime where neurons are weakly synchronized.

Reference:
“Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates” Nicolas Brunel & Vincent Hakim, Neural Computation 11, 1621-1671 (1999)

from brian2 import *

N = 5000
Vr = 10*mV
theta = 20*mV
tau = 20*ms
delta = 2*ms
taurefr = 2*ms
duration = .1*second
C = 1000
sparseness = float(C)/N
J = .1*mV
muext = 25*mV
sigmaext = 1*mV

eqs = """
dV/dt = (-V+muext + sigmaext * sqrt(tau) * xi)/tau : volt
"""

group = NeuronGroup(N, eqs, threshold='V>theta',
                    reset='V=Vr', refractory=taurefr, method='euler')
group.V = Vr
conn = Synapses(group, group, on_pre='V += -J', delay=delta)
conn.connect(p=sparseness)
M = SpikeMonitor(group)
LFP = PopulationRateMonitor(group)

run(duration)

subplot(211)
plot(M.t/ms, M.i, '.')
xlim(0, duration/ms)

subplot(212)
plot(LFP.t/ms, LFP.smooth_rate(window='flat', width=0.5*ms)/Hz)
xlim(0, duration/ms)

show()
_images/frompapers.Brunel_Hakim_1999.1.png

Example: Clopath_et_al_2010_homeostasis

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

This code contains an adapted version of the voltage-dependent triplet STDP rule from: Clopath et al., Connectivity reflects coding: a model of voltage-based STDP with homeostasis, Nature Neuroscience, 2010 (http://dx.doi.org/10.1038/nn.2479)

The plasticity rule is adapted for a leaky integrate & fire model in Brian2.

As an illustration of the rule, we simulate the competition between inputs projecting onto a downstream neuron.

We kindly ask you to cite the article when using the model presented below.

This code was written by Jacopo Bono, 12/2015

from brian2 import *

################################################################################
# PLASTICITY MODEL
################################################################################

#### Plasticity Parameters

V_rest = -70.*mV        # resting potential
V_thresh = -55.*mV      # spiking threshold
Theta_low = V_rest      # depolarization threshold for plasticity
x_reset = 1.            # spike trace reset value
taux = 15.*ms           # spike trace time constant
A_LTD = 1.5e-4          # depression amplitude
A_LTP = 1.5e-2          # potentiation amplitude
tau_lowpass1 = 40*ms    # timeconstant for low-pass filtered voltage
tau_lowpass2 = 30*ms    # timeconstant for low-pass filtered voltage
tau_homeo = 1000*ms     # homeostatic timeconstant
v_target = 12*mV**2     # target depolarisation

#### Plasticity Equations

# equations executed at every timestep
Syn_model =   ('''
            w_ampa:1                # synaptic weight (ampa synapse)
            ''')

# equations executed only when a presynaptic spike occurs
Pre_eq = ('''
            g_ampa_post += w_ampa*ampa_max_cond                                                             # increment synaptic conductance
            A_LTD_u = A_LTD*(v_homeo**2/v_target)                                                           # metaplasticity
            w_minus = A_LTD_u*(v_lowpass1_post/mV - Theta_low/mV)*(v_lowpass1_post/mV - Theta_low/mV > 0)   # synaptic depression
            w_ampa = clip(w_ampa-w_minus,0,w_max)                                                           # hard bounds
            ''' )

# equations executed only when a postsynaptic spike occurs
Post_eq = ('''
            v_lowpass1 += 10*mV                                                                                     # mimics the depolarisation effect due to a spike
            v_lowpass2 += 10*mV                                                                                     # mimics the depolarisation effect due to a spike
            v_homeo += 0.1*mV                                                                                       # mimics the depolarisation effect due to a spike
            w_plus = A_LTP*x_trace_pre*(v_lowpass2_post/mV - Theta_low/mV)*(v_lowpass2_post/mV - Theta_low/mV > 0)  # synaptic potentiation
            w_ampa = clip(w_ampa+w_plus,0,w_max)                                                                    # hard bounds
            ''' )

################################################################################
# I&F Parameters and equations
################################################################################

#### Neuron parameters

gleak = 30.*nS                  # leak conductance
C = 300.*pF                     # membrane capacitance
tau_AMPA = 2.*ms                # AMPA synaptic timeconstant
E_AMPA = 0.*mV                  # reversal potential AMPA

ampa_max_cond = 5.e-8*siemens   # AMPA maximal conductance
w_max = 1.                      # maximal ampa weight

#### Neuron Equations

# We connect 10 presynaptic neurons to 1 downstream neuron

# downstream neuron
eqs_neurons = '''
dv/dt = (gleak*(V_rest-v) + I_ext + I_syn)/C: volt      # voltage
dv_lowpass1/dt = (v-v_lowpass1)/tau_lowpass1 : volt     # low-pass filter of the voltage
dv_lowpass2/dt = (v-v_lowpass2)/tau_lowpass2 : volt     # low-pass filter of the voltage
dv_homeo/dt = (v-V_rest-v_homeo)/tau_homeo : volt       # low-pass filter of the voltage
I_ext : amp                                             # external current
I_syn = g_ampa*(E_AMPA-v): amp                          # synaptic current
dg_ampa/dt = -g_ampa/tau_AMPA : siemens                 # synaptic conductance
dx_trace/dt = -x_trace/taux :1                          # spike trace
'''

# input neurons
eqs_inputs = '''
dv/dt = gleak*(V_rest-v)/C: volt                        # voltage
dx_trace/dt = -x_trace/taux :1                          # spike trace
rates : Hz                                              # input rates
selected_index : integer (shared)                       # active neuron
'''

################################################################################
# Simulation
################################################################################

#### Parameters

defaultclock.dt = 500.*us                        # timestep
Nr_neurons = 1                                   # Number of downstream neurons
Nr_inputs = 5                                    # Number of input neurons
input_rate = 35*Hz                               # Rates
init_weight = 0.5                                # initial synaptic weight
final_t = 20.*second                             # end of simulation
input_time = 100.*ms                             # duration of an input

#### Create neuron objects

Nrn_downstream = NeuronGroup(Nr_neurons, eqs_neurons, threshold='v>V_thresh',
                             reset='v=V_rest;x_trace+=x_reset/(taux/ms)',
                             method='euler')
Nrns_input = NeuronGroup(Nr_inputs, eqs_inputs, threshold='rand()<rates*dt',
                         reset='v=V_rest;x_trace+=x_reset/(taux/ms)',
                         method='linear')

#### create Synapses

Syn = Synapses(Nrns_input, Nrn_downstream,
               model=Syn_model,
               on_pre=Pre_eq,
               on_post=Post_eq
               )

Syn.connect(i=numpy.arange(Nr_inputs), j=0)

#### Monitors and storage
W_evolution = StateMonitor(Syn, 'w_ampa', record=True)

#### Run

# Initial values
Nrn_downstream.v = V_rest
Nrn_downstream.v_lowpass1 = V_rest
Nrn_downstream.v_lowpass2 = V_rest
Nrn_downstream.v_homeo = 0
Nrn_downstream.I_ext = 0.*amp
Nrn_downstream.x_trace = 0.
Nrns_input.v = V_rest
Nrns_input.x_trace = 0.
Syn.w_ampa = init_weight

# Switch on a different input every 100ms
Nrns_input.run_regularly('''
                         selected_index = int(floor(rand()*Nr_inputs))
                         rates = input_rate * int(selected_index == i)  # All rates are zero except for the selected neuron
                         ''', dt=input_time)
run(final_t, report='text')

################################################################################
# Plots
################################################################################
stitle = 'Synaptic Competition'

fig = figure(figsize=(8, 5))
for kk in range(Nr_inputs):
    plt.plot(W_evolution.t/ms, W_evolution.w_ampa[kk], '-', linewidth=2)
xlabel('Time [ms]', fontsize=22)
ylabel('Weight [a.u.]', fontsize=22)
plt.subplots_adjust(bottom=0.2, left=0.15, right=0.95, top=0.85)
title(stitle, fontsize=22)
plt.show()
_images/frompapers.Clopath_et_al_2010_homeostasis.1.png

Example: Clopath_et_al_2010_no_homeostasis

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

This code contains an adapted version of the voltage-dependent triplet STDP rule from: Clopath et al., Connectivity reflects coding: a model of voltage-based STDP with homeostasis, Nature Neuroscience, 2010 (http://dx.doi.org/10.1038/nn.2479)

The plasticity rule is adapted for a leaky integrate & fire model in Brian2 and does not include the homeostatic metaplasticity.

As an illustration of the rule, we simulate a plot analogous to figure 2b in the above article, showing the frequency dependence of plasticity as measured in: Sjöström et al., Rate, timing and cooperativity jointly determine cortical synaptic plasticity. Neuron, 2001.

We kindly ask you to cite both articles when using the model presented below.

This code was written by Jacopo Bono, 12/2015

from brian2 import *
################################################################################
# PLASTICITY MODEL
################################################################################

#### Plasticity Parameters

V_rest = -70.*mV        # resting potential
V_thresh = -50.*mV      # spiking threshold
Theta_low = V_rest      # depolarization threshold for plasticity
x_reset = 1.            # spike trace reset value
taux = 15.*ms           # spike trace time constant
A_LTD = 1.5e-4          # depression amplitude
A_LTP = 1.5e-2          # potentiation amplitude
tau_lowpass1 = 40*ms    # timeconstant for low-pass filtered voltage
tau_lowpass2 = 30*ms    # timeconstant for low-pass filtered voltage



#### Plasticity Equations


# equations executed at every timestep
Syn_model = '''
            w_ampa:1                # synaptic weight (ampa synapse)
            '''

# equations executed only when a presynaptic spike occurs
Pre_eq = '''
         g_ampa_post += w_ampa*ampa_max_cond                                                             # increment synaptic conductance
         w_minus = A_LTD*(v_lowpass1_post/mV - Theta_low/mV)*(v_lowpass1_post/mV - Theta_low/mV > 0)     # synaptic depression
         w_ampa = clip(w_ampa-w_minus,0,w_max)                                                           # hard bounds
         '''

# equations executed only when a postsynaptic spike occurs
Post_eq = '''
          v_lowpass1 += 10*mV                                                                                     # mimics the depolarisation by a spike
          v_lowpass2 += 10*mV                                                                                     # mimics the depolarisation by a spike
          w_plus = A_LTP*x_trace_pre*(v_lowpass2_post/mV - Theta_low/mV)*(v_lowpass2_post/mV - Theta_low/mV > 0)  # synaptic potentiation
          w_ampa = clip(w_ampa+w_plus,0,w_max)                                                                    # hard bounds
          '''

################################################################################
# I&F Parameters and equations
################################################################################

#### Neuron parameters

gleak = 30.*nS                  # leak conductance
C = 300.*pF                     # membrane capacitance
tau_AMPA = 2.*ms                # AMPA synaptic timeconstant
E_AMPA = 0.*mV                  # reversal potential AMPA

ampa_max_cond = 5.e-10*siemens  # AMPA maximal conductance
w_max = 1.                      # maximal ampa weight


#### Neuron Equations

eqs_neurons = '''
dv/dt = (gleak*(V_rest-v) + I_ext + I_syn)/C: volt      # voltage
dv_lowpass1/dt = (v-v_lowpass1)/tau_lowpass1 : volt     # low-pass filter of the voltage
dv_lowpass2/dt = (v-v_lowpass2)/tau_lowpass2 : volt     # low-pass filter of the voltage
I_ext : amp                                             # external current
I_syn = g_ampa*(E_AMPA-v): amp                          # synaptic current
dg_ampa/dt = -g_ampa/tau_AMPA : siemens                 # synaptic conductance
dx_trace/dt = -x_trace/taux :1                          # spike trace
'''



################################################################################
# Simulation
################################################################################

#### Parameters

defaultclock.dt = 100.*us                           # timestep
Nr_neurons = 2                                      # Number of neurons
rate_array = [1., 5., 10., 15., 20., 30., 50.]*Hz   # Rates
init_weight = 0.5                                   # initial synaptic weight
reps = 15                                           # Number of pairings

#### Create neuron objects

Nrns = NeuronGroup(Nr_neurons, eqs_neurons, threshold='v>V_thresh',
                   reset='v=V_rest;x_trace+=x_reset/(taux/ms)', method='euler')

#### create Synapses

Syn = Synapses(Nrns, Nrns,
               model=Syn_model,
               on_pre=Pre_eq,
               on_post=Post_eq
               )

Syn.connect('i!=j')

#### Monitors and storage
weight_result = np.zeros((2,len(rate_array)))               # to save the final weights

#### Run

# loop over rates
for jj, rate in enumerate(rate_array):

    # Calculate interval between pairs
    pair_interval = 1./rate - 10*ms
    print('Starting simulations for %s' % rate)

    # Initial values
    Nrns.v = V_rest
    Nrns.v_lowpass1 = V_rest
    Nrns.v_lowpass2 = V_rest
    Nrns.I_ext = 0*amp
    Nrns.x_trace = 0.
    Syn.w_ampa = init_weight

    # loop over pairings
    for ii in range(reps):
        # 1st SPIKE
        Nrns.v[0] = V_thresh + 1*mV
        # 2nd SPIKE
        run(10*ms)
        Nrns.v[1] = V_thresh + 1*mV
        # run
        run(pair_interval)
        print('Pair %d out of %d' % (ii+1, reps))

    #store weight changes
    weight_result[0, jj] = 100.*Syn.w_ampa[0]/init_weight
    weight_result[1, jj] = 100.*Syn.w_ampa[1]/init_weight

################################################################################
# Plots
################################################################################

stitle = 'Pairings'
scolor = 'k'

figure(figsize=(8, 5))
plot(rate_array,weight_result[0, :], '-', linewidth=2, color=scolor)
plot(rate_array,weight_result[1, :], ':', linewidth=2, color=scolor)
xlabel('Pairing frequency [Hz]', fontsize=22)
ylabel('Normalised Weight [%]', fontsize=22)
legend(['Pre-Post', 'Post-Pre'], loc='best')
subplots_adjust(bottom=0.2, left=0.15, right=0.95, top=0.85)
title(stitle)
show()
_images/frompapers.Clopath_et_al_2010_no_homeostasis.1.png

Example: Diesmann_et_al_1999

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Synfire chains

M. Diesmann et al. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature 402, 529-533.

from brian2 import *

duration = 100*ms

# Neuron model parameters
Vr = -70*mV
Vt = -55*mV
taum = 10*ms
taupsp = 0.325*ms
weight = 4.86*mV
# Neuron model
eqs = Equations('''
dV/dt = (-(V-Vr)+x)*(1./taum) : volt
dx/dt = (-x+y)*(1./taupsp) : volt
dy/dt = -y*(1./taupsp)+25.27*mV/ms+
        (39.24*mV/ms**0.5)*xi : volt
''')

# Neuron groups
n_groups = 10
group_size = 100
P = NeuronGroup(N=n_groups*group_size, model=eqs,
                threshold='V>Vt', reset='V=Vr', refractory=1*ms,
                method='euler')

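# Stimulus: one spike per input neuron, jittered around t = 50 ms (1 ms standard deviation)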
Pinput = SpikeGeneratorGroup(85, np.arange(85),
                             np.random.randn(85)*1*ms + 50*ms)
# The network structure
S = Synapses(P, P, on_pre='y+=weight')
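# Each neuron projects to all neurons of the next group in the chain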
S.connect(j='k for k in range((int(i/group_size)+1)*group_size, (int(i/group_size)+2)*group_size) '
            'if i<N_pre-group_size')
Sinput = Synapses(Pinput, P[:group_size], on_pre='y+=weight')
Sinput.connect()

# Record the spikes
Mgp = SpikeMonitor(P)
Minput = SpikeMonitor(Pinput)
# Setup the network, and run it
P.V = 'Vr + rand() * (Vt - Vr)'
run(duration)

plot(Mgp.t/ms, 1.0*Mgp.i/group_size, '.')
plot([0, duration/ms], np.arange(n_groups).repeat(2).reshape(-1, 2).T, 'k-')
ylabel('group number')
yticks(np.arange(n_groups))
xlabel('time (ms)')
show()
_images/frompapers.Diesmann_et_al_1999.1.png

Example: Kremer_et_al_2011_barrel_cortex

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Late Emergence of the Whisker Direction Selectivity Map in the Rat Barrel Cortex. Kremer Y, Leger JF, Goodman DF, Brette R, Bourdieu L (2011). J Neurosci 31(29):10689-700.

Development of direction maps with pinwheels in the barrel cortex. Whiskers are deflected with random moving bars. N.B.: network construction can take a long time.

from brian2 import *
import time

t1 = time.time()

# PARAMETERS
# Neuron numbers
M4, M23exc, M23inh = 22, 25, 12  # size of each barrel (in neurons)
N4, N23exc, N23inh = M4**2, M23exc**2, M23inh**2  # neurons per barrel
barrelarraysize = 5  # Choose 3 or 4 if memory error
Nbarrels = barrelarraysize**2
# Stimulation
stim_change_time = 5*ms
Fmax = .5/stim_change_time # maximum firing rate in layer 4 (.5 spike / stimulation)
# Neuron parameters
taum, taue, taui = 10*ms, 2*ms, 25*ms
El = -70*mV
Vt, vt_inc, tauvt = -55*mV, 2*mV, 50*ms  # adaptive threshold
# STDP
taup, taud = 5*ms, 25*ms
Ap, Ad= .05, -.04
# EPSPs/IPSPs
EPSP, IPSP = 1*mV, -1*mV
EPSC = EPSP * (taue/taum)**(taum/(taue-taum))
IPSC = IPSP * (taui/taum)**(taum/(taui-taum))
Ap, Ad = Ap*EPSC, Ad*EPSC

# Layer 4, models the input stimulus
eqs_layer4 = '''
rate = int(is_active)*clip(cos(direction - selectivity), 0, inf)*Fmax: Hz
is_active = abs((barrel_x + 0.5 - bar_x) * cos(direction) + (barrel_y + 0.5 - bar_y) * sin(direction)) < 0.5: boolean
barrel_x : integer # The x index of the barrel
barrel_y : integer # The y index of the barrel
selectivity : 1
# Stimulus parameters (same for all neurons)
bar_x = cos(direction)*(t - stim_start_time)/(5*ms) + stim_start_x : 1 (shared)
bar_y = sin(direction)*(t - stim_start_time)/(5*ms) + stim_start_y : 1 (shared)
direction : 1 (shared) # direction of the current stimulus
stim_start_time : second (shared) # start time of the current stimulus
stim_start_x : 1 (shared) # start position of the stimulus
stim_start_y : 1 (shared) # start position of the stimulus
'''
layer4 = NeuronGroup(N4*Nbarrels, eqs_layer4, threshold='rand() < rate*dt',
                     method='euler', name='layer4')
layer4.barrel_x = '(i / N4) % barrelarraysize + 0.5'
layer4.barrel_y = 'i / (barrelarraysize*N4) + 0.5'
layer4.selectivity = '(i%N4)/(1.0*N4)*2*pi'  # for each barrel, selectivity between 0 and 2*pi

stimradius = (11+1)*.5

# Choose a new randomly oriented bar every 60ms
runner_code = '''
direction = rand()*2*pi
stim_start_x = barrelarraysize / 2.0 - cos(direction)*stimradius
stim_start_y = barrelarraysize / 2.0 - sin(direction)*stimradius
stim_start_time = t
'''
layer4.run_regularly(runner_code, dt=60*ms, when='start')

# Layer 2/3
# Model: IF with adaptive threshold
eqs_layer23 = '''
dv/dt=(ge+gi+El-v)/taum : volt
dge/dt=-ge/taue : volt
dgi/dt=-gi/taui : volt
dvt/dt=(Vt-vt)/tauvt : volt # adaptation
barrel_idx : integer
x : 1  # in "barrel width" units
y : 1  # in "barrel width" units
'''
layer23 = NeuronGroup(Nbarrels*(N23exc+N23inh), eqs_layer23,
                      threshold='v>vt', reset='v = El; vt += vt_inc',
                      refractory=2*ms, method='euler', name='layer23')
layer23.v = El
layer23.vt = Vt

# Subgroups for excitatory and inhibitory neurons in layer 2/3
layer23exc = layer23[:Nbarrels*N23exc]
layer23inh = layer23[Nbarrels*N23exc:]

# Layer 2/3 excitatory
# The units for x and y are the width/height of a single barrel
layer23exc.x = '(i % (barrelarraysize*M23exc)) * (1.0/M23exc)'
layer23exc.y = '(i / (barrelarraysize*M23exc)) * (1.0/M23exc)'
layer23exc.barrel_idx = 'floor(x) + floor(y)*barrelarraysize'

# Layer 2/3 inhibitory
layer23inh.x = 'i % (barrelarraysize*M23inh) * (1.0/M23inh)'
layer23inh.y = 'i / (barrelarraysize*M23inh) * (1.0/M23inh)'
layer23inh.barrel_idx = 'floor(x) + floor(y)*barrelarraysize'

print("Building synapses, please wait...")
# Feedforward connections (plastic)
feedforward = Synapses(layer4, layer23exc,
                       model='''w:volt
                                dA_source/dt = -A_source/taup : volt (event-driven)
                                dA_target/dt = -A_target/taud : volt (event-driven)''',
                       on_pre='''ge+=w
                              A_source += Ap
                              w = clip(w+A_target, 0, EPSC)''',
                       on_post='''
                              A_target += Ad
                              w = clip(w+A_source, 0, EPSC)''',
                       name='feedforward')
# Connect neurons in the same barrel with 50% probability
feedforward.connect('(barrel_x_pre + barrelarraysize*barrel_y_pre) == barrel_idx_post',
                    p=0.5)
feedforward.w = EPSC*.5

print('excitatory lateral')
# Excitatory lateral connections
recurrent_exc = Synapses(layer23exc, layer23, model='w:volt', on_pre='ge+=w',
                         name='recurrent_exc')
recurrent_exc.connect(p='.15*exp(-.5*(((x_pre-x_post)/.4)**2+((y_pre-y_post)/.4)**2))')
recurrent_exc.w['j<Nbarrels*N23exc'] = EPSC*.3 # excitatory->excitatory
recurrent_exc.w['j>=Nbarrels*N23exc'] = EPSC # excitatory->inhibitory


# Inhibitory lateral connections
print('inhibitory lateral')
recurrent_inh = Synapses(layer23inh, layer23exc, on_pre='gi+=IPSC',
                         name='recurrent_inh')
recurrent_inh.connect(p='exp(-.5*(((x_pre-x_post)/.2)**2+((y_pre-y_post)/.2)**2))')

if get_device().__class__.__name__=='RuntimeDevice':
    print('Total number of connections')
    print('feedforward: %d' % len(feedforward))
    print('recurrent exc: %d' % len(recurrent_exc))
    print('recurrent inh: %d' % len(recurrent_inh))

    t2 = time.time()
    print("Construction time: %.1fs" % (t2 - t1))

run(5*second, report='text')

# Calculate the preferred direction of each cell in layer23 by doing a
# vector average of the selectivity of the projecting layer4 cells, weighted
# by the synaptic weight.
_r = bincount(feedforward.j,
              weights=feedforward.w * cos(feedforward.selectivity_pre)/feedforward.N_incoming,
              minlength=len(layer23exc))
_i = bincount(feedforward.j,
              weights=feedforward.w * sin(feedforward.selectivity_pre)/feedforward.N_incoming,
              minlength=len(layer23exc))
selectivity_exc = (arctan2(_r, _i) % (2*pi))*180./pi


scatter(layer23.x[:Nbarrels*N23exc], layer23.y[:Nbarrels*N23exc],
        c=selectivity_exc[:Nbarrels*N23exc],
        edgecolors='none', marker='s', cmap='hsv')
vlines(np.arange(barrelarraysize), 0, barrelarraysize, 'k')
hlines(np.arange(barrelarraysize), 0, barrelarraysize, 'k')
clim(0, 360)
colorbar()
show()
_images/frompapers.Kremer_et_al_2011_barrel_cortex.1.png

Example: Rossant_et_al_2011bis

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Distributed synchrony example

Fig. 14 from:

Rossant C, Leijon S, Magnusson AK, Brette R (2011). “Sensitivity of noisy neurons to coincident inputs”. Journal of Neuroscience, 31(47).

5000 independent E/I Poisson inputs are injected into a leaky integrate-and-fire neuron. Synchronous events, following an independent Poisson process at 40 Hz, are added: at each event, 15 excitatory Poisson spikes are randomly shifted to occur synchronously. The output firing rate is then significantly higher, showing that the spike timing of less than 1% of the excitatory synapses has an important impact on the postsynaptic firing.

from brian2 import *

# neuron parameters
theta = -55*mV
El = -65*mV
vmean = -65*mV
taum = 5*ms
taue = 3*ms
taui = 10*ms
eqs = Equations("""
                dv/dt  = (ge+gi-(v-El))/taum : volt
                dge/dt = -ge/taue : volt
                dgi/dt = -gi/taui : volt
                """)

# input parameters
p = 15
ne = 4000
ni = 1000
lambdac = 40*Hz
lambdae = lambdai = 1*Hz

# synapse parameters
we = .5*mV/(taum/taue)**(taum/(taue-taum))
wi = (vmean-El-lambdae*ne*we*taue)/(lambdae*ni*taui)

# NeuronGroup definition
group = NeuronGroup(N=2, model=eqs, reset='v = El',
                    threshold='v>theta',
                    refractory=5*ms, method='linear')
group.v = El
group.ge = group.gi = 0

# independent E/I Poisson inputs
p1 = PoissonInput(group[0:1], 'ge', N=ne, rate=lambdae, weight=we)
p2 = PoissonInput(group[0:1], 'gi', N=ni, rate=lambdai, weight=wi)

# independent E/I Poisson inputs + synchronous E events
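# (the background rate is reduced by p*lambdac/ne so that each input still
# fires at lambdae on average; p5 delivers the p synchronous spikes as a
# single input of weight p*we)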
p3 = PoissonInput(group[1:], 'ge', N=ne, rate=lambdae-(p*1.0/ne)*lambdac, weight=we)
p4 = PoissonInput(group[1:], 'gi', N=ni, rate=lambdai, weight=wi)
p5 = PoissonInput(group[1:], 'ge', N=1, rate=lambdac, weight=p*we)

# run the simulation
M = SpikeMonitor(group)
SM = StateMonitor(group, 'v', record=True)
BrianLogger.log_level_info()
run(1*second)
# plot trace and spikes
for i in [0, 1]:
    spikes = (M.t[M.i == i] - defaultclock.dt)/ms
    val = SM[i].v
    subplot(2,1,i+1)
    plot(SM.t/ms, val)
    plot(tile(spikes, (2,1)),
         vstack((val[array(spikes, dtype=int)],
                 zeros(len(spikes)))), 'b')
    title("%s: %d spikes/second" % (["uncorrelated inputs", "correlated inputs"][i],
                                    M.count[i]))
show()
_images/frompapers.Rossant_et_al_2011bis.1.png

Example: Rothman_Manis_2003

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Cochlear neuron model of Rothman & Manis

Rothman JS, Manis PB (2003) The roles potassium currents play in regulating the electrical activity of ventral cochlear nucleus neurons. J Neurophysiol 89:3097-113.

All model types differ only by the maximal conductances.

Adapted from their Neuron implementation by Romain Brette

from brian2 import *

#defaultclock.dt=0.025*ms # for better precision

'''
Simulation parameters: choose current amplitude and neuron type
(from type1c, type1t, type12, type21, type2, type2o)
'''
neuron_type = 'type1c'
Ipulse = 250*pA

C = 12*pF
Eh = -43*mV
EK = -70*mV  # -77*mV in mod file
El = -65*mV
ENa = 50*mV
nf = 0.85  # proportion of n vs p kinetics
zss = 0.5  # steady state inactivation of glt
temp = 22.  # temperature in degrees Celsius
q10 = 3. ** ((temp - 22) / 10.)
# hcno current (octopus cell)
frac = 0.0
qt = 4.5 ** ((temp - 33.) / 10.)

# Maximal conductances of different cell types in nS
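# Tuples are (gnabar, gkhtbar, gkltbar, gkabar, ghbar, gbarno, gl)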
maximal_conductances = dict(
type1c=(1000, 150, 0, 0, 0.5, 0, 2),
type1t=(1000, 80, 0, 65, 0.5, 0, 2),
type12=(1000, 150, 20, 0, 2, 0, 2),
type21=(1000, 150, 35, 0, 3.5, 0, 2),
type2=(1000, 150, 200, 0, 20, 0, 2),
type2o=(1000, 150, 600, 0, 0, 40, 2) # octopus cell
)
gnabar, gkhtbar, gkltbar, gkabar, ghbar, gbarno, gl = [x * nS for x in maximal_conductances[neuron_type]]

# Classical Na channel
eqs_na = """
ina = gnabar*m**3*h*(ENa-v) : amp
dm/dt=q10*(minf-m)/mtau : 1
dh/dt=q10*(hinf-h)/htau : 1
minf = 1./(1+exp(-(vu + 38.) / 7.)) : 1
hinf = 1./(1+exp((vu + 65.) / 6.)) : 1
mtau =  ((10. / (5*exp((vu+60.) / 18.) + 36.*exp(-(vu+60.) / 25.))) + 0.04)*ms : second
htau =  ((100. / (7*exp((vu+60.) / 11.) + 10.*exp(-(vu+60.) / 25.))) + 0.6)*ms : second
"""

# KHT channel (delayed-rectifier K+)
eqs_kht = """
ikht = gkhtbar*(nf*n**2 + (1-nf)*p)*(EK-v) : amp
dn/dt=q10*(ninf-n)/ntau : 1
dp/dt=q10*(pinf-p)/ptau : 1
ninf =   (1 + exp(-(vu + 15) / 5.))**-0.5 : 1
pinf =  1. / (1 + exp(-(vu + 23) / 6.)) : 1
ntau =  ((100. / (11*exp((vu+60) / 24.) + 21*exp(-(vu+60) / 23.))) + 0.7)*ms : second
ptau = ((100. / (4*exp((vu+60) / 32.) + 5*exp(-(vu+60) / 22.))) + 5)*ms : second
"""

# Ih channel (subthreshold adaptive, non-inactivating)
eqs_ih = """
ih = ghbar*r*(Eh-v) : amp
dr/dt=q10*(rinf-r)/rtau : 1
rinf = 1. / (1+exp((vu + 76.) / 7.)) : 1
rtau = ((100000. / (237.*exp((vu+60.) / 12.) + 17.*exp(-(vu+60.) / 14.))) + 25.)*ms : second
"""

# KLT channel (low threshold K+)
eqs_klt = """
iklt = gkltbar*w**4*z*(EK-v) : amp
dw/dt=q10*(winf-w)/wtau : 1
dz/dt=q10*(zinf-z)/wtau : 1
winf = (1. / (1 + exp(-(vu + 48.) / 6.)))**0.25 : 1
zinf = zss + ((1.-zss) / (1 + exp((vu + 71.) / 10.))) : 1
wtau = ((100. / (6.*exp((vu+60.) / 6.) + 16.*exp(-(vu+60.) / 45.))) + 1.5)*ms : second
ztau = ((1000. / (exp((vu+60.) / 20.) + exp(-(vu+60.) / 8.))) + 50)*ms : second
"""

# Ka channel (transient K+)
eqs_ka = """
ika = gkabar*a**4*b*c*(EK-v): amp
da/dt=q10*(ainf-a)/atau : 1
db/dt=q10*(binf-b)/btau : 1
dc/dt=q10*(cinf-c)/ctau : 1
ainf = (1. / (1 + exp(-(vu + 31) / 6.)))**0.25 : 1
binf = 1. / (1 + exp((vu + 66) / 7.))**0.5 : 1
cinf = 1. / (1 + exp((vu + 66) / 7.))**0.5 : 1
atau =  ((100. / (7*exp((vu+60) / 14.) + 29*exp(-(vu+60) / 24.))) + 0.1)*ms : second
btau =  ((1000. / (14*exp((vu+60) / 27.) + 29*exp(-(vu+60) / 24.))) + 1)*ms : second
ctau = ((90. / (1 + exp((-66-vu) / 17.))) + 10)*ms : second
"""

# Leak
eqs_leak = """
ileak = gl*(El-v) : amp
"""

# h current for octopus cells
eqs_hcno = """
ihcno = gbarno*(h1*frac + h2*(1-frac))*(Eh-v) : amp
dh1/dt=(hinfno-h1)/tau1 : 1
dh2/dt=(hinfno-h2)/tau2 : 1
hinfno = 1./(1+exp((vu+66.)/7.)) : 1
tau1 = bet1/(qt*0.008*(1+alp1))*ms : second
tau2 = bet2/(qt*0.0029*(1+alp2))*ms : second
alp1 = exp(1e-3*3*(vu+50)*9.648e4/(8.315*(273.16+temp))) : 1
bet1 = exp(1e-3*3*0.3*(vu+50)*9.648e4/(8.315*(273.16+temp))) : 1
alp2 = exp(1e-3*3*(vu+84)*9.648e4/(8.315*(273.16+temp))) : 1
bet2 = exp(1e-3*3*0.6*(vu+84)*9.648e4/(8.315*(273.16+temp))) : 1
"""

eqs = """
dv/dt = (ileak + ina + ikht + iklt + ika + ih + ihcno + I)/C : volt
vu = v/mV : 1  # unitless v
I : amp
"""
eqs += eqs_leak + eqs_ka + eqs_na + eqs_ih + eqs_klt + eqs_kht + eqs_hcno

neuron = NeuronGroup(1, eqs, method='exponential_euler')
neuron.v = El

run(50*ms, report='text')  # Go to rest

M = StateMonitor(neuron, 'v', record=0)
neuron.I = Ipulse

run(100*ms, report='text')

plot(M.t / ms, M[0].v / mV)
xlabel('t (ms)')
ylabel('v (mV)')
show()
_images/frompapers.Rothman_Manis_2003.1.png

Example: Sturzl_et_al_2000

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Adapted from: Theory of Arachnid Prey Localization. W. Sturzl, R. Kempter, and J. L. van Hemmen. PRL 2000.

Poisson inputs are replaced by integrate-and-fire neurons

Romain Brette

from brian2 import *

# Parameters
degree = 2 * pi / 360.
duration = 500*ms
R = 2.5*cm  # radius of scorpion
vr = 50*meter/second  # Rayleigh wave speed
phi = 144*degree  # angle of prey
A = 250*Hz
deltaI = .7*ms  # inhibitory delay
gamma = (22.5 + 45 * arange(8)) * degree  # leg angle
delay = R / vr * (1 - cos(phi - gamma))   # wave delay

# Wave (vector w)
time = arange(int(duration / defaultclock.dt) + 1) * defaultclock.dt
Dtot = 0.
w = 0.
for f in arange(150, 451)*Hz:
    D = exp(-(f/Hz - 300) ** 2 / (2 * (50 ** 2)))
    rand_angle = 2 * pi * rand()
    w += 100 * D * cos(2 * pi * f * time + rand_angle)
    Dtot += D
w = .01 * w / Dtot

# Rates from the wave
rates = TimedArray(w, dt=defaultclock.dt)

# Leg mechanical receptors
tau_legs = 1 * ms
sigma = .01
eqs_legs = """
dv/dt = (1 + rates(t - d) - v)/tau_legs + sigma*(2./tau_legs)**.5*xi:1
d : second
"""
legs = NeuronGroup(8, model=eqs_legs, threshold='v > 1', reset='v = 0',
                   refractory=1*ms, method='euler')
legs.d = delay
spikes_legs = SpikeMonitor(legs)

# Command neurons
tau = 1 * ms
taus = 1.001 * ms
wex = 7
winh = -2
eqs_neuron = '''
dv/dt = (x - v)/tau : 1
dx/dt = (y - x)/taus : 1 # alpha currents
dy/dt = -y/taus : 1
'''
neurons = NeuronGroup(8, model=eqs_neuron, threshold='v>1', reset='v=0',
                      method='linear')
synapses_ex = Synapses(legs, neurons, on_pre='y+=wex')
synapses_ex.connect(j='i')
synapses_inh = Synapses(legs, neurons, on_pre='y+=winh', delay=deltaI)
synapses_inh.connect('abs(((j - i) % N_post) - N_post/2) <= 1')
spikes = SpikeMonitor(neurons)

run(duration, report='text')

nspikes = spikes.count
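# Population vector decoding: the prey angle is estimated as the phase of
# sum(nspikes*exp(1j*gamma))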
phi_est = imag(log(sum(nspikes * exp(gamma * 1j))))
print("True angle (deg): %.2f" % (phi/degree))
print("Estimated angle (deg): %.2f" % (phi_est/degree))
rmax = amax(nspikes)/duration/Hz
polar(concatenate((gamma, [gamma[0] + 2 * pi])),
      concatenate((nspikes, [nspikes[0]])) / duration / Hz,
      c='k')
axvline(phi, ls='-', c='g')
axvline(phi_est, ls='-', c='b')
show()
_images/frompapers.Sturzl_et_al_2000.1.png

Example: Touboul_Brette_2008

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Chaos in the AdEx model

Fig. 8B from: Touboul, J. and Brette, R. (2008). Dynamics and bifurcations of the adaptive exponential integrate-and-fire model. Biological Cybernetics 99(4-5):319-34.

This shows the bifurcation structure when the reset value is varied (the vertical axis shows the values of w at spike times for a given reset value Vr).

from brian2 import *

defaultclock.dt = 0.01*ms

C = 281*pF
gL = 30*nS
EL = -70.6*mV
VT = -50.4*mV
DeltaT = 2*mV
tauw = 40*ms
a = 4*nS
b = 0.08*nA
I = .8*nA
Vcut = VT + 5 * DeltaT  # practical threshold condition
N = 200

eqs = """
dvm/dt=(gL*(EL-vm)+gL*DeltaT*exp((vm-VT)/DeltaT)+I-w)/C : volt
dw/dt=(a*(vm-EL)-w)/tauw : amp
Vr:volt
"""

neuron = NeuronGroup(N, model=eqs, threshold='vm > Vcut',
                     reset="vm = Vr; w += b", method='euler')
neuron.vm = EL
neuron.w = a * (neuron.vm - EL)
neuron.Vr = linspace(-48.3 * mV, -47.7 * mV, N)  # bifurcation parameter

init_time = 3*second
run(init_time, report='text')  # we discard the first spikes

states = StateMonitor(neuron, "w", record=True, when='start')
spikes = SpikeMonitor(neuron)
run(1 * second, report='text')

# Get the values of Vr and w for each spike
Vr = neuron.Vr[spikes.i]
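# states.w was recorded from t = init_time (when='start'), so index it at
# each spike's time step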
w = states.w[spikes.i, int_((spikes.t-init_time)/defaultclock.dt)]

figure()
plot(Vr / mV, w / nA, '.k')
xlabel('Vr (mV)')
ylabel('w (nA)')
show()
_images/frompapers.Touboul_Brette_2008.1.png

Example: Vogels_et_al_2011

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Inhibitory synaptic plasticity in a recurrent network model

(F. Zenke, 2011) (from the 2012 Brian twister)

Adapted from: Vogels, T. P., H. Sprekeler, F. Zenke, C. Clopath, and W. Gerstner. Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science (November 10, 2011).

from brian2 import *

# ###########################################
# Defining network model parameters
# ###########################################

NE = 8000          # Number of excitatory cells
NI = NE/4          # Number of inhibitory cells

tau_ampa = 5.0*ms   # Glutamatergic synaptic time constant
tau_gaba = 10.0*ms  # GABAergic synaptic time constant
epsilon = 0.02      # Sparseness of synaptic connections

tau_stdp = 20*ms    # STDP time constant

simtime = 10*second # Simulation time

# ###########################################
# Neuron model
# ###########################################

gl = 10.0*nsiemens   # Leak conductance
el = -60*mV          # Resting potential
er = -80*mV          # Inhibitory reversal potential
vt = -50.*mV         # Spiking threshold
memc = 200.0*pfarad  # Membrane capacitance
bgcurrent = 200*pA   # External current

eqs_neurons='''
dv/dt=(-gl*(v-el)-(g_ampa*v+g_gaba*(v-er))+bgcurrent)/memc : volt (unless refractory)
dg_ampa/dt = -g_ampa/tau_ampa : siemens
dg_gaba/dt = -g_gaba/tau_gaba : siemens
'''

# ###########################################
# Initialize neuron group
# ###########################################

neurons = NeuronGroup(NE+NI, model=eqs_neurons, threshold='v > vt',
                      reset='v=el', refractory=5*ms, method='euler')
Pe = neurons[:NE]
Pi = neurons[NE:]

# ###########################################
# Connecting the network
# ###########################################

con_e = Synapses(Pe, neurons, on_pre='g_ampa += 0.3*nS')
con_e.connect(p=epsilon)
con_ii = Synapses(Pi, Pi, on_pre='g_gaba += 3*nS')
con_ii.connect(p=epsilon)

# ###########################################
# Inhibitory Plasticity
# ###########################################

eqs_stdp_inhib = '''
w : 1
dA_pre/dt=-A_pre/tau_stdp : 1 (event-driven)
dA_post/dt=-A_post/tau_stdp : 1 (event-driven)
'''
alpha = 3*Hz*tau_stdp*2  # Target rate parameter
gmax = 100               # Maximum inhibitory weight

con_ie = Synapses(Pi, Pe, model=eqs_stdp_inhib,
                  on_pre='''A_pre += 1.
                         w = clip(w+(A_post-alpha)*eta, 0, gmax)
                         g_gaba += w*nS''',
                  on_post='''A_post += 1.
                          w = clip(w+A_pre*eta, 0, gmax)
                       ''')
con_ie.connect(p=epsilon)
con_ie.w = 1e-10

# ###########################################
# Setting up monitors
# ###########################################

sm = SpikeMonitor(Pe)

# ###########################################
# Run without plasticity
# ###########################################
eta = 0          # Learning rate
run(1*second)

# ###########################################
# Run with plasticity
# ###########################################
eta = 1e-2          # Learning rate
run(simtime-1*second, report='text')

# ###########################################
# Make plots
# ###########################################

i, t = sm.it
subplot(211)
plot(t/ms, i, 'k.', ms=0.25)
title("Before")
xlabel("")
yticks([])
xlim(0.8*1e3, 1*1e3)
subplot(212)
plot(t/ms, i, 'k.', ms=0.25)
xlabel("time (ms)")
yticks([])
title("After")
xlim((simtime-0.2*second)/ms, simtime/ms)
show()
_images/frompapers.Vogels_et_al_2011.1.png

Example: Wang_Buszaki_1996

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Wang-Buzsaki model

Wang XJ, Buzsaki G (1996). Gamma oscillation by synaptic inhibition in a hippocampal interneuronal network model. J Neurosci 16(20):6402-13.

Note that implicit integration (exponential Euler) cannot be used, and therefore simulation is rather slow.

from brian2 import *

defaultclock.dt = 0.01*ms

Cm = 1*uF # /cm**2
Iapp = 2*uA
gL = 0.1*msiemens
EL = -65*mV
ENa = 55*mV
EK = -90*mV
gNa = 35*msiemens
gK = 9*msiemens

eqs = '''
dv/dt = (-gNa*m**3*h*(v-ENa)-gK*n**4*(v-EK)-gL*(v-EL)+Iapp)/Cm : volt
m = alpha_m/(alpha_m+beta_m) : 1
alpha_m = -0.1/mV*(v+35*mV)/(exp(-0.1/mV*(v+35*mV))-1)/ms : Hz
beta_m = 4*exp(-(v+60*mV)/(18*mV))/ms : Hz
dh/dt = 5*(alpha_h*(1-h)-beta_h*h) : 1
alpha_h = 0.07*exp(-(v+58*mV)/(20*mV))/ms : Hz
beta_h = 1./(exp(-0.1/mV*(v+28*mV))+1)/ms : Hz
dn/dt = 5*(alpha_n*(1-n)-beta_n*n) : 1
alpha_n = -0.01/mV*(v+34*mV)/(exp(-0.1/mV*(v+34*mV))-1)/ms : Hz
beta_n = 0.125*exp(-(v+44*mV)/(80*mV))/ms : Hz
'''

neuron = NeuronGroup(1, eqs, method='exponential_euler')
neuron.v = -70*mV
neuron.h = 1
M = StateMonitor(neuron, 'v', record=0)

run(100*ms, report='text')

plot(M.t/ms, M[0].v/mV)
show()
_images/frompapers.Wang_Buszaki_1996.1.png

frompapers/Brette_2012

Example: Fig1

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Brette R (2013). Sharpness of spike initiation in neurons explained by compartmentalization. PLoS Comp Biol, doi: 10.1371/journal.pcbi.1003338.

Fig 1C-E. Somatic voltage-clamp in a ball-and-stick model with Na channels at a particular location.

from brian2 import *
from params import *

defaultclock.dt = 0.025*ms

# Morphology
morpho = Soma(50*um)  # chosen for a target Rm
morpho.axon = Cylinder(diameter=1*um, length=300*um, n=300)

location = 40*um # where Na channels are placed
duration = 500*ms

# Channels
eqs='''
Im = gL*(EL - v) + gclamp*(vc - v) + gNa*m*(ENa - v) : amp/meter**2
dm/dt = (minf - m) / taum: 1  # simplified Na channel
minf = 1 / (1 + exp((va - v) / ka)) : 1
gclamp : siemens/meter**2
gNa : siemens/meter**2
vc = EL + 50*mV * t/duration : volt (shared)  # Voltage clamp with a ramping voltage command
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri)
compartment = morpho.axon[location]
neuron.v = EL
neuron.gclamp[0] = gL*500
neuron.gNa[compartment] = gNa_0/neuron.area[compartment]

# Monitors
mon = StateMonitor(neuron, ['v', 'vc', 'm'], record=True)

run(duration, report='text')

subplot(221)
plot(mon[0].vc/mV,
     -((mon[0].vc - mon[0].v)*(neuron.gclamp[0]))*neuron.area[0]/nA, 'k')
xlabel('V (mV)')
ylabel('I (nA)')
xlim(-75, -45)
title('I-V curve')

subplot(222)
plot(mon[0].vc/mV, mon[compartment].m, 'k')
xlabel('V (mV)')
ylabel('m')
title('Activation curve (m(V))')

subplot(223)
# Number of simulation time steps for each volt increment in the voltage-clamp
dt_per_volt = len(mon.t)/(50*mV)
for v in [-64*mV, -61*mV, -58*mV, -55*mV]:
    plot(mon.v[:100, int(dt_per_volt*(v - EL))]/mV, 'k')
xlabel('Distance from soma (um)')
ylabel('V (mV)')
title('Voltage across axon')

subplot(224)
plot(mon[compartment].v/mV, mon[compartment].v/mV, 'k--')  # Diagonal
plot(mon[0].v/mV, mon[compartment].v/mV, 'k')
xlabel('Vs (mV)')
ylabel('Va (mV)')
show()
_images/frompapers.Brette_2012.Fig1.1.png

Example: Fig3AB

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Brette R (2013). Sharpness of spike initiation in neurons explained by compartmentalization. PLoS Comp Biol, doi: 10.1371/journal.pcbi.1003338.

Fig. 3. A, B. Kink with only Nav1.6 channels

from brian2 import *
from params import *

prefs.codegen.target = 'numpy'

defaultclock.dt = 0.025*ms

# Morphology
morpho = Soma(50*um)  # chosen for a target Rm
morpho.axon = Cylinder(diameter=1*um, length=300*um, n=300)

location = 40*um  # where Na channels are placed

# Channels
eqs='''
Im = gL*(EL - v) + gNa*m*(ENa - v) : amp/meter**2
dm/dt = (minf - m) / taum : 1 # simplified Na channel
minf = 1 / (1 + exp((va - v) / ka)) : 1
gNa : siemens/meter**2
Iin : amp (point current)
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       method="exponential_euler")

compartment = morpho.axon[location]
neuron.v = EL
neuron.gNa[compartment] = gNa_0/neuron.area[compartment]
M = StateMonitor(neuron, ['v', 'm'], record=True)

run(20*ms, report='text')
neuron.Iin[0] = gL * 20*mV * neuron.area[0]
run(80*ms, report='text')

subplot(121)
plot(M.t/ms, M[0].v/mV, 'r')
plot(M.t/ms, M[compartment].v/mV, 'k')
plot(M.t/ms, M[compartment].m*(80+60)-80, 'k--')  # open channels
ylim(-80, 60)
xlabel('Time (ms)')
ylabel('V (mV)')
title('Voltage traces')

subplot(122)
dm = diff(M[0].v) / defaultclock.dt
dm40 = diff(M[compartment].v) / defaultclock.dt
plot((M[0].v/mV)[1:], dm/(volt/second), 'r')
plot((M[compartment].v/mV)[1:], dm40/(volt/second), 'k')
xlim(-80, 40)
xlabel('V (mV)')
ylabel('dV/dt (V/s)')
title('Phase plot')

show()
_images/frompapers.Brette_2012.Fig3AB.1.png

Example: Fig3CF

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Brette R (2013). Sharpness of spike initiation in neurons explained by compartmentalization. PLoS Comp Biol, doi: 10.1371/journal.pcbi.1003338.

Fig. 3C-F. Kink with Nav1.6 and Nav1.2

from brian2 import *
from params import *

defaultclock.dt = 0.01*ms

# Morphology
morpho = Soma(50*um) # chosen for a target Rm
morpho.axon = Cylinder(diameter=1*um, length=300*um, n=300)

location16 = 40*um  # where Nav1.6 channels are placed
location12 = 15*um  # where Nav1.2 channels are placed

va2 = va + 15*mV  # depolarized Nav1.2

# Channels
duration = 100*ms
eqs='''
Im = gL * (EL - v) + gNa*m*(ENa - v) + gNa2*m2*(ENa - v) : amp/meter**2
dm/dt = (minf - m) / taum : 1  # simplified Na channel
minf = 1 / (1 + exp((va - v) / ka)) : 1
dm2/dt = (minf2 - m2) / taum : 1 # simplified Na channel, Nav1.2
minf2 = 1/(1 + exp((va2 - v) / ka)) : 1
gNa : siemens/meter**2
gNa2 : siemens/meter**2  # Nav1.2
Iin : amp (point current)
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       method="exponential_euler")
compartment16 = morpho.axon[location16]
compartment12 = morpho.axon[location12]
neuron.v = EL
neuron.gNa[compartment16] = gNa_0/neuron.area[compartment16]
neuron.gNa2[compartment12] = 20*gNa_0/neuron.area[compartment12]
# Monitors
M = StateMonitor(neuron, ['v', 'm', 'm2'], record=True)

run(20*ms, report='text')
neuron.Iin[0] = gL * 20*mV * neuron.area[0]
run(80*ms, report='text')

subplot(221)
plot(M.t/ms, M[0].v/mV, 'r')
plot(M.t/ms, M[compartment16].v/mV, 'k')
plot(M.t/ms, M[compartment16].m*(80+60)-80, 'k--')  # open channels
ylim(-80, 60)
xlabel('Time (ms)')
ylabel('V (mV)')
title('Voltage traces')

subplot(222)
plot(M[0].v/mV, M[compartment16].m,'k')
plot(M[0].v/mV, 1 / (1 + exp((va - M[0].v) / ka)), 'k--')
plot(M[0].v/mV, M[compartment12].m2, 'r')
plot(M[0].v/mV, 1 / (1 + exp((va2 - M[0].v) / ka)), 'r--')
xlim(-70, 0)
xlabel('V (mV)')
ylabel('m')
title('Activation curves')

subplot(223)
dm = diff(M[0].v) / defaultclock.dt
dm40 = diff(M[compartment16].v) / defaultclock.dt
plot((M[0].v/mV)[1:], dm/(volt/second), 'r')
plot((M[compartment16].v/mV)[1:], dm40/(volt/second), 'k')
xlim(-80, 40)
xlabel('V (mV)')
ylabel('dV/dt (V/s)')
title('Phase plot')

subplot(224)
plot((M[0].v/mV)[1:], dm/(volt/second), 'r')
plot((M[compartment16].v/mV)[1:], dm40/(volt/second), 'k')
plot((M[0].v/mV)[1:], 10 + 0*dm/(volt/second), 'k--')
xlim(-70, -40)
ylim(0, 20)
xlabel('V (mV)')
ylabel('dV/dt (V/s)')
title('Phase plot(zoom)')

show()
_images/frompapers.Brette_2012.Fig3CF.1.png

Example: Fig4

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Brette R (2013). Sharpness of spike initiation in neurons explained by compartmentalization. PLoS Comp Biol, doi: 10.1371/journal.pcbi.1003338.

Fig. 4E-F. Spatial distribution of Na channels. Tapering axon near soma.

from brian2 import *
from params import *

defaultclock.dt = 0.025*ms

# Morphology
morpho = Soma(50*um) # chosen for a target Rm
# Tapering (change this for the other figure panels)
diameters = hstack([linspace(4, 1, 11), ones(290)])*um
morpho.axon = Section(diameter=diameters, length=ones(300)*um, n=300)

# Na channels
Na_start = (25 + 10)*um
Na_end = (40 + 10)*um
linear_distribution = True  # True is F, False is E

duration = 500*ms

# Channels
eqs='''
Im = gL*(EL - v) + gclamp*(vc - v) + gNa*m*(ENa - v) : amp/meter**2
dm/dt = (minf - m) / taum: 1  # simplified Na channel
minf = 1 / (1 + exp((va - v) / ka)) : 1
gclamp : siemens/meter**2
gNa : siemens/meter**2
vc = EL + 50*mV * t / duration : volt (shared)  # Voltage clamp with a ramping voltage command
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       method="exponential_euler")
compartments = morpho.axon[Na_start:Na_end]
neuron.v = EL
neuron.gclamp[0] = gL*500

if linear_distribution:
    profile = linspace(1, 0, len(compartments))
else:
    profile = ones(len(compartments))
profile = profile / sum(profile)  # normalization

neuron.gNa[compartments] = gNa_0 * profile / neuron.area[compartments]

# Monitors
mon = StateMonitor(neuron, 'v', record=True)

run(duration, report='text')

dt_per_volt = len(mon.t) / (50*mV)
for v in [-64*mV, -61*mV, -58*mV, -55*mV, -52*mV]:
    plot(mon.v[:100, int(dt_per_volt * (v - EL))]/mV, 'k')
xlim(0, 50+10)
ylim(-65, -25)
ylabel('V (mV)')
xlabel('Location (um)')
title('Voltage across axon')
show()
_images/frompapers.Brette_2012.Fig4.1.png

Example: Fig5A

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Brette R (2013). Sharpness of spike initiation in neurons explained by compartmentalization. PLoS Comp Biol, doi: 10.1371/journal.pcbi.1003338.

Fig. 5A. Voltage trace for current injection, with an additional reset when a spike is produced.

Trick: to reset the entire neuron, we use a set of synapses from the spike initiation compartment (where the threshold condition applies) to all compartments, and the reset operation (v = EL) is applied at every compartment each time a spike is produced.

from brian2 import *
from params import *

defaultclock.dt = 0.025*ms
duration = 500*ms

# Morphology
morpho = Soma(50*um)  # chosen for a target Rm
morpho.axon = Cylinder(diameter=1*um, length=300*um, n=300)

# Input
taux = 5*ms
sigmax = 12*mV
xx0 = 7*mV

compartment = 40

# Channels
eqs = '''
Im = gL * (EL - v) + gNa * m * (ENa - v) + gLx * (xx0 + xx) : amp/meter**2
dm/dt = (minf - m) / taum : 1  # simplified Na channel
minf = 1 / (1 + exp((va - v) / ka)) : 1
gNa : siemens/meter**2
gLx : siemens/meter**2
dxx/dt = -xx / taux + sigmax * (2 / taux)**.5 *xi : volt
'''

neuron = SpatialNeuron(morphology=morpho, model=eqs, Cm=Cm, Ri=Ri,
                       threshold='m>0.5', threshold_location=compartment,
                       refractory=5*ms)
neuron.v = EL
neuron.gLx[0] = gL
neuron.gNa[compartment] = gNa_0 / neuron.area[compartment]

# Reset the entire neuron when there is a spike
reset = Synapses(neuron, neuron, on_pre='v = EL')
reset.connect('i == compartment')  # Connects the spike initiation compartment to all compartments

# Monitors
S = SpikeMonitor(neuron)
M = StateMonitor(neuron, 'v', record=0)
run(duration, report='text')

# Add spikes for display
v = M[0].v
for t in S.t:
    v[int(t / defaultclock.dt)] = 50*mV

plot(M.t/ms, v/mV, 'k')
show()
_images/frompapers.Brette_2012.Fig5A.1.png

Example: params

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Parameters for spike initiation simulations.

from brian2.units import *

# Passive parameters
EL = -75*mV
S = 7.85e-9*meter**2  # area (sphere of 50 um diameter)
Cm = 0.75*uF/cm**2
gL = 1. / (30000*ohm*cm**2)
Ri = 150*ohm*cm

# Na channels
ENa = 60*mV
ka = 6*mV
va = -40*mV
gNa_0 = gL * 2*S
taum = 0.1*ms

standalone

Example: STDP_standalone

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Spike-timing dependent plasticity. Adapted from Song, Miller and Abbott (2000) and Song and Abbott (2001).

This example is modified from synapses_STDP.py and writes a standalone C++ project in the directory STDP_standalone.

from brian2 import *

set_device('cpp_standalone', directory='STDP_standalone')

N = 1000
taum = 10*ms
taupre = 20*ms
taupost = taupre
Ee = 0*mV
vt = -54*mV
vr = -60*mV
El = -74*mV
taue = 5*ms
F = 15*Hz
gmax = .01
dApre = .01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax

eqs_neurons = '''
dv/dt = (ge * (Ee-vr) + El - v) / taum : volt
dge/dt = -ge / taue : 1
'''

input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, eqs_neurons, threshold='v>vt', reset='v = vr',
                      method='linear')
S = Synapses(input, neurons,
             '''w : 1
                dApre/dt = -Apre / taupre : 1 (event-driven)
                dApost/dt = -Apost / taupost : 1 (event-driven)''',
             on_pre='''ge += w
                    Apre += dApre
                    w = clip(w + Apost, 0, gmax)''',
             on_post='''Apost += dApost
                     w = clip(w + Apre, 0, gmax)''',
             )
S.connect()
S.w = 'rand() * gmax'
mon = StateMonitor(S, 'w', record=[0, 1])
s_mon = SpikeMonitor(input)

run(100*second, report='text')

subplot(311)
plot(S.w / gmax, '.k')
ylabel('Weight / gmax')
xlabel('Synapse index')
subplot(312)
hist(S.w / gmax, 20)
xlabel('Weight / gmax')
subplot(313)
plot(mon.t/second, mon.w.T/gmax)
xlabel('Time (s)')
ylabel('Weight / gmax')
tight_layout()
show()
_images/standalone.STDP_standalone.1.png

Example: cuba_openmp

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Run the cuba.py example with OpenMP threads.

from brian2 import *

set_device('cpp_standalone', directory='CUBA')
prefs.devices.cpp_standalone.openmp_threads = 4

taum = 20*ms
taue = 5*ms
taui = 10*ms
Vt = -50*mV
Vr = -60*mV
El = -49*mV

eqs = '''
dv/dt  = (ge+gi-(v-El))/taum : volt (unless refractory)
dge/dt = -ge/taue : volt (unless refractory)
dgi/dt = -gi/taui : volt (unless refractory)
'''

P = NeuronGroup(4000, eqs, threshold='v>Vt', reset='v = Vr', refractory=5*ms,
                method='linear')
P.v = 'Vr + rand() * (Vt - Vr)'
P.ge = 0*mV
P.gi = 0*mV

we = (60*0.27/10)*mV # excitatory synaptic weight (voltage)
wi = (-20*4.5/10)*mV # inhibitory synaptic weight
Ce = Synapses(P, P, on_pre='ge += we')
Ci = Synapses(P, P, on_pre='gi += wi')
Ce.connect('i<3200', p=0.02)
Ci.connect('i>=3200', p=0.02)

s_mon = SpikeMonitor(P)

run(1 * second)

plot(s_mon.t/ms, s_mon.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
show()
_images/standalone.cuba_openmp.1.png

synapses

Example: STDP

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Spike-timing dependent plasticity. Adapted from Song, Miller and Abbott (2000) and Song and Abbott (2001).

from brian2 import *

N = 1000
taum = 10*ms
taupre = 20*ms
taupost = taupre
Ee = 0*mV
vt = -54*mV
vr = -60*mV
El = -74*mV
taue = 5*ms
F = 15*Hz
gmax = .01
dApre = .01
dApost = -dApre * taupre / taupost * 1.05
dApost *= gmax
dApre *= gmax

eqs_neurons = '''
dv/dt = (ge * (Ee-vr) + El - v) / taum : volt
dge/dt = -ge / taue : 1
'''

input = PoissonGroup(N, rates=F)
neurons = NeuronGroup(1, eqs_neurons, threshold='v>vt', reset='v = vr',
                      method='linear')
S = Synapses(input, neurons,
             '''w : 1
                dApre/dt = -Apre / taupre : 1 (event-driven)
                dApost/dt = -Apost / taupost : 1 (event-driven)''',
             on_pre='''ge += w
                    Apre += dApre
                    w = clip(w + Apost, 0, gmax)''',
             on_post='''Apost += dApost
                     w = clip(w + Apre, 0, gmax)''',
             )
S.connect()
S.w = 'rand() * gmax'
mon = StateMonitor(S, 'w', record=[0, 1])
s_mon = SpikeMonitor(input)

run(100*second, report='text')

subplot(311)
plot(S.w / gmax, '.k')
ylabel('Weight / gmax')
xlabel('Synapse index')
subplot(312)
hist(S.w / gmax, 20)
xlabel('Weight / gmax')
subplot(313)
plot(mon.t/second, mon.w.T/gmax)
xlabel('Time (s)')
ylabel('Weight / gmax')
tight_layout()
show()
_images/synapses.STDP.1.png

Example: efficient_gaussian_connectivity

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

An example of turning an expensive Synapses.connect() operation into three cheap ones using a mathematical trick.

Consider the connection probability between neurons i and j given by the Gaussian function \(p=e^{-\alpha(i-j)^2}\) (for some constant \(\alpha\)). If we want to connect neurons with this probability, we can very simply do:

S.connect(p='exp(-alpha*(i-j)**2)')

However, this has a problem. Although we know that this will create \(O(N)\) synapses if N is the number of neurons, because we have specified p as a function of i and j, we have to evaluate p(i, j) for every pair (i, j), and therefore it takes \(O(N^2)\) operations.

Our first option is to take a cutoff, and say that if \(p<q\) for some small \(q\), then we assume that \(p\approx 0\). We can work out which j values are compatible with a given value of i by solving \(e^{-\alpha(i-j)^2}<q\), which gives \(|i-j|<\sqrt{-\log(q)/\alpha}=w\). Now we implement the rule using the generator syntax to only search for values between i-w and i+w, except that some of these values will be outside the valid range of values for j, so we set skip_if_invalid=True. The connection code is then:

S.connect(j='k for k in range(i-w, i+w) if rand()<exp(-alpha*(i-j)**2)',
          skip_if_invalid=True)

This is a lot faster (see graph labelled “Limited” for this algorithm).

However, it may be a problem that we have to specify a cutoff and so we will lose some synapses doing this: it won’t be mathematically exact. This isn’t a problem for the Gaussian because w grows very slowly with the cutoff probability q, but for other probability distributions with more weight in the tails, it could be an issue.

If we want to be exact, we can still make a big improvement. For the case \(i-w\leq j\leq i+w\) we use the same connection code, but we also handle the case \(|i-j|>w\). This time, we note that we want to create a synapse with probability \(p(i-j)\) and we can rewrite this as \(p(i-j)/p(w)\cdot p(w)\). If \(|i-j|>w\) then this is a product of two probabilities \(p(i-j)/p(w)\) and \(p(w)\), so in the region \(|i-j|>w\) a synapse will be created if two random events both occur, with these two probabilities. This might seem a little strange until you notice that one of the two probabilities, \(p(w)\), doesn't depend on i or j. This lets us use the much more efficient sample algorithm to generate a set of candidate j values, and then add the additional test rand()<p(i-j)/p(w). Here's the code for that:

w = int(ceil(sqrt(log(q)/-alpha)))
S.connect(j='k for k in range(i-w, i+w) if rand()<exp(-alpha*(i-j)**2)',
          skip_if_invalid=True)
pmax = exp(-alpha*w**2)
S.connect(j='k for k in sample(0, i-w, p=pmax) if rand()<exp(-alpha*(i-j)**2)/pmax',
          skip_if_invalid=True)
S.connect(j='k for k in sample(i+w, N_post, p=pmax) if rand()<exp(-alpha*(i-j)**2)/pmax',
          skip_if_invalid=True)

This “Divided” method is also much faster than the naive method, and is mathematically correct. Note though that this method is still \(O(N^2)\) but the constants are much, much smaller and this will usually be sufficient. It is possible to take the ideas developed here even further and get even better scaling, but in most cases it’s unlikely to be worth the effort.

The code below shows these examples written out, along with some timing code and plots for different values of N.

from brian2 import *
import time

def naive(N):
    G = NeuronGroup(N, 'v:1', threshold='v>1', name='G')
    S = Synapses(G, G, on_pre='v += 1', name='S')
    S.connect(p='exp(-0.1*(i-j)**2)')

def limited(N, q=0.001):
    G = NeuronGroup(N, 'v:1', threshold='v>1', name='G')
    S = Synapses(G, G, on_pre='v += 1', name='S')
    w = int(ceil(sqrt(log(q)/-0.1)))
    S.connect(j='k for k in range(i-w, i+w) if rand()<exp(-0.1*(i-j)**2)', skip_if_invalid=True)

def divided(N, q=0.001):
    G = NeuronGroup(N, 'v:1', threshold='v>1', name='G')
    S = Synapses(G, G, on_pre='v += 1', name='S')
    w = int(ceil(sqrt(log(q)/-0.1)))
    S.connect(j='k for k in range(i-w, i+w) if rand()<exp(-0.1*(i-j)**2)', skip_if_invalid=True)
    pmax = exp(-0.1*w**2)
    S.connect(j='k for k in sample(0, i-w, p=pmax) if rand()<exp(-0.1*(i-j)**2)/pmax', skip_if_invalid=True)
    S.connect(j='k for k in sample(i+w, N_post, p=pmax) if rand()<exp(-0.1*(i-j)**2)/pmax', skip_if_invalid=True)

def repeated_run(f, N, repeats):
    start_time = time.time()
    for _ in range(repeats):
        f(N)
    end_time = time.time()
    return (end_time-start_time)/repeats

N = array([100, 500, 1000, 5000, 10000, 20000])
repeats = array([100, 10, 10, 1, 1, 1])*3
naive(10)
limited(10)
divided(10)
print('Starting naive')
loglog(N, [repeated_run(naive, n, r) for n, r in zip(N, repeats)],
       label='Naive', lw=2)
print('Starting limit')
loglog(N, [repeated_run(limited, n, r) for n, r in zip(N, repeats)],
       label='Limited', lw=2)
print('Starting divided')
loglog(N, [repeated_run(divided, n, r) for n, r in zip(N, repeats)],
       label='Divided', lw=2)
xlabel('N')
ylabel('Time (s)')
legend(loc='best', frameon=False)
show()
_images/synapses.efficient_gaussian_connectivity.1.png

Example: gapjunctions

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Neurons with gap junctions.

from brian2 import *

n = 10
v0 = 1.05
tau = 10*ms

eqs = '''
dv/dt = (v0 - v + Igap) / tau : 1
Igap : 1 # gap junction current
'''

neurons = NeuronGroup(n, eqs, threshold='v > 1', reset='v = 0',
                      method='linear')
neurons.v = 'i * 1.0 / (n-1)'
trace = StateMonitor(neurons, 'v', record=[0, 5])

S = Synapses(neurons, neurons, '''
             w : 1 # gap junction conductance
             Igap_post = w * (v_pre - v_post) : 1 (summed)
             ''')
S.connect()
S.w = .02

run(500*ms)

plot(trace.t/ms, trace[0].v)
plot(trace.t/ms, trace[5].v)
xlabel('Time (ms)')
ylabel('v')
show()
_images/synapses.gapjunctions.1.png

Example: jeffress

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Jeffress model, adapted with spiking neuron models. A sound source (white noise) is moving around the head. Delay differences between the two ears are used to determine the azimuth of the source. Delays are mapped to a neural place code using delay lines (each neuron receives input from both ears, with different delays).

from brian2 import *

defaultclock.dt = .02*ms

# Sound
sound = TimedArray(10 * randn(50000), dt=defaultclock.dt) # white noise

# Ears and sound motion around the head (constant angular speed)
sound_speed = 300*metre/second
interaural_distance = 20*cm # big head!
max_delay = interaural_distance / sound_speed
print("Maximum interaural delay: %s" % max_delay)
angular_speed = 2 * pi / second # 1 turn/second
tau_ear = 1*ms
sigma_ear = .1
eqs_ears = '''
dx/dt = (sound(t-delay)-x)/tau_ear+sigma_ear*(2./tau_ear)**.5*xi : 1 (unless refractory)
delay = distance*sin(theta) : second
distance : second # distance to the centre of the head in time units
dtheta/dt = angular_speed : radian
'''
ears = NeuronGroup(2, eqs_ears, threshold='x>1', reset='x = 0',
                   refractory=2.5*ms, name='ears', method='euler')
ears.distance = [-.5 * max_delay, .5 * max_delay]
traces = StateMonitor(ears, 'delay', record=True)
# Coincidence detectors
num_neurons = 30
tau = 1*ms
sigma = .1
eqs_neurons = '''
dv/dt = -v / tau + sigma * (2 / tau)**.5 * xi : 1
'''
neurons = NeuronGroup(num_neurons, eqs_neurons, threshold='v>1',
                      reset='v = 0', name='neurons', method='euler')

synapses = Synapses(ears, neurons, on_pre='v += .5')
synapses.connect()

synapses.delay['i==0'] = '(1.0*j)/(num_neurons-1)*1.1*max_delay'
synapses.delay['i==1'] = '(1.0*(num_neurons-j-1))/(num_neurons-1)*1.1*max_delay'

spikes = SpikeMonitor(neurons)

run(1000*ms)

# Plot the results
i, t = spikes.it
subplot(2, 1, 1)
plot(t/ms, i, '.')
xlabel('Time (ms)')
ylabel('Neuron index')
xlim(0, 1000)
subplot(2, 1, 2)
plot(traces.t/ms, traces.delay.T/ms)
xlabel('Time (ms)')
ylabel('Input delay (ms)')
xlim(0, 1000)
show()
_images/synapses.jeffress.1.png

Example: licklider

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Spike-based adaptation of Licklider’s model of pitch processing (autocorrelation with delay lines) with phase locking.

from brian2 import *

defaultclock.dt = .02 * ms

# Ear and sound
max_delay = 20*ms # 50 Hz
tau_ear = 1*ms
sigma_ear = 0.0
eqs_ear = '''
dx/dt = (sound-x)/tau_ear+0.1*(2./tau_ear)**.5*xi : 1 (unless refractory)
sound = 5*sin(2*pi*frequency*t)**3 : 1 # nonlinear distortion
#sound = 5*(sin(4*pi*frequency*t)+.5*sin(6*pi*frequency*t)) : 1 # missing fundamental
frequency = (200+200*t*Hz)*Hz : Hz # increasing pitch
'''
receptors = NeuronGroup(2, eqs_ear, threshold='x>1', reset='x=0',
                        refractory=2*ms, method='euler')
# Coincidence detectors
min_freq = 50*Hz
max_freq = 1000*Hz
num_neurons = 300
tau = 1*ms
sigma = .1
eqs_neurons = '''
dv/dt = -v/tau+sigma*(2./tau)**.5*xi : 1
'''

neurons = NeuronGroup(num_neurons, eqs_neurons, threshold='v>1', reset='v=0',
                      method='euler')

synapses = Synapses(receptors, neurons, on_pre='v += 0.5')
synapses.connect()
synapses.delay = 'i*1.0/exp(log(min_freq/Hz)+(j*1.0/(num_neurons-1))*log(max_freq/min_freq))*second'

spikes = SpikeMonitor(neurons)

run(500*ms)
plot(spikes.t/ms, spikes.i, '.k')
xlabel('Time (ms)')
ylabel('Frequency')
yticks([0, 99, 199, 299],
       array(1. / synapses.delay[1, [0, 99, 199, 299]], dtype=int))
show()
_images/synapses.licklider.1.png

Example: nonlinear

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

NMDA synapses.

from brian2 import *

a = 1 / (10*ms)
b = 1 / (10*ms)
c = 1 / (10*ms)

input = NeuronGroup(2, 'dv/dt = 1/(10*ms) : 1', threshold='v>1', reset='v = 0',
                    method='euler')
neurons = NeuronGroup(1, """dv/dt = (g-v)/(10*ms) : 1
                            g : 1""", method='linear')
S = Synapses(input, neurons,'''
                dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                g_post = g_syn : 1 (summed)
                dx/dt=-c*x : 1 (clock-driven)
                w : 1 # synaptic weight
             ''', on_pre='x += w') # NMDA synapses

S.connect()
S.w = [1., 10.]
input.v = [0., 0.5]

M = StateMonitor(S, 'g',
                 # If not using standalone mode, this could also simply be
                 # record=True
                 record=np.arange(len(input)*len(neurons)))
Mn = StateMonitor(neurons, 'g', record=0)

run(1000*ms)

subplot(2, 1, 1)
plot(M.t/ms, M.g.T)
xlabel('Time (ms)')
ylabel('g_syn')
subplot(2, 1, 2)
plot(Mn.t/ms, Mn[0].g)
xlabel('Time (ms)')
ylabel('g')
show()
_images/synapses.nonlinear.1.png

Example: spatial_connections

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

A simple example showing how string expressions can be used to implement spatial (deterministic or stochastic) connection patterns.

from brian2 import *

rows, cols = 20, 20
G = NeuronGroup(rows * cols, '''x : meter
                                y : meter''')
# initialize the grid positions
grid_dist = 25*umeter
G.x = '(i / rows) * grid_dist - rows/2.0 * grid_dist'
G.y = '(i % rows) * grid_dist - cols/2.0 * grid_dist'

# Deterministic connections
distance = 120*umeter
S_deterministic = Synapses(G, G)
S_deterministic.connect('sqrt((x_pre - x_post)**2 + (y_pre - y_post)**2) < distance')

# Random connections (no self-connections)
S_stochastic = Synapses(G, G)
S_stochastic.connect('i != j',
                     p='1.5 * exp(-((x_pre-x_post)**2 + (y_pre-y_post)**2)/(2*(60*umeter)**2))')

figure(figsize=(12, 6))

# Show the connections for some neurons in different colors
for color in ['g', 'b', 'm']:
    subplot(1, 2, 1)
    neuron_idx = np.random.randint(0, rows*cols)
    plot(G.x[neuron_idx] / umeter, G.y[neuron_idx] / umeter, 'o', mec=color,
             mfc='none')
    plot(G.x[S_deterministic.j[neuron_idx, :]] / umeter,
             G.y[S_deterministic.j[neuron_idx, :]] / umeter, color + '.')
    subplot(1, 2, 2)
    plot(G.x[neuron_idx] / umeter, G.y[neuron_idx] / umeter, 'o', mec=color,
             mfc='none')
    plot(G.x[S_stochastic.j[neuron_idx, :]] / umeter,
             G.y[S_stochastic.j[neuron_idx, :]] / umeter, color + '.')

for idx, t in enumerate(['deterministic connections',
                         'random connections']):
    subplot(1, 2, idx + 1)
    xlim((-rows/2.0 * grid_dist) / umeter, (rows/2.0 * grid_dist) / umeter)
    ylim((-cols/2.0 * grid_dist) / umeter, (cols/2.0 * grid_dist) / umeter)
    title(t)
    xlabel('x')
    ylabel('y', rotation='horizontal')
    axis('equal')

tight_layout()
show()
_images/synapses.spatial_connections.1.png

Example: state_variables

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

Set state variable values with a string (using code generation).

from brian2 import *

G = NeuronGroup(100, 'v:volt', threshold='v>-50*mV')
G.v = '(sin(2*pi*i/N) - 70 + 0.25*randn()) * mV'
S = Synapses(G, G, 'w : volt', on_pre='v += w')
S.connect()

space_constant = 200.0
S.w['i > j'] = 'exp(-(i - j)**2/space_constant) * mV'

# Generate a matrix for display
w_matrix = np.zeros((len(G), len(G)))
w_matrix[S.i[:], S.j[:]] = S.w[:]

subplot(1, 2, 1)
plot(G.v[:] / mV)
xlabel('Neuron index')
ylabel('v')
subplot(1, 2, 2)
imshow(w_matrix)
xlabel('i')
ylabel('j')
title('Synaptic weight')
show()
_images/synapses.state_variables.1.png

Example: synapses

Note

You can launch an interactive, editable version of this example without installing any local files using the Binder service (although note that at some times this may be slow or fail to open): launchbinder

A simple example of using Synapses.

from brian2 import *

G1 = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1',
                 threshold='v > 1', reset='v=0.', method='linear')
G1.v = 1.2
G2 = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1',
                 threshold='v > 1', reset='v=0', method='linear')

syn = Synapses(G1, G2, 'dw/dt = -w / (50*ms): 1 (event-driven)', on_pre='v += w')

syn.connect('i == j', p=0.75)

# Set the delays
syn.delay = '1*ms + i*ms + 0.25*ms * randn()'
# Set the initial values of the synaptic variable
syn.w = 1

mon = StateMonitor(G2, 'v', record=True)
run(20*ms)
plot(mon.t/ms, mon.v.T)
xlabel('Time (ms)')
ylabel('v')
show()
_images/synapses.synapses.1.png

brian2 package

Brian 2.0

hears module

This is only a bridge for using Brian 1 hears with Brian 2.

NOTES:

  • Slicing sounds with Brian 2 units doesn't work; you need to either use Brian 1 units or replace calls to sound[:20*ms] with sound.slice(None, 20*ms), etc.

TODO: handle properties (e.g. sound.duration)

Not working examples:

  • time_varying_filter1 (care with units)

Exported members: convert_unit_b1_to_b2, convert_unit_b2_to_b1

Classes

BridgeSound We add a new method slice because slicing with Brian 2 units does not work.
FilterbankGroup(filterbank, targetvar, ...)

Methods

Sound alias of BridgeSound
WrappedSound alias of new_class

Functions

convert_unit_b1_to_b2(val)
convert_unit_b2_to_b1(val)
modify_arg(arg) Modify arguments to make them compatible with Brian 1.
wrap_units(f) Wrap a function to convert units into a form that Brian 1 can handle.
wrap_units_class(_C) Wrap a class to convert units into a form that Brian 1 can handle in all methods
wrap_units_property(p)

numpy_ module

A dummy package to allow importing numpy and the unit-aware replacements of numpy functions without having to know which functions are overwritten.

This can be used for example as import brian2.numpy_ as np

Exported members: add_newdocs, ModuleDeprecationWarning, __version__, pkgload(), PackageLoader, show_config(), char, rec, memmap, newaxis, ndarray, flatiter, nditer, nested_iters, ufunc, arange(), array, zeros, count_nonzero, empty, broadcast, dtype, fromstring, fromfile, frombuffer ... (592 more members)

only module

A dummy package to allow wildcard import from brian2 without also importing the pylab (numpy + matplotlib) namespace.

Usage: from brian2.only import *

Functions

restore_initial_state() Restores internal Brian variables to the state they are in when Brian is imported

Subpackages

codegen package

Package providing the code generation framework.

_prefs module

Module declaring general code generation preferences.

Preferences

Code generation preferences

codegen.loop_invariant_optimisations = True

Whether to pull scalar expressions out of the statements, so that they are only evaluated once instead of once for every neuron/synapse/... Can be switched off, e.g. because it complicates the code (and the same optimisation is already performed by the compiler) or because the code generation target does not deal well with it. Defaults to True.

codegen.string_expression_target = 'numpy'

Default target for the evaluation of string expressions (e.g. when indexing state variables). Should normally not be changed from the default numpy target, because the overhead of compiling code is not worth the speed gain for simple expressions.

Accepts the same arguments as codegen.target, except for 'auto'

codegen.target = 'auto'

Default target for code generation.

Can be a string, in which case it should be one of:

  • 'auto', the default: automatically chooses the best code generation target available.
  • 'weave' uses scipy.weave to generate and compile C++ code, should work anywhere where gcc is installed and available at the command line.
  • 'cython', uses the Cython package to generate C++ code. Needs a working installation of Cython and a C++ compiler.
  • 'numpy' works on all platforms and doesn’t need a C compiler but is often less efficient.

Or it can be a CodeObject class.
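
For example, to select the Cython runtime target explicitly (a minimal sketch):

from brian2 import prefs
prefs.codegen.target = 'cython'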

codeobject module

Module providing the base CodeObject and related functions.

Exported members: CodeObject, CodeObjectUpdater, constant_or_scalar

Classes

CodeObject(owner, code, variables, ...[, name]) Executable code object.

Functions

constant_or_scalar(varname, variable) Convenience function to generate code to access the value of a variable.
create_runner_codeobj(group, code, ...[, ...]) Create a CodeObject for the execution of code in the context of a Group.
cpp_prefs module

Preferences related to C++ compilation

Preferences

C++ compilation preferences

codegen.cpp.compiler = ''

Compiler to use (uses default if empty)

Should be gcc or msvc.

codegen.cpp.define_macros = []

List of macros to define; each macro is defined using a 2-tuple (name, value), where value is either the string to define it to or None to define it without a particular value (equivalent of “#define FOO” in source or -DFOO on Unix C compiler command line).

codegen.cpp.extra_compile_args = None

Extra arguments to pass to compiler (if None, use either extra_compile_args_gcc or extra_compile_args_msvc).

codegen.cpp.extra_compile_args_gcc = ['-w', '-O3', '-ffast-math', '-fno-finite-math-only', '-march=native']

Extra compile arguments to pass to GCC compiler

codegen.cpp.extra_compile_args_msvc = ['/Ox', '/w', '/arch:SSE2']

Extra compile arguments to pass to MSVC compiler (the default /arch: flag is determined based on the processor architecture)

codegen.cpp.extra_link_args = []

Any extra platform- and compiler-specific information to use when linking object files together.

codegen.cpp.headers = []

A list of strings specifying header files to use when compiling the code. The list might look like ["<vector>", "'my_header'"]. Note that the header strings need to be in a form that can be pasted at the end of a #include statement in the C++ code.

codegen.cpp.include_dirs = []

Include directories to use. Note that $prefix/include will be appended to the end automatically, where $prefix is Python’s site-specific directory prefix as returned by sys.prefix.

codegen.cpp.libraries = []

List of library names (not filenames or paths) to link against.

codegen.cpp.library_dirs = []

List of directories to search for C/C++ libraries at link time. Note that $prefix/lib will be appended to the end automatically, where $prefix is Python’s site-specific directory prefix as returned by sys.prefix.

codegen.cpp.msvc_architecture = ''

MSVC architecture name (or use the system architecture by default).

Could take values such as x86, amd64, etc.

codegen.cpp.msvc_vars_location = ''

Location of the MSVC command line tool (or search for best by default).

codegen.cpp.runtime_library_dirs = []

List of directories to search for C/C++ libraries at run time.
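
These preferences are set like any other Brian preference; for instance (a minimal sketch, the flag and header values are arbitrary):

from brian2 import prefs
prefs.codegen.cpp.extra_compile_args_gcc = ['-w', '-O2']  # override the default GCC flags
prefs.codegen.cpp.headers = ['<cmath>']  # extra headers for the generated code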

Exported members: get_compiler_and_args

Functions

get_compiler_and_args() Returns the computed compiler and compilation flags
update_for_cross_compilation(library_dirs, ...) Update the compiler arguments to allow cross-compilation for 32bit on a 64bit Linux system.
optimisation module

Simplify and optimise sequences of statements by rewriting and pulling out loop invariants.

Exported members: optimise_statements, ArithmeticSimplifier, Simplifier

Classes

ArithmeticSimplifier(variables) Carries out the following arithmetic simplifications:
Simplifier(variables, scalar_statements[, ...]) Carry out arithmetic simplifications (see ArithmeticSimplifier) and loop invariants

Functions

cancel_identical_terms(primary, inverted) Cancel terms in a collection, e.g.
collect(node) Attempts to collect commutative operations into one and simplifies them.
collect_commutative(node, primary, inverted, ...)
evaluate_expr(expr, ns) Try to evaluate the expression in the given namespace
expression_complexity(expr, variables)
optimise_statements(scalar_statements, ...) Optimise a sequence of scalar and vector statements
reduced_node(terms, op) Reduce a sequence of terms with the given operator
permutation_analysis module

Module for analysing synaptic pre and post code for synapse order independence.

Exported members: OrderDependenceError, check_for_order_independence

Classes

OrderDependenceError

Functions

check_for_order_independence(statements, ...) Check that the sequence of statements doesn’t depend on the order in which the indices are iterated through.
statements module

Module providing the Statement class.

Classes

Statement(var, op, expr, comment, dtype[, ...]) A single line mathematical statement.
targets module

Module that stores all known code generation targets as codegen_targets.

Exported members: codegen_targets

templates module

Handles loading templates from a directory.

Exported members: Templater

Classes

CodeObjectTemplate(template, template_source) Single template object returned by Templater and used for final code generation
LazyTemplateLoader(environment, extension) Helper object to load templates only when they are needed.
MultiTemplate(module) Code generated by a CodeObjectTemplate with multiple blocks
Templater(package_name, extension[, env_globals]) Class to load and return all the templates a CodeObject defines.

Functions

autoindent(code)
autoindent_postfilter(code)
translation module

This module translates a series of statements into a language-specific syntactically correct code block that can be inserted into a template.

It infers whether or not a variable can be declared as constant, etc. It should handle common subexpressions, and so forth.

The input information needed:

  • The sequence of statements (a multiline string) in standard mathematical form
  • The list of known variables, common subexpressions and functions, and for each variable whether or not it is a value or an array, and if an array what the dtype is.
  • The dtype to use for newly created variables
  • The language to translate to

Exported members: make_statements(), analyse_identifiers(), get_identifiers_recursively()

Classes

LineInfo(\*\*kwds) A helper class, just used to store attributes.

Functions

analyse_identifiers(code, variables[, recursive]) Analyses a code string (sequence of statements) to find all identifiers by type.
get_identifiers_recursively(expressions, ...) Gets all the identifiers in a list of expressions, recursing down into subexpressions.
is_scalar_expression(expr, variables) Whether the given expression is scalar.
make_statements(code, variables, dtype[, ...]) Turn a series of abstract code statements into Statement objects, inferring whether each line is a set/declare operation, whether the variables are constant or not, and handling the cacheing of subexpressions.
Subpackages
generators package
base module

Base class for generating code in different programming languages, gives the methods which should be overridden to implement a new language.

Exported members: CodeGenerator

Classes

CodeGenerator(variables, variable_indices, ...) Base class for all languages.
cpp_generator module

Exported members: CPPCodeGenerator, c_data_type()

Classes

CPPCodeGenerator(\*args, \*\*kwds) C++ language

Functions

c_data_type(dtype) Gives the C language specifier for numpy data types.
cython_generator module

Exported members: CythonCodeGenerator

Classes

CythonCodeGenerator(variables, ...[, ...]) Cython code generator
CythonNodeRenderer([use_vectorisation_idx])

Methods

Functions

get_cpp_dtype(obj)
get_numpy_dtype(obj)
numpy_generator module

Exported members: NumpyCodeGenerator

Classes

NumpyCodeGenerator(variables, ...[, ...]) Numpy language
VectorisationError

Functions

ceil_func(value)
clip_func(array, a_min, a_max)
floor_func(value)
int_func(value)
rand_func(vectorisation_idx)
randn_func(vectorisation_idx)
runtime package

Runtime targets for code generation.

Subpackages
cython_rt package
cython_rt module

Exported members: CythonCodeObject

Classes

CythonCodeObject(owner, code, variables, ...) Execute code using Cython.
extension_manager module

Cython automatic extension builder/manager

Inspired by IPython’s Cython cell magics, see: https://github.com/ipython/ipython/blob/master/IPython/extensions/cythonmagic.py

Exported members: cython_extension_manager

Classes

CythonExtensionManager()

Attributes

Functions

simplify_path_env_var(path)

Objects

cython_extension_manager
numpy_rt package

Numpy runtime implementation.

Preferences

Numpy runtime codegen preferences

codegen.runtime.numpy.discard_units = False

Whether to change the namespace of user-specified functions to remove units.
numpy_rt module

Module providing NumpyCodeObject.

Exported members: NumpyCodeObject

Classes

NumpyCodeObject(owner, code, variables, ...) Execute code using Numpy
synapse_vectorisation module

Module for efficient vectorisation of synapses code

Exported members: vectorise_synapses_code, SynapseVectorisationError

Classes

SynapseVectorisationError

Functions

ufunc_at_vectorisation(statements, ...)
weave_rt package

Runtime C++ code generation via weave.

weave_rt module

Module providing WeaveCodeObject.

Exported members: WeaveCodeObject, WeaveCodeGenerator

Classes

WeaveCodeGenerator(\*args, \*\*kwds)
WeaveCodeObject(owner, code, variables, ...) Weave code object

Functions

weave_data_type(dtype) Gives the C language specifier for numpy data types using weave.

core package

Essential Brian modules, in particular base classes for all kinds of brian objects.

Built-in preferences

Core Brian preferences

core.default_float_dtype = float64

Default dtype for all arrays of scalars (state variables, weights, etc.).

Currently, this is not supported (only float64 can be used).

core.default_integer_dtype = int32

Default dtype for all arrays of integer scalars.

core.outdated_dependency_error = True

Whether to raise an error for outdated dependencies (True) or just a warning (False).
base module

All Brian objects should derive from BrianObject.

Exported members: BrianObject, weakproxy_with_fallback(), BrianObjectException, brian_object_exception()

Classes

BrianObject(\*args, \*\*kwds) All Brian objects derive from this class, defines magic tracking and update.
BrianObjectException(message, brianobj, ...) High level exception that adds extra Brian-specific information to exceptions

Functions

brian_object_exception(message, brianobj, ...) Returns a BrianObjectException derived from the original exception.
device_override(name) Decorates a function/method to allow it to be overridden by the current Device.
weakproxy_with_fallback(obj) Attempts to create a weakproxy to the object, but falls back to the object if not possible.
clocks module

Clocks for the simulator.

Exported members: Clock, defaultclock

Classes

Clock(dt[, name]) An object that holds the simulation time and the time step.
DefaultClockProxy Method proxy to access the defaultclock of the currently active device

Functions

check_dt(new_dt, old_dt, target_t) Check that the target time can be represented equally well with the new dt.

Objects

defaultclock The standard clock, used for objects that do not specify any clock or dt
core_preferences module

Definitions, documentation, default values and validation functions for core Brian preferences.

Functions

default_float_dtype_validator(dtype)
dtype_repr(dtype)
functions module

Exported members: DEFAULT_FUNCTIONS, Function, implementation(), declare_types()

Classes

Function(pyfunc[, sympy_func, arg_units, ...]) An abstract specification of a function that can be used as part of model equations, etc.
FunctionImplementation([name, code, ...]) A simple container object for function implementations.
FunctionImplementationContainer(function) Helper object to store implementations and give access in a dictionary-like fashion, using CodeGenerator implementations as a fallback for CodeObject implementations.
SymbolicConstant(name, sympy_obj, value) Class for representing constants (e.g. pi) that are understood by sympy.
log10

Methods

Functions

declare_types(\*\*types) Decorator to declare argument and result types for a function
implementation(target[, code, namespace, ...]) A simple decorator to extend user-written Python functions to work with code generation in other languages.
magic module

Exported members: MagicNetwork, magic_network, MagicError, run(), stop(), collect(), store(), restore(), start_scope()

Classes

MagicError Error that is raised when something goes wrong in MagicNetwork
MagicNetwork() Network that automatically adds all Brian objects

Functions

collect([level]) Return the list of BrianObjects that will be simulated if run() is called.
get_objects_in_namespace(level) Get all the objects in the current namespace that derive from BrianObject.
restore([name, filename]) Restore the state of the network and all included objects.
run(duration[, report, report_period, ...]) Runs a simulation with all “visible” Brian objects for the given duration.
start_scope() Starts a new scope for magic functions
stop() Stops all running simulations.
store([name, filename]) Store the state of the network and all included objects.
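
As an illustration, a minimal sketch of the store()/restore() mechanism with the magic network (the model is arbitrary):

from brian2 import *

G = NeuronGroup(1, 'dv/dt = -v / (10*ms) : 1')
G.v = 1
store()    # snapshot the state of all visible objects
run(5*ms)
restore()  # roll back to the snapshot; the next run starts from there
run(5*ms)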

Objects

magic_network Automatically constructed MagicNetwork of all Brian objects
names module

Exported members: Nameable

Classes

Nameable(name) Base class to find a unique name for an object

Functions

find_name(name)
namespace module

Implementation of the namespace system, used to resolve the identifiers in model equations of NeuronGroup and Synapses

Exported members: get_local_namespace(), DEFAULT_FUNCTIONS, DEFAULT_UNITS, DEFAULT_CONSTANTS

Functions

get_local_namespace(level) Get the surrounding namespace.
network module

Module defining the Network object, the basis of all simulation runs.

Preferences

Network preferences

core.network.default_schedule = ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']

Default schedule used for networks that don’t specify a schedule.
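
For example, to propagate synaptic effects before applying thresholds in a given network (a minimal sketch; net is assumed to be an existing Network object):

net.schedule = ['start', 'groups', 'synapses', 'thresholds', 'resets', 'end']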

Exported members: Network, profiling_summary()

Classes

Network(\*objs[, name]) The main simulation controller in Brian
ProfilingSummary(net[, show]) Class to nicely display the results of profiling.
TextReport(stream) Helper object to report simulation progress in Network.run().

Functions

profiling_summary([net, show]) Returns a ProfilingSummary of the profiling info for a run.
schedule_propagation_offset([net]) Returns the minimal time difference for a post-synaptic effect after a spike.
operations module

Exported members: NetworkOperation, network_operation()

Classes

NetworkOperation(function[, dt, clock, ...]) Object with function that is called every time step.

Functions

network_operation([when]) Decorator to make a function get called every time step of a simulation.
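
A minimal sketch of the network_operation() decorator (the group G and the update code are arbitrary):

from brian2 import *

G = NeuronGroup(10, 'v : 1')

@network_operation(dt=1*ms)
def update_v():
    G.v = rand(len(G))  # arbitrary code, executed every 1 ms

run(10*ms)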
preferences module

Brian global preferences are stored as attributes of a BrianGlobalPreferences object prefs.

Exported members: PreferenceError, BrianPreference, prefs, brian_prefs

Classes

BrianGlobalPreferences() Class of the prefs object.
BrianGlobalPreferencesView(basename, all_prefs) A class allowing for accessing preferences in a subcategory.
BrianPreference(default, docs[, validator, ...]) Used for defining a Brian preference.
DefaultValidator(value) Default preference validator
ErrorRaiser
PreferenceError Exception relating to the Brian preferences system.

Functions

check_preference_name(name) Make sure that a preference name is valid.
parse_preference_name(name) Split a preference name into a base and end name.

Objects

brian_prefs
prefs Preference categories:
spikesource module

Exported members: SpikeSource

Classes

SpikeSource A source of spikes.
tracking module

Exported members: Trackable

Classes

InstanceFollower Keep track of all instances of classes derived from Trackable
InstanceTrackerSet A set of weakref.ref to all existing objects of a certain class.
Trackable Classes derived from this will have their instances tracked.
variables module

Classes used to specify the type of a function, variable or common sub-expression.

Exported members: Variable, Constant, ArrayVariable, DynamicArrayVariable, Subexpression, AuxiliaryVariable, VariableView, Variables, LinkedVariable, linked_var()

Classes

ArrayVariable(name, unit, owner, size, device) An object providing information about a model variable stored in an array (for example, all state variables).
AuxiliaryVariable(name, unit[, dtype, scalar]) Variable description for an auxiliary variable (most likely one that is added automatically to abstract code, e.g. the result of a threshold condition).
Constant(name, unit, value[, owner]) A scalar constant (e.g. the number of neurons N).
DynamicArrayVariable(name, unit, owner, ...) An object providing information about a model variable stored in a dynamic array (used in Synapses).
LinkedVariable(group, name, variable[, index]) A simple helper class to make linking variables explicit.
Subexpression(name, unit, owner, expr, device) An object providing information about a named subexpression in a model.
Variable(name, unit[, owner, dtype, scalar, ...]) An object providing information about model variables (including implicit variables such as t or xi).
VariableView(name, variable, group[, unit]) A view on a variable that allows treating it as a numpy array while allowing special indexing (e.g. with strings) in the context of a Group.
Variables(owner[, default_index]) A container class for storing Variable objects.

Functions

get_dtype(obj) Helper function to return the numpy.dtype of an arbitrary object.
get_dtype_str(val) Returns canonical string representation of the dtype of a value or dtype
linked_var(group_or_variable[, name, index]) Represents a link target for setting a linked variable.
variables_by_owner(variables, owner)
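
A minimal sketch of linking a variable between two groups with linked_var():

from brian2 import *

G1 = NeuronGroup(10, 'dv/dt = -v / (10*ms) : volt')
G2 = NeuronGroup(10, 'v : volt (linked)')  # v is not stored in G2 ...
G2.v = linked_var(G1, 'v')                 # ... but linked to G1's v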

devices package

Package providing the “devices” infrastructure.

device module

Module containing the Device base class as well as the RuntimeDevice implementation and some helper functions to access/set devices.

Exported members: Device, RuntimeDevice, get_device(), set_device(), all_devices, reinit_devices, reset_device, device, seed()

Classes

CurrentDeviceProxy Method proxy for access to the currently active device
Device() Base Device object.
Dummy Dummy object
RuntimeDevice() The default device used in Brian, state variables are stored as numpy arrays in memory.

Functions

auto_target() Automatically choose a code generation target (invoked when the codegen.target preference is set to 'auto').
get_default_codeobject_class([pref]) Returns the default CodeObject class from the preferences.
get_device() Gets the active Device object
reinit_devices() Reinitialize all devices, call Device.activate again on the current device and reset the preferences.
reset_device([device]) Reset to a previously used device.
seed([seed]) Set the seed for the random number generator.
set_device(device[, build_on_run]) Set the device used for simulations.
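
For example, to run simulations in C++ standalone mode with a reproducible random number stream (a minimal sketch):

from brian2 import *

set_device('cpp_standalone')  # generate, compile and run standalone C++ code
seed(4321)                    # fix the seed of the random number generator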

Objects

active_device The currently active device (set with set_device())
device Proxy object to access methods of the current device
runtime_device The default device used in Brian, state variables are stored as numpy arrays in memory.
Subpackages
cpp_standalone package

Package implementing the C++ “standalone” Device and CodeObject.

codeobject module

Module implementing the C++ “standalone” CodeObject

Exported members: CPPStandaloneCodeObject

Classes

CPPStandaloneCodeObject(owner, code, ...[, name]) C++ standalone code object

Functions

generate_rand_code(rand_func, owner)
openmp_pragma(pragma_type)
device module

Module implementing the C++ “standalone” device.

Classes

CPPStandaloneDevice() The Device used for C++ standalone simulations.
CPPWriter(project_dir)

Methods

RunFunctionContext(name, include_in_parent)

Functions

invert_dict(x)

Objects

cpp_standalone_device The Device used for C++ standalone simulations.

equations package

Module handling equations and “code strings”, expressions or statements, used for example for the reset and threshold definition of a neuron.

Exported members: Equations, Expression, Statements

codestrings module

Module defining CodeString, a class for a string of code together with information about its namespace. Only serves as a parent class, its subclasses Expression and Statements are the ones that are actually used.

Exported members: Expression, Statements

Classes

CodeString(code) A class for representing “code strings”, i.e. a single Python expression or a sequence of Python statements.
Expression([code, sympy_expression]) Class for representing an expression.
Statements(code) Class for representing statements.

Functions

is_constant_over_dt(expression, variables, ...) Check whether an expression can be considered as constant over a time step.
equations module

Differential equations for Brian models.

Exported members: Equations

Classes

EquationError Exception type related to errors in an equation definition.
Equations(eqns, \*\*kwds) Container that stores equations from which models can be created.
SingleEquation(type, varname, unit[, ...]) Class for internal use, encapsulates a single equation or parameter.

Functions

check_identifier_basic(identifier) Check an identifier (usually resulting from an equation string provided by the user) for conformity with the rules.
check_identifier_constants(identifier) Make sure that identifier names do not clash with function names.
check_identifier_functions(identifier) Make sure that identifier names do not clash with function names.
check_identifier_reserved(identifier) Check that an identifier is not using a reserved special variable name.
check_identifier_units(identifier) Make sure that identifier names do not clash with unit names.
check_subexpressions(group, equations, ...) Checks the subexpressions in the equations and raises an error if a subexpression refers to stateful functions without being marked as “constant over dt”.
extract_constant_subexpressions(eqs)
is_stateful(expression, variables) Whether the given expression refers to stateful functions (and is therefore not guaranteed to give the same result if called repeatedly).
parse_string_equations(eqns) Parse a string defining equations.
unit_and_type_from_string(unit_string) Returns the unit that results from evaluating a string like “siemens / metre ** 2”, allowing for the special string “1” to signify dimensionless units, the string “boolean” for a boolean and “integer” for an integer variable.
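
Equations objects can also be constructed programmatically and passed to a NeuronGroup; a minimal sketch (all names and values are arbitrary):

from brian2 import *

eqs = Equations('dv/dt = (v_rest - v) / tau : volt')
group = NeuronGroup(10, eqs, namespace={'v_rest': -70*mV, 'tau': 10*ms})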
refractory module

Module implementing Brian’s refractory mechanism.

Exported members: add_refractoriness

Functions

add_refractoriness(eqs) Extends a given set of equations with the refractory mechanism.
check_identifier_refractory(identifier) Check that the identifier is not using a name reserved for the refractory mechanism.
unitcheck module

Utility functions for handling the units in Equations.

Exported members: unit_from_expression, check_unit, check_units_statements

Functions

check_unit(expression, unit, variables) Compares the unit for an expression to an expected unit in a given namespace.
check_units_statements(code, variables) Check the units for a series of statements.

groups package

Package providing groups such as NeuronGroup or PoissonGroup.

group module

This module defines the VariableOwner class, a mix-in class for everything that stores state variables (e.g. Clock or NeuronGroup); the Group class for objects that, in addition to storing state variables, also execute code (i.e. objects such as NeuronGroup or StateMonitor, but not Clock); and finally CodeRunner, a class to run code in the context of a Group.

Exported members: Group, VariableOwner, CodeRunner

Classes

CodeRunner(group, template[, code, ...]) A “code runner” that runs a CodeObject every timestep and keeps a reference to the Group.
Group(\*args, \*\*kwds)

Methods

IndexWrapper(group) Convenience class to allow access to the indices via indexing syntax.
Indexing(group[, default_index]) Object responsible for calculating flat index arrays from arbitrary group- specific indices.
VariableOwner(name) Mix-in class for accessing arrays by attribute.

Functions

get_dtype(equation[, dtype]) Helper function to interpret the dtype keyword argument in NeuronGroup etc.
neurongroup module

This model defines the NeuronGroup, the core of most simulations.

Exported members: NeuronGroup

Classes

NeuronGroup(N, model[, method, threshold, ...]) A group of neurons.
Resetter(group[, when, order, event]) The CodeRunner that applies the reset statement(s) to the state variables of neurons that have spiked in this timestep.
StateUpdater(group, method) The CodeRunner that updates the state variables of a NeuronGroup at every timestep.
SubexpressionUpdater(group, subexpressions) The CodeRunner that updates the state variables storing the values of subexpressions that have been marked as “constant over dt”.
Thresholder(group[, when, event]) The CodeRunner that applies the threshold condition to the state variables of a NeuronGroup at every timestep and sets its spikes and refractory_until attributes.
subgroup module

Exported members: Subgroup

Classes

Subgroup(source, start, stop[, name]) Subgroup of any Group

importexport package

Package providing import/export support.

Exported members: ImportExport

dictlike module

Module providing DictImportExport and PandasImportExport (requiring a working installation of pandas).

Classes

DictImportExport An importer/exporter for variables in format of dict of numpy arrays.
PandasImportExport An importer/exporter for variables in pandas DataFrame format.
importexport module

Module defining the ImportExport class that enables getting state variable data in and out of groups in various formats (see Group.get_states() and Group.set_states()).

Classes

ImportExport Class for registering new import/export methods (via static methods).

input package

Classes for providing external input to a network.

binomial module

Implementation of BinomialFunction

Exported members: BinomialFunction

Classes

BinomialFunction(n, p[, approximate, name]) A function that generates samples from a binomial distribution.
poissongroup module

Implementation of PoissonGroup.

Exported members: PoissonGroup

Classes

PoissonGroup(\*args, \*\*kwds) Poisson spike source
poissoninput module

Implementation of PoissonInput.

Exported members: PoissonInput

Classes

PoissonInput(target, target_var, N, rate, weight) Adds independent Poisson input to a target variable of a Group.
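
A minimal usage sketch (the target group and parameter values are arbitrary):

from brian2 import *

G = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1')
# 100 independent Poisson inputs at 10 Hz each; every event increases v by 0.1
inp = PoissonInput(G, 'v', N=100, rate=10*Hz, weight=0.1)
run(100*ms)
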
spikegeneratorgroup module

Module defining SpikeGeneratorGroup.

Exported members: SpikeGeneratorGroup

Classes

SpikeGeneratorGroup(N, indices, times[, dt, ...]) A group emitting spikes at given times.
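
A minimal usage sketch (the spike indices and times are arbitrary):

from brian2 import *

indices = [0, 2, 1]
times = [1, 2, 3]*ms
SG = SpikeGeneratorGroup(3, indices, times)  # neuron 0 fires at 1 ms, neuron 2 at 2 ms, ...
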
timedarray module

Implementation of TimedArray.

Exported members: TimedArray

Classes

TimedArray(values, dt[, name]) A function of time built from an array of values.
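
A minimal usage sketch (the values and dt are arbitrary):

from brian2 import *

stimulus = TimedArray([0., 1., 0.5]*mV, dt=10*ms)
G = NeuronGroup(1, 'dv/dt = (stimulus(t) - v) / (5*ms) : volt')
run(30*ms)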

memory package

dynamicarray module

TODO: rewrite this (verbatim from Brian 1.x), more efficiency

Exported members: DynamicArray, DynamicArray1D

Classes

DynamicArray(shape[, dtype, factor, ...]) An N-dimensional dynamic array class
DynamicArray1D(shape[, dtype, factor, ...]) Version of DynamicArray with specialised resize method designed to be more efficient.

Functions

getslices(shape)

monitors package

ratemonitor module

Exported members: PopulationRateMonitor

Classes

PopulationRateMonitor(source[, name, ...]) Record instantaneous firing rates, averaged across neurons from a NeuronGroup or other spike source.
spikemonitor module

Exported members: EventMonitor, SpikeMonitor

Classes

EventMonitor(source, event[, variables, ...]) Record events from a NeuronGroup or another event source.
SpikeMonitor(source[, variables, record, ...]) Record spikes from a NeuronGroup or other spike source.
statemonitor module

Exported members: StateMonitor

Classes

StateMonitor(source, variables, record[, ...]) Record values of state variables during a run
StateMonitorView(monitor, item)
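
A minimal sketch combining the three monitor types (the model and parameters are arbitrary):

from brian2 import *

G = NeuronGroup(5, 'dv/dt = (1.1 - v) / (10*ms) : 1',
                threshold='v > 1', reset='v = 0', method='linear')
spikes = SpikeMonitor(G)
rate = PopulationRateMonitor(G)
trace = StateMonitor(G, 'v', record=[0, 1])
run(50*ms)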

parsing package

bast module

Brian AST representation

This is a standard Python AST representation with additional information added.

Exported members: brian_ast, BrianASTRenderer, dtype_hierarchy

Classes

BrianASTRenderer(variables[, copy_variables]) This class is modelled after NodeRenderer - see there for details.

Functions

brian_ast(expr, variables) Returns an AST tree representation with additional information
brian_dtype_from_dtype(dtype) Returns ‘boolean’, ‘integer’ or ‘float’
brian_dtype_from_value(value) Returns ‘boolean’, ‘integer’ or ‘float’
is_boolean(value)
is_boolean_dtype(obj)
is_float(value)
is_float_dtype(obj)
is_integer(value)
is_integer_dtype(obj)
dependencies module

Exported members: abstract_code_dependencies

Functions

abstract_code_dependencies(code[, ...]) Analyses identifiers used in abstract code blocks
get_read_write_funcs(parsed_code)
expressions module

AST parsing based analysis of expressions

Exported members: is_boolean_expression, parse_expression_unit

Functions

is_boolean_expression(expr, variables) Determines if an expression is of boolean type or not
parse_expression_unit(expr, variables) Returns the unit value of an expression, and checks its validity
functions module

Exported members: AbstractCodeFunction, abstract_code_from_function, extract_abstract_code_functions, substitute_abstract_code_functions

Classes

AbstractCodeFunction(name, args, code, ...) The information defining an abstract code function
FunctionRewriter(func[, numcalls]) Inlines a function call using temporary variables
VarRewriter(pre) Rewrites all variable names in names by prepending pre

Functions

abstract_code_from_function(func) Converts the body of the function to abstract code
extract_abstract_code_functions(code) Returns a set of abstract code functions from function definitions.
substitute_abstract_code_functions(code, funcs) Performs inline substitution of all the functions in the code
rendering module

Exported members: NodeRenderer, NumpyNodeRenderer, CPPNodeRenderer, SympyNodeRenderer

Classes

CPPNodeRenderer([use_vectorisation_idx])

Methods

NodeRenderer([use_vectorisation_idx])

Methods

NumpyNodeRenderer([use_vectorisation_idx])

Methods

SympyNodeRenderer([use_vectorisation_idx])

Methods

statements module

Functions

parse_statement(code) Parses a single line of code into “var op expr”.
sympytools module

Utility functions for parsing expressions and statements.

Classes

CustomSympyPrinter([settings]) Printer that overrides the printing of some basic sympy objects.

Functions

check_expression_for_multiple_stateful_functions(...)
expression_complexity(expr[, complexity]) Returns the complexity of an expression (either string or sympy)
replace_constants(sympy_expr[, variables]) Replace constant values in a sympy expression with their numerical value.
str_to_sympy(expr[, variables]) Parses a string into a sympy expression.
sympy_to_str(sympy_expr) Converts a sympy expression into a string.

Objects

PRINTER Printer that overrides the printing of some basic sympy objects.

random package

spatialneuron package

morphology module

Neuronal morphology module. This module defines classes to load and build neuronal morphologies.

Exported members: Morphology, Section, Cylinder, Soma

Classes

Children(owner) Helper class to represent the children (sub trees) of a section.
Cylinder(\*args, \*\*kwds) A cylindrical section.
Morphology(\*args, \*\*kwds) Neuronal morphology (tree structure).
MorphologyIndexWrapper(morphology) A simpler version of IndexWrapper, not allowing for string indexing (Morphology is not a Group).
Node

Attributes

Section(\*args, \*\*kwds) A section (unbranched structure), described as a sequence of truncated cones with potentially varying diameters and lengths per compartment.
Soma(\*args, \*\*kwds) A spherical, iso-potential soma.
SubMorphology(morphology, i, j) A view on a subset of a section in a morphology.
Topology(morphology) A representation of the topology of a Morphology.
spatialneuron module

Compartmental models. This module defines the SpatialNeuron class, which defines multicompartmental models.

Exported members: SpatialNeuron

Classes

FlatMorphology(morphology) Container object to store the flattened representation of a morphology.
SpatialNeuron([morphology, model, ...]) A single neuron with a morphology and possibly many compartments.
SpatialStateUpdater(group, method, clock[, ...]) The CodeRunner that updates the state variables of a SpatialNeuron at every timestep.
SpatialSubgroup(source, start, stop, morphology) A subgroup of a SpatialNeuron.

stateupdaters package

Module for transforming model equations into “abstract code” that can be then be further translated into executable code by the codegen module.

base module

This module defines the StateUpdateMethod class that acts as a base class for all state updaters and allows registering state updaters, so that a suitable state updater object can be returned for a given set of equations. This is used for example in NeuronGroup when no state updater is given explicitly.

Exported members: StateUpdateMethod

Classes

StateUpdateMethod

Attributes

UnsupportedEquationsException
exact module

Exact integration for linear equations.

Exported members: linear, independent

Classes

IndependentStateUpdater A state update for equations that do not depend on other state variables, i.e. 1-dimensional differential equations.
LinearStateUpdater A state updater for linear equations.

Functions

get_linear_system(eqs, variables) Convert equations into a linear system using sympy.

Objects

independent A state update for equations that do not depend on other state variables, i.e. 1-dimensional differential equations.
linear A state updater for linear equations.
explicit module

Numerical integration functions.

Exported members: milstein, heun, euler, rk2, rk4, ExplicitStateUpdater

Classes

ExplicitStateUpdater(description[, ...]) An object that can be used for defining state updaters via a simple description (see below).

Functions

diagonal_noise(equations, variables) Checks whether we deal with diagonal noise, i.e. one independent noise variable per equation.
split_expression(expr) Split an expression into a part containing the function f and another one containing the function g.

Objects

euler Forward Euler state updater
heun Stochastic Heun method (for multiplicative Stratonovich SDEs with non-diagonal diffusion matrix)
milstein Derivative-free Milstein method
rk2 Second order Runge-Kutta method (midpoint method)
rk4 Classical Runge-Kutta method (RK4)
exponential_euler module

Exported members: exponential_euler

Classes

ExponentialEulerStateUpdater A state updater for conditionally linear equations, i.e. equations where each variable depends linearly on itself (but possibly non-linearly on other variables).

Functions

get_conditionally_linear_system(eqs[, variables]) Convert equations into a linear system using sympy.

Objects

exponential_euler A state updater for conditionally linear equations, i.e. equations where each variable depends linearly on itself (but possibly non-linearly on other variables).

synapses package

Package providing synapse support.

parse_synaptic_generator_syntax module

Exported members: parse_synapse_generator

Functions

handle_range(\*args, \*\*kwds) Checks the arguments/keywords for the range iterator
handle_sample(\*args, \*\*kwds) Checks the arguments/keywords for the sample iterator
parse_synapse_generator(expr) Returns a parsed form of a synapse generator expression.
spikequeue module

The spike queue class stores future synaptic events produced by a given presynaptic neuron group (or postsynaptic for backward propagation in STDP).

Exported members: SpikeQueue

Classes

SpikeQueue(source_start, source_end) Data structure saving the spikes and taking care of delays.
synapses module

Module providing the Synapses class and related helper classes/functions.

Exported members: Synapses

Classes

StateUpdater(group, method, clock, order) The CodeRunner that updates the state variables of a Synapses at every timestep.
SummedVariableUpdater(expression, ...) The CodeRunner that updates a value in the target group with the sum over values in the Synapses object.
Synapses(source[, target, model, on_pre, ...]) Class representing synaptic connections.
SynapticIndexing(synapses)

Methods

SynapticPathway(synapses, code, prepost[, ...]) The CodeRunner that applies the pre/post statement(s) to the state variables of synapses where the pre-/postsynaptic group spiked in this time step.
SynapticSubgroup(synapses, indices) A simple subgroup of Synapses that can be used for indexing.

Functions

find_synapses(index, synaptic_neuron)
slice_to_test(x) Returns a testing function corresponding to whether an index is in slice x.

units package

The unit system.

Exported members: pamp, namp, uamp, mamp, amp, kamp, Mamp, Gamp, Tamp, kilogram, pmetre, nmetre, umetre, mmetre, metre, kmetre, Mmetre, Gmetre, Tmetre, pmeter, nmeter, umeter, mmeter, meter, kmeter ... (185 more members)

allunits module

THIS FILE IS AUTOMATICALLY GENERATED BY A STATIC CODE GENERATION TOOL. DO NOT EDIT BY HAND.

Instead edit the template:

dev/tools/static_codegen/units_template.py

Exported members: metre, meter, gram, second, amp, kelvin, mole, candle, gramme, kilogram, radian, steradian, hertz, newton, pascal, joule, watt, coulomb, volt, farad, ohm, siemens, weber, tesla, henry ... (1991 more members)

fundamentalunits module

Defines physical units and quantities

Quantity               Unit      Symbol
---------------------  --------  ------
Length                 metre     m
Mass                   kilogram  kg
Time                   second    s
Electric current       ampere    A
Temperature            kelvin    K
Quantity of substance  mole      mol
Luminosity             candle    cd
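
For illustration, quantities carry their dimensions through arithmetic (a minimal sketch):

from brian2 import mV, ms

tau = 10*ms    # a Quantity with dimensions of time
v = -70*mV     # a Quantity with dimensions of voltage
print(v / mV)  # -70.0 -- dividing by the unit yields a plain number
# v + tau      # would raise a DimensionMismatchError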

Exported members: DimensionMismatchError, get_or_create_dimension(), get_dimensions(), is_dimensionless(), have_same_dimensions(), in_unit(), in_best_unit(), Quantity, Unit, register_new_unit(), check_units(), is_scalar_type(), get_unit(), get_unit_fast(), unit_checking

Classes

Dimension(dims) Stores the indices of the 7 basic SI unit dimensions (length, mass, etc.).
DimensionMismatchError(description, \*dims) Exception class for attempted operations with inconsistent dimensions.
Quantity A number with an associated physical dimension.
Unit(value[, dim, scale]) A physical unit.
UnitRegistry() Stores known units for printing in best units.

Functions

all_registered_units(\*regs) Generator returning all registered units.
check_units(\*\*au) Decorator to check units of arguments passed to a function
fail_for_dimension_mismatch(obj1[, obj2, ...]) Compare the dimensions of two objects.
get_dimensions(obj) Return the dimensions of any object that has them.
get_or_create_dimension(\*args, \*\*kwds) Create a new Dimension object or get a reference to an existing one.
get_unit(x, \*regs) Find the most appropriate consistent unit from the unit registries.
get_unit_fast(x) Return a Quantity with value 1 and the same dimensions.
get_unit_for_display(x) Return a string representation of the most appropriate unit, or '1' for a dimensionless quantity.
have_same_dimensions(obj1, obj2) Test if two values have the same dimensions.
in_best_unit(x[, precision]) Represent the value in the “best” unit.
in_unit(x, u[, precision]) Display a value in a certain unit with a given precision.
is_dimensionless(obj) Test if a value is dimensionless or not.
is_scalar_type(obj) Tells you if the object is a 1d number type.
quantity_with_dimensions(floatval, dims) Create a new Quantity with the given dimensions.
register_new_unit(u) Register a new unit for automatic displaying of quantities
unregister_unit(u) Remove a previously registered unit for automatic displaying of quantities.
wrap_function_change_dimensions(func, ...) Returns a new function that wraps the given function func so that it changes the dimensions of its input.
wrap_function_dimensionless(func) Returns a new function that wraps the given function func so that it raises a DimensionMismatchError if the function is called on a quantity with dimensions (excluding dimensionless quantitities).
wrap_function_keep_dimensions(func) Returns a new function that wraps the given function func so that it keeps the dimensions of its input.
wrap_function_remove_dimensions(func) Returns a new function that wraps the given function func so that it removes any dimensions from its input.

Objects

DIMENSIONLESS The singleton object for dimensionless Dimensions.
additional_unit_register UnitRegistry containing additional units (newton*metre, farad / metre, ...)
standard_unit_register UnitRegistry containing all the standard units (metre, kilogram, um2...)
user_unit_register UnitRegistry containing all units defined by the user
stdunits module

Optional short unit names

This module defines the following short unit names:

mV, mA, uA (micro_amp), nA, pA, mF, uF, nF, mS, uS, ms, Hz, kHz, MHz, cm, cm2, cm3, mm, mm2, mm3, um, um2, um3

Exported members: mV, mA, uA, nA, pA, pF, uF, nF, nS, uS, ms, us, Hz, kHz, MHz, cm, cm2, cm3, mm, mm2, mm3, um, um2, um3

unitsafefunctions module

Unit-aware replacements for numpy functions.

Exported members: log(), log10(), exp(), sin(), cos(), tan(), arcsin(), arccos(), arctan(), sinh(), cosh(), tanh(), arcsinh(), arccosh(), arctanh(), diagonal(), ravel(), trace(), dot(), where(), ones_like(), zeros_like(), arange(), linspace()

Functions

arange([start,] stop[, step,][, dtype]) Return evenly spaced values within a given interval.
arccos(x[, out]) Trigonometric inverse cosine, element-wise.
arccosh(x[, out]) Inverse hyperbolic cosine, element-wise.
arcsin(x[, out]) Inverse sine, element-wise.
arcsinh(x[, out]) Inverse hyperbolic sine element-wise.
arctan(x[, out]) Trigonometric inverse tangent, element-wise.
arctanh(x[, out]) Inverse hyperbolic tangent element-wise.
cos(x[, out]) Cosine element-wise.
cosh(x[, out]) Hyperbolic cosine, element-wise.
diagonal(x, \*args, \*\*kwds) Return specified diagonals.
dot(a, b[, out]) Dot product of two arrays.
exp(x[, out]) Calculate the exponential of all elements in the input array.
linspace(start, stop[, num, endpoint, ...]) Return evenly spaced numbers over a specified interval.
log(x[, out]) Natural logarithm, element-wise.
ravel(x, \*args, \*\*kwds) Return a flattened array.
setup() Setup function for doctests (used by nosetest).
sin(x[, out]) Trigonometric sine, element-wise.
sinh(x[, out]) Hyperbolic sine, element-wise.
tan(x[, out]) Compute tangent element-wise.
tanh(x[, out]) Compute hyperbolic tangent element-wise.
trace(x, \*args, \*\*kwds) Return the sum along diagonals of the array.
where(condition, [x, y]) Return elements, either from x or y, depending on condition.
wrap_function_to_method(func) Wraps a function so that it calls the corresponding method on the Quantities object (if called with a Quantities object as the first argument).

utils package

Utility functions for Brian.

arrays module

Helper module containing functions that operate on numpy arrays.

Functions

calc_repeats(delay) Calculates offsets corresponding to an array, where repeated values are subsequently numbered, i.e. the first occurrence of a value is numbered 0, the next occurrence of the same value 1, etc.

environment module

Utility functions to get information about the environment Brian is running in.

Functions

running_from_ipython() Check whether we are currently running under ipython.
filetools module

File system tools

Exported members: ensure_directory, ensure_directory_of_file, in_directory, copy_directory

Classes

in_directory(new_dir) Safely temporarily work in a subdirectory

Functions

copy_directory(source, target) Copies directory source to target.
ensure_directory(d) Ensures that a given directory exists (creates it if necessary)
ensure_directory_of_file(f) Ensures that a directory exists for filename to go in (creates if necessary), and returns the directory path.
logger module

Brian’s logging module.

Preferences

Logging system preferences

logging.console_log_level = 'INFO'

What log level to use for the log written to the console.

Has to be one of CRITICAL, ERROR, WARNING, INFO, DEBUG or DIAGNOSTIC.

logging.delete_log_on_exit = True

Whether to delete the log and script file on exit.

If set to True (the default), log files (and the copy of the main script) will be deleted after the brian process has exited, unless an uncaught exception occurred. If set to False, all log files will be kept.

logging.file_log = True

Whether to log to a file or not.

If set to True (the default), logging information will be written to a file. The log level can be set via the logging.file_log_level preference.

logging.file_log_level = 'DIAGNOSTIC'

What log level to use for the log written to the log file.

In case file logging is activated (see logging.file_log), which log level should be used for logging. Has to be one of CRITICAL, ERROR, WARNING, INFO, DEBUG or DIAGNOSTIC.

logging.save_script = True

Whether to save a copy of the script that is run.

If set to True (the default), a copy of the currently run script is saved to a temporary location. It is deleted after a successful run (unless logging.delete_log_on_exit is False) but is kept after an uncaught exception occurred. This can be helpful for debugging, in particular when several simulations are running in parallel.

logging.std_redirection = True

Whether or not to redirect stdout/stderr to null at certain places.

This silences a lot of annoying compiler output, but will also hide error messages making it harder to debug problems. You can always temporarily switch it off when debugging. If logging.std_redirection_to_file is set to True as well, then the output is saved to a file and if an error occurs the name of this file will be printed.

logging.std_redirection_to_file = True

Whether to redirect stdout/stderr to a file.

If both logging.std_redirection and this preference are set to True, all standard output/error (most importantly output from the compiler) will be stored in files and if an error occurs the name of this file will be printed. If logging.std_redirection is True and this preference is False, then all standard output/error will be completely suppressed, i.e. neither be displayed nor stored in a file.

The value of this preference is ignored if logging.std_redirection is set to False.

Exported members: get_logger(), BrianLogger, std_silent

Classes

BrianLogger(name) Convenience object for logging.
HierarchyFilter(name) A class for suppressing all log messages in a subtree of the name hierarchy.
LogCapture(log_list[, log_level]) A class for capturing log warnings.
NameFilter(name) A class for suppressing log messages ending with a certain name.
catch_logs([log_level]) A context manager for catching log messages.
std_silent([alwaysprint]) Context manager that temporarily silences stdout and stderr but keeps the output saved in a temporary file and writes it if an exception is raised.

Functions

brian_excepthook(exc_type, exc_obj, exc_tb) Display a message mentioning the debug log in case of an uncaught exception.
clean_up_logging() Shut down the logging system and delete the debug log file if no error occurred.
get_logger([module_name]) Get an object that can be used for logging.
log_level_validator(log_level)
stringtools module

A collection of tools for string formatting tasks.

Exported members: indent, deindent, word_substitute, replace, get_identifiers, strip_empty_lines, stripped_deindented_lines, strip_empty_leading_and_trailing_lines, code_representation, SpellChecker

Classes

SpellChecker(words[, alphabet]) A simple spell checker that will be used to suggest the correct name if the user made a typo (e.g. a misspelled variable name).

Functions

code_representation(code) Returns a string representation for several different formats of code
deindent(text[, numtabs, spacespertab, ...]) Returns a copy of the string with the common indentation removed.
get_identifiers(expr[, include_numbers]) Return all the identifiers in a given string expr, that is everything that matches a programming language variable like expression, which is here implemented as the regexp \b[A-Za-z_][A-Za-z0-9_]*\b.
indent(text[, numtabs, spacespertab, tab]) Indents a given multiline string.
replace(s, substitutions) Applies a dictionary of substitutions.
strip_empty_leading_and_trailing_lines(s) Removes all empty leading and trailing lines in the multi-line string s.
strip_empty_lines(s) Removes all empty lines from the multi-line string s.
stripped_deindented_lines(code) Returns a list of the lines in a multi-line string, deindented.
word_substitute(expr, substitutions) Applies a dict of word substitutions.
topsort module

Exported members: topsort

Functions

topsort(graph) Topologically sort a graph

Developer’s guide

This section is intended as a guide to how Brian functions internally for people developing Brian itself, or extensions to Brian. It may also be of some interest to others wishing to better understand how Brian works internally.

Coding guidelines

The basic principles of developing Brian are:

  1. For the user, the emphasis is on making the package flexible, readable and easy to use. See the paper “The Brian simulator” in Frontiers in Neuroscience for more details.
  2. For the developer, the emphasis is on keeping the package maintainable by a small number of people. To this end, we use stable, well maintained, existing open source packages whenever possible, rather than writing our own code.

Development workflow

Brian development is done in a git repository on github. Continuous integration testing is provided by travis CI, code coverage is measured with coveralls.io.

The repository structure

Brian’s repository structure is very simple, as we are normally not supporting older versions with bugfixes or other complicated things. The master branch of the repository is the basis for releases; a release is nothing more than adding a tag to the branch, creating the tarball, etc. The master branch should always be in a deployable state, i.e. one should be able to use it as the base for everyday work without worrying about random breakages due to updates. To ensure this, no commit ever goes into the master branch without passing the test suite before (see below). The only exception to this rule is a commit that does not touch any code files, e.g. additions to the README file or to the documentation (but even in this case, care should be taken that the documentation is still built correctly).

For every feature that a developer works on, a new branch should be opened (normally based on the master branch), with a descriptive name (e.g. add-numba-support). For developers that are members of “brian-team”, the branch should ideally be created in the main repository. This way, one can easily get an overview over what the “core team” is currently working on. Developers who are not members of the team should fork the repository and work in their own repository (if working on multiple issues/features, also using branches).

Implementing a feature/fixing a bug

Every new feature or bug fix should be done in a dedicated branch and have an issue in the issue database. For bugs, it is important to not only fix the bug but also to introduce a new test case (see Testing) that makes sure that the bug will not ever be reintroduced by other changes. It is often a good idea to first define the test cases (that should fail) and then work on the fix so that the tests pass. As soon as the feature/fix is complete or as soon as specific feedback on the code is needed, open a “pull request” to merge the changes from your branch into master. In this pull request, others can comment on the code and make suggestions for improvements. New commits to the respective branch automatically appear in the pull request which makes it a great tool for iterative code review. Even more useful, travis will automatically run the test suite on the result of the merge. As a reviewer, always wait for the result of this test (it can take up to 30 minutes or so until it appears) before doing the merge and never merge when a test fails. As soon as the reviewer (someone from the core team and not the author of the feature/fix) decides that the branch is ready to merge, he/she can merge the pull request and optionally delete the corresponding branch (but it will be hidden by default, anyway).

Coding conventions

General recommendations

Syntax is chosen as much as possible from the user point of view, to reflect the concepts as directly as possible. Ideally, a Brian script should be readable by someone who doesn’t know Python or Brian, although this isn’t always possible. Function, class and keyword argument names should be explicit rather than abbreviated and consistent across Brian. See Romain’s paper On the design of script languages for neural simulators for a discussion.

We use the PEP-8 coding conventions for our code. This in particular includes the following conventions:

  • Use 4 spaces instead of tabs per indentation level

  • Use spaces after commas and around the following binary operators: assignment (=), augmented assignment (+=, -= etc.), comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not), Booleans (and, or, not).

  • Do not use spaces around the equals sign in keyword arguments or when specifying default values. Neither put spaces immediately inside parentheses, brackets or braces, immediately before the open parenthesis that starts the argument list of a function call, or immediately before the open parenthesis that starts an indexing or slicing.

  • Avoid using a backslash for continuing lines whenever possible, instead use Python’s implicit line joining inside parentheses, brackets and braces.

  • The core code should only contain ASCII characters; no encoding has to be declared

  • imports should be on different lines (e.g. do not use import sys, os) and should be grouped in the following order, using blank lines between each group:

    1. standard library imports
    2. third-party library imports (e.g. numpy, scipy, sympy, ...)
    3. brian imports
  • Use absolute imports for everything outside of “your” package, e.g. if you are working in brian2.equations, import functions from the stringtools module via from brian2.utils.stringtools import .... Use the full path when importing, e.g. do from brian2.units.fundamentalunits import seconds instead of from brian2 import seconds.

  • Use “new-style” relative imports for everything in “your” package, e.g. in brian2.codegen.functions.py import the Function class as from .specifiers import Function.

  • Do not use wildcard imports (from brian2 import *), instead import only the identifiers you need, e.g. from brian2 import NeuronGroup, Synapses. For packages like numpy that are used a lot, use import numpy as np. But note that the user should still be able to do something like from brian2 import * (and this style can also be freely used in examples and tests, for example). Modules always have to use the __all__ mechanism to specify what is being made available with a wildcard import. As an exception from this rule, the main brian2/__init__.py may use wildcard imports.
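
To make these conventions concrete, here is a minimal sketch of how a module following them might be laid out. The module brian2/equations/example.py and its function are invented for illustration; get_identifiers and Expression are existing Brian identifiers:

# Hypothetical module brian2/equations/example.py

# 1. standard library imports
import itertools

# 2. third-party library imports
import numpy as np

# 3. brian imports: absolute, with the full module path, for code
# outside "your" package ...
from brian2.utils.stringtools import get_identifiers
# ... and "new-style" relative imports for modules within it
from .codestrings import Expression

# define __all__ so that wildcard imports are well-defined
__all__ = ['count_identifiers']

def count_identifiers(expr):
    """Return the number of distinct identifiers in the expression."""
    return len(get_identifiers(expr))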

Python 2 vs. Python 3

Brian is written in Python 2 but runs on Python 3 using the 2to3 conversion tool (which is automatically applied if Brian is installed using the standard python setup.py install mechanism). To make this possible without too much effort, Brian no longer supports Python 2.5 and can therefore make use of a couple of forward-compatible (but backward-incompatible) idioms introduced in Python 2.6. The Porting to Python 3 book is available online and has a lot of information on these topics. Here are some things to keep in mind when developing Brian:

  • If you are working with integers and using division, consider using // for flooring division (default behaviour for / in python 2) and switch the behaviour of / to floating point division by using from __future__ import division .
  • If importing modules from the standard library (which have changed quite a bit from Python 2 to Python 3), only use simple import statements like import itertools instead of from itertools import izip, since 2to3 is otherwise unable to make the correct conversion.
  • If you are using the print statement (which should only occur in tests, in particular doctests – always use the Logging framework if you want to present messages to the user otherwise), try “cheating” and use the functional style in Python 2, i.e. write print('some text') instead of print 'some text'. More complicated print statements should be avoided, e.g. instead of print >>sys.stderr, 'Error message' use sys.stderr.write('Error message\n') (or, again, use logging).
  • Exception stacktraces look a bit different in Python 2 and 3: For non-standard exceptions, Python 2 only prints the Exception class name (e.g. DimensionMismatchError) whereas Python 3 prints the name including the module name (e.g. brian2.units.fundamentalunits.DimensionMismatchError). This will make doctests fail that match the exception message. In this case, write the doctest in the style of Python 2 but add the doctest directive #doctest: +IGNORE_EXCEPTION_DETAIL to the statement leading to the exception. This unfortunately has the side effect of also ignoring the text of the exception, but it will still fail for an incorrect exception type.
  • If you write code reading and writing strings to files, make sure you make the distinction between bytes and unicode (see “separate binary data and strings” ) In general, strings within Brian are unicode strings and only converted to bytes when reading from or writing to a file (or something like a network stream, for example).
  • If you are sorting lists or dictionaries, have a look at “when sorting, use key instead of cmp”
  • Make sure to define a __hash__ function for objects that define an __eq__ function (and to define it consistently). Python 3 is more strict about this, an object with __eq__ but without __hash__ is unhashable.
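
To illustrate several of the points above in one place, here is a small sketch written in this forward-compatible style (the function and class are invented for illustration):

from __future__ import division  # '/' now means true division, as in Python 3

import itertools  # simple import; 'from itertools import izip' would break 2to3

def summarize(values):
    midpoint = len(values) // 2          # flooring division in Python 2 and 3
    average = sum(values) / len(values)  # floating-point division everywhere
    print('midpoint: %d, average: %s' % (midpoint, average))  # functional print

class Pair(object):
    # __eq__ and __hash__ are defined together and consistently, since
    # Python 3 makes objects with __eq__ but without __hash__ unhashable
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

    def __hash__(self):
        return hash((self.a, self.b))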

Representing Brian objects

__repr__ and __str__

Every class should specify or inherit useful __repr__ and __str__ methods. The __repr__ method should give the “official” representation of the object; if possible, this should be a valid Python expression, ideally allowing for eval(repr(x)) == x. The __str__ method on the other hand, gives an “informal” representation of the object. This can be anything that is helpful but does not have to be Python code. For example:

>>> import numpy as np
>>> from brian2 import mV
>>> ar = np.array([1, 2, 3]) * mV
>>> print(ar)  # uses __str__
[ 1.  2.  3.] mV
>>> ar  # uses __repr__
array([ 1.,  2.,  3.]) * mvolt

If the representation returned by __repr__ is not Python code, it should be enclosed in <...>, e.g. a Synapses representation might be <Synapses object with 64 synapses>.

If you don’t want to make the distinction between __repr__ and __str__, simply define only a __repr__ function, it will be used instead of __str__ automatically (no need to write __str__ = __repr__). Finally, if you include the class name in the representation (which you should in most cases), use self.__class__.__name__ instead of spelling out the name explicitly – this way it will automatically work correctly for subclasses. It will also prevent you from forgetting to update the class name in the representation if you decide to rename the class.
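
For example, a minimal sketch of such a __repr__ (the class body is a simplified stand-in for illustration, not Brian’s actual Synapses implementation):

class Synapses(object):
    def __init__(self, n_synapses):
        self.n_synapses = n_synapses

    def __repr__(self):
        # the representation is not valid Python code, so it is enclosed
        # in <...>; using self.__class__.__name__ keeps it correct for
        # subclasses
        return '<%s object with %d synapses>' % (self.__class__.__name__,
                                                 self.n_synapses)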

LaTeX representations with sympy

Brian objects dealing with mathematical expressions and equations often internally use sympy. Sympy’s latex function does a nice job of converting expressions into LaTeX code, using fractions, root symbols, etc. as well as converting greek variable names into corresponding symbols and handling sub- and superscripts. For the conversion of variable names to work, they should use an underscore for subscripts and two underscores for superscripts:

>>> from sympy import latex, Symbol
>>> tau_1__e = Symbol('tau_1__e')
>>> print(latex(tau_1__e))
\tau^{e}_{1}

Sympy’s printer supports formatting arbitrary objects; all they have to do is implement a _latex method (no trailing underscore). For most Brian objects, this is unnecessary as they will never be formatted with sympy’s LaTeX printer. For some core objects, in particular the units, it is useful, however, as it can be reused in LaTeX representations for ipython (see below). Note that the _latex method should not return $ or \begin{equation} (sympy’s latex function includes a mode argument that wraps the output automatically).
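
As a rough sketch, such a method could look like this for a thin wrapper around a sympy expression (the class is hypothetical):

import sympy

class WrappedExpression(object):
    # hypothetical object wrapping a sympy expression
    def __init__(self, expr):
        self.expr = expr

    def _latex(self, printer):
        # return plain LaTeX code, without $ or \begin{equation};
        # sympy's latex() function adds those itself via its mode argument
        return sympy.latex(self.expr)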

Representations for ipython
“Old” ipython console

In particular for representations involving arrays or lists, it can be useful to break up the representation into chunks, or indent parts of the representation. This is supported by the ipython console’s “pretty printer”. To make this work for a class, add a _repr_pretty_(self, p, cycle) (note the single underscores) method. You can find more information in the ipython documentation.
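
As a rough sketch of this protocol (the container class is invented for illustration):

class EquationList(object):
    # hypothetical container holding several equation objects
    def __init__(self, equations):
        self.equations = equations

    def _repr_pretty_(self, p, cycle):
        if cycle:  # guard against objects that (indirectly) contain themselves
            p.text('EquationList(...)')
            return
        with p.group(4, 'EquationList([', '])'):
            for idx, eq in enumerate(self.equations):
                if idx:
                    p.text(',')
                    p.breakable()  # the pretty printer may break the line here
                p.pretty(eq)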

“New” ipython console (qtconsole and notebook)

The new ipython consoles, the qtconsole and the ipython notebook support a much richer set of representations for objects. As Brian deals a lot with mathematical objects, in particular the LaTeX and to a lesser extent the HTML formatting capabilities of the ipython notebook are interesting. To support LaTeX representation, implement a _repr_latex_ method returning the LaTeX code (including $, \begin{equation} or similar). If the object already has a _latex method (see LaTeX representations with sympy above), this can be as simple as:

def _repr_latex_(self):
    return sympy.latex(self, mode='inline')  # wraps the expression in $ .. $

The LaTeX rendering only supports a single mathematical block. For complex objects, e.g. NeuronGroup it might be useful to have a richer representation. This can be achieved by returning HTML code from _repr_html_ – this HTML code is processed by MathJax so it can include literal LaTeX code that will be transformed before it is rendered as HTML. An object containing two equations could therefore be represented with a method like this:

def _repr_html_(self):
    return '''
    <h3> Equation 1 </h3>
    {eq_1}
    <h3> Equation 2 </h3>
    {eq_2}'''.format(eq_1=sympy.latex(self.eq_1, mode='equation'),
                     eq_2=sympy.latex(self.eq_2, mode='equation'))

Defensive programming

One idea for Brian 2 is to make it so that it’s more likely that errors are raised rather than silently causing weird bugs. Some ideas in this line:

Synapses.source should be stored internally as a weakref Synapses._source, and Synapses.source should be a computed attribute that dereferences this weakref. Like this, if the source object isn’t kept by the user, Synapses won’t store a reference to it, and so won’t stop it from being deallocated.

We should write an automated test that takes a piece of correct code like:

NeuronGroup(N, eqs, threshold='V>Vt')

and tries replacing all arguments by nonsense arguments, it should always raise an error in this case (forcing us to write code to validate the inputs). For example, you could create a new NonsenseObject class, and do this:

nonsense = NonsenseObject()
NeuronGroup(nonsense, eqs, threshold='V>Vt')
NeuronGroup(N, nonsense, threshold='V>Vt')
NeuronGroup(N, eqs, nonsense)

In general, the idea should be to make it hard for something incorrect to run without raising an error, preferably at the point where the user makes the error and not in some obscure way several lines later.

The preferred way to validate inputs is one that handles types in a Pythonic way. For example, instead of doing something like:

if not isinstance(arg, (float, int)):
    raise TypeError(...)

Do something like:

arg = float(arg)

(or use try/except to raise a more specific error). In contrast to the isinstance check it does not make any assumptions about the type except for its ability to be converted to a float.
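
For example, a minimal sketch of such a conversion with a more helpful error message (the function name is invented):

def set_delay(delay):
    try:
        delay = float(delay)  # accepts ints, floats, numpy scalars, ...
    except (TypeError, ValueError):
        # re-raise with a message pointing at the actual problem
        raise TypeError('Expected a number for "delay", got %r instead' % delay)
    return delay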

This approach is particularly useful for numpy arrays:

arr = np.asarray(arg)

(or np.asanyarray if you want to allow for array subclasses like arrays with units or masked arrays). This approach also has the nice advantage of allowing all “array-like” arguments, e.g. a list of numbers.
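
A short sketch of the difference (assuming brian2 is importable):

import numpy as np
from brian2 import mV

values = [0, 1, 2] * mV        # a Quantity (an ndarray subclass)
np.asarray(values)             # plain ndarray, units silently stripped
np.asanyarray(values)          # still a Quantity, units preserved
np.asarray([0.0, 1.0, 2.0])    # any array-like argument works, e.g. a list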

Documentation

It is very important to maintain documentation. We use the Sphinx documentation generator tools. The documentation is all hand written. Sphinx source files are stored in the docs_sphinx folder (currently: dev/brian2/docs_sphinx). The HTML files can be generated via the script dev/tools/docs/build_html_brian2.py and end up in the docs folder (currently: dev/brian2/docs).

Most of the documentation is stored directly in the Sphinx source text files, but reference documentation for important Brian classes and functions are kept in the documentation strings of those classes themselves. This is automatically pulled from these classes for the reference manual section of the documentation. The idea is to keep the definitive reference documentation near the code that it documents, serving as both a comment for the code itself, and to keep the documentation up to date with the code.

The reference documentation includes all classes, functions and other objects that are defined in the modules and only documents them in the module where they were defined. This makes it possible to document a class like Quantity only in brian2.units.fundamentalunits and not additionally in brian2.units and brian2. This mechanism relies on the __module__ attribute, in some cases, in particular when wrapping a function with a decorator (e.g. check_units), this attribute has to be set manually:

foo.__module__ = __name__

Without this manual setting, the function might not be documented at all or in the wrong module.

In addition to the reference, all the examples in the examples folder are automatically included in the documentation.

Note that you can directly link to github issues using #issuenumber, e.g. writing #33 links to a github issue about running benchmarks for Brian 2: #33. This feature should rarely be used in the main documentation, reserve its use for release notes and important known bugs.

Docstrings

Every module, class, method or function has to start with a docstring, unless it is a private or special method (i.e. starting with _ or __) and it is obvious what it does. For example, there is normally no need to document __str__ with “Return a string representation.”.

For the docstring format, we use our own Sphinx extension (in brian2.utils.sphinxext) based on numpydoc, allowing us to write docstrings that are readable both in the source code and in the rendered HTML. We generally follow the format used by numpy.

When the docstring uses variable, class or function names, these should be enclosed in single backticks. Class and function/method names will be automatically linked to the corresponding documentation. For classes imported in the main brian2 package, you do not have to add the package name, e.g. writing `NeuronGroup` is enough. For other classes, you have to give the full path, e.g. `brian2.units.fundamentalunits.UnitRegistry`. If it is clear from the context where the class is (e.g. within the documentation of UnitRegistry), consider using the ~ abbreviation: `~brian2.units.fundamentalunits.UnitRegistry` displays only the class name: UnitRegistry. Note that you do not have to enclose the exception name in a “Raises” or “Warns” section, or the class/method/function name in a “See Also” section, in backticks; they will be automatically linked (putting backticks there will lead to incorrect display or an error message).

Inline source fragments should be enclosed in double backticks.

Class docstrings follow the same conventions as method docstrings and should document the __init__ method, the __init__ method itself does not need a docstring.

Documenting functions and methods

The docstring for a function/method should start with a one-line description of what the function does, without referring to the function name or the names of variables. Use a “command style” for this summary, e.g. “Return the result.” instead of “Returns the result.” If the signature of the function cannot be automatically extracted because of a decorator (e.g. check_units()), place a signature in the very first line of the docstring, before the one-line description.

For methods, do not document the self parameter, nor give information about the method being static or a class method (this information will be automatically added to the documentation).

Documenting classes

Class docstrings should use the same “Parameters” and “Returns” sections as method and function docstrings for documenting the __init__ constructor. If a class docstring does not have any “Attributes” or “Methods” section, these sections will be automatically generated with all documented (i.e. having a docstring), public (i.e. not starting with _) attributes and methods of the class, respectively. Alternatively, you can provide these sections manually. This is useful for example in the Quantity class, which would otherwise include the documentation of many ndarray methods, or when you want to include documentation for functions like __getitem__ which would otherwise not be documented. When specifying these sections, you only have to state the names of documented methods/attributes but you can also provide direct documentation. For example:

Attributes
----------
foo
bar
baz
    This is a description.

This can be used for example for class or instance attributes which do not have “classical” docstrings. However, you can also use a special syntax: When defining class attributes in the class body or instance attributes in __init__ you can use the following variants (here shown for instance attributes):

def __init__(self, a, b, c):
    #: The docstring for the instance attribute a.
    #: Can also span multiple lines
    self.a = a

    self.b = b #: The docstring for self.b (only one line).

    self.c = c
    'The docstring for self.c, directly *after* its definition'
Long example of a function docstring

This is a very long docstring, showing all the possible sections. Most of the time no See Also, Notes or References section is needed:

def foo(var1, var2, long_var_name='hi'):
    """
    A one-line summary that does not use variable names or the function name.

    Several sentences providing an extended description. Refer to
    variables using back-ticks, e.g. `var1`.

    Parameters
    ----------
    var1 : array_like
        Array_like means all those objects -- lists, nested lists, etc. --
        that can be converted to an array.  We can also refer to
        variables like `var1`.
    var2 : int
        The type above can either refer to an actual Python type
        (e.g. ``int``), or describe the type of the variable in more
        detail, e.g. ``(N,) ndarray`` or ``array_like``.
    long_var_name : {'hi', 'ho'}, optional
        Choices in brackets, default first when optional.

    Returns
    -------
    describe : type
        Explanation
    output : type
        Explanation
    tuple : type
        Explanation
    items : type
        even more explaining

    Raises
    ------
    BadException
        Because you shouldn't have done that.

    See Also
    --------
    otherfunc : relationship (optional)
    newfunc : Relationship (optional), which could be fairly long, in which
              case the line wraps here.
    thirdfunc, fourthfunc, fifthfunc

    Notes
    -----
    Notes about the implementation algorithm (if needed).

    This can have multiple paragraphs.

    You may include some math:

    .. math:: X(e^{j\omega}) = x(n)e^{-j\omega n}

    And even use a greek symbol like :math:`\omega` inline.

    References
    ----------
    Cite the relevant literature, e.g. [1]_.  You may also cite these
    references in the notes section above.

    .. [1] O. McNoleg, "The integration of GIS, remote sensing,
       expert systems and adaptive co-kriging for environmental habitat
       modelling of the Highland Haggis using object-oriented, fuzzy-logic
       and neural-network techniques," Computers & Geosciences, vol. 22,
       pp. 585-588, 1996.

    Examples
    --------
    These are written in doctest format, and should illustrate how to
    use the function.

    >>> a = [1, 2, 3]
    >>> print([x + 3 for x in a])
    [4, 5, 6]
    >>> print("a\n\nb")
    a
    <BLANKLINE>
    b

    """
    pass

Logging

For a description of logging from the users point of view, see Logging.

Logging in Brian is based on the logging module in Python’s standard library.

Every brian module that needs logging should start with the following line, using the get_logger() function to get an instance of BrianLogger:

logger = get_logger(__name__)

In the code, logging can then be done via:

logger.diagnostic('A diagnostic message')
logger.debug('A debug message')
logger.info('An info message')
logger.warn('A warning message')
logger.error('An error message')

If a module logs similar messages in different places or if it might be useful to be able to suppress a subset of messages in a module, add an additional specifier to the logging command, specifying the class or function name, or a method name including the class name (do not include the module name, it will be automatically added as a prefix):

logger.debug('A debug message', 'CodeString')
logger.debug('A debug message', 'NeuronGroup.update')
logger.debug('A debug message', 'reinit')

If you want to log a message only once, e.g. in a function that is called repeatedly, set the optional once keyword to True:

logger.debug('Will only be shown once', once=True)
logger.debug('Will only be shown once', once=True)

The output of debugging looks like this in the log file:

2012-10-02 14:41:41,484 DEBUG    brian2.equations.equations.CodeString: A debug message

and like this on the console (if the log level is set to “debug”):

DEBUG    A debug message [brian2.equations.equations.CodeString]

Log level recommendations

diagnostic
Low-level messages that are not of any interest to the normal user but useful for debugging Brian itself. A typical example is the source code generated by the code generation module.
debug
Messages that are possibly helpful for debugging the user’s code. For example, this shows which objects were included in the network, which clocks the network uses and when simulations start and stop.
info
Messages which are not strictly necessary, but are potentially helpful for the user. In particular, this will show messages about the chosen state updater and other information that might help the user to achieve better performance and/or accuracy in the simulations (e.g. using (event-driven) in synaptic equations, avoiding incompatible dt values between TimedArray and the NeuronGroup using it, ...)
warn
Messages that alert the user to a potential mistake in the code, e.g. two possible solutions for an identifier in an equation. It can also be used to make the user aware that he/she is using an experimental feature, an unsupported compiler or similar. In this case, normally the once=True option should be used to raise this warning only once. As a rule of thumb, “common” scripts like the examples provided in the examples folder should normally not lead to any warnings.
error
This log level is currently not used in Brian; an exception should be raised instead. It might be useful in “meta-code” that runs scripts and catches any errors that occur.

The default log level shown to the user is info. As a general rule, all messages that the user sees in the default configuration (i.e., info and warn level) should be avoidable by simple changes in the user code, e.g. the renaming of variables, explicitly specifying a state updater instead of relying on the automatic system, adding (clock-driven)/(event-driven) to synaptic equations, etc.

Testing log messages

It is possible to test whether code emits an expected log message using the catch_logs context manager. This is normally not necessary for debug and info messages, but should be part of the unit tests for warning messages (catch_logs by default only catches warning and error messages):

with catch_logs() as logs:
    # code that is expected to trigger a warning
    # ...
    assert len(logs) == 1
    # logs contains tuples of (log level, name, message)
    assert logs[0][0] == 'WARNING' and logs[0][1].endswith('warning_type')

Testing

Brian uses the nose package for its testing framework. To check the code coverage of the test suite, we use coverage.py.

Running the test suite

The nosetests tool automatically finds tests in the code. When brian2 is in your Python path or when you are in the main brian2 directory, you can start the test suite with:

$ nosetests brian2 --with-doctest

This should show no errors or failures but possibly a number of skipped tests. The recommended way, however, is to import brian2 and call the test function, which gives you convenient control over which tests are run:

>>> import brian2
>>> brian2.test()

By default, this runs the test suite for all available (runtime) code generation targets. If you only want to test a specific target, provide it as an argument:

>>> brian2.test('numpy')

If you want to test several targets, use a list of targets:

>>> brian2.test(['weave', 'cython'])

In addition to the tests specific to a code generation target, the test suite will also run a set of independent tests (e.g. parsing of equations, unit system, utility functions, etc.). To exclude these tests, set the test_codegen_independent argument to False. Not all available tests are run by default, tests that take a long time are excluded. To include these, set long_tests to True.

To run the C++ standalone tests, you have to set the test_standalone argument to the name of a standalone device. If you provide an empty argument for the runtime code generation targets, you will only run the standalone tests:

>>> brian2.test([], test_standalone='cpp_standalone')
Checking the code coverage

To check the code coverage under Linux (with coverage and nosetests in your path) and generate a report, use the following commands (this assumes the source code of Brian with the file .coveragerc in the directory /path/to/brian):

$ coverage run --rcfile=/path/to/brian/.coveragerc $(which nosetests) --with-doctest brian2
$ coverage report

Using coverage html you can also generate an HTML report which will end up in the directory htmlcov.

Writing tests

Generally speaking, we aim for 100% code coverage by the test suite. Less coverage means that some code paths are never executed, so there is no way of knowing whether a code change broke something in that path.

Unit tests

The most basic tests are unit tests, tests that test one kind of functionality or feature. To write a new unit test, add a function called test_... to one of the test_... files in the brian2.tests package. Test files should roughly correspond to packages, test functions should roughly correspond to tests for one function/method/feature. In the test functions, use assertions that will raise an AssertionError when they are violated, e.g.:

G = NeuronGroup(42, model='dv/dt = -v / (10*ms) : 1')
assert len(G) == 42

When comparing arrays, use the assert_equal() function from numpy.testing.utils, which takes care of comparing types, shapes and content and gives a nicer error message in case the assertion fails. Never make tests depend on external factors like random numbers – tests should always give the same result when run on the same codebase. You should not only test the expected outcome for the correct use of functions and classes but also that errors are raised when expected. For that you can use the assert_raises function (also in numpy.testing.utils) which takes an Exception type and a callable as arguments:

assert_raises(DimensionMismatchError, lambda: 3*volt + 5*second)

Note that you cannot simply write 3*volt + 5*second in the above example; this would raise an exception before calling assert_raises. Using a callable like the simple lambda expression above makes it possible for assert_raises to catch the error and compare it against the expected type. You can also check whether expected warnings are raised; see the documentation of the logging mechanism for details.

For simple functions, doctests (see below) are a great alternative to writing classical unit tests.

By default, all tests are executed for all selected code generation targets (see Running the test suite above). This is not useful for all tests: some basic tests, for example those testing equation syntax or the use of physical units, do not depend on code generation and therefore do not need to be repeated. To execute such tests only once, they can be annotated with a codegen-independent attribute, using the attr decorator:

from nose.plugins.attrib import attr
from brian2 import NeuronGroup

@attr('codegen-independent')
def test_simple():
    # Test that the length of a NeuronGroup is correct
    group = NeuronGroup(5, '')
    assert len(group) == 5

Tests that are not “codegen-independent” are by default only executed for the runtime device, i.e. not for the cpp_standalone device, for example. However, many of those tests follow a common pattern that is compatible with standalone devices as well: they set up a network, run it, and check the state of the network afterwards. Such tests can be marked as standalone-compatible, using the attr decorator in the same way as for codegen-independent tests. Since standalone devices usually have an internal state where they store information about arrays, array assignments, etc., they need to be reinitialized after such a test. For that, use the with_setup decorator and provide the restore_device function as the teardown argument:

from nose import with_setup
from nose.plugins.attrib import attr
from numpy.testing.utils import assert_equal
from brian2 import *
from brian2.devices.device import restore_device

@attr('standalone-compatible')
@with_setup(teardown=restore_device)
def test_simple_run():
    # Check that parameter values of a neuron don't change after a run
    group = NeuronGroup(5, 'v : volt')
    group.v = 'i*mV'
    run(1*ms)
    assert_equal(group.v[:], np.arange(5)*mV)

As a rule of thumb:

  • If a test does not have a run call, mark it as codegen-independent
  • If a test has only a single run and only reads state variable values after the run, mark it as standalone-compatible and register the restore_device teardown function

Tests can also be written specifically for a standalone device (they then have to include the set_device and build calls explicitly). In this case tests have to be annotated with the name of the device (e.g. 'cpp_standalone') and with 'standalone-only' to exclude this test from the runtime tests. Also, the device should be restored in the end:

from nose import with_setup
from nose.plugins.attrib import attr
from brian2 import *
from brian2.devices.device import restore_device

@attr('cpp_standalone', 'standalone-only')
@with_setup(teardown=restore_device)
def test_cpp_standalone():
    set_device('cpp_standalone')
    # set up simulation
    # run simulation
    device.build(...)
    # check simulation results
Doctests

Doctests are executable documentation. In the Examples block of a class or function documentation, simply write code copied from an interactive Python session (to do this from ipython, use %doctest_mode), e.g.:

>>> expr = 'a*_b+c5+8+f(A)'
>>> print(word_substitute(expr, {'a':'banana', 'f':'func'}))
banana*_b+c5+8+func(A)

During testing, the actual output will be compared to the expected output and an error will be raised if they don’t match. Note that this comparison is strict, e.g. trailing whitespace is not ignored. There are various ways of working around some problems that arise because of this expected exactness (e.g. the stacktrace of a raised exception will never be identical because it contains file names), see the doctest documentation for details.

Doctests can (and should) not only be used in docstrings, but also in the hand-written documentation, making sure that the examples actually work. To turn a code example into a doc test, use the .. doctest:: directive, see Equations for examples written as doctests. For all doctests, everything that is available after from brian2 import * can be used directly. For everything else, add import statements to the doctest code or – if you do not want the import statements to appear in the document – add them in a .. testsetup:: block. See the documentation for Sphinx’s doctest extension for more details.

Doctests are a great way of testing things as they not only make sure that the code does what it is supposed to do but also that the documentation is up to date!

Test attributes

As explained above, the test suite can be run with different subsets of the available tests. For this, tests have to be annotated with the attr decorator available from nose.plugins.attrib. Currently, the following attributes are understood:

  • standalone: A C++ standalone test (not run by default when calling brian2.test())
  • codegen-independent: A test that does not use any code generation (run by default)
  • long: A test that takes a long time to run (not run by default)

Attributes can be simply given as a string argument to the attr decorator:

 from nose.plugins.attrib import attr

 @attr('standalone')
 def test_for_standalone():
     pass  # ...
Correctness tests

[These do not exist yet for brian2]. Unit tests test a specific function or feature in isolation. In addition, we want to have tests where a complex piece of code (e.g. a complete simulation) is tested. Even if it is sometimes impossible to really check whether the result is correct (e.g. in the case of the spiking activity of a complex network), a useful check is also whether the result is consistent. For example, the spiking activity should be the same when using code generation for Python or C++. Or, a network could be pickled before running and then the result of the run could be compared to a second run that starts from the unpickled network.

Units

Casting rules

In Brian 1, a distinction was made between scalars and numpy arrays (including scalar arrays): scalars could be multiplied with a unit, resulting in a Quantity object, whereas the multiplication of an array with a unit resulted in a (unitless) array. Accordingly, scalars were considered as dimensionless quantities for the purpose of unit checking (e.g. 1 + 1 * mV raised an error) whereas arrays were not (e.g. array(1) + 1 * mV resulted in 1.001 without any errors). Brian 2 no longer makes this distinction: it treats both scalars and arrays as dimensionless for unit checking and makes all operations involving quantities return a quantity:

>>> 1 + 1*second   
Traceback (most recent call last):
...
DimensionMismatchError: Cannot calculate 1. s + 1, units do not match (units are second and 1).

>>> np.array([1]) + 1*second   
Traceback (most recent call last):
...
DimensionMismatchError: Cannot calculate 1. s + [1], units do not match (units are second and 1).

>>> 1*second + 1*second
2. * second
>>> np.array([1])*second + 1*second
array([ 2.]) * second

As one exception from this rule, a scalar or array 0 is considered as having “any unit”, i.e. 0 + 1 * second will result in 1 * second without a dimension mismatch error and 0 == 0 * mV will evaluate to True. This seems reasonable from a mathematical viewpoint and makes some sources of error disappear. For example, the Python builtin sum (not numpy’s version) adds the value of the optional argument start, which defaults to 0, to its main argument. Without this exception, sum([1 * mV, 2 * mV]) would therefore raise an error.
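
A few examples of this rule in action (following the repr conventions used above):

>>> 0 + 1*second   # 0 is compatible with any unit
1. * second
>>> 0 == 0*mV
True
>>> sum([1*mV, 2*mV])   # works because sum starts adding at 0
3. * mvolt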

The above rules also apply to all comparisons (e.g. == or <) with one further exception: inf and -inf also have “any unit”, therefore an expression like v <= inf will never raise an exception (and always return True).

Functions and units

ndarray methods

All methods that make sense on quantities should work, i.e. they check for the correct units of their arguments and return quantities with units where appropriate. Most of the methods are overwritten using thin function wrappers:

wrap_function_keep_dimensions:
Strips away the units before giving the array to the method of ndarray, then reattaches the unit to the result (examples: sum, mean, max)
wrap_function_change_dimensions:
Changes the dimensions in a simple way that is independent of function arguments, the shape of the array, etc. (examples: sqrt, var, power)
wrap_function_dimensionless:
Raises an error if the method is called on a quantity with dimensions (i.e. it works on dimensionless quantities).
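
As a rough sketch of the idea behind these wrappers (the actual implementations live in brian2.units.fundamentalunits and handle more corner cases):

import numpy as np
from brian2.units.fundamentalunits import Quantity

def wrap_function_keep_dimensions(func):
    def f(x, *args, **kwds):
        # compute on the unitless data, then reattach the original dimensions
        result = func(np.asarray(x), *args, **kwds)
        return Quantity(result, dim=x.dim)
    f.__name__ = func.__name__
    f.__doc__ = func.__doc__
    return f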

List of methods

all, any, argmax, argsort, clip, compress, conj, conjugate, copy, cumsum, diagonal, dot, dump, dumps, fill, flatten, getfield, item, itemset, max, mean, min, newbyteorder, nonzero, prod, ptp, put, ravel, repeat, reshape, round, searchsorted, setasflat, setfield, setflags, sort, squeeze, std, sum, take, tolist, trace, transpose, var, view

Notes

  • Methods directly working on the internal data buffer (setfield, getfield, newbyteorder) ignore the dimensions of the quantity.
  • The type of a quantity cannot be int, therefore astype does not quite work when trying to convert the array into integers.
  • choose is only defined for integer arrays and therefore does not work
  • tostring and tofile only return/save the pure array data without the unit (but you can use dump or dumps to pickle a quantity array)
  • resize does not work: ValueError: cannot resize this array: it does not own its data
  • cumprod would result in different dimensions for different elements and is therefore forbidden
  • item returns a pure Python float by definition
  • itemset does not check for units
Numpy ufuncs

All of the standard numpy ufuncs (functions that operate element-wise on numpy arrays) are supported, meaning that they check for correct units and return appropriate arrays. These functions are often called implicitly, for example when using operators like < or **.

Math operations:
add, subtract, multiply, divide, logaddexp, logaddexp2, true_divide, floor_divide, negative, power, remainder, mod, fmod, absolute, rint, sign, conj, conjugate, exp, exp2, log, log2, log10, expm1, log1p, sqrt, square, reciprocal, ones_like
Trigonometric functions:
sin, cos, tan, arcsin, arccos, arctan, arctan2, hypot, sinh, cosh, tanh, arcsinh, arccosh, arctanh, deg2rad, rad2deg
Bitwise functions:
bitwise_and, bitwise_or, bitwise_xor, invert, left_shift, right_shift
Comparison functions:
greater, greater_equal, less, less_equal, not_equal, equal, logical_and, logical_or, logical_xor, logical_not, maximum, minimum
Floating functions:
isreal, iscomplex, isfinite, isinf, isnan, floor, ceil, trunc, fmod

Not taken care of yet: signbit, copysign, nextafter, modf, ldexp, frexp

Notes

  • Everything involving log or exp, as well as the trigonometric functions, only works on dimensionless arrays (for arctan2 and hypot this is questionable, though)
  • Unit arrays can only be raised to a scalar power, not to an array of exponents as this would lead to differing dimensions across entries. For simplicity, this is enforced even for dimensionless quantities.
  • Bitwise functions never work on quantities (numpy will by itself throw a TypeError because they are floats, not integers).
  • All comparisons only work for matching dimensions (with the exception of always allowing comparisons to 0) and return a pure boolean array.
  • All logical functions treat quantities as boolean values in the same way as floats are treated as boolean: Any non-zero value is True.
Numpy functions

Many numpy functions are functional versions of ndarray methods (e.g. mean, sum, clip). They therefore work automatically when called on quantities, as numpy propagates the call to the respective method.

There are some functions in numpy that do not propagate their call to the corresponding method (because they use np.asarray instead of np.asanyarray, which might actually be a bug in numpy): trace, diagonal, ravel, dot. For these, wrapped functions in unitsafefunctions.py are provided.
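
A simplified sketch of the kind of wrapping this requires (the actual code in unitsafefunctions.py is more general):

import numpy as np
from brian2.units.fundamentalunits import Quantity

def ravel(x, *args, **kwds):
    if isinstance(x, Quantity):
        # call the method on the Quantity, which correctly preserves units
        return x.ravel(*args, **kwds)
    return np.ravel(x, *args, **kwds)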

Wrapped numpy functions in unitsafefunctions.py

These functions are thin wrappers around the numpy functions to correctly check for units and return quantities when appropriate:

log, exp, sin, cos, tan, arcsin, arccos, arctan, sinh, cosh, tanh, arcsinh, arccosh, arctanh, diagonal, ravel, trace, dot

numpy functions that work unchanged

This includes all functional counterparts of the methods mentioned above (with the exceptions mentioned above). Some other functions also work correctly, as they are only using functions/methods that work with quantities:

  • linspace, diff, digitize [1]
  • trim_zeros, fliplr, flipud, roll, rot90, shuffle
  • corrcoef [1]

[1] But does not care about the units of its input.

numpy functions that return a pure numpy array instead of quantities

  • arange
  • cov
  • random.permutation
  • histogram, histogram2d
  • cross, inner, outer
  • where

numpy functions that do something wrong

  • insert, delete (return a quantity array but without units)
  • correlate (returns a quantity with wrong units)
  • histogramdd (raises a DimensionMismatchError)
User-defined functions and units

For performance and simplicity reasons, code within the Brian core does not use Quantity objects but unitless numpy arrays instead. See Adding support for new functions for details on how to make user-defined functions work with Brian’s unit system.

Equations and namespaces

Equation parsing

Parsing is done via pyparsing; for now, the grammar can be found at the top of the brian2.equations.equations module.

Variables

Each Brian object that saves state variables (e.g. NeuronGroup, Synapses, StateMonitor) has a variables attribute, a dictionary mapping variable names to Variable objects (in fact a Variables object, not a simple dictionary). Variable objects contain information about the variable (name, dtype, units) as well as access to the variable’s value via a get_value method. Some will also allow setting the values via a corresponding set_value method. These objects can therefore act as proxies to the variables’ “contents”.

Variable objects provide the “abstract namespace” corresponding to a chunk of “abstract code”, they are all that is needed to check for syntactic correctness, unit consistency, etc.

Namespaces

The namespace attribute of a group can contain information about the external (variable or function) names used in the equations. It specifies a group-specific namespace used for resolving names in that group. At run time, this namespace is combined with a “run namespace”: this namespace is either explicitly provided to the Network.run() method, or the implicit namespace consisting of the locals and globals around the point where the run function is called is used. This namespace is then passed down to all the objects via Network.before_run(), which calls the individual BrianObject.before_run() methods with this namespace.
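
For illustration, a short sketch of these namespace levels (the equations and values are chosen arbitrarily):

from brian2 import *

tau = 10*ms
G = NeuronGroup(1, 'dv/dt = -v / tau : 1')   # tau is resolved at run time

# implicit namespace: tau is looked up in the locals/globals around run()
run(1*ms)

# explicit run namespace, taking the place of the implicit one
run(1*ms, namespace={'tau': 20*ms})

# group-specific namespace, consulted before the run namespace
G2 = NeuronGroup(1, 'dv/dt = -v / tau : 1', namespace={'tau': 5*ms})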

Variables and indices

Introduction

To be able to generate the proper code out of abstract code statements, the code generation process has to have access to information about the variables (their type, size, etc.) as well as to the indices that should be used for indexing arrays (e.g. a state variable of a NeuronGroup will be indexed differently in the NeuronGroup state updater and in synaptic propagation code). Most of this information is stored in the variables attribute of a VariableOwner (this includes NeuronGroup, Synapses, PoissonGroup and everything else that has state variables). The variables attribute can be accessed as a (read-only) dictionary, mapping variable names to Variable objects storing the information about the respective variable. However, it is not a simple dictionary but an instance of the Variables class. Let’s have a look at its content for a simple example:

>>> tau = 10*ms
>>> G = NeuronGroup(10, 'dv/dt = -v / tau : volt')
>>> for name, var in G.variables.items():
...     print('%r : %s' % (name, var))
...
'_spikespace' : <ArrayVariable(unit=Unit(1),  dtype=<type 'numpy.int32'>, scalar=False, constant=False, read_only=False)>
'i' : <ArrayVariable(unit=Unit(1),  dtype=<type 'numpy.int32'>, scalar=False, constant=True, read_only=True)>
'N' : <Constant(unit=Unit(1),  dtype=<type 'numpy.int64'>, scalar=True, constant=True, read_only=True)>
't' : <ArrayVariable(unit=second,  dtype=<type 'numpy.float64'>, scalar=True, constant=False, read_only=True)>
'v' : <ArrayVariable(unit=volt,  dtype=<type 'numpy.float64'>, scalar=False, constant=False, read_only=False)>
'dt' : <ArrayVariable(unit=second,  dtype=<type 'float'>, scalar=True, constant=True, read_only=True)>

The state variable v we specified for the NeuronGroup is represented as an ArrayVariable; all the other variables were added automatically. By convention, internal names for variables that should not be directly accessed by the user start with an underscore; in the above example the only variable of this kind is '_spikespace', the internal data structure used to store the spikes that occurred in the current time step. There is another array i, the neuronal indices (simply an array of integers from 0 to 9), which is used for string expressions involving neuronal indices. The constant N represents the total number of neurons. At first sight it might be surprising that t, the current time of the clock, and dt, its timestep, are ArrayVariable objects as well. This is because those values can change during a run (for t) or between runs (for dt), and storing them as arrays with a single value (note the scalar=True) is the easiest way to share this value: all code accessing it only needs a reference to the array and can access its only element.

The information stored in the Variable objects is used to do various checks on the level of the abstract code, i.e. before any programming language code is generated. Here are some examples of errors that are caught this way:

>>> G.v = 3*ms  # G.variables['v'].unit is volt   
Traceback (most recent call last):
...
DimensionMismatchError: v should be set with a value with units volt, but got 3. ms (unit is second).
>>> G.N = 5  # G.variables['N'] is read-only
Traceback (most recent call last):
...
TypeError: Variable N is read-only
>>> G2 = NeuronGroup(10, 'dv/dt = -v / tau : volt', threshold='v')  # G2.variables['v'].is_bool is False
Traceback (most recent call last):
...
TypeError: Threshold condition "v" is not a boolean expression

Creating variables

Each variable that should be accessible as a state variable and/or should be available for use in abstract code has to be created as a Variable. For this, first a Variables container with a reference to the group has to be created; individual variables can then be added using the various add_... methods:

self.variables = Variables(self)
self.variables.add_array('an_array', unit=volt, size=100)
self.variables.add_constant('N', unit=Unit(1), value=self._N, dtype=np.int32)
self.variables.create_clock_variables(self.clock)

Array variables can additionally be created with a specific index (see Indices below), as sketched below.
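
A hedged sketch of such a call (the variable name and the choice of index here are made up for the example):

# Hypothetical: make generated code index this array with the
# postsynaptic index instead of the group's natural index '_idx'
self.variables.add_array('v_target', unit=volt, size=100,
                         index='_postsynaptic_idx')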

References

For each variable, only one Variable object exists even if it is used in different contexts. Let’s consider the following example:

G = NeuronGroup(5, 'dv/dt = -v / tau : volt')
subG = G[2:]
S = Synapses(G, G, on_pre='v+=1*mV')
S.connect()

All of these objects allow access to the state variable v (note the different shapes; these arise from the different indices used, see below):

>>> G.v
<neurongroup.v: array([ 0.,  0.,  0.,  0.,  0.]) * volt>
>>> subG.v
<neurongroup_subgroup.v: array([ 0.,  0.,  0.]) * volt>
>>> S.v
<synapses.v: array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
    0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]) * volt>

In all of these cases, the Variables object stores references to the same ArrayVariable object:

>>> id(G.variables['v'])
108610960
>>> id(subG.variables['v'])
108610960
>>> id(S.variables['v'])
108610960

Such a reference can be added using Variables.add_reference. Note that the name used for the reference is not necessarily the same as in the original group; e.g. in the above example, S.variables also stores references to v under the names v_pre and v_post.
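
For illustration, a sketch of how such references might be added in Synapses-like code (the exact calls in Brian’s source may differ):

# Store references to the source/target group's 'v' under new names,
# indexed with the appropriate synaptic indices
self.variables.add_reference('v_pre', self.source, 'v',
                             index='_presynaptic_idx')
self.variables.add_reference('v_post', self.target, 'v',
                             index='_postsynaptic_idx')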

Indices

In subgroups and especially in synapses, the transformation of abstract code into executable code is not straightforward because it can involve variables from different contexts. Here is a simple example:

G = NeuronGroup(5, 'dv/dt = -v / tau : volt')
S = Synapses(G, G, 'w : volt', on_pre='v+=w')

The seemingly trivial operation v+=w involves the variable v of the NeuronGroup and the variable w of the Synapses object, which have to be indexed in the appropriate way. Since this statement is executed in the context of S, the variable indices stored there are relevant:

>>> S.variables.indices['w']
'_idx'
>>> S.variables.indices['v']
'_postsynaptic_idx'

The index _idx has a special meaning and always refers to the “natural” index for a group (e.g. all neurons for a NeuronGroup, all synapses for a Synapses object, etc.). All other indices have to refer to existing arrays:

>>> S.variables['_postsynaptic_idx']
<DynamicArrayVariable(unit=Unit(1),  dtype=<type 'numpy.int32'>, scalar=False, constant=False, is_bool=False, read_only=False)>

In this case, _postsynaptic_idx refers to a dynamic array that stores the postsynaptic targets for each synapse. Since it is an array itself, it also has an index; it is defined for each synapse, so its index is _idx. In fact, there is currently no support for an additional level of indirection in Brian: a variable representing an index has to have _idx as its own index. Using this index information, the following C++ code (slightly simplified) is generated:

for(int _spiking_synapse_idx=0;
    _spiking_synapse_idx<_num_spiking_synapses;
    _spiking_synapse_idx++)
{
    const int _idx = _spiking_synapses[_spiking_synapse_idx];
    const int _postsynaptic_idx = _ptr_array_synapses__synaptic_post[_idx];
    const double w = _ptr_array_synapses_w[_idx];
    double v = _ptr_array_neurongroup_v[_postsynaptic_idx];
    v += w;
    _ptr_array_neurongroup_v[_postsynaptic_idx] = v;
}

In this case, the “natural” index _idx iterates over all the synapses that received a spike (this is defined in the template) and _postsynaptic_idx refers to the postsynaptic targets for these synapses. The variables w and v are then pulled out of their respective arrays with these indices so that the statement v += w; does the right thing.

Getting and setting state variables

When a state variable is accessed (e.g. using G.v), the group does not return a reference to the underlying array itself but instead to a VariableView object. This is because a state variable can be accessed in different contexts and indexing it with a number/array (e.g. obj.v[0]) or a string (e.g. obj.v['i>3']) can refer to different values in the underlying array depending on whether the object is the NeuronGroup, a Subgroup or a Synapses object.

The __setitem__ and __getitem__ methods in VariableView delegate to VariableView.set_item and VariableView.get_item respectively (which can also be called directly under special circumstances). They analyze the arguments (is the index a number, a slice or a string? Is the target value an array or a string expression?) and delegate the actual retrieval/setting of the values to a specific method:

  • Getting with a numerical (or slice) index (e.g. G.v[0]): VariableView.get_with_index_array
  • Getting with a string index (e.g. G.v['i>3']): VariableView.get_with_expression
  • Setting with a numerical (or slice) index and a numerical target value (e.g. G.v[5:] = -70*mV): VariableView.set_with_index_array
  • Setting with a numerical (or slice) index and a string expression value (e.g. G.v[5:] = '(-70 + i)*mV'): VariableView.set_with_expression
  • Setting with a string index and a string expression value (e.g. G.v['i>5'] = '(-70 + i)*mV'): VariableView.set_with_expression_conditional

These methods are annotated with the device_override decorator and can therefore be implemented in a different way in certain devices. The standalone device, for example, overrides all the getting functions and the setting with index arrays. Note that for standalone devices, the “setter” methods do not actually set the values but only note them down for later code generation.
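
The five cases above map onto user-level code as follows (a minimal sketch):

from brian2 import *

G = NeuronGroup(10, 'v : volt')
print(G.v[0])                  # get_with_index_array
print(G.v['i>3'])              # get_with_expression
G.v[5:] = -70*mV               # set_with_index_array
G.v[5:] = '(-70 + i)*mV'       # set_with_expression
G.v['i>5'] = '(-70 + i)*mV'    # set_with_expression_conditional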

Additional variables and indices

The variables stored in the variables attribute of a VariableOwner can be used everywhere (e.g. in the state updater, in the threshold, the reset, etc.). Objects that depend on these variables, e.g. the Thresholder of a NeuronGroup, add additional variables, in particular AuxiliaryVariables that are automatically added to the abstract code: a threshold condition v > 1 is converted into the statement _cond = v > 1. To specify the meaning of the variable _cond for the code generation stage (in particular, C++ code generation needs to know the data type), an AuxiliaryVariable object is created.

In some rare cases, a specific variable_indices dictionary is provided that overrides the indices for variables stored in the variables attribute. This is necessary for synapse creation because the meaning of the variables changes in this context: an expression v>0 does not refer to the v variable of all the connected postsynaptic neurons, as it does under other circumstances in the context of a Synapses object, but to the v variable of all possible targets.

Preferences system

Each preference looks like codegen.c.compiler, i.e. a dotted name. Each preference has to be registered and validated. The idea is that registering all preferences ensures that a misspelled preference name raises an error, e.g. writing codgen.c.compiler would fail. Validation means that the value is checked for validity, so codegen.c.compiler = 'gcc' would be allowed, but codegen.c.compiler = 'hcc' would cause an error.

An additional requirement is that the preferences system allows for extension modules to define their own preferences, including extending the existing core brian preferences. For example, an extension might want to define extension.* but it might also want to define a new language for codegen, e.g. codegen.lisp.*. However, extensions cannot add preferences to an existing category.

Accessing and setting preferences

Preferences can be accessed and set either in a keyword-based or an attribute-based way. To set/get the value for the preference example mentioned before, the following are equivalent:

prefs['codegen.c.compiler'] = 'gcc'
prefs.codegen.c.compiler = 'gcc'

if prefs['codegen.c.compiler'] == 'gcc':
    ...
if prefs.codegen.c.compiler == 'gcc':
    ...

Using the attribute-based form can be particularly useful for interactive work, e.g. in ipython, as it offers autocompletion and documentation. In ipython, prefs.codegen.c? would display a docstring with all the preferences available in the codegen.c category.

Preference files

Preferences are stored in a hierarchy of files, with the following order (each step overrides the values in the previous step but no error is raised if one is missing):

  • The global defaults are stored in the installation directory.
  • The user defaults are stored in ~/.brian/preferences (which works on Windows as well as Linux).
  • The file brian_preferences in the current directory.

Registration

Registration of preferences is performed by a call to BrianGlobalPreferences.register_preferences, e.g.:

register_preferences(
    'codegen.c',
    'Code generation preferences for the C language',
    compiler=BrianPreference(
        validator=is_compiler,
        docs='...',
        default='gcc'),
    ...
    )

The first argument 'codegen.c' is the base name, and every preference of the form codegen.c.* has to be registered by this function (preferences in subcategories such as codegen.c.somethingelse.* have to be specified separately). In other words, by calling register_preferences, a module takes ownership of all the preferences with one particular base name. The second argument is a descriptive text explaining what this category is about. The preferences themselves are provided as keyword arguments, each set to a BrianPreference object.

Validation functions

A validation function takes a value for the preference and returns True (if the value is valid) or False. If no validation function is specified, a default validator is used that compares the value against the default value: both should belong to the same class (e.g. int or str) and, in the case of a Quantity, have the same unit.
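
For example, a minimal sketch of what the is_compiler validator from the registration example above might look like:

def is_compiler(value):
    # Accept only the compiler names this category knows how to handle
    return value in ('gcc', 'msvc')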

Validation

Setting the value of a preference with a registered base name instantly triggers validation. Trying to set an unregistered preference using keyword or attribute access raises an error. The only exception to this rule is when the preferences are read from configuration files (see below). Since this happens before the user has had the chance to import extensions that potentially define new preferences, a special function (_set_preference) is used. In this case, for base names that are not yet registered, validation occurs when the base name is registered. If, at the time Network.run() is called, there are still unregistered preferences set, a PreferenceError is raised.

File format

The preference files are of the following form:

a.b.c = 1
# Comment line
[a]
b.d = 2
[a.b]
e = 3

This would set preferences a.b.c=1, a.b.d=2 and a.b.e=3.

Built-in preferences

Brian itself defines the following preferences:

codegen

Code generation preferences

codegen.loop_invariant_optimisations = True

Whether to pull scalar expressions out of the statements, so that they are only evaluated once instead of once for every neuron/synapse/... Can be switched off, e.g. because it complicates the code (and the same optimisation is already performed by the compiler) or because the code generation target does not deal well with it. Defaults to True.
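
For illustration, a statement such as

v += dt * (v0 - v) / tau

contains the scalar subexpression dt/tau, which is the same for every neuron. With this optimisation switched on, it is transformed into something like (the name _lio_1 is only illustrative of the generated temporary):

_lio_1 = dt / tau
v += _lio_1 * (v0 - v)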

codegen.string_expression_target = 'numpy'

Default target for the evaluation of string expressions (e.g. when indexing state variables). Should normally not be changed from the default numpy target, because the overhead of compiling code is not worth the speed gain for simple expressions.

Accepts the same arguments as codegen.target, except for 'auto'.

codegen.target = 'auto'

Default target for code generation.

Can be a string, in which case it should be one of:

  • 'auto': the default; automatically chooses the best available code generation target.
  • 'weave': uses scipy.weave to generate and compile C++ code; should work anywhere where gcc is installed and available at the command line.
  • 'cython': uses the Cython package to generate C++ code; needs a working installation of Cython and a C++ compiler.
  • 'numpy': works on all platforms and doesn’t need a C compiler, but is often less efficient.

Or it can be a CodeObject class.

codegen.cpp

C++ compilation preferences

codegen.cpp.compiler = ''

Compiler to use (uses default if empty)

Should be gcc or msvc.

codegen.cpp.define_macros = []

List of macros to define; each macro is defined using a 2-tuple (name, value), where value is either the string to define it to or None to define it without a particular value (equivalent of "#define FOO" in source or -DFOO on the Unix C compiler command line).

codegen.cpp.extra_compile_args = None

Extra arguments to pass to compiler (if None, use either extra_compile_args_gcc or extra_compile_args_msvc).

codegen.cpp.extra_compile_args_gcc = ['-w', '-O3', '-ffast-math', '-fno-finite-math-only', '-march=native']

Extra compile arguments to pass to GCC compiler

codegen.cpp.extra_compile_args_msvc = ['/Ox', '/w', '/arch:SSE2']

Extra compile arguments to pass to MSVC compiler (the default /arch: flag is determined based on the processor architecture)

codegen.cpp.extra_link_args = []

Any extra platform- and compiler-specific information to use when linking object files together.

codegen.cpp.headers = []

A list of strings specifying header files to use when compiling the code. The list might look like ["<vector>", "'my_header'"]. Note that the header strings need to be in a form that can be pasted at the end of a #include statement in the C++ code.

codegen.cpp.include_dirs = []

Include directories to use. Note that $prefix/include will be appended to the end automatically, where $prefix is Python’s site-specific directory prefix as returned by sys.prefix.

codegen.cpp.libraries = []

List of library names (not filenames or paths) to link against.

codegen.cpp.library_dirs = []

List of directories to search for C/C++ libraries at link time. Note that $prefix/lib will be appended to the end automatically, where $prefix is Python’s site-specific directory prefix as returned by sys.prefix.

codegen.cpp.msvc_architecture = ''

MSVC architecture name (or use the system architecture by default).

Could take values such as x86, amd64, etc.

codegen.cpp.msvc_vars_location = ''

Location of the MSVC command line tool (or search for best by default).

codegen.cpp.runtime_library_dirs = []

List of directories to search for C/C++ libraries at run time.

codegen.generators

Codegen generator preferences (see subcategories for individual languages)

codegen.generators.cpp

C++ codegen preferences

codegen.generators.cpp.flush_denormals = False

Adds code to flush denormals to zero.

The code is gcc- and architecture-specific, so it may not compile on all platforms. The code, for reference, is:

#define CSR_FLUSH_TO_ZERO         (1 << 15)
unsigned csr = __builtin_ia32_stmxcsr();
csr |= CSR_FLUSH_TO_ZERO;
__builtin_ia32_ldmxcsr(csr);

Found at http://stackoverflow.com/questions/2487653/avoiding-denormal-values-in-c.

codegen.generators.cpp.restrict_keyword = '__restrict'

The keyword used for the given compiler to declare pointers as restricted.

This keyword differs between compilers; the default works for gcc and MSVC.

codegen.runtime

Runtime codegen preferences (see subcategories for individual targets)

codegen.runtime.cython

Cython runtime codegen preferences

codegen.runtime.cython.cache_dir = None

Location of the cache directory for Cython files. By default, will be stored in a brian_extensions subdirectory where Cython inline stores its temporary files (the result of get_cython_cache_dir()).

codegen.runtime.cython.multiprocess_safe = True

Whether to use a lock file to prevent simultaneous write access to cython .pyx and .so files.

codegen.runtime.numpy

Numpy runtime codegen preferences

codegen.runtime.numpy.discard_units = False

Whether to change the namespace of user-specified functions to remove units.

core

Core Brian preferences

core.default_float_dtype = float64

Default dtype for all arrays of scalars (state variables, weights, etc.).

Currently, this is not supported (only float64 can be used).

core.default_integer_dtype = int32

Default dtype for all arrays of integer scalars.

core.outdated_dependency_error = True

Whether to raise an error for outdated dependencies (True) or just a warning (False).

core.network

Network preferences

core.network.default_schedule = ['start', 'groups', 'thresholds', 'synapses', 'resets', 'end']

Default schedule used for networks that don’t specify a schedule.

devices

Device preferences

devices.cpp_standalone

C++ standalone preferences

devices.cpp_standalone.openmp_spatialneuron_strategy = None

Which strategy to choose for solving the three tridiagonal systems with OpenMP: 'branches' means solving the three systems sequentially, but for all the branches in parallel, while 'systems' means solving the three systems in parallel, but all the branches within each system sequentially. The 'branches' approach is usually better for morphologies with many branches and a large number of threads, while the 'systems' strategy should be better for morphologies with few branches (e.g. cables) and/or simulations with no more than three threads. If not specified (the default), the 'systems' strategy will be used when using no more than three threads or when the morphology has fewer than three branches in total.

devices.cpp_standalone.openmp_threads = 0

The number of threads to use if OpenMP is turned on. By default, this value is set to 0 and the C++ code is generated without any reference to OpenMP. If greater than 0, the corresponding number of threads is used to launch the simulation.

logging

Logging system preferences

logging.console_log_level = 'INFO'

What log level to use for the log written to the console.

Has to be one of CRITICAL, ERROR, WARNING, INFO, DEBUG or DIAGNOSTIC.

logging.delete_log_on_exit = True

Whether to delete the log and script file on exit.

If set to True (the default), log files (and the copy of the main script) will be deleted after the brian process has exited, unless an uncaught exception occurred. If set to False, all log files will be kept.

logging.file_log = True

Whether to log to a file or not.

If set to True (the default), logging information will be written to a file. The log level can be set via the logging.file_log_level preference.

logging.file_log_level = 'DIAGNOSTIC'

What log level to use for the log written to the log file.

In case file logging is activated (see logging.file_log), which log level should be used for logging. Has to be one of CRITICAL, ERROR, WARNING, INFO, DEBUG or DIAGNOSTIC.

logging.save_script = True

Whether to save a copy of the script that is run.

If set to True (the default), a copy of the currently run script is saved to a temporary location. It is deleted after a successful run (unless logging.delete_log_on_exit is False) but is kept if an uncaught exception occurs. This can be helpful for debugging, in particular when several simulations are running in parallel.

logging.std_redirection = True

Whether or not to redirect stdout/stderr to null at certain places.

This silences a lot of annoying compiler output, but will also hide error messages, making it harder to debug problems. You can always temporarily switch it off when debugging. If logging.std_redirection_to_file is set to True as well, then the output is saved to a file and if an error occurs the name of this file will be printed.

logging.std_redirection_to_file = True

Whether to redirect stdout/stderr to a file.

If both logging.std_redirection and this preference are set to True, all standard output/error (most importantly output from the compiler) will be stored in files and if an error occurs the name of this file will be printed. If logging.std_redirection is True and this preference is False, then all standard output/error will be completely suppressed, i.e. neither be displayed nor stored in a file.

The value of this preference is ignored if logging.std_redirection is set to False.

Adding support for new functions

For a description of Brian’s function system from the user point of view, see Functions.

The default functions available in Brian are stored in the DEFAULT_FUNCTIONS dictionary. New Function objects can be added to this dictionary to make them available to all Brian code, independent of its namespace.
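
For example, a hedged sketch of adding a (hypothetical) dimensionless sinc function; the exact Function arguments may differ between versions:

import numpy as np
from brian2.core.functions import DEFAULT_FUNCTIONS, Function

def sinc(x):
    return np.sinc(x)

# arg_units=[1] and return_unit=1 mark argument and result as
# dimensionless
DEFAULT_FUNCTIONS['sinc'] = Function(sinc, arg_units=[1], return_unit=1)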

To add a new implementation for a code generation target, a FunctionImplementation can be added to the Function.implementations dictionary. The key for this dictionary has to be either a CodeGenerator class object, or a CodeObject class object. The CodeGenerator of a CodeObject (e.g. CPPCodeGenerator for WeaveCodeObject) is used as a fallback if no implementation specific to the CodeObject class exists.

If a function with the same name is already provided for the target language (e.g. it is part of a library imported by default), all that is needed is to add an empty FunctionImplementation object to mark the function as implemented. For example, exp is a standard function in C++:

DEFAULT_FUNCTIONS['exp'].implementations[CPPCodeGenerator] = FunctionImplementation()

Some functions are implemented but have a different name in the target language. In this case, the FunctionImplementation object only has to specify the new name:

DEFAULT_FUNCTIONS['arcsin'].implementations[CPPCodeGenerator] = FunctionImplementation('asin')

Finally, the function might not exist in the target language at all. In this case, the code for the function has to be provided; the exact form of this code is language-specific. In the case of C++, it’s a dictionary of code blocks:

clip_code = {'support_code': '''
        double _clip(const float value, const float a_min, const float a_max)
        {
                if (value < a_min)
                    return a_min;
                if (value > a_max)
                    return a_max;
                return value;
        }
        '''}
DEFAULT_FUNCTIONS['clip'].implementations[CPPCodeGenerator] = FunctionImplementation('_clip',
                                                                                     code=clip_code)

Code generation

The generation of a code snippet is done by a CodeGenerator class. The templates are stored in the CodeObject.templater attribute, which is typically implemented as a subdirectory of templates. The compilation and running of code is done by a CodeObject. See the sections below for each of these.

Code path

The following gives an outline of the key steps that happen for the code generation associated with a NeuronGroup StateUpdater. The items in grey are Brian core functions and methods and do not need to be implemented to create a new code generation target or device. The parts in yellow are used when creating a new device. The parts in green relate to generating code snippets from abstract code blocks. The parts in blue relate to creating new templates which these snippets are inserted into. The parts in red relate to creating new runtime behaviour (compiling and running generated code).

[Figure: codegen_code_paths.png, an overview of the code generation code path]

In brief, what happens can be summarised as follows. Network.run() will call BrianObject.before_run() on each of the objects in the network. Objects such as StateUpdater, which is a subclass of CodeRunner, use this spot to generate and compile their code. The process for doing this is to first create the abstract code block, done in the StateUpdater.update_abstract_code method. Then, a CodeObject is created with this code block. In doing so, Brian will call out to the currently active Device to get the CodeObject and CodeGenerator classes associated with the device, and this hierarchy of calls gives several hooks which can be changed to implement new targets.

Code generation

To implement a new language, or variant of an existing language, derive a class from CodeGenerator. Good examples to look at are the NumpyCodeGenerator, CPPCodeGenerator and CythonCodeGenerator classes in the brian2.codegen.generators package. Each CodeGenerator has a class_name attribute which is a string used by the user to refer to this code generator (for example, when defining function implementations).

The derived CodeGenerator class should implement the methods marked as NotImplemented in the base CodeGenerator class. CodeGenerator also has several handy utility methods to make it easier to write these, see the existing examples to get an idea of how these work.

Syntax translation

One aspect of writing a new language is that sometimes you need to translate from Python syntax into the syntax of another language. You are free to do this however you like, but we recommend using a NodeRenderer class which allows you to iterate over the abstract syntax tree of an expression. See examples in brian2.parsing.rendering.

Templates

In addition to snippet generation, you need to create templates for the new language. See the templates directories in brian2.codegen.runtime.* for examples of these. They are written in the Jinja2 templating system. The location of these templates is set as the CodeObject.templater attribute. Examples such as CPPCodeObject show how this is done.

Code objects

To allow the final code block to be compiled and run, derive a class from CodeObject. This class should implement the placeholder methods defined in the base class. The class should also have the attributes templater (which should be a Templater object pointing to the directory where the templates are stored), generator_class (which should be the CodeGenerator class), and class_name (which should be a string the user can use to refer to this code generation target).

Default functions

You will typically want to implement the default functions such as the trigonometric, exponential and rand functions. We usually put these implementations either in the same module as the CodeGenerator class or the CodeObject class, depending on whether they are language-specific or runtime-target-specific. See those modules for examples of implementing these functions.

Code guide

  • brian2.codegen: everything related to code generation
  • brian2.codegen.generators: snippet generation, including the CodeGenerator classes and default function implementations.
  • brian2.codegen.runtime: templates, compilation and running of code, including CodeObject and default function implementations.
  • brian2.core.functions, brian2.core.variables: these define the values that variable names can have.
  • brian2.parsing: tools for parsing expressions, etc.
  • brian2.parsing.rendering: AST tools for rendering expressions in Python into different languages.
  • brian2.utils: various tools for string manipulation, file management, etc.

Additional information

For some additional (older, but still accurate) notes on code generation:

Older notes on code generation

The following is an outline of how the Brian 2 code generation system works, with indicators as to which packages to look at and which bits of code to read for a clearer understanding.

We illustrate the global process with an example, the creation and running of a single NeuronGroup object:

  • Parse the equations, add refractoriness to them: this isn’t really part of code generation.
  • Allocate memory for the state variables.
  • Create Thresholder, Resetter and StateUpdater objects.
    • Determine all the variable and function names used in the respective abstract code blocks and templates
    • Determine the abstract namespace, i.e. determine a Variable or Function object for each name.
    • Create a CodeObject based on the abstract code, template and abstract namespace. This will generate code in the target language and the namespace in which the code will be executed.
  • At runtime, each object calls CodeObject.__call__() to execute the code.

Stages of code generation

Equations to abstract code

In the case of Equations, the set of equations are combined with a numerical integration method to generate an abstract code block (see below) which represents the integration code for a single time step.

An example of this would be converting the following equations:

eqs = '''
dv/dt = (v0-v)/tau : volt (unless refractory)
v0 : volt
'''
group = NeuronGroup(N, eqs, threshold='v>10*mV',
                    reset='v=0*mV', refractory=5*ms)

into the following abstract code using the exponential_euler method (which is selected automatically):

not_refractory = 1*((t - lastspike) > 0.005000)
_BA_v = -v0
_v = -_BA_v + (_BA_v + v)*exp(-dt*not_refractory/tau)
v = _v

The code for this stage can be seen in NeuronGroup.__init__(), StateUpdater.__init__, and StateUpdater.update_abstract_code (in brian2.groups.neurongroup), and the StateUpdateMethod classes defined in the brian2.stateupdaters package.

For more details, see State update.

Abstract code

‘Abstract code’ is just a multi-line string representing a block of code which should be executed for each item (e.g. each neuron, each synapse). Each item is independent of the others in abstract code. This allows us to later generate code either for vectorised languages (like numpy in Python) or using loops (e.g. in C++).

Abstract code is parsed according to Python syntax, with certain language constructs excluded. For example, there cannot be any conditional or looping statements at the moment, although support for this is in principle possible and may be added later. Essentially, all that is allowed at the moment is a sequence of arithmetical a = b*c style statements.
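
For example, the following lines form a valid abstract code block (the variable names are made up):

dv = -v/tau*dt
v = v + dv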

Abstract code is provided directly by the user for threshold and reset statements in NeuronGroup and for pre/post spiking events in Synapses.

Abstract code to snippet

We convert abstract code into a ‘snippet’, which is a small segment of code which is syntactically correct in the target language, although it may not be runnable on its own (that’s handled by insertion into a ‘template’ later). This is handled by the CodeGenerator object in brian2.codegen.generators. In the case of converting into python/numpy code this typically doesn’t involve any changes to the code at all because the original code is in Python syntax. For conversion to C++, we have to do some syntactic transformations (e.g. a**b is converted to pow(a, b)), and add declarations for certain variables (e.g. converting x=y*z into const double x = y*z;).

An example of a snippet in C++ for the equations above:

const double v0 = _ptr_array_neurongroup_v0[_neuron_idx];
const double lastspike = _ptr_array_neurongroup_lastspike[_neuron_idx];
bool not_refractory = _ptr_array_neurongroup_not_refractory[_neuron_idx];
double v = _ptr_array_neurongroup_v[_neuron_idx];
not_refractory = 1 * (t - lastspike > 0.0050000000000000001);
const double _BA_v = -(v0);
const double _v = -(_BA_v) + (_BA_v + v) * exp(-(dt) * not_refractory / tau);
v = _v;
_ptr_array_neurongroup_not_refractory[_neuron_idx] = not_refractory;
_ptr_array_neurongroup_v[_neuron_idx] = v;

The code path that includes snippet generation will be discussed in more detail below, since it involves the concepts of namespaces and variables which we haven’t covered yet.

Snippet to code block

The final stage in the generation of a runnable code block is the insertion of a snippet into a template. These use the Jinja2 template specification language. This is handled in brian2.codegen.templates.

An example of a template for Python thresholding:

# USES_VARIABLES { not_refractory, lastspike, t }
{% for line in code_lines %}
{{line}}
{% endfor %}
_return_values, = _cond.nonzero()
# Set the neuron to refractory
not_refractory[_return_values] = False
lastspike[_return_values] = t

and the output code from the example equations above:

# USES_VARIABLES { not_refractory, lastspike, t }
v = _array_neurongroup_v
_cond = v > 10 * mV
_return_values, = _cond.nonzero()
# Set the neuron to refractory
not_refractory[_return_values] = False
lastspike[_return_values] = t

Code block to executing code

A code block represents runnable code. Brian operates in two different regimes, either in runtime or standalone mode. In runtime mode, memory allocation and overall simulation control is handled by Python and numpy, and code objects operate on this memory when called directly by Brian. This is the typical way that Brian is used, and it allows for a rapid development cycle. However, we also support a standalone mode in which an entire project workspace is generated for a target language or device by Brian, which can then be compiled and run independently of Brian. Each mode has different templates, and does different things with the outputted code blocks. For runtime mode, Python/numpy code is executed by simply calling exec on the code block in a given namespace. For C++/weave code, the scipy.weave.inline function is used. In standalone mode, the templates will typically each be saved into different files.

Key concepts
Namespaces

In general, a namespace is simply a mapping/dict from names to values. In Brian we use the term ‘namespace’ in two ways: the high level “abstract namespace” maps names to objects based on the Variable or Function class. In the above example, v maps to an ArrayVariable object, tau to a Constant object, etc. This namespace has all the information that is needed for checking the consistency of units, to determine which variables are boolean or scalar, etc. During the CodeObject creation, this abstract namespace is converted into the final namespace in which the code will be executed. In this namespace, v maps to the numpy array storing the state variable values (without units) and tau maps to a concrete value (again, without units). See Equations and namespaces for more details.

Variable

Variable objects contain information about the variable they correspond to, including details like the data type, whether it is a single value or an array, etc.

See brian2.core.variables and, e.g. Group._create_variables, NeuronGroup._create_variables().

Templates

Templates are stored in Jinja2 format. They come in one of two forms, either they are a single template if code generation only needs to output a single block of code, or they define multiple Jinja macros, each of which is a separate code block. The CodeObject should define what type of template it wants, and the names of the macros to define. For examples, see the templates in the directories in brian2/codegen/runtime. See brian2.codegen.templates for more details.

Code guide

This section includes a guide to the various relevant packages and subpackages involved in the code generation process.

codegen

Stores the majority of all code generation related code.

codegen.functions
Code related to including functions - built-in and user-defined - in generated code.
codegen.generators
Each CodeGenerator is defined in a module here.
codegen.runtime
Each runtime CodeObject and its templates are defined in a package here.
core
core.variables
The Variable types are defined here.
equations
Everything related to Equations.
groups
All Group related stuff is in here. The Group.resolve methods are responsible for determining the abstract namespace.
parsing
Various tools using Python’s ast module to parse user-specified code. Includes syntax translation to various languages in parsing.rendering.
stateupdaters
Everything related to generating abstract code blocks from integration methods is here.

Devices

This document describes how to implement a new Device for Brian. This is a somewhat complicated process, and you should first be familiar with devices from the user point of view (Computational methods and efficiency) as well as the code generation system (Code generation).

We wrote Brian’s devices system to allow for two major use cases, although it can potentially be extended beyond this. The two use cases are:

  1. Runtime mode. In this mode, everything is managed by Python, including memory management (using numpy by default) and running the simulation. Actual computational work can be carried out in several different ways, including numpy, weave or Cython.
  2. Standalone mode. In this mode, running a Brian script leads to generating an entire source code project tree which can be compiled and run independently of Brian or Python.

Runtime mode is handled by RuntimeDevice and is already implemented, so here I will mainly discuss standalone devices. A good way to understand these devices is to look at the implementation of CPPStandaloneDevice (the only one implemented in the core of Brian). In many cases, the simplest way to implement a new standalone device would be to derive a class from CPPStandaloneDevice and overwrite just a few methods.

Memory management

Memory is managed primarily via the Device.add_array, Device.get_value and Device.set_value methods. When a new array is created, the add_array method is called, and when trying to access this memory the other two are called. The RuntimeDevice uses numpy to manage the memory and returns the underlying arrays in these methods. The CPPStandaloneDevice just stores a dictionary of array names but doesn’t allocate any memory. This information is later used to generate code that will allocate the memory, etc.
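
A minimal sketch (not the actual implementation) of this division of labour, where a standalone-style device only records array names instead of allocating memory:

from brian2.devices.device import Device

class MyStandaloneDevice(Device):
    def __init__(self):
        super(MyStandaloneDevice, self).__init__()
        self.array_names = {}  # maps Variable objects to array names

    def add_array(self, var):
        # Only note the array down; the generated code will allocate
        # the memory later
        self.array_names[var] = '_array_%s_%s' % (var.owner.name,
                                                  var.name)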

Code objects

As in the case of runtime code generation, computational work is done by a collection of CodeObject s. In CPPStandaloneDevice, each code object is converted into a pair of .cpp and .h files, and this is probably a fairly typical way to do it. For this device, the code generation routines are the same as those used for the runtime C++ (weave) target.

Building

The method Device.build is used to generate the project. This can be implemented any way you like, although looking at CPPStandaloneDevice.build is probably a good way to get an idea of how to do it.

Device override methods

Several functions and methods in Brian are decorated with the device_override decorator. This mechanism allows a standalone device to override the behaviour of any of these functions by implementing a method with the name provided to device_override. For example, the CPPStandaloneDevice uses this to override Network.run() as CPPStandaloneDevice.network_run.
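
A sketch of what such an override can look like on the device side (the signature is simplified here):

from brian2.devices.cpp_standalone.device import CPPStandaloneDevice

class MyStandaloneDevice(CPPStandaloneDevice):
    def network_run(self, net, duration, namespace=None, level=0):
        # Called in place of Network.run() while this device is
        # active: note the run down for later code generation
        # instead of simulating anything here
        pass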

Other methods

There are some other methods to implement, including initialising arrays and creating spike queues for synaptic propagation. Take a look at the source code for these.

Multi-threading with OpenMP

The following is an outline of how to make C++ standalone templates compatible with OpenMP, and therefore make them work in a multi-threaded environment. It should be considered an extension to Code generation, which should be read first. The C++ standalone mode of Brian is compatible with OpenMP, so simulations can be launched with one or with multiple threads. When adding new templates, developers therefore need to make sure that those templates behave correctly when launched with OpenMP.

Key concepts

All the simulations performed with the C++ standalone mode can be launched with multi-threading, and make use of multiple cores on the same machine. Basically, all the Brian operations that can easily be performed in parallel, such as computing the equations for NeuronGroup, Synapses, and so on, can and should be split among several threads. The network construction, so far, is still performed by one single thread, and all created objects are shared by all the threads.

Use of #pragma flags

In OpenMP, all the parallelism is handled thanks to extra comments, added in the main C++ code, of the form:

#pragma omp ...

To avoid any OpenMP dependency in the code generated by Brian when OpenMP is not activated, these comments are produced by functions that only insert them, during code generation, when the multi-threading mode is turned on. By default, nothing is inserted.

Translations of the #pragma commands

All the translations from openmp_pragma() calls in the C++ templates are handled in the file devices/cpp_standalone/codeobject.py. In this function, you can see that all calls with various string inputs will generate #pragma statements inserted into the C++ templates during code generation. For example:

{{ openmp_pragma('static') }}

will be transformed, during code generation, into:

#pragma omp for schedule(static)

You can find the list of all the translations in the core of the openmp_pragma() function, and if some extra translations are needed, they should be added here.

Execution of the OpenMP code

In this section, we explain the main ideas behind the OpenMP mode of Brian, and how the simulation is executed in such a parallel context. As can be seen in devices/cpp_standalone/templates/main.cpp, the appropriate number of threads, defined by the user, is fixed at the beginning of the main function in the C++ code with:

{{ openmp_pragma('set_num_threads') }}

which is equivalent (thanks to the openmp_pragma() function described above) to nothing if OpenMP is turned off (the default), and to:

omp_set_dynamic(0);
omp_set_num_threads(nb_threads);

otherwise. When OpenMP creates a parallel context, this is the number of threads that will be used. As said, network creation is performed without any calls to OpenMP, on one single thread. Each template that wants to use parallelism has to add {{ openmp_pragma('parallel') }} to create a general block that will be executed in parallel, or {{ openmp_pragma('parallel-static') }} to execute a single loop in parallel.

How to make your template use OpenMP parallelism

To design a parallel template, look for example at devices/cpp_standalone/templates/common_group.cpp: as soon as you have loops that can safely be split across threads, you just need to add an openmp command in front of those loops:

{{openmp_pragma('parallel-static')}}
for(int _idx=0; _idx<N; _idx++)
{
    ...
}

By doing so, OpenMP will take care of splitting the indices and each thread will loop only over a subset of the indices, sharing the load. By default, the schedule used for splitting the indices is static, meaning that each node will get the same number of indices: this is the fastest scheduling in OpenMP, and it makes sense for NeuronGroup or Synapses because operations are the same for all indices. By having a look at templates such as devices/cpp_standalone/templates/statemonitor.cpp, you can see that you can mix portions of code executed by only one node with portions executed in parallel. In this template, for example, only one node records the time and extends the size of the arrays to store the recorded values:

{{_dynamic_t}}.push_back(_clock_t);

// Resize the dynamic arrays
{{_recorded}}.resize(_new_size, _num_indices);

But then, values are written in the arrays by all the nodes:

{{ openmp_pragma('parallel-static') }}
for (int _i = 0; _i < _num_indices; _i++)
{
    ....
}

In general, operations that manipulate global data structures, e.g. that use push_back for a std::vector, should only be executed by a single thread.

Synaptic propagation in parallel

General ideas

With OpenMP, synaptic propagation is also multi-threaded. Therefore, we have to modify the SynapticPathway objects, which handle spike propagation. As can be seen in devices/cpp_standalone/templates/synapses_classes.cpp, such an object, created at run time, will be able to get the number of threads chosen by the user:

_nb_threads = {{ openmp_pragma('get_num_threads') }};

By doing so, a SynapticPathway, instead of handling only one SpikeQueue, will be divided into _nb_threads SpikeQueues, each of them handling a subset of the total number of connections. All the calls to the SynapticPathway object are performed from within parallel blocks in the synapses and synapses_push_spikes templates, so we have to take this parallel context into account. This is why all the functions of the SynapticPathway object take the thread number into account:

void push(int *spikes, unsigned int nspikes)
{
    queue[{{ openmp_pragma('get_thread_num') }}]->push(spikes, nspikes);
}

Such a method for the SynapticPathway will make sure that when spikes are propagated, all the threads will propagate them to their connections. By default, again, if OpenMP is turned off, the queue vector has size 1.

Preparation of the SynapticPathway

Here we explain the implementation of the prepare() method of SynapticPathway:

{{ openmp_pragma('parallel') }}
{
    unsigned int length;
    if ({{ openmp_pragma('get_thread_num') }} == _nb_threads - 1)
        length = n_synapses - (unsigned int) {{ openmp_pragma('get_thread_num') }}*n_synapses/_nb_threads;
    else
        length = (unsigned int) n_synapses/_nb_threads;

    unsigned int padding  = {{ openmp_pragma('get_thread_num') }}*(n_synapses/_nb_threads);

    queue[{{ openmp_pragma('get_thread_num') }}]->openmp_padding = padding;
    queue[{{ openmp_pragma('get_thread_num') }}]->prepare(&real_delays[padding], &sources[padding], length, _dt);
}

Basically, each thread gets an equal number of synapses (except the last one, which gets the remaining ones if the total is not a multiple of _nb_threads), and the queues receive a padding integer telling them which part of the synapses belongs to each queue. After that, the parallel context is destroyed, and network creation can continue. Note that this could have been done without a parallel context, in a sequential manner, but doing it in parallel speeds everything up.

Selection of the spikes

Here we explain the implementation of the peek() method of SynapticPathway. This is an example of concurrent access to data structures that are not handled well in parallel, such as std::vector. When peek() is called, we need to return a vector of all the neurons spiking at that particular time. Therefore, we need to ask every queue of the SynapticPathway for the ids of the spiking neurons, and concatenate them. Because those ids are stored in vectors with various shapes, we need to perform this concatenation sequentially, looping over the nodes:

{{ openmp_pragma('static-ordered') }}
for(int _thread=0; _thread < {{ openmp_pragma('get_num_threads') }}; _thread++)
{
    {{ openmp_pragma('ordered') }}
    {
        if (_thread == 0)
            all_peek.clear();
        all_peek.insert(all_peek.end(), queue[_thread]->peek()->begin(), queue[_thread]->peek()->end());
    }
}

The loop, with the keyword ‘static-ordered’, is therefore performed such that node 0 enters it first, then node 1, and so on. Only one node at a time executes the block statement. This is needed because vector manipulations cannot be performed in a multi-threaded manner. At the end of the loop, all_peek is a vector to which all sub-queues have written the ids of the spiking cells, and therefore this is the list of all spiking cells within the SynapticPathway.

Compilation of the code

One extra file needs to be modified in order for the OpenMP implementation to work: the makefile devices/cpp_standalone/templates/makefile. The CFLAGS are modified dynamically during code generation thanks to:

{{ openmp_pragma('compilation') }}

If OpenMP is activated, this will add the following compiler flag:

-fopenmp

so that if OpenMP is turned off, nothing in the generated code depends on it.
