Introduction to Brian part 3: Simulations

If you haven’t yet read parts 1 and 2 on Neurons and Synapses, go read them first.

This tutorial is about managing the slightly more complicated tasks that crop up in research problems, rather than the toy examples we’ve been looking at so far. So we cover things like inputting sensory data, modelling experimental conditions, etc.

As before we start by importing the Brian package and setting up matplotlib for IPython:
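That setup cell looks like this (the `%matplotlib inline` magic is only needed when running in a notebook):

```python
from brian2 import *
%matplotlib inline
```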

Note

This tutorial is a static non-editable version. You can launch an interactive, editable version without installing any local files using the Binder service (though note that at times this may be slow or fail to open).

Alternatively, you can download a copy of the notebook file to use locally: 3-intro-to-brian-simulations.ipynb

See the tutorial overview page for more details.

Multiple runs

Let’s begin with a very common task: doing multiple runs of a simulation while varying some parameter. We’ll start with something very simple: how does the firing rate of a leaky integrate-and-fire neuron driven by Poisson spiking neurons change with its membrane time constant? Let’s set that up.
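Here’s a minimal sketch of how that might look. The specific numbers (100 Poisson inputs at 10 Hz, a synaptic weight of 0.1, time constants from 1 to 10 ms) are illustrative choices, not the only sensible ones:

```python
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
# Range of time constants to try
tau_range = linspace(1, 10, 30)*ms
output_rates = []
for tau in tau_range:
    # Rebuild every object from scratch on each pass (slow!)
    P = PoissonGroup(num_inputs, rates=input_rate)
    G = NeuronGroup(1, 'dv/dt = -v/tau : 1',
                    threshold='v>1', reset='v=0', method='exact')
    S = Synapses(P, G, on_pre='v += weight')
    S.connect()
    M = SpikeMonitor(G)
    run(1*second)
    output_rates.append(M.num_spikes/second)
plot(tau_range/ms, output_rates)
xlabel(r'$\tau$ (ms)')
ylabel('Firing rate (sp/s)')
```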

[Figure: output firing rate as a function of the membrane time constant]

Now if you’re running the notebook, you’ll see that this was a little slow to run. The reason is that for each loop, you’re recreating the objects from scratch. We can improve that by setting up the network just once. We store a copy of the state of the network before the loop, and restore it at the beginning of each iteration.
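One way to write that, using the same illustrative parameters as above:

```python
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
output_rates = []
# Construct the network just once
P = PoissonGroup(num_inputs, rates=input_rate)
G = NeuronGroup(1, 'dv/dt = -v/tau : 1',
                threshold='v>1', reset='v=0', method='exact')
S = Synapses(P, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
store()  # snapshot the freshly initialised state
for tau in tau_range:
    restore()  # rewind to the snapshot instead of rebuilding
    run(1*second)
    output_rates.append(M.num_spikes/second)
```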

[Figure: the same firing rate curve, computed with store/restore]

That’s a very simple example of using store and restore, but you can use it in much more complicated situations. For example, you might want to run a long training run, and then run multiple test runs afterwards. Simply put a store after the long training run, and a restore before each testing run.

You can also see that the output curve is very noisy and doesn’t increase monotonically like we’d expect. The noise is coming from the fact that we run the Poisson group afresh each time. If we only wanted to see the effect of the time constant, we could make sure that the spikes were the same each time (although note that really, you ought to do multiple runs and take an average). We do this by running just the Poisson group once, recording its spikes, and then creating a new SpikeGeneratorGroup that will output those recorded spikes each time.
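A sketch of that approach, again with the same illustrative parameters; note the explicit Network objects, which we explain just below:

```python
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
tau_range = linspace(1, 10, 30)*ms
output_rates = []
# Run the Poisson group once, on its own, and record its spikes
P = PoissonGroup(num_inputs, rates=input_rate)
MP = SpikeMonitor(P)
net = Network(P, MP)
net.run(1*second)
# Feed the frozen spike train back in on every subsequent run
SGG = SpikeGeneratorGroup(num_inputs, MP.i, MP.t)
G = NeuronGroup(1, 'dv/dt = -v/tau : 1',
                threshold='v>1', reset='v=0', method='exact')
S = Synapses(SGG, G, on_pre='v += weight')
S.connect()
M = SpikeMonitor(G)
# This Network deliberately excludes P, so its spikes aren't regenerated
net = Network(SGG, G, S, M)
net.store()
for tau in tau_range:
    net.restore()
    net.run(1*second)
    output_rates.append(M.num_spikes/second)
```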

[Figure: firing rate vs. time constant with frozen input spikes; the curve is smoother and increases monotonically]

You can see that now there is much less noise and it increases monotonically because the input spikes are the same each time, meaning we’re seeing the effect of the time constant, not the random spikes.

Note that in the code above, we created Network objects. The reason is that in the loop, if we just called run it would try to simulate all the objects, including the Poisson neurons P, and we only want to run that once. We use Network to specify explicitly which objects we want to include.

The techniques we’ve looked at so far are the conceptually most simple way to do multiple runs, but not always the most efficient. Since there’s only a single output neuron in the model above, we can simply duplicate that output neuron and make the time constant a parameter of the group.
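A sketch of the parameterised version follows; tau now appears in the equations as a per-neuron parameter. If tau is still defined as a plain Python variable left over from the earlier loop, Brian will emit the namespace warning shown below and use the internal variable:

```python
start_scope()
num_inputs = 100
input_rate = 10*Hz
weight = 0.1
num_tau = 30
tau_range = linspace(1, 10, num_tau)*ms
P = PoissonGroup(num_inputs, rates=input_rate)
# tau is now a per-neuron parameter instead of a namespace constant
eqs = '''
dv/dt = -v/tau : 1
tau : second
'''
G = NeuronGroup(num_tau, eqs, threshold='v>1', reset='v=0', method='exact')
G.tau = tau_range  # one time constant per copy of the output neuron
S = Synapses(P, G, on_pre='v += weight')
S.connect()  # all-to-all: every input drives every output neuron
M = SpikeMonitor(G)
run(1*second)
output_rates = M.count/second  # spike count per neuron
plot(tau_range/ms, output_rates)
```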

WARNING    "tau" is an internal variable of group "neurongroup_2", but also exists in the run namespace with the value 10. * msecond. The internal variable will be used. [brian2.groups.group.Group.resolve.resolution_conflict]
[Figure: firing rate vs. time constant, computed in a single run with tau as a per-neuron parameter]

You can see that this is much faster again! It’s a little more complicated conceptually, and the trick isn’t always applicable, but when it is, it can be much more efficient.

Let’s finish with this example by having a quick look at how the mean and standard deviation of the interspike intervals depend on the time constant.

[Figure: mean and standard deviation of the interspike interval as a function of the time constant]

Notice that we used the spike_trains() method of SpikeMonitor. This returns a dictionary whose keys are the indices of the neurons and whose values are the arrays of spike times for those neurons.
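A sketch of how those statistics might be computed from the monitor in the previous sketch:

```python
trains = M.spike_trains()  # dict: neuron index -> array of spike times
isi_mu = full(num_tau, nan)*second
isi_std = full(num_tau, nan)*second
for idx in range(num_tau):
    isi = diff(trains[idx])  # interspike intervals of neuron idx
    if len(isi) > 1:
        isi_mu[idx] = mean(isi)
        isi_std[idx] = std(isi)
errorbar(tau_range/ms, isi_mu/ms, yerr=isi_std/ms)
xlabel(r'$\tau$ (ms)')
ylabel('Interspike interval (ms)')
```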

Changing things during a run

Imagine an experiment where you inject current into a neuron, and change the amplitude randomly every 10 ms. Let’s see if we can model that using a Hodgkin-Huxley type neuron.
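Here’s one way to set that up. The parameter values and equations below follow the standard conductance-based Hodgkin-Huxley model used in Brian’s examples, but treat them as illustrative:

```python
start_scope()
# Conductance-based HH parameters (illustrative values)
area = 20000*umetre**2
Cm = 1*ufarad*cm**-2 * area
gl = 5e-5*siemens*cm**-2 * area
El = -65*mV
EK = -90*mV
ENa = 50*mV
g_na = 100*msiemens*cm**-2 * area
g_kd = 30*msiemens*cm**-2 * area
VT = -63*mV
eqs_HH = '''
dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/Cm : volt
dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
    (exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
    (exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
    (exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
I : amp
'''
group = NeuronGroup(1, eqs_HH, threshold='v > -40*mV',
                    refractory='v > -40*mV', method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)
# New random current amplitude every 10 ms, via repeated short runs
for _ in range(5):
    group.I = rand()*50*nA
    run(10*ms)
plot(statemon.t/ms, statemon.v[0]/mV)
```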

[Figure: membrane potential of the HH neuron as the current amplitude changes every 10 ms]

In the code above, we used a loop over multiple runs to achieve this. That’s fine, but it’s not the most efficient way to do it because each time we call run we have to do a lot of initialisation work that slows everything down. It also won’t work as well with the more efficient standalone mode of Brian. Here’s another way.

[Figure: the same voltage trace, produced with run_regularly]

We’ve replaced the loop that had multiple run calls with a run_regularly. This makes the specified block of code run every dt=10*ms. The run_regularly lets you run code specific to a single NeuronGroup, but sometimes you might need more flexibility. For this, you can use network_operation which lets you run arbitrary Python code (but won’t work with the standalone mode).
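Both variants, sketched using the eqs_HH and El definitions from the previous block (use one of the two mechanisms, not both at once):

```python
start_scope()
group = NeuronGroup(1, eqs_HH, threshold='v > -40*mV',
                    refractory='v > -40*mV', method='exponential_euler')
group.v = El
statemon = StateMonitor(group, 'v', record=True)

# Variant 1: run_regularly, abstract code run in the group's own context
group.run_regularly('I = rand()*50*nA', dt=10*ms)

# Variant 2: network_operation, arbitrary Python called every 10 ms
# (runtime mode only; comment out variant 1 if you use this instead)
# @network_operation(dt=10*ms)
# def change_I():
#     group.I = rand()*50*nA

run(50*ms)
plot(statemon.t/ms, statemon.v[0]/mV)
```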

[Figure: the same voltage trace, produced with network_operation]

Now let’s extend this example to run on multiple neurons, each with a different capacitance, to see how that affects the behaviour of the cells.
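A sketch of that, reusing eqs_HH from above. The string replacement just swaps the constant Cm for a per-neuron parameter C so we don’t have to repeat the equations, and the capacitance values are illustrative:

```python
start_scope()
N = 3
# Swap the constant Cm for a per-neuron capacitance C
eqs_HH_2 = eqs_HH.replace('Cm', 'C') + '\nC : farad'
group = NeuronGroup(N, eqs_HH_2, threshold='v > -40*mV',
                    refractory='v > -40*mV', method='exponential_euler')
group.v = El
group.C = array([0.8, 1, 1.2])*ufarad*cm**-2*area  # illustrative values
group.run_regularly('I = rand()*50*nA', dt=10*ms)
statemon = StateMonitor(group, variables=True, record=True)
run(50*ms)
plot(statemon.t/ms, statemon.v.T/mV)  # one trace per neuron
```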

[Figure: voltage traces for three neurons with different capacitances]

So that runs, but something looks wrong! The injected currents look like they’re different for all the different neurons! Let’s check:
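For example, by plotting the recorded currents:

```python
plot(statemon.t/ms, statemon.I.T/nA)  # one line per neuron
xlabel('Time (ms)')
ylabel('I (nA)')
```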

[Figure: injected current traces, showing a different random current for each neuron]

Sure enough, it’s different each time. But why? We wrote group.run_regularly('I = rand()*50*nA', dt=10*ms) which seems like it should give the same value of I for each neuron. But, like threshold and reset statements, run_regularly code is interpreted as being run separately for each neuron, and because I is a parameter, it can be different for each neuron. We can fix this by making I into a shared variable, meaning it has the same value for each neuron.
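The fix is a one-word change in the equations, shown here as a sketch building on the previous block:

```python
start_scope()
# The only change: I is declared (shared), so there is a single value
eqs_HH_3 = eqs_HH_2.replace('I : amp', 'I : amp (shared)')
group = NeuronGroup(N, eqs_HH_3, threshold='v > -40*mV',
                    refractory='v > -40*mV', method='exponential_euler')
group.v = El
group.C = array([0.8, 1, 1.2])*ufarad*cm**-2*area
group.run_regularly('I = rand()*50*nA', dt=10*ms)
statemon = StateMonitor(group, variables=True, record=True)
run(50*ms)
plot(statemon.t/ms, statemon.v.T/mV)
```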

[Figure: voltage traces with a shared injected current, now identical across neurons]

Ahh, that’s more like it!

Adding input

Now let’s think about a neuron being driven by a sinusoidal input. Let’s go back to a leaky integrate-and-fire to simplify the equations a bit.
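A sketch with illustrative values for the amplitude, frequency and time constant:

```python
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
eqs = '''
dv/dt = (I-v)/tau : 1
I = A*sin(2*pi*f*t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='euler')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
xlabel('Time (ms)')
legend(loc='best')
```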

[Figure: membrane potential and sinusoidal input current over time]

So far, so good and the sort of thing we saw in the first tutorial. Now, what if that input current were something we had recorded and saved in a file? In that case, we can use TimedArray. Let’s start by reproducing the picture above but using TimedArray.
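The same model, but with the sine wave sampled on the simulation grid and wrapped in a TimedArray:

```python
start_scope()
A = 2.5
f = 10*Hz
tau = 5*ms
# Sample the sine wave on the simulation grid, as if it had been recorded
t_recorded = arange(int(200*ms/defaultclock.dt))*defaultclock.dt
I_recorded = TimedArray(A*sin(2*pi*f*t_recorded), dt=defaultclock.dt)
eqs = '''
dv/dt = (I-v)/tau : 1
I = I_recorded(t) : 1
'''
# The TimedArray is constant over each time step, so method='exact' works
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
```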

[Figure: the same voltage and current traces, reproduced with a TimedArray]

Note that for the example where we put the sin function directly in the equations, we had to use the method='euler' argument because the exact integrator wouldn’t work here (try it!). However, a TimedArray is considered to be constant over its time step, so the exact integrator can be used. This means you won’t get identical behaviour from the two approaches, for two reasons: firstly, the exact and euler integration methods give slightly different results; secondly, sin is not constant over a timestep whereas the TimedArray is.

Now just to show that TimedArray works for arbitrary currents, let’s make a weird “recorded” current and run it on that.
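One way to fake such a recording (the particular recipe of randomly placed random-height blocks is arbitrary):

```python
start_scope()
A = 2.5
tau = 5*ms
# Build an arbitrary current out of randomly placed random-height blocks
num_samples = int(200*ms/defaultclock.dt)
I_arr = zeros(num_samples)
for _ in range(100):
    a = randint(num_samples)
    I_arr[a:a+100] = rand()
I_recorded = TimedArray(A*I_arr, dt=defaultclock.dt)
eqs = '''
dv/dt = (I-v)/tau : 1
I = I_recorded(t) : 1
'''
G = NeuronGroup(1, eqs, threshold='v>1', reset='v=0', method='exact')
M = StateMonitor(G, variables=True, record=True)
run(200*ms)
plot(M.t/ms, M.v[0], label='v')
plot(M.t/ms, M.I[0], label='I')
```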

[Figure: response of the neuron to the arbitrary "recorded" current]

Finally, here’s an example that actually reads in some data from a file. See if you can work out how it works.
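A sketch of such an example; 'brian.png' stands in for whatever image file you have to hand:

```python
start_scope()
from matplotlib.image import imread
# Treat each row of an image as the recorded input current of one neuron
img = (1 - imread('brian.png'))[::-1, :, 0].T
num_samples, N = img.shape
ta = TimedArray(img, dt=1*ms)  # 2D: ta(t, i) looks up column i at time t
A = 1.5
tau = 2*ms
eqs = '''
dv/dt = (A*ta(t, i) - v)/tau + 0.8*xi*tau**-0.5 : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='euler')
M = SpikeMonitor(G)
run(num_samples*ms)
plot(M.t/ms, M.i, '.k')  # the spike raster traces out the image
xlim(0, num_samples)
ylim(0, N)
```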

[Figure: spike raster produced by the simulation driven by the file data]