Granger causality analysis of EEG data, Part I: Background and terminology

The `dynamics.var` class implements the machinery necessary for generating
observations from a Vector Autoregressive Model (VAR):

$$s(t) = \sum_{p=1}^{P} A_p\, s(t-p) + \epsilon(t)$$

where $s(t)$ is the realization at time $t$ of the
*VAR observation vector*, $A_p$ is the VAR coefficient matrix for lag $p$,
and $\epsilon(t)$ is the realization at time $t$ of
the *innovation process*. The latter is assumed to be an
independent and identically distributed (i.i.d.)
random process.
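The recursion above is easy to simulate. Below is a minimal NumPy sketch (not part of the `dynamics.var` class; the model order, coefficient matrices, and dimensions are invented for illustration) that generates observations from a stable VAR(2) model with Gaussian innovations:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up, stable VAR(2) with 3 nodes: s(t) = A1 s(t-1) + A2 s(t-2) + eps(t)
A = [0.4 * np.eye(3),               # coefficient matrix for lag 1
     np.diag([0.2, 0.1, 0.15])]     # coefficient matrix for lag 2
P, d, T = len(A), 3, 1000

s = np.zeros((T, d))
for t in range(P, T):
    eps = rng.standard_normal(d)    # i.i.d. Gaussian innovation at time t
    s[t] = sum(A[p] @ s[t - p - 1] for p in range(P)) + eps
```

Each row of `s` is one realization of the VAR observation vector; the coefficient matrices were chosen diagonal and small so that the process remains stable.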

It is important to put the formula above in the context of the lecture
dealing with methods for assessing information
flow between EEG sources. Recall from that lecture that the
*innovation process* represents the *intrinsic activity* generated at each
EEG source, i.e. whatever is truly unique to each source. In the master/slave
clocks analogy that we used in the lecture, the *innovation process* of each
source would be the random noise generated at both master and slave,
which introduces some random variation in the ticking patterns of the clocks.

Because we model the (mutually independent) *intrinsic EEG-source activations*
using the innovation processes, we are implicitly
imposing an extra requirement on the VAR model: the *innovation processes*
are not only i.i.d.
($\epsilon_i(t)$ tells you nothing about
$\epsilon_i(t')$ for any $t' \neq t$) but also *mutually independent* (i.e.
$\epsilon_i(t)$ tells you nothing about $\epsilon_j(t')$ for any
$j \neq i$ and for any $t'$).
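One practical consequence of mutual independence is that the covariance matrix of the innovations across nodes is diagonal. A quick NumPy sketch (for illustration only) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mutually independent innovations: each node draws from its own noise
# stream, so the covariance matrix across nodes is (nearly) diagonal.
eps = rng.standard_normal((100_000, 3))
C = np.cov(eps, rowvar=False)

# Off-diagonal entries are close to zero; diagonal entries close to one.
off_diag = C - np.diag(np.diag(C))
print(np.abs(off_diag).max())
```

If two innovation processes were correlated, the corresponding off-diagonal entry would be clearly non-zero, violating the requirement above.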

The connections between the nodes of the model are called
*edges*. I have borrowed this terminology from graph theory,
which is becoming increasingly popular in Neuroscience and, especially,
in functional connectivity analyses of neuroimaging data.

`dynamics.var` more generally models the innovation
process with a Generalized Normal Distribution. The
Generalized Normal Distribution is a family of probability
distributions that includes, among others, the Laplace and the
Normal distributions. However, in this tutorial we will only use
normally distributed innovations.
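To see how one family covers both cases, here is a NumPy sketch (not the toolbox's implementation) that samples from the standard generalized normal with density proportional to $\exp(-|x|^\beta)$, using the fact that $|X|^\beta$ follows a Gamma$(1/\beta)$ distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_gennorm(beta, size, rng):
    """Draw from a standard generalized normal, density ~ exp(-|x|**beta)."""
    # |X|**beta is Gamma(1/beta)-distributed; attach a random sign.
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * g ** (1.0 / beta)

laplace_like = sample_gennorm(1.0, 200_000, rng)   # beta = 1 -> Laplace
gaussian_like = sample_gennorm(2.0, 200_000, rng)  # beta = 2 -> Gaussian
```

With $\beta = 2$ the density is $\propto e^{-x^2}$, a Gaussian with variance $1/2$; with $\beta = 1$ it is a Laplace distribution with variance $2$ and visibly heavier tails.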

A useful property of any network consisting of multiple
nodes is its `Topology`. The topology of a network describes the way
network nodes are connected to each other. That is, it describes the pattern
of *network edges*. In the case of a VAR model (implemented by our
`dynamics.var` class), the `Topology` property describes how the nodes of
the model, the `s_i(t)`, send information to or receive information from
other nodes. The topology of a `dynamics.var` object
can be specified when you construct the object:

obj = dynamics.var('NbDims', 3, 'Topology', 'tree');

which will create a `dynamics.var` object having 3 nodes and a `tree` topology.
You can also modify the topology of the object later, using the modifier method
`set_topology()`:

obj = dynamics.var('NbDims', 5);    % Default: 'random' topology
obj = set_topology(obj, 'sparse');  % Now obj has 'sparse' topology

The figures below describe the topologies that class `dynamics.var`
implements. You can click on the figures to zoom in. The numbers that appear
on each edge are just illustrative values of the *VAR coefficients*
of the model, i.e. the values in matrix $A_1$, assuming a model of order 1
($P = 1$).
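The general idea behind a topology is that it constrains which VAR coefficients may be non-zero. A minimal NumPy sketch (the adjacency matrix and edge density below are invented for illustration; this is not how `dynamics.var` stores topologies) shows one way to encode it:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5

# A topology is just a binary adjacency matrix saying which node pairs are
# connected by an edge, i.e. which VAR coefficients may be non-zero.
sparse_topology = (rng.random((d, d)) < 0.2).astype(float)
np.fill_diagonal(sparse_topology, 1.0)  # every node depends on its own past

# Mask a lag-1 coefficient matrix with the topology: coefficients between
# unconnected nodes are forced to be exactly zero.
A1 = 0.3 * rng.standard_normal((d, d)) * sparse_topology
print(np.all(A1[sparse_topology == 0] == 0))
```

Under this encoding, an edge from node $j$ to node $i$ corresponds to a non-zero entry $A_1(i, j)$, i.e. the past of node $j$ directly influences the present of node $i$.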