**Analysis of ongoing, evoked, and induced neuronal activity: Power spectra, wavelet analysis, and coherence**

Approximate time needed to complete the tutorial: 60 min.

- Learn classical and advanced techniques for analyzing oscillatory dynamics within and between neuronal populations (and different recording channels).
- Compute power and amplitude spectra of ongoing activity using MATLAB built-in functions.
- Compute time-frequency representations of stimulus phase-locked and stimulus non-phase-locked oscillatory activity.
- Compute coherence between hemispheres.

The commands on this page ensure that the following exercise has all the necessary scripts and does not interfere with variables or graphics from previous exercises. If you come directly from another exercise, you may in principle skip certain commands, but it is safe to execute them again.

Start MATLAB and find your current working directory using 'pwd'.

>> pwd

In case you have just been working on another exercise, clean up your workspace and close the figures:

>> clear all % Deletes all variables in your workspace.
>> close all % Closes all figure windows.

Go to the root of the 'C:' drive using 'cd'.

>> cd C:\

Make a directory “Tutorial” using 'mkdir'.

>> mkdir('Tutorial')

Go to this directory using

>> cd('Tutorial')

and make a directory 'Scripts'.

>> mkdir('Scripts')

To run the rest of this exercise, you have to download some MATLAB functions from:

nbtwiki.net/tutorial.power_spectra_wavelet_analysis_and_coherence.zip

Unzip the file to your folder 'C:\Tutorial\Scripts'.

Also download the file 'MEG_data.mat' from: http://sourceforge.net/projects/nbtoolbox/files/Tutorials/MEG_data.mat/download

Add the scripts directory to the MATLAB search path:

>> addpath C:\Tutorial\Scripts

Go to the directory with the data and load the data using the 'load' command.

>> load MEG_data

The variables 'RSM' and 'LSM' (“right and left sensorimotor cortex”) are vectors containing the 360,000 samples obtained with an MEG sensor over the right and left sensorimotor cortex, respectively, during a 20-min recording at a sampling frequency of 300 Hz. The median nerve was stimulated every 3 s with a 0.2-ms current pulse, which caused the thumb to twitch and is known to evoke pronounced neuronal processing in somatosensory cortex. 'RSM_noise' and 'LSM_noise' are recordings from the same channels as 'RSM' and 'LSM', but without a subject in the MEG device, to estimate the noise level of the laboratory and instruments. The variable 'Trig' is a trigger channel containing a square pulse at the times of stimulation, which will help us identify stimulus-evoked responses.
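As a quick sanity check on the numbers above (20 min at 300 Hz should give 360,000 samples per channel):

>> whos RSM LSM RSM_noise LSM_noise Trig % Inspect the sizes of the loaded variables.
>> 20*60*300 % 20 min x 60 s/min x 300 samples/s = 360000.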

We now turn to the classical analysis of longer segments of ongoing activity, which to a large extent is the activity that we averaged away in the tutorial on averaged evoked responses, namely oscillations that are not phase-locked to the stimulus onset.

Let’s quantify the frequency distribution of 'power' in the brain and noise signals using 'pwelch.m'. The 'pwelch' function is a high-level built-in MATLAB function based on the FFT. The inputs to the function are:

- A data vector.
- A 'window' (the data are divided into segments of the length specified by this integer, which makes the assumption of stationarity more valid).
- 'noverlap' (again, averaging spectra from many partially overlapping segments provides a better estimate of the spectral content than taking one long window; noverlap is typically 50%, i.e., length(window)/2).
- 'nFFT' or “number of FFT points”. The nFFT determines the frequency resolution (∆f) of the power spectrum: ∆f = Fs/nFFT.
- 'Fs' (the sampling frequency in order to express the frequencies in units of Hz).

The nFFT is a valuable parameter especially for long data, because (as we learned in the tutorial on The Discrete Fourier Transformation (DFT): Definition and numerical examples) you get as many frequency components in the frequency domain as you have samples in your input data in the time domain. Thus, for signals with many samples (hundreds of thousands or millions) your frequency resolution in a “raw” FFT is unreasonably (or unphysiologically!) high, which leads to a very noisy power spectrum.

Thus, a suitable frequency resolution for neurophysiological data with oscillations is ~0.3 Hz, corresponding here to nFFT = 2^10.
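You can verify this resolution directly from the formula ∆f = Fs/nFFT with the values used in this tutorial:

>> Fs = 300; % Sampling frequency [Hz].
>> nFFT = 2^10; % Number of FFT points.
>> df = Fs/nFFT % Frequency resolution: 300/1024 ≈ 0.29 Hz.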

>> Fs = 300;
>> win_PSD = 5*Fs;
>> noverlap_PSD = []; % Default of 50% overlap is taken with 'noverlap' empty.
>> nFFT = 2^10;
>> [PSD_RSM, f] = pwelch(RSM, win_PSD, noverlap_PSD, nFFT, Fs);
>> [PSD_RSM_noise, f] = pwelch(RSM_noise, win_PSD, noverlap_PSD, nFFT, Fs);

The PSD function ('pwelch') outputs the spectral power 'PSD_RSM' at the frequencies 'f'. Let’s plot the power spectra:

>> plot(f, PSD_RSM)
>> hold on % Keep the curve when plotting subsequent curves.
>> plot(f, PSD_RSM_noise, 'r')
>> grid % Add gridlines to the plot.
>> axis([0 60 0 5e3]) % Scale the x and y axes.
>> xlabel('Frequency [Hz]') % Add labels to axes.
>> ylabel('Power spectrum')

The blue line is the power spectrum of the RSM brain data and the red line is the noise recording. The meaning of 'hold on', 'grid', 'axis', 'xlabel' and 'ylabel' should be clear from inspection of the figure window as you give the commands.

The amplitude spectrum is the square root of the power spectrum and has the advantage over the power spectrum that large peaks are not “over-emphasized”. Use the “arrow-up” key to quickly plot the amplitude spectrum in a new window with slight modifications to the commands above:

>> figure
>> plot(f, PSD_RSM_noise.^0.5, 'r')
>> hold on
>> plot(f, PSD_RSM.^0.5)
>> axis([0 60 0 75])
>> xlabel('Frequency [Hz]')
>> ylabel('Amplitude spectrum')

Amplitude and power spectra are frequently plotted on logarithmic scales, which you can do by replacing 'plot' with 'loglog'.
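For example, to replot the power spectra from above on double-logarithmic axes:

>> figure
>> loglog(f, PSD_RSM) % Brain data, both axes logarithmic.
>> hold on
>> loglog(f, PSD_RSM_noise, 'r') % Noise recording.
>> xlabel('Frequency [Hz]')
>> ylabel('Power spectrum')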

A very high frequency resolution makes the power spectrum noisy, which is particularly inconvenient when measuring peak amplitudes or frequencies. To see this, try setting nFFT = 2^18 and use a long window (recall: a large number of samples allows for many frequency components):

>> nFFT = 2^18;
>> [PSD_RSM, f] = pwelch(RSM, length(RSM), [], nFFT, Fs);
>> f % AND one time “arrow-up”.
>> p % AND two times “arrow-up”.

Note how few key strokes are needed to perform new calculations and visualizations!

**Question: Do you see the problem with determining peak frequencies?**

- Tallon-Baudry, C., & Bertrand, O. Oscillatory gamma activity in humans and its role in object representation. Trends Cogn. Sci. 3:151–162, 1999.
- Sinkkonen, J., et al. Gabor filters: an informative way for analysing event-related brain activity. J. Neurosci. Methods 56:99–104, 1995.

Let's compute the time-frequency representation of the spontaneous data just used in the power spectrum analysis. The 20 min of data is too much for our computers to work with, because the wavelet transformed data take up a lot of memory (we make a copy of our data for each new frequency being analyzed). Therefore, we focus on just 20 s of data:

>> Frq_low = 0.5;
>> Frq_high = 30;
>> Frq_step = 0.1;
>> [W,t,frq] = Wavelet_1ch(RSM(30*Fs:50*Fs), Fs, Frq_low, Frq_high, Frq_step);

The output of Wavelet_1ch is a matrix of complex numbers 'W' with a row for each frequency and a column for each sample (time point); the amplitude is obtained with 'abs(W)' and the phase with 'angle(W)'. For now we focus on the amplitude, which we plot in color because the matrix essentially has three dimensions (time, frequency, and amplitude).

>> figure
>> imagesc(t, frq, flipud(abs(W))); axis xy
>> colorbar

'flipud' just flips the matrix upside down so that the frequencies are plotted from low to high instead of from high to low.

Note the burst-like appearance of the oscillations. Clearly, the wavelet gives a richer picture of the oscillatory dynamics than the power spectrum, which regards the 10 Hz oscillations as a constant sine wave. Also note the variability in peak frequency from burst to burst, and sometimes even within bursts of alpha oscillations, if you zoom in on a burst using the “magnifying glass” button in the figure window.

One of the weaknesses of color plots is that humans are not good at judging relative differences between colors in a quantitative fashion. You may want to rescale the color scale to better see the smaller-amplitude 20 Hz oscillations relative to the 10 Hz oscillations. This is done with an additional (last) argument in brackets indicating the minimum and maximum amplitude values assigned to blue and red, respectively:

>> imagesc(t, frq, flipud(abs(W)),[0 1e3]); axis xy

Alternatively, you may want to plot the time-frequency plot of power instead of the amplitude (cf. the differences between power and amplitude spectra) by squaring the wavelet transformed data:

>> imagesc(t, frq, flipud(abs(W).^2),[0 3e6]); axis xy

which emphasizes the time-frequency location of large oscillations.

Time-frequency (TF) analysis is nowadays (since around 1999) also widely used as a complement to classical averaged evoked response waveforms. Let's try this by applying the wavelet analysis to the evoked response that we analyzed earlier in the course. The following commands are copy-and-paste from previous tutorials:

>> Pre = 0.5;
>> Post = 1.5;
>> [M] = Single_trial_matrix(RSM, Trig, Fs, Pre, Post);
>> Baseline_interval = 0.4; % I suggest a baseline interval of 400 ms.
>> [M_baseline] = Single_trial_baseline(M, Fs, Pre, Baseline_interval);
>> t_ER = -Pre:1/Fs:Post-1/Fs; % Time vector for evoked response (ER).
>> figure
>> subplot(2,1,1)
>> plot(t_ER, mean(M_baseline)) % Plot the classical evoked response.

We now wavelet transform/filter the evoked response.

>> Frq_low = 0.1;
>> Frq_high = 70;
>> Frq_step = 0.1;
>> [W,t,frq] = Wavelet_1ch(mean(M), Fs, Frq_low, Frq_high, Frq_step);

Whereas the broad-band evoked response had just one baseline, it is now natural to baseline each individual frequency band to see the changes in oscillatory amplitude at all frequencies relative to the baseline activity at those frequencies.

>> [W_baseline] = Single_trial_baseline(abs(W), Fs, Pre, Baseline_interval);

[The script was made to baseline individual trials in a “trial x time” matrix, but the procedure is the same for baselining individual frequencies in the “frequency x time” matrix].

>> subplot(2,1,2)
>> imagesc(t_ER, frq, flipud(abs(W))); axis xy

Note:
- The broad distribution of frequencies.
- The non-causal effect of the wavelet filter (by comparison with the time-domain signal)!

One of the benefits of this analysis over the classical evoked response waveform is that you do not need to decide a priori the frequency content of interest and, thus, your filter settings. Here, you just filter at all frequencies and one may apply a statistical analysis to amplitudes in the time-frequency plane instead of to a waveform where the amplitude of distinct frequencies may overlap in time and blur the component of interest.

Stimulus-related changes in neuronal activity are not necessarily strictly time-locked to the onset of a stimulus, and the dynamics of oscillatory components may be reflected neither in classical averaged evoked response waveforms nor in their time-frequency representation, because a slight jitter in the onset of the oscillation will make negative and positive phases from different trials cancel out. See, e.g., Figure 1 in Tallon-Baudry & Bertrand (1999), Oscillatory gamma activity in humans and its role in object representation, Trends in Cognitive Sciences, 3:151–162.

However, if you first compute the amplitude in each frequency band and then average the amplitudes (which are all positive!) across trials, you will identify changes in amplitude of different oscillations relative to the baseline irrespective of their phases. We will do this, but it requires a wavelet transformation of the entire signal (1200 s) instead of just the evoked response (~2 s), so we increase the frequency steps and decrease the range of frequencies in order to fit the matrix in the memory (RAM) of the computer:

>> Frq_low = 3;
>> Frq_high = 30;
>> Frq_step = 1;
>> [W,t,frq] = Wavelet_1ch(RSM, Fs, Frq_low, Frq_high, Frq_step);

We now build a 3D matrix of the dimensions (#trials x #frequencies x #samples) containing the TF responses of individual trials. This is done with a slightly modified version of the script from the tutorial on evoked response waveforms:

>> [WM] = Single_trial_matrix_TF(W, Trig, Fs, Pre, Post);
>> whos

Compute the mean amplitude across trials for each time-frequency sample and baseline correct each frequency line:

>> [W_base] = Single_trial_baseline(squeeze(mean(WM)),Fs,Pre,Baseline_interval);

'mean' takes the average along the 1st dimension of the 3D matrix 'WM', which leaves a matrix of size 1 x #frequencies x #samples. MATLAB, however, only treats this matrix as 2D after the command 'squeeze', which removes the singleton dimension.
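You can check the effect of 'squeeze' by comparing the sizes before and after:

>> size(mean(WM)) % 1 x #frequencies x #samples (singleton 1st dimension).
>> size(squeeze(mean(WM))) % #frequencies x #samples.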

Let’s compare the result with the TF-plot of the averaged evoked response.

>> figure
>> imagesc(t_ER, frq, flipud(W_base), [-300 100]); axis xy
>> colorbar

Note the values and sign on the color scale.

**Questions**

- Can you identify similarities between the TF plot of the averaged evoked response and the average of the TF plots?
- What is the somewhat surprising conclusion from this plot?

The phenomenon most clearly seen in the 10 Hz band is widely known as “event-related desynchronization” or ERD. This is an unfortunate term, but it originates from the 1980s, when the researchers who first discovered the phenomenon attributed the amplitude change to a change in synchrony between neurons (causing a smaller macroscopic field). Naturally, the oscillation amplitudes can also attenuate because certain neurons transiently stop firing action potentials or fire them at a lower rate.

Coherence is a measure of phase correlations between oscillatory signals of identical frequency. The coherence between two oscillatory signals with stable phase relations attains the maximum value of 1 whereas the coherence of oscillations with highly random phase relations is 0. Spatially distinct neuronal oscillations are unlikely to align their phases with respect to each other if there is no interaction between the neuronal populations. Coherence, therefore, is a popular method for detecting neuronal sources that are working together in a spatially distributed network and to determine how strongly they couple (“collaborate”). It should be mentioned, however, that in many recording situations (especially scalp EEG or MEG) different channels may pick up activity from the same source and thus lead to a significant coherence, which nevertheless should not be interpreted as an interaction between distinct neuronal sources. In real data, you cannot expect coherence around 1 over long periods of time and, vice versa, the coherence will only approach zero when signals come from neuronal populations that are not interacting.
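In formulas, the magnitude-squared coherence estimated by MATLAB's 'mscohere' (used below) is the cross-power spectral density P_xy(f) of the two signals, normalized by their auto-power spectral densities:

C_xy(f) = |P_xy(f)|^2 / ( P_xx(f) * P_yy(f) )

which is bounded between 0 and 1 for every frequency f.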

To see whether left and right sensorimotor regions communicate, we compute the coherence between the signals from left and right sensorimotor regions.

>> nFFT = 2^9;
>> overlap = nFFT/2;
>> [Coherence_LSM_RSM, f] = mscohere(LSM, RSM, hanning(nFFT), overlap, nFFT, Fs);
>> plot(f, Coherence_LSM_RSM, 'k')
>> axis([0 60 0 0.1])
>> xlabel('Frequency [Hz]')
>> ylabel('Coherence')

Note the peaks at around 12 Hz and at 50 Hz. The first corresponds to the so-called sensorimotor mu rhythm; the latter reflects that the 50-Hz power-line interference, which is always present in recordings of electrical activity, has a very stable phase in all sensors (why is this?).

To verify that the 12 Hz coherence peak is of brain origin, compute and plot the coherence for the noise recordings:

>> % “arrow-up”: replace LSM/RSM with LSM_noise/RSM_noise.

Use 'hold on' to keep the previous plot for comparison. Not surprisingly, the coherence at 50 Hz is still high.

Also try to shift the data of the two channels in time (e.g., compute the coherence between the first half of the data in RSM with the second half of the data in LSM).

What do you see and why? Ask if you are in doubt how to shift the samples in the two signals relative to each other.
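If you are unsure how to do the shift, one possible way (just a sketch; any relative shift of several seconds between the channels would do) is to compute the coherence between the first half of 'RSM' and the second half of 'LSM':

>> N = floor(length(RSM)/2); % Number of samples in half the recording.
>> [Coherence_shifted, f] = mscohere(RSM(1:N), LSM(N+1:2*N), hanning(nFFT), overlap, nFFT, Fs);
>> hold on
>> plot(f, Coherence_shifted, 'b')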