The commands on this page ensure that the following exercise has all the necessary scripts and does not interfere with variables or graphics from previous exercises. If you come directly from another exercise, you may in principle skip certain commands, but it is safe to execute them again.
Start MATLAB and find your current working directory using 'pwd'.
In case you have just been working on another exercise, clean up your workspace and close the figures:
>> clear all % Deletes all variables in your workspace.
>> close all % Closes all figure windows.
Go to the C:\ drive using 'cd'. If you do not have write access, try another drive, e.g., H:\.
>> cd C:\
Make a directory 'Tutorial' using 'mkdir'.
Go to this directory using 'cd' and make a directory 'Scripts'.
To run the rest of this exercise, you have to download some MATLAB functions from:
Go to the directory with the data and load the data using the 'load' command.
>> load MEG_data
The variables 'RSM' and 'LSM' (“right and left sensorimotor cortex”) are vectors containing the 360,000 samples obtained with an MEG sensor over right and left sensorimotor cortex, respectively, during a 20 min recording at a sampling frequency of 300 Hz. The median nerve was stimulated every 3 s with a 0.2-ms current pulse, which caused the thumb to twitch and is known to evoke pronounced neuronal processing in somatosensory cortex.
'RSM_noise' and 'LSM_noise' are recordings from the same channels as 'RSM' and 'LSM', but without a subject in the MEG device, to estimate the noise level in the laboratory and instruments. The variable 'Trig' is a trigger channel containing a square pulse at times of stimulation, which will help us identify stimulus-evoked responses.
We now turn to the classical analysis of longer segments of ongoing activity, which to a large extent is the activity that we averaged away in the tutorial on averaged evoked responses, namely oscillations that are not phase-locked to the stimulus onset.
Let’s quantify the frequency distribution of 'power' in the brain and noise signals using 'pwelch.m'. The 'pwelch' function is a high-level built-in MATLAB function based on the FFT. The inputs to the function are:
A data vector.
A 'window' (the data are divided into segments of the length specified by this integer, which makes the assumption of stationarity more valid).
'noverlap' (again, averaging spectra from many partially overlapping segments provides a better estimate of spectral content than taking one long window; e.g., noverlap is typically 50%, i.e., length(window)/2).
'nFFT' or “number of FFT points”. The nFFT determines the frequency resolution (∆f) of the power spectrum in that ∆f = Fs/nFFT.
'Fs' (the sampling frequency in order to express the frequencies in units of Hz).
The nFFT is a valuable parameter especially for long data, because (as we learned in the tutorial on The Discrete Fourier Transformation (DFT): Definition and numerical examples) you get as many frequency components in the frequency domain as you have samples in your input data in the time domain. Thus, for signals with many samples (hundreds of thousands or millions) your frequency resolution in a “raw” FFT is unreasonably (or unphysiologically!) high, which leads to a very noisy power spectrum.
Thus, a suitable frequency resolution for neurophysiological data with oscillations is ~0.3 Hz, corresponding to nFFT = 2^10 (∆f = 300/1024 ≈ 0.29 Hz at Fs = 300 Hz).
The PSD function ('pwelch') outputs the spectral power 'PSD_RSM' at the frequencies 'f'.
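A call could, for example, look as follows (a sketch; the window and overlap values are typical choices, not prescribed by the tutorial):

```matlab
% Sketch of a pwelch call on the MEG data (window, noverlap and nFFT
% are example choices following the guidelines above).
Fs = 300;                 % Sampling frequency of the MEG data [Hz].
nFFT = 2^10;              % ~0.3 Hz frequency resolution (df = Fs/nFFT).
window = nFFT;            % Segment length; here chosen equal to nFFT.
noverlap = window/2;      % 50% overlap between segments.
[PSD_RSM, f] = pwelch(RSM, window, noverlap, nFFT, Fs);
[PSD_RSM_noise, f] = pwelch(RSM_noise, window, noverlap, nFFT, Fs);
```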
Let’s plot the power spectra:
>> plot(f, PSD_RSM)
>> hold on % Keep the curve when plotting subsequent curves.
>> plot(f, PSD_RSM_noise, 'r')
>> grid % Add gridlines to the plot.
>> axis([0 60 0 5e3]) % Scale the x and y axes.
>> xlabel('Frequency [Hz]') % Add labels to axes.
>> ylabel('Power spectrum')
The blue line is the power spectrum of the RSM brain data and the red line is the noise recording. The meaning of 'hold on', 'grid', 'axis', 'xlabel' and 'ylabel' should be clear from inspection of the figure window as you give the commands.
The amplitude spectrum is the square root of the power spectrum and has the advantage over the power spectrum that large peaks are not “over-emphasized”. Use the “arrow-up” key to quickly plot the amplitude spectrum in a new window by slight modifications to the commands above:
>> hold on
>> axis([0 60 0 75])
>> xlabel('Frequency [Hz]')
>> ylabel('Amplitude spectrum')
Amplitude and power spectra are frequently plotted in logarithmic scales. This you can do by replacing 'plot' with 'loglog'.
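For example (a sketch, assuming 'f', 'PSD_RSM' and 'PSD_RSM_noise' from above are in your workspace):

```matlab
% Same spectra on double-logarithmic axes.
figure
loglog(f, PSD_RSM)              % Brain data (blue).
hold on
loglog(f, PSD_RSM_noise, 'r')   % Noise recording (red).
grid
xlabel('Frequency [Hz]')
ylabel('Power spectrum')
```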
A very high frequency resolution makes the power spectrum noisy, which is particularly inconvenient when measuring peak amplitudes or frequencies.
To see this, try setting nFFT = 2^18 and use a long window (recall: a large number of samples allows for many frequency components):
>> nFFT = 2^18;
>> [PSD_RSM, f] = pwelch(RSM, nFFT, [], nFFT, Fs); % [] gives the default 50% overlap.
>> f % AND one time “arrow-up”.
>> p % AND two times “arrow-up”.
Note how few key strokes are needed to perform new calculations and visualizations!
Question: Do you see the problem with determining peak frequencies?
Power spectra clearly suffer from a “time localization” problem: the frequency content of a signal may change over time, which leads to a smearing or broadening of the power spectrum; and if oscillations are present only during parts of the recording, the power spectrum will not alert you to such important changes. An ideal method would allow different window sizes depending on the scales that one is interested in. Wavelet analysis attempts to solve these problems by decomposing a time series into a time-frequency space. With wavelet analysis, you can get information on both the amplitude and phase of any oscillatory signal within the series, and on how these amplitudes and phases vary with time.
On first encounter, wavelet transformation sounds fancier or more complicated to most people than it need be. You may think of wavelet transformation as repeated band-pass filtering at multiple overlapping frequencies. To visualize the result compactly, the results are usually plotted as a 2D color image, a so-called time-frequency representation (or “TF plot”).
Let's compute the time-frequency representation of the spontaneous data just used in the power spectrum analysis. The full 20 min of data is too much for our computers to work with, because the wavelet-transformed data take up a lot of memory (we make a copy of our data for each frequency being analyzed). Therefore, we focus on just 20 s of data:
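A sketch of the extraction and the wavelet call; note that the exact input arguments of the tutorial's Wavelet_1ch function (e.g., frequency range and steps) are an assumption here — check its help text:

```matlab
% Extract the first 20 s of data (20 s x 300 Hz = 6000 samples).
Fs = 300;
RSM_short = RSM(1:20*Fs);
% Hypothetical call: the actual argument list of Wavelet_1ch may differ.
W = Wavelet_1ch(RSM_short, Fs);
```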
The output of Wavelet_1ch is a matrix of complex numbers 'W' with a row for each frequency and a column for each sample (time point), giving the amplitude ('abs(W)') and phase ('angle(W)') for each frequency and time point. For now we focus on the amplitude, which we plot in color because the matrix essentially has three dimensions (time, frequency and amplitude).
'flipud' is just to flip the matrix upside down to plot the frequencies from low to high instead of high to low.
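One way to produce such a TF plot (a sketch; it assumes 'W', 'Fs' and a vector 'freqs' of analyzed frequencies are in your workspace, and uses 'axis xy' as an alternative to 'flipud' for putting low frequencies at the bottom):

```matlab
% Sketch of a time-frequency plot of the wavelet amplitudes.
t = (0:size(W, 2) - 1) / Fs;   % Time axis in seconds.
imagesc(t, freqs, abs(W))      % Color codes the amplitude.
axis xy                        % Low frequencies at the bottom.
xlabel('Time [s]')
ylabel('Frequency [Hz]')
colorbar
```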
Note the burst-like appearance of the oscillations. Clearly, the wavelet gives a richer picture of the oscillatory dynamics than the power spectrum, which regards the 10 Hz oscillations as a constant sine wave…
Also note the variability in peak frequency from burst to burst and sometimes even within bursts of alpha oscillations if you zoom in on a burst using the “magnifying glass” button in the figure window.
One of the weaknesses of color plots is that humans are not so good at judging relative differences of colors in a quantitative fashion. You may want to rescale the color scale to better see the smaller-amplitude 20 Hz oscillations relative to the 10 Hz. This is done with an additional (last) argument in brackets indicating the minimum and maximum amplitude values assigned with blue and red, respectively:
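For example (the color limits here are illustrative, and 't' and 'freqs' are assumed to be the time and frequency axes of your TF plot):

```matlab
% Rescale the color axis with an extra [min max] argument:
% values below 0 map to blue, values above 50 to red (example values).
imagesc(t, freqs, abs(W), [0 50])
axis xy
colorbar
```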
Time-frequency (TF) analysis is nowadays (since around 1999) also widely used as a complement to classical averaged evoked response waveforms. Let's try this by applying the wavelet analysis to a somatosensory evoked response (which you may just have computed in a previous tutorial). Compute and plot the evoked response using the following commands:
>> Pre = 0.5;
>> Post = 1.5;
>> [M] = Single_trial_matrix(RSM, Trig, Fs, Pre, Post);
>> Baseline_interval = 0.4; % I suggest a baseline interval of 400 ms.
>> [M_baseline] = Single_trial_baseline(M, Fs, Pre, Baseline_interval);
>> t_ER = -Pre:1/Fs:Post-1/Fs; % Time vector for evoked response (ER).
>> plot(t_ER, mean(M_baseline)) % Plot the classical evoked response.
We now wavelet transform/filter the evoked response.
Whereas the broad-band evoked response had just one baseline, it is now natural to baseline each individual frequency band to see the changes in oscillatory amplitude at all frequencies relative to the baseline activity at those frequencies.
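A minimal sketch of such a per-frequency baseline, assuming 'W_ER' holds the complex wavelet transform of the evoked response (one row per frequency), 'Fs' is the sampling frequency, and the 400-ms baseline precedes the stimulus:

```matlab
% Baseline each frequency band of the wavelet amplitudes separately.
A = abs(W_ER);                               % Amplitude per frequency and time.
baseline_samples = 1:round(0.4*Fs);          % 400-ms baseline interval.
A_baseline = mean(A(:, baseline_samples), 2);% Mean baseline per frequency (row).
A_bl = A - repmat(A_baseline, 1, size(A, 2));% Subtract it from every time point.
```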
Note:
- The broad distribution of frequencies.
- The non-causal effect of the wavelet filter (by comparison with the time-domain signal)!
One of the benefits of this analysis over the classical evoked response waveform is that you do not need to decide a priori the frequency content of interest and, thus, your filter settings. Here, you just filter at all frequencies and one may apply a statistical analysis to amplitudes in the time-frequency plane instead of to a waveform where the amplitude of distinct frequencies may overlap in time and blur the component of interest.
However, if you first compute the amplitude in each frequency band and then average the amplitudes (which are all positive!) across trials, you will identify changes in amplitude of different oscillations relative to the baseline irrespective of their phases.
We will do this, but it requires a wavelet transformation of the entire signal (1200 s) instead of just the evoked response (~2 s), so we increase the frequency steps and decrease the range of frequencies in order to fit the matrix in the memory (RAM) of the computer:
We now build a 3D matrix of the dimensions (#trials x #frequencies x #samples) containing the TF responses of individual trials. This is done with a slightly modified version of the script from the tutorial on evoked response waveforms:
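A sketch of how such a matrix could be built (it assumes 'W' is the wavelet transform of the full recording, 'Trig', 'Fs', 'Pre' and 'Post' as above, and that no stimulus falls too close to the edges of the recording):

```matlab
% Sketch of a single-trial TF matrix (#trials x #frequencies x #samples).
stim = find(diff(Trig) > 0);            % Sample indices of stimulus onsets.
n_pre = round(Pre*Fs);                  % Samples before stimulus.
n_post = round(Post*Fs);                % Samples after stimulus.
TF = zeros(length(stim), size(W, 1), n_pre + n_post);
for k = 1:length(stim)
    idx = stim(k) - n_pre : stim(k) + n_post - 1;
    TF(k, :, :) = abs(W(:, idx));       % Store single-trial amplitudes.
end
```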
Note the values and sign on the color scale.
Questions:
Can you identify similarities between the TF plot of the averaged evoked response and the average of the TF plots?
What is the somewhat surprising conclusion from this plot?
The phenomenon most clearly seen in the 10 Hz band is widely known as “event-related desynchronization” or ERD. This is an unfortunate term, but it originates from the 1980s, when the researchers who first discovered the phenomenon attributed the amplitude change to a change in synchrony between neurons (causing a smaller macroscopic field). Naturally, the oscillation amplitudes can also attenuate because certain neurons transiently stop firing action potentials or fire them at a lower rate.
Coherence is a measure of phase correlations between oscillatory signals of identical frequency. The coherence between two oscillatory signals with stable phase relations attains the maximum value of 1 whereas the coherence of oscillations with highly random phase relations is 0. Spatially distinct neuronal oscillations are unlikely to align their phases with respect to each other if there is no interaction between the neuronal populations. Coherence, therefore, is a popular method for detecting neuronal sources that are working together in a spatially distributed network and to determine how strongly they couple (“collaborate”). It should be mentioned, however, that in many recording situations (especially scalp EEG or MEG) different channels may pick up activity from the same source and thus lead to a significant coherence, which nevertheless should not be interpreted as an interaction between distinct neuronal sources. In real data, you cannot expect coherence around 1 over long periods of time and, vice versa, the coherence will only approach zero when signals come from neuronal populations that are not interacting.
To see whether left and right sensorimotor regions communicate, we compute the coherence between the signals from left and right sensorimotor regions.
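One way to do this is with MATLAB's built-in 'mscohere' (a sketch; the segment length is an example choice):

```matlab
% Sketch of a magnitude-squared coherence estimate between the two channels.
Fs = 300;
nFFT = 2^10;                                         % ~0.3 Hz resolution.
[Cxy, f] = mscohere(RSM, LSM, nFFT, nFFT/2, nFFT, Fs);
plot(f, Cxy)
xlabel('Frequency [Hz]')
ylabel('Coherence')
```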
Note the peaks at around 12 Hz and at 50 Hz: the first corresponds to the so-called sensorimotor mu rhythm; the latter reflects that the 50-Hz power-line interference, which is always present in recordings of electrical activity, has a very stable phase in all sensors (why is this?).
To verify that the 12-Hz coherence peak is from the brain, try to compute and plot the coherence for the noise recordings:
>> [ % AND “arrow-up”: replace LSM/RSM with LSM_noise/RSM_noise.
Hold the previous plot etc.
Not surprisingly, the coherence at 50 Hz is still high.
Also try to shift the data of the two channels in time (e.g., compute the coherence between the first half of the data in RSM with the second half of the data in LSM).
What do you see and why?
Ask if you are in doubt how to shift the samples in the two signals relative to each other.
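A sketch of such a shift (it assumes 'nFFT' and 'Fs' from the coherence computation above, and a coherence plot being held with 'hold on'):

```matlab
% Coherence between time-shifted halves of the two channels.
half = floor(length(RSM)/2);
x = RSM(1:half);                  % First half of RSM...
y = LSM(half+1:2*half);           % ...versus second half of LSM.
[Cxy_shift, f] = mscohere(x, y, nFFT, nFFT/2, nFFT, Fs);
hold on
plot(f, Cxy_shift, 'g')           % Compare with the unshifted coherence.
```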
tutorial/power_spectra_wavelet_analysis_and_coherence.txt · Last modified: 2013/11/29 08:47 by Klaus Linkenkaer-Hansen