This is the third FEAT practical. It leads you through some of the more advanced usage and concepts in both single-session and higher-level FEAT analyses.
In this section, we look at the ways we can correct for structured noise within FEAT. By adding specific regressors to the GLM we can mitigate the effects of motion to some extent, and we can pursue a similar strategy using PNM to correct for physiological noise, provided physiological recordings were acquired during the scan!
To demonstrate this we acquired two data sets: two repetitions of the pyramids & palm trees task (as seen in the FEAT 2 practical) in the same subject, but where in one scan the subject deliberately moved and breathed irregularly. These are referred to as the naughty and nice data from here on in.
Note that we will deal with ICA-based clean-up strategies, which are a complementary approach, in a later practical.
Take a moment to re-familiarise yourself with the key contrasts and typical responses under normal conditions, and satisfy yourself that the subject was still for the duration of the nice scan. Then look at the data contaminated by motion—the differences should be obvious!
cd ~/fsl_course_data/fmri3/motion/
firefox sub-01/fsl_feat/ses-01_nice/nice.feat/report.html &
firefox sub-01/fsl_feat/ses-02_naughty/naughty.feat/report.html &
open sub-01/fsl_feat/ses-01_nice/nice.feat/report.html
open sub-01/fsl_feat/ses-02_naughty/naughty.feat/report.html
The simplest form of motion correction we can apply is to add the estimated motion traces from MCFLIRT to the GLM as nuisance regressors. That ensures that any of the BOLD signal that correlates with the temporal dynamics of the motion can be treated as noise. To do this, we simply select Standard Motion Parameters from the drop-down menu in the Stats tab in FEAT. Take a look at how this changes the results below:
firefox sub-01/fsl_feat/ses-02_naughty/naughty_motion.feat/report_poststats.html &
open sub-01/fsl_feat/ses-02_naughty/naughty_motion.feat/report_poststats.html &
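If you want to see exactly what is being added to the model, the six MCFLIRT motion estimates live in a plain-text .par file inside the FEAT directory. As a quick sketch (assuming the default FEAT output layout, with the motion correction results under mc/), you can plot the rotation traces with fsl_tsplot:
# Columns 1-3 of the .par file are rotations; columns 4-6 are translations.
fsl_tsplot -i sub-01/fsl_feat/ses-02_naughty/naughty_motion.feat/mc/prefiltered_func_data_mcf.par \
    -t 'MCFLIRT estimated rotations (radians)' -u 1 --start=1 --finish=3 -a x,y,z \
    -w 640 -h 144 -o naughty_rotations.png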
We are now going to use PNM to generate a set of EVs that relate to the physiological recordings we collected during the scans.
Pnm &
OR (on a mac)
Pnm_gui &
Set up PNM as follows:
- Select sub-01/pnm/ses-02_naughty_recordings.txt as the Input Physiological Recordings, and the 4D data (sub-01/func/ses-02_naughty/sub-01_ses-02_naughty.nii.gz) as the Input TimeSeries.
- Select sub-01/pnm/slice_timings.txt as the slice timing file. Note that if you are doing this using the pop up window, you will need to delete IMAGE in the Filter box at the top to display the text file.
- Set the output name to sub-01/pnm/mypnm.
- Under EVs, select RVT and then press Go!
Once this has printed Done in the terminal, open the report using firefox.
firefox sub-01/pnm/mypnm_pnm1.html &
OR (on a mac)
open -a /Applications/Firefox.app sub-01/pnm/mypnm_pnm1.html
The PNM report will only open properly in Firefox. If you do not have Firefox installed, you can see an example of a PNM report here:
https://users.fmrib.ox.ac.uk/~seanf/mypnm_pnm1.html
Look at the report PNM generates. You should be able to see an unusual event in the respiratory trace!
The second step of PNM takes the processed physiological data and makes the EVs for FEAT; we will show you how to use these later. To generate these, run the command:
./sub-01/pnm/mypnm_pnm_stage2
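Stage 2 writes one voxelwise confound image per physiological EV, together with a list file that FEAT reads. As a quick check (assuming the mypnm output basename used above), you can list the generated EVs:
# Each line names one voxelwise confound image generated by PNM
cat sub-01/pnm/mypnm_evlist.txt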
We ran an analysis for you that included the physiological confound EVs generated by PNM (either only using PNM, or using PNM in combination with the standard motion parameter approach described above). Have a look at how this changes the results:
firefox sub-01/fsl_feat/ses-02_naughty/naughty_pnm.feat/report_poststats.html &
firefox sub-01/fsl_feat/ses-02_naughty/naughty_motion+pnm.feat/report_poststats.html &
open sub-01/fsl_feat/ses-02_naughty/naughty_pnm.feat/report_poststats.html &
open sub-01/fsl_feat/ses-02_naughty/naughty_motion+pnm.feat/report_poststats.html &
As a last resort, we can completely ignore volumes that have been irreparably corrupted by motion. This is very similar to the concept of 'scrubbing', which simply deletes any particularly bad volumes. However, deleting volumes is problematic as it disrupts the modelling of temporal autocorrelations. Instead, we can add another set of EVs to the GLM that indicate which volumes we want to ignore. We use fsl_motion_outliers to do this, via the command below:
fsl_motion_outliers -i sub-01/func/ses-02_naughty/sub-01_ses-02_naughty.nii.gz -o sub-01/func/ses-02_naughty/sub-01_ses-02_naughty_my_outliers.txt -v
This may take a few minutes to run as this is multiband data. The -v flag simply prints some extra information, including the volumes that fsl_motion_outliers identifies as noisy. Open sub-01/func/ses-02_naughty/sub-01_ses-02_naughty.nii.gz in FSLeyes and check a few of these volumes.
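The output is a confound matrix with one row per volume and one column per detected outlier; each column contains a single 1 at the volume to be ignored and 0 elsewhere. A quick sketch of how to inspect it:
# Peek at the first few rows, then count the columns (i.e. the number of
# outlier volumes that will be ignored in the GLM)
head -n 5 sub-01/func/ses-02_naughty/sub-01_ses-02_naughty_my_outliers.txt
awk '{print NF; exit}' sub-01/func/ses-02_naughty/sub-01_ses-02_naughty_my_outliers.txt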
Finally, we are ready to put this all together! Open FEAT and follow the instructions below to perform all the above motion corrections.
- Load the design file sub-01/fsl_feat/ses-02_naughty/naughty.feat/design.fsf. This is the first design with no noise correction.
- In the Stats tab, select Standard Motion Parameters from the drop-down menu.
- Still in the Stats tab, add the PNM confounds by setting the Voxelwise Confound List to sub-01/pnm/mypnm_evlist.txt.
- Finally, add the motion outliers by selecting Add additional confound EVs and choosing sub-01/func/ses-02_naughty/sub-01_ses-02_naughty_my_outliers.txt.
We have run this analysis for you, so take a look with:
firefox sub-01/fsl_feat/ses-02_naughty/naughty_kitchen+sink.feat/report.html &
open sub-01/fsl_feat/ses-02_naughty/naughty_kitchen+sink.feat/report.html &
Take a look at the design on the Stats page, which should now contain a smorgasbord of additional EVs. Finally, compare the results to both the nice data and the naughty data without any correction. Are the FSL tools for motion and physiological noise correction on Santa's naughty or nice list this year?
How can we investigate the way activation changes as a function of, for example, differing stimulus intensities? To demonstrate this, we will use a data set where words were presented at different frequencies. Sentences were presented one word at a time, at frequencies ranging from 50 words per minute (wpm) to 1250 wpm, and the participant just had to read the words as they were presented. This is an example of a parametric experimental design. The hypothesis is that certain brain regions respond more strongly to the optimum reading speed compared to the extremely slow and extremely fast word presentation rates (i.e. you might expect to find an inverted U-shape for the response to the five different levels).
cd ~/fsl_course_data/fmri3/parametric/
firefox parametric.feat/report_stats.html &
open parametric.feat/report_stats.html
To begin with, we perform the f-test based analysis described in the lecture. Familiarise yourself with the way this is set up in the design file (ignore contrasts 5 to 8 for now). What is the f-test looking for? Answer.
Looking at the Post-stats, the f-test reaches significance in large swathes of the brain. But what shape of response is driving this result? To investigate this, we can inspect the raw parameter estimates (PEs) directly.
fslmerge -t response_shapes.nii.gz parametric.feat/stats/pe[13579].nii.gz
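The shell glob pe[13579] expands to pe1, pe3, pe5, pe7 and pe9, one PE per presentation rate (assuming the even-numbered PEs belong to the temporal derivative EVs, the usual FEAT convention when derivatives are added). You can confirm the merge picked up five volumes:
# Should report 5, one response amplitude per word presentation rate
fslnvols response_shapes.nii.gz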
Let's take a look at the response shapes in FSLeyes:
fsleyes parametric.feat/example_func.nii.gz \
    parametric.feat/thresh_zfstat1.nii.gz \
    response_shapes.nii.gz &
Open the timeseries display and turn on response_shapes only. Turn this off in the main view, and adjust the colour of thresh_zfstat1 so you have a representative view of the f-stats. As you click around within the brain, the time series should now display the responses at that voxel for each of the five word presentation rates. Keep this FSLeyes window open!
Can you find brain regions where the responses exhibit a U-shape? Or an inverted-U? How might one interpret these types of responses in light of the experimental paradigm? Answer.
It should be obvious that, in some regions, the parametric responses are very structured. How, then, could we quantify these?
To begin with, reopen the FEAT report and look at the design again. Contrasts 5 to 8 encode two simple models for the response: linear and quadratic trends. Satisfy yourself with how we encode these as contrast weights.
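As a sketch of what to look for (the weights in the actual design may be scaled or signed differently), orthogonal polynomial contrasts across the five presentation rates look like this:
linear trend (increasing with rate):      -2  -1   0   1   2
quadratic trend (U-shaped across rates):   2  -1  -2  -1   2
The negated versions of these pick up decreasing and inverted-U responses, respectively.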
Next, look at the results in the Post-stats tab. Again, we can explore these further by loading the negative quadratic z-stats into FSLeyes as a new overlay in the window we had opened (parametric.feat/thresh_zstat8.nii.gz). As you click around within the significant regions of this contrast, note the shape of the frequency response in the time series plot. If you have time, take a look at the linear contrasts too. Are different regions displaying different trends? Answer.
In summary, we have run an exemplar set of parametric analyses. We used an f-test to find any regions that showed different responses to different frequencies, and visualised what shape these responses took using the response time courses. We also quantified these responses in terms of a set of linear and quadratic trends to give an idea of the more complex analyses that can be run on this type of data.
In this section we will look for interaction effects between a visual and a motor task condition. During the visual condition, subjects passively watched a video of colourful abstract shapes. The motor condition involved uncued, sequential tapping of the fingers of the right hand against the thumb. Subjects were scanned for 10 minutes, which contained twelve 30s task blocks: four visual blocks, four motor blocks, and four blocks containing both conditions.
To begin with, we have run a simple analysis in one subject that models the visual and motor conditions, but not the interaction between them. Take a look at the FEAT report and familiarise yourself with the task, the analysis, and the responses to the two conditions.
cd ~/fsl_course_data/fmri3/interactions/
firefox sub-01/ses-01/fsl_feat/sub-01_ses-01_initial_analysis.feat/report.html &
open sub-01/ses-01/fsl_feat/sub-01_ses-01_initial_analysis.feat/report.html &
We will now run an analysis looking for interactions using this subject's data. Open FEAT and follow the instructions below:
- Load the design file sub-01/ses-01/fsl_feat/sub-01_ses-01_initial_analysis.feat/design.fsf (note that if you are doing this using the pop up window, you may need to delete IMAGE in the Filter box at the top to display the text file) and change from a Full analysis to Statistics in the drop down box at the top. To save time, we will use the data that has already been preprocessed during the first analysis, and limit ourselves to a few slices.
- Select sub-01/ses-01/func/sub-01_ses-01_preprocessed_slices.nii.gz as the 4D data, and change the output name to sub-01/ses-01/fsl_feat/sub-01_ses-01_interaction_analysis.feat. Ignore any warnings about BET and preprocessing options.
- For the Filename of EV1 choose sub-01/ses-01/func/timings/motor.txt. For the Filename of EV2 choose sub-01/ses-01/func/timings/visual.txt.
- Add a third EV and set its shape to Interaction.
- Add the appropriate contrasts for the positive and negative interaction effects.
What interaction effects do you see in this subject? How do you interpret them? Answer.
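Conceptually, the Interaction EV is built from the product of the two input EVs (after each is shifted by a baseline option you can choose), so it is only 'on' in blocks where both conditions occur together. Once the analysis has finished, a quick sanity check is to view the design matrix image FEAT generates (the path assumes the output directory named above); the third column should be non-zero only during the blocks containing both conditions:
firefox sub-01/ses-01/fsl_feat/sub-01_ses-01_interaction_analysis.feat/design.png &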
We have run a straightforward group analysis of this data on a set of nine subjects. Familiarise yourself with the results by looking at the FEAT report:
firefox group/group.gfeat/report.html &
open group/group.gfeat/report.html
And take a closer look at the results for the interaction contrasts in FSLeyes:
fsleyes -std \
    group/group.gfeat/cope5.feat/thresh_zstat1.nii.gz -cm red-yellow -dr 3.1 6.0 \
    group/group.gfeat/cope6.feat/thresh_zstat1.nii.gz -cm blue-lightblue -dr 3.1 6.0 &
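For a quick numerical summary to go with the visual inspection, fslstats can report how many suprathreshold voxels each interaction contrast contains (these are the same thresholded maps loaded above):
# -V prints the number of nonzero voxels and their volume
fslstats group/group.gfeat/cope5.feat/thresh_zstat1.nii.gz -V
fslstats group/group.gfeat/cope6.feat/thresh_zstat1.nii.gz -V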
What interaction effects do we observe at the group level? How do you interpret them? Answer.
Note that in this case the interaction contrast gave us a relatively straightforward set of results. However, this is primarily because, to keep the analyses quick to run in this session, we looked at the interaction between two simple, distinct conditions in a relatively small data set. In targeted experiments, interaction-based designs can be very powerful, and the analysis pipeline is exactly as presented here.
Differential contrasts and F-tests are sensitive to both positive and negative changes in BOLD. To separate out positively driven from negatively driven results we use Contrast Masking. For example, in a differential contrast like A − B, a significant result occurs whenever A − B > 0, but this could be driven by both responses being positive with A greater than B, or by both being negative with B more negative than A (e.g. A = 0.5, B = 0.2 gives A − B = 0.3, but so does A = −0.2, B = −0.5).
We will look at the Shad > Gen contrast (word shadowing greater than generation) from the fMRI fluency dataset (from the first FEAT practical) in order to see if the result is associated with positive or negative shadowing and generation responses. In contrast_masking you will see a copy of the analysis we asked you to run in an earlier practical. Back up these results with the commands:
cd ~/fsl_course_data/fmri3/contrast_masking
cp -r sub-01/ses-01/fsl_feat/sub-01_ses-01_fmri.feat sub-01_ses-01_fmri_orig.feat
Quickly review the results of this analysis (and in particular, the Shad > Gen contrast) to refresh your memory.
We can apply contrast masking without re-running the whole analysis by starting the FEAT GUI and doing the following:
- Change the analysis type in the drop down box at the top from Full analysis to Stats + Post-stats, and select the existing sub-01/ses-01/fsl_feat/sub-01_ses-01_fmri.feat directory. Ignore any warnings.
- For the Filename of EV1 choose sub-01/ses-01/func/timings/word_generation.txt. For the Filename of EV2 choose sub-01/ses-01/func/timings/word_shadowing.txt.
- In the Post-stats tab, press Contrast masking and set the Shad > Gen contrast to be masked by the Generation and Shadowing contrasts, ticking Mask using (Z>0) instead of (Z stats pass thresholding) so that the masking reflects the sign of the responses rather than their significance. Then press Go.
Note that it is difficult to determine this directionality in other ways, such as by looking at timeseries plots. However, it can be confirmed by loading the appropriate COPE images into FSLeyes, as these will show negative values in the stats/cope1 image (the Generation condition) in the area associated with the medial posterior cluster.
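A quick way to confirm this from the command line is to check the range of the Generation COPE within the significant Shad > Gen voxels. A minimal sketch, assuming thresh_zstat3 is the thresholded Shad > Gen map (substitute the zstat number that matches the contrast ordering in your design):
# Min/max of the Generation COPE within the thresholded Shad > Gen mask;
# a negative minimum confirms the deactivation driving part of this contrast
fslstats sub-01/ses-01/fsl_feat/sub-01_ses-01_fmri.feat/stats/cope1.nii.gz \
    -k sub-01/ses-01/fsl_feat/sub-01_ses-01_fmri.feat/thresh_zstat3.nii.gz -R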
The End.