This is the third FEAT practical. It leads you through some of the more advanced usage and concepts in both single-session and higher-level FEAT analyses. Feel free to do the latter three sections in a different order if you are particularly interested in any of them.
There is far more to FEAT than we have time to cover here! There are a few more sections in the "Extras" practical, but we do not expect you to do these! However, if you think that any of the concepts outlined below are likely to be more relevant to you than what is in this practical, then feel free to substitute sections.
In this section, we look at the ways we can correct for structured noise within FEAT. By adding specific regressors to the GLM we can mitigate the effects of motion to some extent, and we can pursue a similar strategy using PNM to correct for physiological noise—provided physiological recordings were acquired during the scan!
To demonstrate this we acquired two data sets: two repetitions of the pyramids & palm trees task (as seen in the FEAT 2 practical) in the same subject, but where in one scan the subject deliberately moved and breathed irregularly. These are referred to as the naughty and nice data from here on in.
Note that we will deal with ICA-based clean-up strategies tomorrow; these are a complementary approach.
Take a moment to re-familiarise yourself with the key contrasts and typical responses under normal conditions, and satisfy yourself that the subject was still for the duration of the nice scan. Then look at the data contaminated by motion—the differences should be obvious!
cd ~/fsl_course_data/fmri3/motion/
firefox nice.feat/report.html &
firefox naughty.feat/report.html &
The simplest form of motion correction we can apply is to add the estimated motion traces from MCFLIRT to the GLM as nuisance regressors. That ensures that any of the BOLD signal that correlates with the temporal dynamics of the motion can be treated as noise. To do this, we simply select Standard Motion Parameters from the drop-down menu in the Stats tab in FEAT. Take a look at how this changes the results below:
firefox naughty_motion.feat/report_poststats.html &
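The mechanics behind this can be sketched with a toy GLM. This is pure illustration, not FSL code: the design, motion trace, and numbers below are all invented. The point is that signal which covaries with motion biases the task estimate unless the motion trace is included as a regressor.

```python
# Toy illustration of nuisance regression in a GLM (all values invented).

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved with Gaussian elimination. X is a list of rows."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    M = [row[:] + [rhs] for row, rhs in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (M[r][k] - sum(M[r][c] * beta[c]
                                 for c in range(r + 1, k))) / M[r][r]
    return beta

# Fake 8-volume scan: a task boxcar, a motion trace that overlaps the
# first task block, and data containing both a task response (weight 1)
# and a motion artefact (weight 3).
task   = [0, 0, 1, 1, 0, 0, 1, 1]
motion = [0, 0, 2, 2, 0, 0, 0, 0]
y      = [task[i] * 1.0 + motion[i] * 3.0 for i in range(8)]

# Without the nuisance regressor, the task beta absorbs the artefact...
b_task_only = ols([[t] for t in task], y)
# ...with the motion trace included, the task effect is recovered cleanly.
b_both = ols([[t, m] for t, m in zip(task, motion)], y)
```

Here the motion trace deliberately overlaps a task block; with orthogonal task and motion timecourses the bias would vanish, which is why motion that is correlated with the task is the dangerous case.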
We are now going to use PNM to generate a set of EVs that relate to the physiological recordings we collected during the scans.
Select pnm/naughty_recordings.txt as the Input Physiological Recordings and the 4D data (naughty.nii.gz) as the Input TimeSeries.
Select pnm/slice_timings.txt. Note that if you are doing this using the pop-up window, you will need to delete IMAGE in the Filter box at the top to display the text file.
Set the output name to pnm/mypnm. Under EVs, select RVT and then press Go! Once this has printed Done in the terminal, open the report using:
firefox pnm/mypnm_pnm1.html &
Look at the report PNM generates. You should be able to see several unusual events in the respiratory trace!
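To give a feel for what the generated EVs look like, here is a simplified sketch of the RETROICOR-style idea behind them: Fourier terms (sine/cosine pairs) of the cardiac phase at each acquisition time. The peak times, TR, and single-harmonic setup below are invented for illustration and are much simpler than what PNM actually produces.

```python
import math

# Simplified sketch of physiological-phase regressors (invented numbers).

def cardiac_phase(t, peaks):
    """Phase in [0, 2*pi): fraction of the way between successive heartbeats."""
    for i in range(len(peaks) - 1):
        if peaks[i] <= t < peaks[i + 1]:
            return 2 * math.pi * (t - peaks[i]) / (peaks[i + 1] - peaks[i])
    return 0.0

peaks = [0.0, 0.9, 1.8, 2.8, 3.7]      # fake R-peak times (seconds)
tr = 0.5
times = [n * tr for n in range(7)]      # fake volume acquisition times

# One sine/cosine pair per volume for the first cardiac harmonic; the
# real tool supports higher harmonics and respiratory terms too.
evs = [[math.sin(cardiac_phase(t, peaks)), math.cos(cardiac_phase(t, peaks))]
       for t in times]
```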
The second step of PNM takes the processed physiological data and makes the EVs for FEAT (we will show you how to use these later). To generate these, run the command listed at the bottom of the web page.
We ran an analysis for you that included the physiological confound EVs generated by PNM (either only using PNM, or using PNM in combination with the standard motion parameter approach described above). Have a look at how this changes the results:
firefox naughty_pnm.feat/report_poststats.html &
firefox naughty_motion+pnm.feat/report_poststats.html &
As a last resort, we can completely ignore volumes that have been irreparably corrupted by motion. This is very similar to the concept of 'scrubbing', which simply deletes any particularly bad volumes. However, deleting volumes is problematic as it disrupts the modelling of temporal autocorrelations. Instead, we can add another set of EVs to the GLM that indicate which volumes we want to ignore. We use fsl_motion_outliers to generate these:
fsl_motion_outliers -i naughty.nii.gz -o my_outliers.txt -v
This may take a few minutes to run as this is multiband data.
The -v flag simply prints some extra information, including the volumes that fsl_motion_outliers identifies as noisy. Open naughty.nii.gz in FSLeyes and check a few of these volumes.
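The shape of the resulting confound matrix can be sketched as follows (a simplified assumption about the text file the tool writes: one column per flagged volume, with a single 1 at that time point):

```python
# Sketch of a one-column-per-outlier confound matrix (assumed format).

def outlier_confounds(n_vols, bad_vols):
    """Build a one-hot confound column for each flagged volume."""
    cols = [[1 if v == bad else 0 for v in range(n_vols)]
            for bad in sorted(bad_vols)]
    # Transpose so rows correspond to time points, as FEAT expects.
    return [[col[v] for col in cols] for v in range(n_vols)]

matrix = outlier_confounds(6, {1, 4})
# Each flagged volume gets its own column, so that data point carries no
# weight in estimating the task betas, without deleting the volume.
```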
Finally, we are ready to put this all together! Open FEAT and follow the instructions below to perform all the above motion corrections.
Load naughty.feat/design.fsf. This is the first design, with no noise correction.
We have run this analysis for you, so take a look with:
firefox naughty_kitchen+sink.feat/report.html &
Take a look at the design on the Stats page, which should now contain a smorgasbord of additional EVs. Finally, compare the results to both the nice data and the naughty data without any correction. Are the FSL tools for motion and physiological noise correction on Santa's naughty or nice list this year?
How can we investigate the way activation changes as a function of, for example, differing stimulus intensities? To demonstrate this, we will use a data set where words were presented at different frequencies. Sentences were presented one word at a time, at frequencies ranging from 50 words per minute (wpm) to 1250 wpm (see e.g. Zap Reader), and the participant just had to read the words as they were presented. This is an example of a parametric experimental design. The hypothesis is that certain brain regions respond more strongly to the optimum reading speed compared to the extremely slow and extremely fast word presentation rates (i.e. you might expect to find an inverted U-shape for the response to the five different levels).
cd ~/fsl_course_data/fmri3/parametric/
firefox parametric.feat/report_stats.html &
To begin with, we perform the f-test based analysis described in the lecture. Familiarise yourself with the way this is set up in the design file (ignore contrasts 5 to 8 for now). What is the f-test looking for? Answer.
Looking at the Post-stats, the f-test reaches significance in large swathes of the brain. But what shape of response is driving this result? To investigate this, we can inspect the raw parameter estimates (PEs) directly.
fslmerge -t response_shapes.nii.gz parametric.feat/stats/pe1.nii.gz \
    parametric.feat/stats/pe3.nii.gz parametric.feat/stats/pe5.nii.gz \
    parametric.feat/stats/pe7.nii.gz parametric.feat/stats/pe9.nii.gz
pe1.nii.gz contains the beta values from the GLM for the 50 wpm
stimuli. In other words, this is a map of the strength of the BOLD response
to words presented at 50 wpm (before statistical correction). The above
command concatenates the PEs for all 5 EVs (ignoring the even numbered EVs
which represent the temporal derivatives). This allows us to explore the
specific response shapes in more detail.
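The odd/even PE numbering can be sketched as follows (filenames are illustrative, following the interleaving pattern described above: each EV's main effect is followed by its temporal derivative):

```python
# Sketch: with temporal derivatives enabled, FEAT's PEs alternate between
# each EV's main effect and its derivative, so the main effects are the
# odd-numbered PEs (pe1, pe3, ...).

n_evs = 5
main_effect_pes = ["pe%d.nii.gz" % (2 * ev - 1) for ev in range(1, n_evs + 1)]
derivative_pes  = ["pe%d.nii.gz" % (2 * ev)     for ev in range(1, n_evs + 1)]
```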
fsleyes parametric.feat/example_func.nii.gz \
        parametric.feat/thresh_zfstat1.nii.gz \
        response_shapes.nii.gz &
Open the timeseries display and turn on the response_shapes overlay. Turn this off in the main view, and adjust the colour of
thresh_zfstat1 so you have a representative view of the f-stats.
As you click around within the brain, the time series should now display the
responses at that voxel for each of the five word presentation rates. Keep
this FSLeyes window open!
Can you find brain regions where the responses exhibit a U-shape? Or an inverted-U? How might one interpret these types of responses in light of the experimental paradigm? Answer.
It should be obvious that, in some regions, the parametric responses are very structured. How then, could we quantify these?
To begin with, reopen the FEAT report and look at the design again. Contrasts 5 to 8 encode two simple models for the response: linear and quadratic trends. Satisfy yourself with how we encode these as contrast weights.
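As a concrete sketch, the weights below are one standard choice of linear and quadratic trend contrasts for five equally spaced levels; the actual design file may use different (e.g. scaled) values.

```python
# Sketch: trend contrast weights across the five presentation rates.
linear    = [-2, -1, 0, 1, 2]
quadratic = [ 2, -1, -2, -1, 2]   # positive weights at the extremes

# Both sum to zero (pure trend, no mean) and are orthogonal to each other.
assert sum(linear) == 0 and sum(quadratic) == 0
assert sum(a * b for a, b in zip(linear, quadratic)) == 0

# An inverted-U response loads negatively on this quadratic contrast:
inverted_u = [1, 3, 5, 3, 1]      # fake betas peaking at the middle rate
effect = sum(w * b for w, b in zip(quadratic, inverted_u))
```

A negative quadratic effect therefore flags voxels whose response peaks at intermediate rates, which is exactly the inverted-U hypothesis described above.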
Next, look at the results in the Post-stats tab. Again, we can explore these further by loading the negative quadratic z-stats into FSLeyes as a new overlay in the window we had open.
As you click around within the significant regions of this contrast, note the shape
of the frequency response in the time series plot. If you have time, take a look at
the linear contrasts too. Are different regions displaying different trends?
In summary, we have run an exemplar set of parametric analyses. We used an f-test to find any regions that showed different responses to different frequencies, and visualised what shape these responses took using the response time courses. We also quantified these responses in terms of a set of linear and quadratic trends to give an idea of the more complex analyses that can be run on this type of data.
In this section we will look for interaction effects between a visual and a motor task condition. During the visual condition, subjects passively watched a video of colourful abstract shapes. The motor condition involved uncued, sequential tapping of the fingers of the right hand against the thumb. Subjects were scanned for 10 minutes, which contained twelve 30s task blocks: four visual blocks, four motor blocks, and four blocks containing both conditions.
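One common way to build an interaction EV, sketched below with invented block timings, is as the product of the demeaned condition EVs: the interaction regressor is then only nonzero where the presence of one condition changes the response to the other.

```python
# Sketch (invented timings): interaction EV as a product of demeaned EVs.

visual = [1, 0, 1, 0]   # toy block indicators: blocks 1 and 3 are visual
motor  = [0, 1, 1, 0]   # blocks 2 and 3 are motor; block 3 has both

def demean(xs):
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

# Demeaning before multiplying keeps the interaction regressor from
# simply duplicating the main effects.
interaction = [v * m for v, m in zip(demean(visual), demean(motor))]
```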
To begin with, we have run a simple analysis in one subject that models the visual and motor conditions, but not the interaction between them. Take a look at the FEAT report and familiarise yourself with the task, the analysis, and the responses to the two conditions.
cd ~/fsl_course_data/fmri3/interactions/
firefox 001/initial_analysis.feat/report.html &
We will now run an analysis looking for interactions using this subject's data. Open FEAT and follow the instructions below:
Load 001/initial_analysis.feat/design.fsf (note that if you are doing this using the pop-up window, you may need to delete IMAGE in the Filter box at the top to display the design file) and change from a Full analysis to Statistics in the drop-down box at the top. To save time, we will use the data that has already been preprocessed during the first analysis, and limit ourselves to a few slices.
Select 001/preprocessed_slices.nii.gz as the 4D data, and change the output name to interaction_analysis.feat. Ignore any warnings about BET and preprocessing options.
What interaction effects do you see in this subject? How do you interpret them? Answer.
We have run a straightforward group analysis of this data on a set of nine subjects. Familiarise yourself with the results by looking at the FEAT report and some of the key contrasts:
firefox group/group.gfeat/report.html &
fsleyes -std \
        group/group.gfeat/cope5.feat/thresh_zstat1.nii.gz -cm red-yellow -dr 3.1 6.0 \
        group/group.gfeat/cope6.feat/thresh_zstat1.nii.gz -cm blue-lightblue -dr 3.1 6.0 &
What interaction effects do we observe at the group level? How do you interpret them? Answer.
Note that in this case, the interaction contrast gave us a relatively straightforward set of results. However, this is primarily because we were looking at the interaction between two simple, distinct conditions in a relatively small data set in order to run things in this session. In targeted experiments, interaction based designs can be very powerful and the analysis pipeline is exactly as presented here.
Differential contrasts and F-tests are sensitive to both positive and negative changes in BOLD. To separate out positively driven from negatively driven results we use Contrast Masking. For example, in a differential contrast like A − B, a significant result occurs whenever A − B > 0, but this could be driven by a positive response to A, or by a negative response to B: A − B can be positive even when both responses are negative, provided B is more negative than A.
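The underlying logic can be sketched as follows (the z-values, threshold, and voxels are invented; FEAT applies this voxelwise across whole statistic images):

```python
# Sketch: contrast masking keeps differential results only where a
# masking contrast (e.g. the positive main effect of A) also passes.

z_diff = [4.2, 3.5, 5.0]   # A > B z-stats at three fake voxels
z_a    = [3.8, 0.4, 4.6]   # A main-effect z-stats at the same voxels
threshold = 3.1

# Voxel 2 survives the differential contrast but not the mask, so its
# A > B result was not driven by a positive response to A.
masked = [zd if za > threshold else 0.0 for zd, za in zip(z_diff, z_a)]
```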
We will look at the Shad > Gen contrast (word shadowing
greater than generation) from the fMRI fluency dataset (from the
first FEAT practical) in order to see if the result is associated with
positive or negative shadowing and generation responses. In
contrast_masking you will see a copy of the analysis
we asked you to run in an earlier practical. Back up these results with
cd ~/fsl_course_data/fmri3/contrast_masking
cp -r fmri.feat fmri_orig.feat
Quickly review the results of this analysis (and in particular, the Shad > Gen contrast) to refresh your memory.
We can apply contrast masking without re-running the whole analysis by starting the FEAT GUI and doing the following:
Select the fmri.feat directory. Ignore any warnings.
Note that it is difficult to determine this directionality in other ways, such
as looking at timeseries plots. However, it can be confirmed by loading the
appropriate COPE images into FSLeyes, as these will show negative values in the stats/cope1 image (the Generation condition) in the area associated with the medial posterior cluster.