FEAT - FMRIB's Easy Analysis Tool - User Guide

FEAT Version 4.03

Introduction - Contributors - Before Running - Basics - Detail - Output - Programs - Overview of GLM - Design Matrix Rules

Introduction

[Image: example GUI view]

(See also the central FEAT research web page.)

FEAT is a software tool for high quality FMRI data analysis, with an easy-to-use graphical user interface (GUI). FEAT is part of FSL (FMRIB's Software Library). FEAT automates as many of the analysis decisions as possible, and allows easy (though still robust, efficient and valid) analysis of simple experiments whilst giving enough flexibility to also allow sophisticated analysis of the most complex experiments.

Analysis for a simple experiment can be set up in less than 1 minute, whilst a highly complex experiment need take no longer than 5 minutes to set up. The FEAT programs then typically take 5-15 minutes to run (per first-level session), producing a web page analysis report, including images with activation blobs shown in colour (overlaid on one of the initial FMRI images) and plots of data time-course vs model.

FEAT contains two types of analysis - a model-based method (using General Linear Modelling - GLM) and a semi-model-free method (IRVA - Inter-Repetition Variance Analysis). Both are described in detail below.

FEAT saves many images to file - various filtered data, statistical output and colour rendered output images - into a separate FEAT directory for each session. If you want to re-run the final stages of analysis ("contrasts, thresholding and rendering"), you can do so without re-running any of the computationally intensive parts of FEAT, by telling FEAT to look in a FEAT directory for all of the raw statistical images which it needs to do this. The results of this re-run of the final stages are put in a new FEAT directory, named similarly to the original FEAT directory, but with an extra "+" included in the name.

FEAT can also carry out the registration of the low resolution functional images to a high resolution scan, and registration of the high resolution scan to a standard (e.g. Talairach) image. Registration is carried out using FLIRT. FEAT can then carry out multi-session or multi-subject statistics, using fixed- and random-effects analyses.

There is a brief overview of GLM analysis in Appendix A and an overview of how the design matrix is set up in FEAT in Appendix B.

[Images: example colour rendered output; example time series plot]

FEAT Contributors

There have been many contributions of various kinds from members of the FMRIB analysis group and collaborators mentioned on the FSL page.

For more detail on FEAT and updated journal references, see the FEAT web page. If you use FEAT in your research, please quote the journal references listed there.


Before Running FEAT

Before calling the FEAT GUI, you need to prepare one or more 4D (Analyze format) files on disk (there are utilities in fsl/bin called avwmerge and avwsplit to convert between multiple 3D images and a single 4D image). If the data requires any preprocessing (for example, scanner-specific corrections for artefacts such as slice dropouts or to achieve slice timing correction), this should be applied to the data before running FEAT.


FEAT Basics

To call the FEAT GUI, either run Feat, or run fsl and press the FEAT button.

Now set the filename of the 4D input image (e.g. /users/sibelius/origfunc.hdr) by pressing Select 4D data. You can set up FEAT to process many input images, one after another, as long as they all require exactly the same analysis. Each one will generate its own FEAT directory, the name of which is based on the input data's filename. Note that if you are going to re-run later parts of an analysis (or run group stats separately from the first-level analyses) the selection of 4D data changes to the selection of FEAT directories. In this case first change the Full first-level analysis option to whatever is appropriate (e.g. Contrasts, Thresholding, Rendering) inside Advanced options and then select the FEAT directory or directories.

Note that in this case you should select the FEAT directories before setting up anything else in FEAT (such as changing the thresholds). This is because quite a lot of FEAT settings are loaded from the first selected FEAT directory, possibly over-writing any settings which you wish to change!

If you are running FEAT from within MEDx and the number of File-based first-level analyses is set to 0 then FEAT will use the selected group page inside the current MEDx folder for analysis, instead of a file on disk.

Total volumes (including volumes to be deleted) is automatically set from the input files chosen.

Now set Delete volumes. These should be the volumes that are not wanted because steady-state imaging has not yet been reached - typically one or two volumes. These volumes are deleted as soon as FEAT is started, so any 4D data output by FEAT will not contain the deleted volumes. Note that Delete volumes should not be used to correct for the time lag between stimulation and the measured response - this is corrected for in the design matrix by convolving the input stimulation waveform with a blurring-and-delaying haemodynamic response function. Most importantly, remember when setting up the design matrix that the timings in the design matrix start at t=0 seconds, and this corresponds to the start of the first image taken after the deleted scans. In other words, the design matrix starts after the deleted scans have been deleted.
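Since the model's clock starts at the first retained volume, any stimulus onsets recorded relative to the very first scanner volume must be shifted before being entered into the design. A minimal sketch in plain Python (TR, number of deleted volumes and the onset times are all illustrative values, not taken from any real experiment):

```python
# Shift stimulus onsets so that t=0 corresponds to the first volume
# kept after "Delete volumes" (all numbers are hypothetical).
TR = 3.0            # seconds from the start of one volume to the next
n_delete = 2        # volumes discarded at the start of the run

# Onsets as recorded relative to the very first scanner volume:
onsets_from_scan_start = [30.0, 60.0, 90.0]

# Onsets as the design matrix expects them (t=0 = first kept volume):
onsets_for_design = [t - n_delete * TR for t in onsets_from_scan_start]
print(onsets_for_design)   # [24.0, 54.0, 84.0]
```

If your timing files were already written relative to the first kept volume, no shift is needed.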

Set the TR (time from the start of one volume to the start of the next).

Now set the High pass filter cutoff point (seconds), that is, the longest temporal period that you will allow. A sensible setting in the case of an ABAB or ABACABAC type block design is 1.5 * the (A+B) or (A+B+A+C) total cycle time. If, instead, you have a single-event design, you can normally reduce this cutoff - a typical value for a single-event paradigm is 75s.
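The rule of thumb above is easy to apply directly; a sketch with illustrative block durations (30 s A blocks and 30 s B blocks):

```python
# Highpass cutoff rule of thumb from the text:
# cutoff = 1.5 * total cycle time.  Durations are illustrative.
on_A = 30.0                 # seconds per A (rest/control) block
on_B = 30.0                 # seconds per B (stimulation) block
cycle = on_A + on_B         # one complete A+B cycle
cutoff = 1.5 * cycle
print(cutoff)               # 90.0 seconds
```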

Now choose an analysis type - either FILM model-based (general linear modelling) or IRVA semi-model-free. You now need to set up the particular details of the analysis, depending on the analysis type you have chosen. See below for more details.

Note that virtually every timing input in FEAT is set in seconds, not volumes. Total volumes, Delete volumes and entries in IRVA are exceptions.

When FEAT setup is complete and the Go button is pressed, the setup gets saved in a temporary FEAT setup file. Then a script (called feat - note the lower case) is run which uses the setup file and carries out all the FMRI analysis steps asked for, starting by creating a FEAT results directory, and copying the setup file into here, named design.fsf (this setup file can be later loaded back into FEAT using the Load button).

Once the script has started running, the GUI can be exited (unless FEAT is called from within MEDx). The analysis will continue until completion, printing text information about its progress to the terminal from which you started FEAT.


FEAT in Detail

Each section of FEAT is now explained in detail. The Stats (paradigm design) part of the GUI is not inside Advanced options because paradigm specification is the one part of the setup that needs user specification (rather than being automatable). However, it is described below in between Pre-stats and Thresholding and rendering because that is where it logically fits in the chain of processing.

Full Analysis or Partial Analysis?

Groups of analysis steps can be turned on and off. For example, it is possible to run the pre-stats stages (motion correction and various filtering steps) only, or it is possible to only run stats, assuming that all filtering has already taken place, etc. The default is Full first-level analysis. (Here first-level means a single session from a single subject. If you want to do a multi-subject (group) analysis you run all the first-level analyses first, then do a second-level analysis by turning on Group stats; FEAT allows you to set all of this up in one go.)

If the first group of steps that is chosen is Pre-Stats or Stats, the input to FEAT is a 4D Analyze file on disk. However, if the only group of steps selected is Contrasts, Thresholding, Rendering (i.e., the final option on the pull-down menu) then the input to FEAT is a FEAT directory, which was produced on disk by a previous FEAT run. Thus you can, for example, re-run the thresholding step with a new threshold, using an old FEAT analysis. If this final option is selected then the Select FEAT directory button must be used to select an existing FEAT directory. The same is true if No first-level analysis (registration and group stats only) is selected - in this case, registration and/or group stats are carried out using existing FEAT directories.

Misc

The Delay before starting (hours) entry allows you to tell FEAT how long to wait before beginning processing (after pressing the Go button). This can be used to make FEAT run at night.

Balloon help (the popup help messages in the FEAT GUI) can be turned off once you are familiar with FEAT.

There are various places in FEAT where an automatic threshold is calculated to discriminate between brain and background. This is calculated as a percentage of the maximum intensity in the image, and is set, by default, to 10%. Brain/background threshold, % can be used to change this default percentage.

Pre-Stats

Motion correction: FEAT now optionally carries out motion correction using MCFLIRT. MCFLIRT is very robust, accurate and fast, and good at "preserving end slices". Note that there is no "spin history" (AKA "correction for movement") option with MCFLIRT. This is because this is still a poorly understood correction method which is under further investigation.

BET: FEAT now optionally runs BET (Brain Extraction Tool) to get rid of non-brain parts of the FMRI data. The same mask is applied to all volumes; BET is set up to produce a relatively liberal mask, to make sure none of the brain is removed.

Spatial smoothing: This determines the extent of the spatial smoothing, carried out on each volume of the FMRI data set separately. This is intended to reduce noise without reducing valid activation; this is successful as long as the underlying activation area is larger than the extent of the smoothing. Thus if you are looking for very small activation areas then you should maybe reduce smoothing from the default of 5mm, and if you are looking for larger areas, you can increase it, maybe to 10 or even 15mm.

Intensity normalization: This forces every FMRI volume to have the same mean intensity. For each volume it calculates the mean intensity and then rescales the intensity of each voxel in the volume so that the global mean becomes a preset constant. This step is normally discouraged - hence is turned off by default. When this step is not carried out, the whole 4D data set is still normalised by a single scaling factor ("grand mean scaling") - every volume is scaled by the same amount. This is so that inter-subject and inter-session second-level analyses are valid.

Temporal filtering: Nonlinear Highpass temporal filtering uses a local fit of a straight line (Gaussian-weighted within the line to give a smooth response) to remove low frequency artefacts. This is preferable to standard linear filtering as it does not introduce new autocorrelations into the data. This is applied by default, both to the data and to the design matrix. Lowpass filtering reduces high frequency noise with Gaussian smoothing. It is turned off by default, as its value is questionable in the case of block-design experiments, and can be deleterious in the case of single-event designs. With IRVA, Gaussian smoothing invalidates the statistics, so in this case, by default, only high pass filtering is applied by FEAT.

Stats

FILM General Linear Model

General linear modelling allows you to describe one or more stimulation types, and, for each voxel, a linear combination of the modelled response to these stimulation types is found which minimises the unmodelled noise in the fitting. If you are not familiar with the concepts of the GLM and contrasts of parameter estimates, then you should now read Appendix A.

You can set up FILM easily for simple designs by using the FILM "wizard" - press the Simple model setup button. Then choose whether to set up ABAB... or ABACABAC... designs (block or single-event). The A blocks will normally be rest (or control) conditions. Enter the timings (in seconds) for these periods and press Process; FILM will be automatically set up for you.

If you want to set up a more complex model, or adjust the setup created by the wizard, press the Full model setup button. This is now described in detail.

EVs

First set the Number of EVs (explanatory variables). This is the number of different effects that you wish to model - one for each modelled stimulation type, and one for each modelled confound.

Now you need to set up each EV separately. Choose the basic shape of the waveform that describes the stimulus or confound that you wish to model. The basic waveform should be exactly in time with the applied stimulation, i.e., not lagged at all; the measured (time-series) response will be delayed with respect to the stimulation, and this delay is modelled in the design matrix by convolution of the basic waveform with a suitable haemodynamic response function (see appropriate bubble-help).

For an on/off (or a regularly-spaced single-event) experiment this would normally be a square wave, controlled by parameters which also need setting up. To model single-event experiments with this method, the On periods will probably be small - e.g., 1s or even less.

For a single-event experiment with irregular timing for the stimulations, a custom text file can be used. With Custom (1 entry per volume), the custom file should be a list of numbers, separated by spaces or newlines, with one number for each volume (remember not to include any deleted time-points in this list). These numbers can either all be 0s and 1s, or can take a range of values. The former case would be appropriate if the same stimulus was applied at varying time points; the latter would be appropriate, for example, if recorded subject responses are to be inserted as an effect to be modelled. Note that it may or may not be appropriate to convolve this particular waveform with an HRF - in the case of single-event, it is.

For even finer control over the input waveform, choose Custom (3 column format). In this case the custom file consists of triplets of numbers; you can input any number of triplets. Each triplet describes a short period of time and the value of the input function during that time. The first number in each triplet is the onset (in seconds) of the period, the second number is the duration (in seconds) of the period, and the third number is the value of the input during that period. The same comments as above apply, about whether these numbers are all set at 1 (which you would normally use), or are allowed to vary. All points in the time series not falling within a specified period from one of the triplets are set to zero. The timings, in seconds, start at t=0, corresponding to the start of the first non-deleted volume.
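The expansion from triplets to a per-volume waveform can be sketched in a few lines of Python (TR, the number of volumes and the triplets themselves are illustrative; FEAT's own sampling code may differ in detail):

```python
# Expand "3 column format" triplets (onset, duration, value) into a
# waveform sampled once per volume.  All numbers are illustrative.
TR = 2.0
n_vols = 10
triplets = [(0.0, 4.0, 1.0), (10.0, 4.0, 1.0)]

waveform = [0.0] * n_vols
for i in range(n_vols):
    t = i * TR                        # time of volume i (t=0 = first kept volume)
    for onset, duration, value in triplets:
        if onset <= t < onset + duration:
            waveform[i] = value       # inside a specified period
print(waveform)
```

Points falling outside every triplet's period stay at zero, as described above.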

Note that whilst all EVs are demeaned before model fitting (i.e., shifted up or down in intensity to end up with zero mean), custom input will not be rescaled (multiplied by a scaling factor). One reason is that if you have more than one custom waveform, you may want to keep relative scalings in the set values.

If you select Interaction then you can set which EVs the current EV models the interaction between. This EV is produced by taking the minimum value of all selected EVs, at each time point, and allows the modelling of the non-additive interaction between the selected EVs.
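The pointwise-minimum construction is simple; a sketch with two illustrative 0/1 EVs:

```python
# An Interaction EV is the pointwise minimum of the selected EVs:
# it is "on" only when all selected EVs are on (illustrative values).
ev1 = [0, 1, 1, 0, 1]
ev2 = [1, 1, 0, 0, 1]
interaction = [min(a, b) for a, b in zip(ev1, ev2)]
print(interaction)   # [0, 1, 0, 0, 1]
```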

If you have chosen a Square or Sinusoid basic shape, you then need to specify what the timings of this shape are. Skip is the initial period of zeros (in seconds) before the waveform commences. Off is the duration (seconds) of the "Off" periods in the square wave. On is the duration (seconds) of the "On" periods in the square wave. Period is the period (seconds) of the Sinusoid waveform. Phase is the phase shift (seconds) of the waveform; by default, after the Skip period, the square wave starts with a full Off period and the Sinusoid starts by falling from zero. However, the wave can be brought forward in time according to the phase shift. Thus to start with half of a normal Off period, enter the Phase as half of the Off period. To start with a full On period, enter the same as the Off period. Stop after is the total duration (seconds) of the waveform, starting after the Skip period. "-1" means do not stop. After stopping a waveform, all remaining values in the model are set to zero.
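The Square parameters above can be sketched as follows (1 s sampling and all parameter values are illustrative; the Stop after behaviour is omitted for brevity):

```python
# Sketch of the Square basic shape.  By default the wave starts with
# a full Off period; "phase" brings the wave forward in time.
skip, off, on = 0.0, 10.0, 10.0     # seconds (illustrative)
phase, total = 0.0, 40.0            # "Stop after" (-1 = never) not modelled

def square(t):
    if t < skip:
        return 0.0
    # Position within the repeating Off+On cycle, advanced by the phase:
    u = (t - skip + phase) % (off + on)
    return 1.0 if u >= off else 0.0

wave = [square(float(t)) for t in range(int(total))]
```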

Under Advanced, it is possible to change various settings relating to the processing of the raw EV waveform described above:

Convolution sets the form of the HRF (haemodynamic response function) convolution that will be applied to the basic waveform. This blurs and delays the original waveform, in an attempt to match the difference between the input function (original waveform, i.e., stimulus waveform) and the output function (measured FMRI haemodynamic response). If the original waveform is already in an appropriate form, e.g., was sampled from the data itself, None should be selected. The other options are all somewhat similar blurring and delaying functions. Gaussian is simply a Gaussian kernel, whose width and lag can be altered. Gamma is a Gamma variate (in fact a normalised Gamma distribution probability density function); again, width and lag can be altered. SPM99 HRF is a preset function which is a mixture of two Gamma functions - a standard positive function at normal lag, and a small, delayed, inverted Gamma, which attempts to model the late undershoot. Unless you understand the differences between the above options, and what effect the numerical parameters have, it is safest to use the default settings.
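The blurring-and-delaying effect of HRF convolution can be sketched numerically. The Gamma shape and scale parameters below are generic textbook-style values, not FEAT's defaults:

```python
import numpy as np

# Sketch: convolve a boxcar EV with a Gamma-shaped HRF.  Shape/scale
# values are illustrative, not FEAT's default convolution settings.
TR = 1.0
t = np.arange(0, 25, TR)
a, b = 6.0, 1.0                       # Gamma shape and scale (illustrative)
hrf = t ** (a - 1) * np.exp(-t / b)   # Gamma-variate kernel
hrf /= hrf.sum()                      # normalise to unit area

boxcar = np.zeros(60)
boxcar[10:20] = 1.0                   # a single 10-volume "On" block

ev = np.convolve(boxcar, hrf)[: len(boxcar)]   # blurred, delayed EV
```

The peak of the convolved EV lands later than the stimulus onset, which is exactly the lag the convolution is meant to model.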

You should normally apply the same temporal filtering to the model as you have applied to the data, as the model is designed to look like the data before temporal filtering was applied. In this way any extra smoothing is correctly modelled, and long-time-scale components in the model will be dealt with correctly. This is set with the Apply temporal filtering option.

Adding a fraction of the temporal derivative of the blurred original waveform is equivalent to shifting the waveform slightly in time, in order to achieve a slightly better fit to the data. Thus adding in the temporal derivative of a waveform into the design matrix allows a better fit for the whole model, reducing unexplained noise, and increasing resulting statistical significances. Thus, setting Add temporal derivative produces a new waveform in the final design matrix (next to the waveform from which it was derived).
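The shift-in-time equivalence comes from a first-order Taylor expansion: ev(t - dt) ≈ ev(t) - dt * ev'(t). A numerical sketch with an illustrative smooth waveform:

```python
import numpy as np

# A small time shift of an EV is well approximated by the EV plus a
# scaled copy of its temporal derivative (waveform is illustrative).
t = np.linspace(0, 30, 61)
ev = np.sin(2 * np.pi * t / 30)      # smooth illustrative waveform
d_ev = np.gradient(ev, t)            # temporal derivative column

dt = 0.5                             # hypothetical timing error (seconds)
shifted = np.sin(2 * np.pi * (t - dt) / 30)
approx = ev - dt * d_ev              # EV plus scaled derivative
```

Because the GLM fits the derivative column's weight freely, it effectively finds the best small shift for each voxel.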

Orthogonalising an EV with respect to other EVs means that it is completely independent of the other EVs, i.e. contains no component related to them. Most sensible designs are already in this form - all EVs are orthogonal to all others. However, this may not automatically be the case, so you can use the Orthogonalise WRT (with respect to) EVs facility to force an EV to be orthogonal to some or all lower numbered EVs. This is achieved by subtracting from the EV that part which is related to the other EVs selected here. An example use would be if you had a lower numbered EV which was a constant height spike train, and the current EV is derived from this other one, but with a linear increase in spike height imposed, to model an increase in response during the experiment for any reason. You would not want the linear trend EV to contain any component of the constant height EV, so you would orthogonalise the current EV wrt the other.
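The subtraction described above is a projection, as in Gram-Schmidt; a minimal sketch (the vectors are illustrative, and FEAT's own code may handle demeaning and multiple EVs differently):

```python
import numpy as np

# Orthogonalise ev2 with respect to ev1 by subtracting the component
# of ev2 that lies along ev1 (illustrative values).
ev1 = np.array([1.0, 2.0, 3.0, 4.0])
ev2 = np.array([2.0, 4.0, 6.0, 9.0])   # mostly a scaled copy of ev1

ev2_orth = ev2 - (ev2 @ ev1) / (ev1 @ ev1) * ev1
print(ev2_orth @ ev1)                  # ~0: no remaining ev1 component
```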

Each EV (explanatory variable, i.e., waveform) in the design matrix results in a PE (parameter estimate) image. This estimate tells you how strongly that waveform fits the data at each voxel - the higher it is, the better the fit (and it can also be negative for a negative - i.e., inverted - fit). For an unblurred (0/1) square wave input, the PE image is equivalent to a "mean difference image". To convert from a PE to a t statistic image, the PE is divided by its standard error, which is derived from the residual noise after the complete model has been fit. The t image is then transformed into a Z statistic via standard statistical transformation.
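The PE-to-t step can be sketched for a single synthetic voxel. This is only the ordinary least-squares core of the idea; the data are simulated, and the full FILM machinery (and the final t-to-Z transformation) is not reproduced here:

```python
import numpy as np

# GLM at one voxel: fit y = X @ beta + noise, then t = PE / SE(PE).
# Data and design are synthetic, for illustration only.
rng = np.random.default_rng(0)
n = 100
ev = np.tile([0.0] * 10 + [1.0] * 10, 5)     # 0/1 square-wave EV
X = np.column_stack([ev, np.ones(n)])         # EV plus a mean column
beta_true = np.array([2.0, 10.0])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # parameter estimates (PEs)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])     # residual noise variance
covb = sigma2 * np.linalg.inv(X.T @ X)        # PE covariance
t_stat = beta[0] / np.sqrt(covb[0, 0])        # t for the EV of interest
```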

Contrasts

As well as Z stat images arising from single EVs, it is possible to combine different EVs (waveforms) - for example, to see where one has a bigger effect than another. To do this, one PE is subtracted from another, a combined standard error is calculated, and a new Z image is created. All of the above is controlled by you, by setting up contrasts. Each output Z statistic image is generated by setting up a contrast vector; thus set the number of outputs that you want, using Number of contrasts. To convert a single EV into a Z statistic image, set its contrast value to 1 and all others to 0. Thus the simplest design, with one EV only, has just one contrast vector, and only one entry in this contrast vector: 1. To compare two EVs, for example, to subtract one stimulus type (EV 1) from another type (EV 2), set EV 1's contrast value to -1 and EV 2's to 1. A Z statistic image will be generated according to this request, answering the question "where is the response to stimulus 2 significantly greater than the response to stimulus 1?"
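The arithmetic behind a contrast is compact: the contrast estimate is c @ beta, and its variance is c @ cov(beta) @ c. A sketch with illustrative PE values and a hypothetical diagonal PE covariance:

```python
import numpy as np

# A contrast vector c differences PEs: estimate = c @ beta, with
# variance c @ cov(beta) @ c.  All numbers are illustrative.
beta = np.array([3.0, 5.0, 100.0])     # PEs for EV1, EV2, mean column
covb = np.diag([0.25, 0.25, 1.0])      # hypothetical PE covariance

c = np.array([-1.0, 1.0, 0.0])         # "EV2 minus EV1"
cope = c @ beta                        # contrast of parameter estimates
t = cope / np.sqrt(c @ covb @ c)       # combined standard error in the denominator
print(cope)                            # 2.0
```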

F-tests

F-tests enable you to investigate several contrasts at the same time, for example to see whether any of them (or any combination of them) is significantly non-zero. This is thus a "generalisation" of the T-test where only one contrast is tested. Also, the F-test allows you to compare the contribution of each contrast to the model and decide on significant and non-significant ones.

One example of F-test usage is if a particular stimulation is to be represented by several EVs, each with the same input function (e.g. square wave or custom timing) but all with different HRF convolutions - i.e. several "basis functions". Putting all relevant resulting parameter estimates together into an F-test allows the complete fit to be tested against zero without having to specify the relative weights of the basis functions (as one would need to do with a single contrast). So - if you had three basis functions (EVs 1,2 and 3) the wrong way of combining them is a single (T-test) contrast of [1 1 1]. The right way is to make three contrasts [1 0 0] [0 1 0] and [0 0 1] and enter all three contrasts into an F-test.
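The F statistic formed from a set of contrast rows C can be sketched as follows (PE values and covariance are illustrative; degrees-of-freedom bookkeeping for the null distribution is omitted):

```python
import numpy as np

# An F-test asks whether any combination of the listed contrasts is
# non-zero: F is proportional to
#   (C @ beta)' [C cov(beta) C']^-1 (C @ beta) / rank(C).
beta = np.array([1.0, -0.5, 0.2])       # PEs for 3 basis-function EVs
covb = 0.1 * np.eye(3)                  # illustrative PE covariance

C = np.eye(3)                           # the three contrasts [1 0 0], [0 1 0], [0 0 1]
cb = C @ beta
F = cb @ np.linalg.inv(C @ covb @ C.T) @ cb / C.shape[0]
```

Note how no relative weighting of the three basis functions is needed, unlike a single [1 1 1] T contrast.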

You can carry out as many F-tests as you like. Each test includes the particular contrasts that you specify by clicking on the appropriate buttons.

Buttons

[Image: example design matrix]

To view the current state of the design matrix, press View design. This is a graphical representation of the design matrix and parameter contrasts. The bar on the left is a representation of time, which starts at the top and points downwards. The white marks show the position of every 10th volume in time. The red bar shows the period of the longest temporal cycle which was passed by the highpass filtering. The main top part shows the design matrix; time is represented on the vertical axis and each column is a different EV. Both the red lines and the greyscale intensities represent the same thing - the variation of the waveform in time. Below this are shown the requested contrasts; each column refers to the weighting of the relevant explanatory variable (often either just 1, 0 or -1), and each row is a different contrast vector. Thus each row will result in its own Z statistic image. If F-tests have been specified, these appear to the right of the contrasts; each column is a different F-test, with the inclusion of particular contrasts depicted by a red square instead of a black one.

If you press Covariance you will see a graphical representation of the covariance matrix of the design matrix. The first matrix shows the absolute value of the normalised correlation of each EV with each EV. If a design is well-conditioned (i.e. not approaching rank deficiency) then the diagonal elements should be white and all others darker. So - if there are any very bright elements off the diagonal, you can immediately tell which EVs are too similar to each other - for example, if element [1,3] (and [3,1]) is bright then columns 1 and 3 in the design matrix are possibly too similar. Note that this includes all final EVs, including any added temporal derivatives. The second matrix shows a similar thing after the design matrix has been run through SVD (singular value decomposition). All non-diagonal elements will be zero and the diagonal elements are given by the eigenvalues of the SVD, so that a poorly-conditioned design is obvious if any of the diagonal elements are black.
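Both diagnostics are easy to reproduce numerically; a sketch with a deliberately ill-conditioned two-EV design (the EVs are synthetic, and the display scaling FEAT uses is not reproduced):

```python
import numpy as np

# Sketch of the Covariance view diagnostics: EV-by-EV correlations,
# plus singular values to gauge conditioning (synthetic EVs).
rng = np.random.default_rng(1)
ev1 = rng.standard_normal(50)
ev2 = ev1 + 0.01 * rng.standard_normal(50)   # nearly a copy of ev1
X = np.column_stack([ev1, ev2])

corr = np.abs(np.corrcoef(X, rowvar=False))  # bright off-diagonal = trouble
s = np.linalg.svd(X, compute_uv=False)       # tiny singular value = trouble
print(corr[0, 1], s.min() / s.max())
```

Here the off-diagonal correlation is nearly 1 and the smallest singular value is tiny relative to the largest - the numerical signature of the "too similar EVs" problem described above.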

When you have finished setting up the design matrix, press Done. This will dismiss the GLM GUI, and will give you a final view of the design matrix.

IRVA

IRVA (Inter-Repetition Variance Analysis) allows the analysis of experiments where you do not have a good idea what the expected response to the stimulation will be. It works by looking for a similar response curve across all repeats of the stimulation, and comparing the variance within this average curve with the variance across repetitions.

If the paradigm consists of many repeats of the same stimulation cycle (e.g., ON/OFF blocks, or repeated single-events with constant spacing), then choose Cyclic and enter the number of Volumes for complete cycle (e.g., ON+OFF count, or the cycle period for single-event stimulation). If, instead, there are unevenly spaced start points for the cycles to be compared, then choose Triggered. You need to set up a file containing the start points of the cycles (in volumes, starting at zero for the first volume), and also enter the number of Volumes per event.

For more detail, see the IRVA page.

Contrasts, Thresholding, Rendering

Contrasts are generated by taking linear combinations of parameter estimates produced by the GLM (see Appendix A). These contrasts can be changed inside the GLM popup. You can change both the number of contrasts and the values of them. Every contrast that is asked for generates a raw Z statistic image (saved in the stats subdirectory in the FEAT directory). This raw Z statistic image is then passed on for thresholding and colour rendering. If model-free analysis was carried out instead of GLM, a single raw Z statistic image is produced; in this case, there is no such thing as "contrasts", but thresholding and colour rendering are run (and can be re-run with new settings).

Thresholding: The statistical (Z score) image is now thresholded according to either Cluster analysis (default), or Voxel level multiple comparison correction.

Colour rendering: if at least one voxel has survived thresholding, this is carried out. Significant voxels are overlaid in colour on top of one of the original FMRI images. The Z statistic range selected for rendering is automatically calculated by default, to run from red (minimum Z statistic after thresholding) to yellow (maximum Z statistic). If more than one colour rendered image is to be produced (i.e., when multiple contrasts are created) then the overall range of Z values is automatically found from all of the Z statistic images, for consistent Z statistic colour-coding. If multiple analyses are to be carried out, Use preset Z min/max should be chosen, and the min/max values set by hand. Again, this ensures consistency of Z statistic colour-coding - if several experiments are to be reported side-by-side, colours will refer to the same Z statistic values in each picture. When using this option, you should choose a conservatively wide range for the min and max (e.g., min=1, max=15), to make sure that you do not carry out unintentional thresholding via colour rendering. With Solid colours you don't see any sign of the background images within the colour blobs; with Transparent colours you will see through the colour blobs to the background intensity.

Registration

Before any multi-session or multi-subject analyses can be carried out, the different sessions need to be registered to each other. This is made easy with FEAT, by saving the appropriate transformations inside the FEAT directories; the transformations are then applied when group statistics are carried out, to transform any relevant statistic images into the common space. By doing this (saving the relevant registration transformations and only applying them to the stats images later) a lot of disk space is saved.

Registration inside FEAT uses FLIRT (FMRIB's Linear Image Registration Tool). This is a very robust affine registration program which can register similar type images (intra-modal) or different types (inter-modal).

Typically, registration in FEAT is a two-stage process. First an example FMRI low resolution image is registered to an example high resolution image (often the same subject's T1-weighted structural). The transformation for this is saved into the FEAT directory. Then the high res image is registered to a standard image (normally a T1-weighted image in Talairach space, such as the MNI 152 average image). This transformation, also, is saved. Finally, the two transformations are combined into a third, which will take the low resolution FMRI images (and the statistic images derived from the first-level analyses) into standard space, when applied later, during group analysis.
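The "combining" of the two transformations is just matrix composition of affine transforms. A sketch with illustrative 4x4 matrices (FLIRT's own matrix and coordinate conventions are not reproduced here):

```python
import numpy as np

# If A maps functional -> highres and B maps highres -> standard,
# then B @ A maps functional -> standard in one step (matrices act
# on column vectors; all values are illustrative).
A = np.array([[2.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])    # functional -> highres (scale x2)
B = np.eye(4)
B[:3, 3] = [10.0, 0.0, 0.0]             # highres -> standard (shift x by 10)

combined = B @ A                        # functional -> standard
point = np.array([1.0, 1.0, 1.0, 1.0]) # a voxel position, homogeneous coords
print(combined @ point)                 # same as B @ (A @ point)
```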

You can carry out registration for each first-level analysis at the same time as the original analysis, or get FEAT to "register" a pre-existing FEAT directory, at a later time. In the latter case, change the Full first-level analysis to No first-level analysis. In either case, press the Registration button.

Now select the Standard image (though a sensible default will already be set for you). This should be a 3D Analyze format file on disk.

Next, select the Subject's high resolution image. This should be a 3D Analyze format file on disk. If FEAT is setup to carry out more than one analysis, you will need to press the Select high res images button to get a GUI for selecting a different high resolution image for each analysis. Note that registration of FMRI low res to high res is often improved if BET has been used on the high res images, to remove non-brain parts of the image.

Note that you can, if you wish, do without the high res image(s). If this is turned off, the FMRI images will be registered directly to the standard image, i.e. the two-stage registration will not be used.

All registration uses the "correlation ratio" cost function, and a full affine (12 degrees-of-freedom) transformation. The only exception is when the functional volume only contains a few slices; in this case the Z scaling would be poorly conditioned, so only 7 degrees-of-freedom are used. The criterion for this is whether the minimum field-of-view in any direction (X, Y or Z) is less than 120mm.

If you carry out the registration step, each original cluster table is processed to give a second cluster table, this time giving co-ordinates in Talairach space.

Group Statistics

If you are carrying out several first-level analyses (or have done so previously), you can carry out both fixed- and random-effects analyses. FEAT assumes that you are either doing one- or two-group comparisons - i.e. testing for an "average single group effect", or testing for an "average group B minus average group A" effect. In the Group stats section, set the Number in group A (control group) to zero for single-group testing, and to greater than 0 for two-group testing.

The order that the first-level sessions are entered into FEAT (in the Misc section - see above) determines which sessions are in group A and which in group B; the first number-in-group-A sessions are in group A and the rest are in group B.

FEAT will generate two images for each contrast that was originally run in the first-level analyses: one from a fixed-effects group analysis and one from a random-effects analysis. Fixed-effects analysis generally shows more activation, but its results cannot be generalised to the population outside of the subjects tested, whilst random-effects analysis is generalisable, but shows less activation.

FEAT produces a new directory containing the group stats results, with a directory name derived from the name of the first first-level FEAT directory. The suffix .gfeat is used.

If the first-level analyses did not produce COPE images, i.e. only produced Z stat images, then the group stats carried out will be limited to "simplified fixed-effects", i.e. based on sum(Z)/sqrt(n). This is the case for IRVA analyses.
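The sum(Z)/sqrt(n) combination above can be sketched in a few lines. This is an illustrative sketch, not FEAT's implementation (the function name and array layout are invented for the example):

```python
import numpy as np

def simplified_fixed_effects(z_maps):
    """Combine per-session Z-stat maps as sum(Z)/sqrt(n).

    z_maps: array of shape (n_sessions, ...) holding one Z image per session.
    Under the null, each Z ~ N(0,1), so the sum of n of them has variance n;
    dividing by sqrt(n) restores a unit-variance group Z statistic.
    """
    z_maps = np.asarray(z_maps, dtype=float)
    n = z_maps.shape[0]
    return z_maps.sum(axis=0) / np.sqrt(n)
```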

Bottom Row of Buttons

When you have finished setting up FEAT, press Go to run the analysis. Once FEAT is running, you can either Exit the GUI, or set up further analyses.

The Save and Load buttons enable you to save and load the complete FEAT setup to and from file. The filename should normally be chosen as design.fsf - this is also the name which FEAT uses to automatically save the setup inside a FEAT directory. Thus you can load in the setup that was used on old analyses by loading in this file from old FEAT directories.

The Utils button produces a menu of FEAT-related utilities:


FEAT Output

FEAT finds the directory name and filename associated with the 4D input data. If the associated directory is writable by you then a related whatever.feat directory is created into which all FEAT output is saved. If not, the FEAT directory is created in your home directory. In either case, if the appropriately named FEAT directory already exists, a "+" is added before the .feat suffix to give a new FEAT directory name.

For example, if you run FEAT on the 4D Analyze file pair /usr/people/bloggs/data/rawdata.hdr & /usr/people/bloggs/data/rawdata.img, the chosen FEAT directory will be /usr/people/bloggs/data/rawdata.feat. If you run FEAT a second time on this file, the new directory will be /usr/people/bloggs/data/rawdata+.feat.

If you rerun contrasts, thresholding and rendering only, the FEAT directory on which this rerun is based is copied to a new FEAT directory (with an extra "+" included in the name). The results of this re-run of the final stages are then put into the new FEAT directory.

All results get saved in the FEAT directory. The current list of files is:

If you have run F-tests as well as T-tests then there will also be many other files produced by FEAT, with filenames similar to those above, but with zfstat appearing in the filename.

The web page report includes the motion correction plots, the 2D colour rendered stats overlay picture for each contrast, the data vs model plots, registration overlay results and a description of the analysis carried out.


FEAT Programs


Appendix A: Brief Overview of GLM Analysis

General Linear Modelling (more correctly known simply as "linear modelling") sets up a model (i.e., what you expect to see in the data) and fits it to the data. If the model is derived from the stimulation that was applied to the subject in the MRI scanner, then a good fit between the model and the data is good evidence that the data was indeed caused by the stimulation.

The GLM used here is univariate. This means that the model is fit to each voxel's time-course separately. (Multivariate would mean that a much more complex analysis would take place on all voxels' time-courses at the same time, and interactions between voxels would be taken into account. Independent Components Analysis - ICA - is an example of multivariate analysis.) For the rest of this section, you can imagine that we are only talking about one voxel, and the fitting of the model to that voxel's timecourse. Thus the data comprises a single 1D vector of intensity values.

A very simple example of linear modelling is y(t)=a*x(t)+b+e(t). y(t) is the data, and is a 1D vector of intensity values - one for each time point, i.e., is a function of time. x(t) is the model, and is also a 1D vector with one value for each time point. In the case of a square-wave block design, x(t) might be a series of 1s and 0s - for example, 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 etc. a is the parameter estimate for x(t), i.e., the value that the square wave (of height 1) must be multiplied by to fit the square wave component in the data. b is a constant, and in this example, would correspond to the baseline (rest) intensity value in the data. e is the error in the model fitting.
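The toy model above can be fitted by ordinary least squares. A minimal numpy sketch, using synthetic data with invented values (a=2, b=100):

```python
import numpy as np

rng = np.random.default_rng(0)

# Square-wave regressor x(t): 5 time points off, 5 on, repeated 3 times.
x = np.tile(np.r_[np.zeros(5), np.ones(5)], 3)

# Synthetic data: y(t) = a*x(t) + b + e(t) with a=2.0, b=100.0.
y = 2.0 * x + 100.0 + 0.1 * rng.standard_normal(x.size)

# Design matrix with a constant column for the baseline b.
X = np.column_stack([x, np.ones_like(x)])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
```

With the noise term removed, the fit recovers a=2 and b=100 exactly; with noise, the estimates are close to the true values.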

If there are two types of stimulus, the model would be y=a1*x1+a2*x2+b+e. Thus there are now two different model waveforms, corresponding to the two stimulus timecourses. There are also two interesting parameters to estimate, a1 and a2. Thus if a particular voxel responds strongly to model x1 the model-fitting will find a large value for a1; if the data instead looks more like the second model timecourse, x2, then the model-fitting will give a2 a large value.

(Image: example design matrix)

GLM is normally formulated in matrix notation. Thus all of the parameters are grouped together into a vector A, and all of the model timecourses are grouped together into a matrix X. However, it isn't too important to follow the matrix notation, except to understand the layout of the design matrix, which is the matrix X. The main part of the image shown here contains two such model timecourses. Each column is a different model timecourse, with time going down the image vertically. Thus the left column is x1 - for example, the timecourse associated with visual stimulation - and the right column is x2, e.g. auditory stimulation, which has a different timecourse to the visual stimulation. Note that each column has two representations of the model's value - the black->white intensity shows the value, as does the red line plot. Make sure that you are comfortable with both representations.

When the model is fit to the data, an estimate of the "goodness of fit" of each column in the model to each voxel's timecourse is found. In the visual cortex, the first column will generate a high first parameter estimate, whilst the second column will generate a low second parameter estimate, as that part of the model will not fit the voxel's timecourse well. Each column is known as an explanatory variable (EV), and in general represents a different stimulus type.

To convert parameter estimates (PEs) into statistical maps, it is necessary to divide the actual PE value by the error in the estimate of this PE value. This results in a t value. If the PE is low relative to its estimated error, the fit is not significant. Thus t is a good measure of whether we can believe the estimate of the PE value. All of this is done separately for each voxel. To convert a t value into a P (probability) or Z statistic requires standard statistical transformations; however, t, P and Z all contain the same information - they tell you how significantly the data is related to a particular EV (part of the model). Z is a "Gaussianised t", which means that a Z statistic of 2 is as significant as a Gaussian value 2 standard deviations away from zero.
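The PE-to-t conversion can be sketched as follows. This is an illustrative ordinary-least-squares calculation on made-up data, ignoring the autocorrelation handling that FILM performs:

```python
import numpy as np

# Made-up example: one square-wave EV plus a constant column.
x = np.array([0., 0., 1., 1., 0., 0., 1., 1., 0., 1.])
y = np.array([10.1, 9.9, 12.2, 11.8, 10.0, 10.2, 12.1, 11.9, 9.8, 12.0])
X = np.column_stack([x, np.ones_like(x)])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # parameter estimates (PEs)
resid = y - X @ beta
dof = len(y) - X.shape[1]                      # residual degrees of freedom
sigma2 = resid @ resid / dof                   # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)          # covariance of the PEs

t = beta[0] / np.sqrt(cov[0, 0])               # t = PE / standard-error(PE)
```

The t value is then mapped to P or Z using the t-distribution and Gaussian cumulative distribution functions.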

As well as producing images of Z values which tell you how strongly each voxel is related to each EV (one image per EV), you can compare parameter estimates to see if one EV is more "relevant" to the data than another. This is known as contrasting EVs, or producing contrasts. To do this, one PE is subtracted from another, a combined standard error is calculated, and a new Z image is created. All of the above is controlled by you, by setting up contrasts. Each output Z statistic image is generated by setting up a contrast vector; thus you set up one contrast vector for each output image that you want. To convert a single EV into a Z statistic image, set its contrast value to 1 and all others to 0. Thus the simplest design, with one EV only, has just one contrast vector, and only one entry in this contrast vector: 1. To compare two EVs, for example, to subtract one stimulus type (EV 1) from another type (EV 2), set EV 1's contrast value to -1 and EV 2's to 1. A Z statistic image will be generated according to this request, answering the question "where is the response to stimulus 2 significantly greater than the response to stimulus 1?"

The bottom part of the above image shows the requested contrasts; each column refers to the weighting of the relevant EV (often either just 1, 0 or -1), and each row is a different contrast vector. Thus each row will result in its own Z statistic image. Here the contrasts are [1 0] and [0 1]. Thus the first Z stat image produced will show response to stimulus type 1, relative to rest, and the second will show the response to stimulus type 2.
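Applying contrast vectors to a voxel's parameter estimates is just a matrix-vector product. A small illustration with made-up PE values:

```python
import numpy as np

pe = np.array([3.0, 1.2])        # hypothetical PEs for EV1 and EV2 at one voxel

contrasts = np.array([[ 1, 0],   # response to stimulus 1 vs rest
                      [ 0, 1],   # response to stimulus 2 vs rest
                      [-1, 1]])  # stimulus 2 minus stimulus 1

# One contrast effect size per row; each is then divided by its own
# combined standard error to give a Z statistic image.
effects = contrasts @ pe
```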

(Image: example design matrix)

If you want to model nonlinear interactions between two EVs (for example, when you expect the response to two different stimuli when applied simultaneously to give a greater response than predicted by adding up the responses to the stimuli when applied separately), then an extra EV is necessary. The simplest way of doing this is to set up the two original EVs, and then add an interaction term, which will only be "up" when both of the original EVs are "up", and "down" otherwise. In the example image, EV 1 could represent the application of drug, and EV 2 could represent visual stimulation. EV 3 will model the extent to which drug+visual is greater than the sum of drug-only and visual-only. The third contrast will show this measure, whilst the fourth contrast [0 0 -1] shows where negative interaction is occurring.
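The interaction EV can be formed with the MIN rule given in the Appendix B pseudocode: it is "up" only where both originals are "up". A hedged numpy illustration with made-up block timings:

```python
import numpy as np

# Hypothetical square waveforms: EV1 = drug, EV2 = visual stimulation.
ev1 = np.array([0., 1., 1., 0., 1., 1., 0.])
ev2 = np.array([0., 0., 1., 1., 1., 0., 0.])

# Interaction EV: elementwise minimum of the interacting EVs, so it is
# 1 only where both EV1 and EV2 are 1.
ev3 = np.minimum(ev1, ev2)
```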

All of the EVs have to be independent of each other. This means that no EV can be a sum (or weighted sum) of other EVs in the design. The reason for this is that the mathematics used to fit the model to the data does not work properly unless the design matrix is of "full rank", i.e. all EVs are independent. A common mistake is to model both rest and activation waveforms, making one an upside-down version of the other; in this case EV 2 is -1 times EV 1, and therefore linearly dependent on it. It is only necessary to model the activation waveform.

With "parametric designs", there might be several different levels of stimulation, and you probably want to find the response to each level separately. Thus you should use a separate EV for each stimulation level. (If, however, you are very confident that you know the form of the response, and are not interested in confirming this, then you can create a custom waveform which will match the different stimulation levels, and only use one EV.) If you want to create different contrasts to ask different questions about these responses, then: [1 0 0] shows the response of level 1 versus rest (likewise [0 1 0] for level 2 vs rest and [0 0 1] for level 3). [-1 1 0] shows where the response to level 2 is greater than that for level 1. [-1 0 1] shows the general linear increase across all three levels. [1 -2 1] shows where the increase across all three levels deviates from being linear (this is derived from (l3-l2)-(l2-l1)=l3-2*l2+l1).

Thus there often exists a natural hierarchy in contrast vectors. In the above example, [1 1 1] shows overall activation, [-1 0 1] shows linear increase in activation and [1 -2 1] shows (quadratic) deviation from the linear trend. Note that each contrast is orthogonal to the others (e.g. -1*1 + 0*1 + 1*1 = 0) - this is important as it means that each is independent of the others. A common mistake might be, for example, to model the linear trend with [1 2 3], which is wrong as it mixes the average activation with the linear increase.
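The pairwise orthogonality of these three contrasts is easy to verify numerically; the off-diagonal entries of the matrix of dot products are all zero:

```python
import numpy as np

# The three hierarchical contrasts from the text.
contrasts = np.array([[ 1,  1,  1],   # overall activation
                      [-1,  0,  1],   # linear increase
                      [ 1, -2,  1]])  # quadratic deviation from linear

# Matrix of pairwise dot products; zero off-diagonals mean the
# contrasts are mutually orthogonal (independent).
gram = contrasts @ contrasts.T
```

By comparison, the mistaken linear-trend contrast [1 2 3] has dot product 6 with [1 1 1], confirming that it mixes average activation into the trend.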


Appendix B: Design Matrix Rules

This section describes the rules which are followed in order to take the FEAT setup and produce a design matrix, for use in the FILM GLM processing.

Here HTR model means "high temporal resolution" model - a time series of values that is used temporarily to create a model and apply the relevant HRF convolution before resampling down in time to match the temporal sampling of the FMRI data.

Note that every voxel is assumed to have been captured instantaneously and at the same time: exactly halfway through a volume's time period, not at the beginning. This minimises timing errors if slice-timing correction has not been applied. (Thus, if slice timing correction is to be applied before FEAT is run, it should take account of the above assumption.)

No constant column is added to the model - instead, each EV is demeaned, and each voxel's time-course is demeaned before the GLM is applied.


for each EV
(
  if ( square waveform )
    fill HTR model with 0s or 1s
  else if ( sinusoidal waveform )
    fill HTR model with sinusoid scaled to lie in the range 0:1
  else if ( custom waveform )
    fill HTR model with custom information, with 0s outside of
    specified periods
  else if ( to model nonlinear interaction between other EVs )
    model = MIN(the other EVs which are interacting)

  create blurring+delaying HRF convolution kernel, renormalised so
  that the sum of values is 1

  convolve HTR model with HRF convolution kernel (values in HTR model
  for t<0 are set to 0 to allow simple convolution)

  subsample HTR model to match the temporal resolution of the data;
  take the value in the centre of each volume's time period

  apply the same high-pass and low-pass temporal filtering that was
  applied to the data

  orthogonalise current EV wrt earlier EVs if requested (form
  temporary matrix from selected EVs, carry out SVD, and subtract
  projection of current EV onto each vector in SVD output)

  create a new EV, calculated as the temporal derivative of the
  current EV
)

demean all EVs
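The core of the rules above can be sketched in Python. This is a simplified, hedged illustration only: a gamma-shaped kernel with invented mean/sd values stands in for the blurring+delaying HRF, and the temporal filtering, orthogonalisation and temporal-derivative steps are omitted:

```python
import numpy as np

TR = 3.0      # volume repetition time in seconds (made-up value)
DT = 0.05     # HTR ("high temporal resolution") model resolution in seconds
N_VOLS = 40   # number of volumes

def square_wave_htr(off_secs, on_secs, total_secs, dt=DT):
    """Square waveform at high temporal resolution: 0s then 1s, repeating."""
    t = np.arange(0.0, total_secs, dt)
    return ((t % (off_secs + on_secs)) >= off_secs).astype(float)

def hrf_kernel(dt=DT, mean=6.0, sd=3.0):
    """Blurring+delaying kernel, renormalised so the sum of values is 1.
    (The gamma shape and mean/sd values here are assumptions for the sketch.)"""
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    t = np.arange(0.0, mean + 5 * sd, dt)
    k = t ** (shape - 1) * np.exp(-t / scale)
    return k / k.sum()

total_secs = N_VOLS * TR
htr = square_wave_htr(off_secs=30.0, on_secs=30.0, total_secs=total_secs)

# Convolve HTR model with the kernel; values for t<0 are implicitly 0
# because np.convolve zero-pads, so truncating to the original length
# gives the causal convolution.
conv = np.convolve(htr, hrf_kernel())[: htr.size]

# Subsample: take the value in the centre of each volume's time period.
centres = ((np.arange(N_VOLS) + 0.5) * TR / DT).astype(int)
ev = conv[centres]

# Demean (FEAT adds no constant column; EVs and data are demeaned instead).
ev -= ev.mean()
```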


Copyright © 2000, University of Oxford. Written by S. Smith.