FEAT - FMRIB's Easy Analysis Tool - Version 3
FEAT is a software tool for high
quality FMRI data analysis, with an easy-to-use graphical user
interface (GUI). FEAT automates as many of the analysis decisions as
possible, and allows easy analysis of simple experiments whilst giving
enough flexibility to also allow analysis of the most complex
experiments. Analysis for a simple experiment can be set up in less
than 1 minute, whilst a highly complex experiment need take no longer
than 10 minutes to set up, assuming that any required stimulus timing
files are already prepared. The FEAT programs then typically take
10-60 minutes to run, producing a web page analysis report, including
images with activation blobs shown in colour (overlaid on one of the
initial FMRI images) and plots of data time-course vs model.
FEAT contains two types of analysis - General Linear Modelling (GLM) and model-free. FEAT's GLM analysis owes a lot to SPM (Statistical Parametric Mapping, produced at the Functional Imaging Lab by Karl Friston et al.). With GLM you tell FEAT all about the experimental design, and FEAT creates a model that should fit the data, telling you where the brain has activated in response to the stimuli. The model-free method (also known as ANOVA) requires you simply to tell it when stimulation periods begin, but needs no detailed modelling of the brain's response to the stimulation; it can therefore be very useful in certain circumstances, for example when you do not have a good idea of the expected response, or as an exploratory tool.
FEAT saves many images to file -
various filtered data, statistical output and colour rendered output
images - into a separate FEAT directory for each session. If you want
to re-run the final stages of analysis ("contrasts, thresholding and
rendering"), you can do so without re-running any of the
computationally intensive parts of FEAT, by telling FEAT to look in a
FEAT directory for all of the raw statistical images which it needs to
do this. The results of this re-run of the final stages get saved in
the original FEAT directory, with names similar to the original
results, but with "+" included in the filenames ("++" for a second
re-run, etc).
Note: there is a brief overview of GLM analysis in Appendix A and an overview of how the design matrix is set up in FEAT in Appendix B.
FEAT was created principally by Mark Woolrich and Steve Smith, and Stuart Clare created the model-free method. However, there were many other contributions of various kinds from the other members of the FMRIB analysis group and collaborators mentioned on the FSL page.
For more detail on FEAT and updated journal references, see the FEAT web page. If you use FEAT in your research, please quote the journal references listed there.
Before calling the FEAT GUI, you either need to load a group image into a MEDx folder, or prepare one or more 4D (Analyze format) files on disk. If the data requires any preprocessing (for example, scanner-specific corrections for artefacts such as slice dropouts or to achieve slice timing correction), this should be applied to the data before running FEAT.
To call FEAT, use FSL->FEAT.
Set the TR (time from the start of one volume to the start of the next) and Total volumes (including volumes to be deleted). Now set Delete volumes. These should be the volumes that are not wanted because steady-state imaging has not yet been reached - typically one or two volumes. These volumes are deleted as soon as FEAT is started, so any data produced by FEAT will not contain the deleted volumes. Note that Delete volumes should not be used to correct for the time lag between stimulation and the measured response - this is corrected for in the design matrix by convolving the input stimulation waveform with a blurring-and-delaying haemodynamic response function. Most importantly, remember when setting up the design matrix that the timings in the design matrix start at t=0 seconds, and this corresponds to the start of the first image taken after the deleted scans. In other words, the design matrix starts after the deleted scans have been deleted.
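Since the design matrix's t=0 corresponds to the first retained volume, any onset times recorded relative to the original start of scanning must be shifted back by the deleted period. A minimal sketch of that bookkeeping (the TR, number of deleted volumes and onset times here are hypothetical; FEAT handles the deletion itself, but custom timing files must already be expressed relative to the retained data):

```python
# Sketch: shifting stimulus onset times after deleting initial volumes.
# All values are hypothetical, for illustration only.
TR = 3.0          # seconds from the start of one volume to the next
n_delete = 2      # volumes discarded before steady state is reached

# Onsets recorded relative to the very first acquired volume (seconds):
raw_onsets = [6.0, 36.0, 66.0]

# Onsets relative to t=0 of the retained data, as the design matrix expects:
onsets = [t - n_delete * TR for t in raw_onsets]
print(onsets)  # [0.0, 30.0, 60.0]
```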
Now choose an analysis type - either GLM or model-free. The ABAB and ABACABAC options are just quick ways of setting up GLM modelling; if your experiment follows one of these simple paradigms, you can fill in the appropriate boxes, press Process, and the GLM will be automatically set up for you.
Note that virtually every timing input in FEAT is set in seconds, not volumes. Total volumes, Delete volumes and entries in the model-free method are the exceptions.
Each section of FEAT is now explained in detail. The Paradigm part of the GUI is described in the Stats section, as that is where it logically falls. (The paradigm design part of the GUI is not inside Advanced options because, whilst for many analyses, none of the advanced options need changing, the paradigm certainly does!)
Most of the following options appear under the Advanced options section of the GUI, once opened.
Decide whether you want to run the analysis as quickly as possible or with the best possible quality. The Fastest option turns off motion correction, and runs the linear modelling (FILM) in a faster and slightly less accurate mode (it runs ordinary least-squares and carries out autocorrelation correction globally instead of locally).
Groups of analysis steps can be turned on and off. For example, it is possible to run the pre-stats stages (motion correction and various filtering steps) only, or it is possible to only run stats, assuming that all filtering has already taken place, etc. The default is Full analysis.
If the first group of steps that is chosen is Pre-Stats or Stats, the input to FEAT is either a group page in a MEDx folder or a 4D Analyze file on disk (for choosing between these, see the Misc section below). However, if the only group of steps selected is Contrasts, Thresholding, Rendering (i.e., the final option on the pull-down menu) then the input to FEAT is a FEAT directory, which was produced on disk by a previous FEAT run. Thus you can, for example, re-run the thresholding step with a new threshold, using an old FEAT analysis. If this final option is selected then the Select FEAT directory button must be used in the Misc section, to select an existing FEAT directory.
FEAT can be set up to perform the same analysis on several FMRI series (one at a time; this is not a second-level analysis, just an easier way of analysing several series when they require the same FEAT setup). This requires that for each series to be analysed, a 4D Analyze file has been saved to disk previously. To use this feature, change the File-based analyses number. The default of zero tells FEAT that no file-based analyses will be run, i.e., the group page in the current MEDx folder should be used as input. If this number is increased from zero, the Select 4D data button appears. After setting the correct number of analyses, press the button and you will be prompted to enter the filenames of the appropriate number of 4D Analyze files. If only Contrasts, Thresholding, Rendering are to be run (see previous section), then this button changes to Select FEAT directory; this is because, in this case, FEAT does not start with 4D files, but with existing FEAT directories.
The Delay before starting (hours) entry allows you to tell FEAT how long to wait before beginning processing (after pressing the Go button). This can be used to make FEAT run at night.
Save filtered data, which is on by default, tells FEAT to save the 4D FMRI data to file (inside the FEAT directory) after all pre-stats processing. Save folder, which is off by default, tells FEAT to save the MEDx folder to file (inside the FEAT directory) after completing its analysis. This takes a lot of disk space, and is not generally needed.
Balloon help (the popup help messages in the FEAT GUI) can be turned off once you are familiar with FEAT.
There are various places in FEAT where an automatic threshold is calculated to discriminate between brain and background. This is calculated as a percentage of the maximum intensity in the image, and is set, by default, to 10%. Brain/background threshold, % can be used to change this default percentage.
Motion correction: FEAT now optionally carries out motion correction using AIR or SPM. The default is to use AIR, and auto-decide between 2D and 3D. The Auto-decide option means that FEAT will use the number of slices per volume to decide whether to carry out AIR 2D (<7 slices) or AIR 3D (>=7 slices). FEAT chooses volume 10 from the 4D data set to use as the target in motion correction, and for drawing the final colour blobs onto. However, SPM motion correction uses volume 1 as the target. If using SPM motion correction, it is normally recommended that SPM adjustment for movement (often referred to as "spin history correction") be applied. This attempts to remove temporal effects which are correlated with the estimated motion correction parameters, as these are likely to be motion-induced rather than stimulation-induced. FEAT automatically sets the correct type of adjustment for movement.
Spatial smoothing: This determines the extent of the spatial smoothing, carried out on each volume of the FMRI data set separately. This is intended to reduce noise without reducing valid activation; this is successful as long as the underlying activation area is larger than the extent of the smoothing. Thus if you are looking for very small activation areas then you should maybe reduce smoothing from the default of 5mm, and if you are looking for larger areas, you can increase it, maybe to 10 or even 15mm.
Intensity normalization This forces every FMRI volume to have the same mean intensity. For each volume it calculates the mean intensity and then rescales the intensity of each voxel in the volume so that the global mean becomes a preset constant. This step is normally discouraged - hence is turned off by default. When this step is not carried out, the whole 4D data set is still normalised by a single scaling factor ("grand mean scaling") - every volume is scaled by the same amount. This is so that inter-subject and inter-session second-level analyses are valid.
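The difference between the two normalisation modes can be sketched as follows (illustrative only, not FEAT's exact implementation; the target mean of 10000 is an arbitrary choice here):

```python
import numpy as np

# Sketch of the two normalisation modes: per-volume intensity
# normalisation (off by default) vs grand mean scaling (always applied).
rng = np.random.default_rng(0)
data = rng.uniform(100, 200, size=(4, 4, 3, 20))  # x, y, z, time
target = 10000.0

# Per-volume: each volume is rescaled so its own mean equals the target.
per_vol = data / data.mean(axis=(0, 1, 2), keepdims=True) * target

# Grand mean scaling: one single factor scales the whole 4D data set.
grand = data / data.mean() * target
```

With grand mean scaling, volume-to-volume intensity differences are preserved; with per-volume normalisation they are removed, which is why the latter is discouraged.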
Temporal filtering: Nonlinear Highpass temporal filtering uses a local fit of a straight line (Gaussian-weighted within the line to give a smooth response) to remove low frequency artefacts. This is preferable to standard linear filtering as it does not introduce new autocorrelations into the data. This is applied by default, both to the data and to the design matrix. Lowpass filtering reduces high frequency noise with Gaussian smoothing. It is turned off by default, as its value is questionable in the case of block-design experiments, and can be deleterious in the case of single-event designs. With the ANOVA software used for model-free analysis, Gaussian smoothing invalidates the statistics, so in this case, by default, only high pass filtering is applied by FEAT.
General linear modelling allows you to describe one or more stimulation types, and, for each voxel, a linear combination of the modelled response to these stimulation types is found which minimises the unmodelled noise in the fitting. If you are not familiar with the concepts of the GLM and contrasts of parameter estimates, then you should now read Appendix A.
First set the number of explanatory variables (EVs). This is the number of different effects that you wish to model - one for each modelled stimulation type, and one for each modelled confound.
Now set the High pass filter cutoff point (seconds), that is, the longest temporal period that you will allow. A sensible setting in the case of an ABAB or ABACABAC type block design is 1.5 * the (A+B) or (A+B+A+C) total cycle time. If, instead, you have a single-event design, you can normally reduce this cutoff - a typical value for a single-event paradigm is 75s. When you Preview the design matrix, you will see the effect that the high-pass filtering has.
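A worked example of the 1.5 * cycle-time rule (the block durations are hypothetical):

```python
# Worked example of the high-pass cutoff rule for an ABAB block design.
# Block durations are hypothetical, for illustration only.
on_A, off_B = 30.0, 30.0        # seconds per A block and per B block
cycle = on_A + off_B            # 60 s for one complete (A+B) cycle
cutoff = 1.5 * cycle
print(cutoff)  # 90.0 seconds
```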
Now you need to set up each EV separately. Choose the basic shape of the waveform that describes the stimulus or confound that you wish to model. The basic waveform should be exactly in time with the applied stimulation, i.e., not lagged at all; the measured (time-series) response will be delayed with respect to the stimulation, and this delay is modelled in the design matrix by convolution of the basic waveform with a suitable haemodynamic response function (see appropriate bubble-help).
For an on/off (or a regularly-spaced single-event) experiment this would normally be a square wave, controlled by parameters which also need setting up. To model single-event experiments with this method, the On periods will probably be small - e.g., 1s or even less.
For a single-event experiment with irregular timing for the stimulations, a custom text file can be used. With Custom (1 entry per volume), the custom file should be a list of numbers, separated by spaces or newlines, with one number for each volume (remember not to include entries for any deleted time-points in this list). These numbers can either all be 0s and 1s, or can take a range of values. The former case would be appropriate if the same stimulus was applied at varying time points; the latter would be appropriate, for example, if recorded subject responses are to be inserted as an effect to be modelled. Note that it may or may not be appropriate to convolve this particular waveform with an HRF - in the case of a single-event design, it is.
For even finer control over the input waveform, choose Custom (3 column format). In this case the custom file consists of triplets of numbers; you can input any number of triplets. Each triplet describes a short period of time and the value of the input function during that time. The first number in each triplet is the onset (in seconds) of the period, the second number is the duration (in seconds) of the period, and the third number is the value of the input during that period. The same comments as above apply, about whether these numbers are all set at 1 (which you would normally use), or are allowed to vary. All points in the time series not falling within a specified period from one of the triplets are set to zero. The timings, in seconds, start at t=0, corresponding to the start of the first non-deleted volume.
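A minimal sketch of generating such a 3-column custom file (the filename, onsets and durations here are made up for illustration):

```python
# Sketch: writing a 3-column custom timing file for an irregular
# single-event design. Onsets and durations are hypothetical.
events = [
    (12.0, 1.0, 1.0),   # onset (s), duration (s), value during the period
    (47.5, 1.0, 1.0),
    (90.0, 1.0, 1.0),
]
with open("ev1_timing.txt", "w") as f:
    for onset, duration, value in events:
        f.write(f"{onset} {duration} {value}\n")
```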
Note that whilst all EVs are demeaned before model fitting (i.e., shifted up or down in intensity to end up with zero mean), custom input will not be rescaled (multiplied by a scaling factor). One reason is that if you have more than one custom waveform, you may want to keep relative scalings in the set values.
If you select Interaction then you can set which EVs the current EV models the interaction between. This EV is produced by taking the minimum value of all selected EVs, at each time point, and allows the modelling of the non-additive interaction between the selected EVs.
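The minimum rule can be sketched directly (the EV values are illustrative):

```python
import numpy as np

# Sketch: an interaction EV formed as the pointwise minimum of two EVs,
# as the Interaction option does. Values are illustrative only.
ev1 = np.array([0, 0, 1, 1, 0, 1], dtype=float)  # e.g. drug
ev2 = np.array([0, 1, 1, 0, 0, 1], dtype=float)  # e.g. visual stimulation
ev3 = np.minimum(ev1, ev2)  # "up" only when both originals are "up"
print(ev3)  # [0. 0. 1. 0. 0. 1.]
```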
If you have chosen a Square or Sinusoid basic shape, you then need to specify what the timings of this shape are. Skip is the initial period of zeros (in seconds) before the waveform commences. Off is the duration (seconds) of the "Off" periods in the square wave. On is the duration (seconds) of the "On" periods in the square wave. Period is the period (seconds) of the Sinusoid waveform. Phase is the phase shift (seconds) of the waveform; by default, after the Skip period, the square wave starts with a full Off period and the Sinusoid starts by falling from zero. However, the wave can be brought forward in time according to the phase shift. Thus to start with half of a normal Off period, enter the Phase as half of the Off period. To start with a full On period, enter the same as the Off period. Stop after is the total duration (seconds) of the waveform, starting after the Skip period. "-1" means do not stop. After stopping a waveform, all remaining values in the model are set to zero.
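The Square parameters can be sketched as follows (a simplified model of the behaviour described above, with hypothetical values; not FEAT's own code, and Stop after is omitted for brevity):

```python
import numpy as np

# Sketch of the Square waveform parameters (Skip, Off, On, Phase),
# sampled once per TR. Parameter values are hypothetical.
def square_wave(n_vols, tr, skip, off, on, phase=0.0):
    wave = np.zeros(n_vols)
    period = off + on
    for i in range(n_vols):
        ti = i * tr
        if ti < skip:
            continue                      # initial period of zeros
        pos = (ti - skip + phase) % period  # phase brings the wave forward
        wave[i] = 1.0 if pos >= off else 0.0  # Off period first, then On
    return wave

print(square_wave(n_vols=8, tr=10.0, skip=0.0, off=30.0, on=30.0))
# [0. 0. 0. 1. 1. 1. 0. 0.]  - a full Off period, then On, alternating
```

Setting phase equal to off makes the wave start with a full On period, as described above.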
Under Advanced, it is possible to change various settings relating to the processing of the raw EV waveform described above:
Convolution sets the form of the HRF (haemodynamic response function) convolution that will be applied to the basic waveform. This blurs and delays the original waveform, in an attempt to match the difference between the input function (original waveform, i.e., stimulus waveform) and the output function (measured FMRI haemodynamic response). If the original waveform is already in an appropriate form, e.g., was sampled from the data itself, None should be selected. The other options are all somewhat similar blurring and delaying functions. Gaussian is simply a Gaussian kernel, whose width and lag can be altered. Gamma is a Gamma variate (in fact a normalisation of the probability density function of the Gamma function); again, width and lag can be altered. SPM99 HRF is a preset function which is a mixture of two Gamma functions - a standard positive function at normal lag, and a small, delayed, inverted Gamma, which attempts to model the late undershoot. Unless you understand the differences between the above options, and what effect the numerical parameters have, it is safest to use the default settings.
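The Gamma option can be sketched as sampling a normalised gamma density and convolving it with the basic waveform (the shape and scale values below are illustrative, chosen to give a mean lag of about 6 s, and are not FEAT's exact defaults):

```python
import numpy as np
from math import gamma as gamma_fn

# Sketch: convolving a boxcar EV with a Gamma-variate HRF (a normalised
# gamma probability density). Shape/scale values are illustrative only.
def gamma_hrf(t, shape=4.0, scale=1.5):   # mean lag 6 s, sd 3 s
    h = t ** (shape - 1) * np.exp(-t / scale) / (gamma_fn(shape) * scale ** shape)
    return h / h.sum()                    # unit area after sampling

tr = 3.0
t = np.arange(0.0, 30.0, tr)
hrf = gamma_hrf(t)

boxcar = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
regressor = np.convolve(boxcar, hrf)[: len(boxcar)]  # blurred and delayed EV
```

The resulting regressor peaks later than the boxcar itself, reflecting the haemodynamic delay.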
You should normally apply the same temporal filtering to the model as you have applied to the data, as the model is designed to look like the data before temporal filtering was applied. In this way any extra smoothing is correctly modelled, and long-time-scale components in the model will be dealt with correctly. This is set with the Apply temporal filtering option.
Adding a fraction of the temporal derivative of the blurred original waveform is equivalent to shifting the waveform slightly in time, in order to achieve a slightly better fit to the data. Thus adding in the temporal derivative of a waveform into the design matrix allows a better fit for the whole model, reducing unexplained noise, and increasing resulting statistical significances. Thus, setting Add temporal derivative produces a new waveform in the final design matrix (next to the waveform from which it was derived).
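This can be sketched by adding a finite-difference approximation of the derivative as an extra column (the waveform values are illustrative; FEAT computes the derivative itself):

```python
import numpy as np

# Sketch: adding the temporal derivative of an EV as an extra
# design-matrix column, approximated by finite differences.
ev = np.array([0.0, 0.2, 0.8, 1.0, 0.8, 0.2, 0.0, 0.0])  # blurred waveform
ev_deriv = np.gradient(ev)        # adding a fraction of this shifts the fit in time
X = np.column_stack([ev, ev_deriv])  # EV plus its temporal derivative
```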
Each EV (explanatory variable, i.e., waveform) in the design matrix results in a PE (parameter estimate) image. This estimate tells you how strongly that waveform fits the data at each voxel - the higher it is, the better the fit (and it can also be negative for a negative - i.e., inverted - fit). For an unblurred (0/1) square wave input, the PE image is equivalent to a "mean difference image". To convert a PE into a t statistic image, the PE is divided by its standard error, which is derived from the residual noise after the complete model has been fit. The t image is then transformed into a Z statistic via standard statistical transformation.
As well as Z stat images arising from single EVs, it is possible to combine different EVs (waveforms) - for example, to see where one has a bigger effect than another. To do this, one PE is subtracted from another, a combined standard error is calculated, and a new Z image is created. All of the above is controlled by you, by setting up contrasts. Each output Z statistic image is generated by setting up a contrast vector; thus set the number of outputs that you want, using Number of contrasts. To convert a single EV into a Z statistic image, set its contrast value to 1 and all others to 0. Thus the simplest design, with one EV only, has just one contrast vector, and only one entry in this contrast vector; 1. To compare two EVs, for example, to subtract one stimulus type (EV 1) from another type (EV 2), set EV 1's contrast value to -1 and EV 2's to 1. A Z statistic image will be generated according to this request, answering the question "where is the response to stimulus 2 significantly greater than the response to stimulus 1?"
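The contrast arithmetic at a single voxel can be sketched as ordinary least squares followed by the standard contrast t statistic (synthetic data; this illustrates the maths only, not FEAT's FILM implementation, which additionally models temporal autocorrelation):

```python
import numpy as np

# Sketch: fitting a two-EV GLM at one voxel and forming the contrast
# [-1, 1, 0] ("EV2 > EV1"). Data are synthetic and illustrative only.
rng = np.random.default_rng(1)
n = 100
ev1 = (np.arange(n) // 10) % 2            # alternating 10-volume blocks
ev2 = ((np.arange(n) + 5) // 10) % 2      # blocks shifted by 5 volumes
X = np.column_stack([ev1, ev2, np.ones(n)])   # two EVs plus a constant
y = X @ np.array([1.0, 3.0, 10.0]) + rng.normal(0, 0.5, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # parameter estimates (PEs)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])     # residual noise variance

c = np.array([-1.0, 1.0, 0.0])                # contrast: EV2 minus EV1
cope = c @ beta                               # contrast of PEs
varcope = sigma2 * c @ np.linalg.inv(X.T @ X) @ c
t_stat = cope / np.sqrt(varcope)              # divided by combined std error
```

With the true effect sizes above, `cope` recovers a difference near 2.0 and the t statistic is strongly positive.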
To view the current state of the
design matrix, press Preview. This is a graphical
representation of the design matrix and parameter contrasts. The bar
on the left is a representation of time, which starts at the top and
points downwards. The white marks show the position of every 10th
volume in time. The red bar shows the period of the longest temporal
cycle which was passed by the highpass filtering. The main top part
shows the design matrix; time is represented on the vertical axis and
each column is a different EV. Both the red lines and the greyscale
intensities represent the same thing - the variation of the waveform
in time. The bottom part shows the requested contrasts; each column
refers to the weighting of the relevant EV (often either just 1, 0 or
-1), and each row is a different contrast vector. Thus each row will
result in its own Z statistic image.
When you have finished setting up the design matrix, press Done. This will dismiss the GLM GUI, and will give you a final view of the design matrix.
The model-free method allows the analysis of experiments where you do not have a good idea what the expected response to the stimulation will be. It works by looking for a similar response curve across all repeats of the stimulation, and comparing the variance within this average curve with the variance across repetitions.
Running ANOVA in FEAT does not use the ANOVA GUI - it uses a fairly equivalent set of entries to provide similar user-control over the analysis. If the paradigm consists of many repeats of the same stimulation cycle (e.g., ON/OFF blocks, or repeated single-events with constant spacing), then choose Cyclic and enter the number of Volumes for complete cycle (e.g., ON+OFF count, or the cycle period for single-event stimulation). If, instead, there are unevenly spaced start points for the cycles to be compared, then choose Triggered. You need to set up a file containing the start points of the cycles (in volumes, starting at zero for the first volume), and also enter the number of Volumes per event.
For more detail, see the ANOVA page.
Contrasts are generated by taking linear combinations of parameter estimates produced by the GLM (see Appendix A). These contrasts can be changed inside the GLM popup. You can change both the number of contrasts and the values of them. Every contrast that is asked for generates a raw Z statistic image (saved in the stats subdirectory in the FEAT directory). This raw Z statistic image is then passed on for thresholding and colour rendering. If model-free analysis was carried out instead of GLM, a single raw Z statistic image is produced; in this case there is no such thing as "contrasts", but thresholding and colour rendering are run (and can be re-run with new settings).
Thresholding: The statistical (Z score) image is now thresholded according to either Cluster analysis (default), or Voxel level multiple comparison correction.
Note: MEDx assumes two-tailed tests in its P<->Z transforms, (and also in its t->P transforms) whereas it is almost always a one-tailed test that is of interest. Thus when you enter a probability threshold in the Critical Threshold GUI, the P value is actually wrong by a factor of 2. This is equally true for all of the voxel-level thresholding types (Bonferroni, resel and uncorrected). MEDx is in effect halving the P value when it uses it to calculate Z values. Thus when you enter a P value you should really enter twice the value that you want. If FEAT is asked to carry out critical thresholding, it will make this adjustment for you, i.e., it will double the P value which you entered into the FEAT GUI. Note that the above considerations are not relevant when carrying out cluster detection instead of critical thresholding, as the "two-tailed" assumption does not come into this calculation.
Colour rendering: if at least one voxel has survived thresholding, this is carried out. Significant voxels are overlaid in colour on top of one of the original FMRI images. The Z statistic range selected for rendering is automatically calculated by default, to run from red (minimum Z statistic after thresholding) to yellow (maximum Z statistic). If more than one colour rendered image is to be produced (i.e., when multiple contrasts are created) then the overall range of Z values is automatically found from all of the Z statistic images, for consistent Z statistic colour-coding. If multiple analyses are to be carried out, Use preset Z min/max should be chosen, and the min/max values set by hand. Again, this ensures consistency of Z statistic colour-coding - if several experiments are to be reported side-by-side, colours will refer to the same Z statistic values in each picture. When using this option, you should choose a conservatively wide range for the min and max (e.g., min=1, max=15), to make sure that you do not carry out unintentional thresholding via colour rendering. With Solid colours you don't see any sign of the background images within the colour blobs; with Transparent colours you will see through the colour blobs to the background intensity.
When you have finished setting up FEAT, press Go to run the analysis.
The Save and Load buttons enable you to save and load the complete FEAT setup to and from file. The filename should normally be chosen as design.fsf - this is also the name which FEAT uses to automatically save the setup inside a FEAT directory. Thus you can load in the setup that was used on old analyses by loading in this file from old FEAT directories.
FEAT should be able to find the directory name and filename associated with the input group image. This is true whether the input group image was already in a MEDx folder, or loaded from file. If the associated directory is writable by you then a related whatever.feat directory is created into which all FEAT output is saved. If not, the FEAT directory is created in your home directory. In either case, if the appropriately named FEAT directory already exists, a "+" is added before the .feat suffix to give a new FEAT directory name. For example, if you run FEAT on the 4D Analyze file pair /usr/people/bloggs/data/rawdata.hdr & /usr/people/bloggs/data/rawdata.img, the chosen FEAT directory will be /usr/people/bloggs/data/rawdata.feat. If you run FEAT a second time on this file, the new directory will be /usr/people/bloggs/data/rawdata+.feat.
All results get saved in this FEAT directory. The current list of files is:
General Linear Modelling (more correctly known simply as "linear modelling") sets up a model (i.e., what you expect to see in the data) and fits it to the data. If the model is derived from the stimulation that was applied to the subject in the MRI scanner, then a good fit between the model and the data means that the data was indeed caused by the stimulation.
The GLM used here is univariate. This means that the model is fit to each voxel's time-course separately. (Multivariate would mean that a much more complex analysis would take place on all voxels' time-courses at the same time, and interactions between voxels would be taken into account. Independent Components Analysis - ICA - is an example of multivariate analysis.) For the rest of this section, you can imagine that we are only talking about one voxel, and the fitting of the model to that voxel's timecourse. Thus the data comprises a single 1D vector of intensity values.
A very simple example of linear modelling is y(t)=a*x(t)+b+e(t). y(t) is the data, and is a 1D vector of intensity values - one for each time point, i.e., is a function of time. x(t) is the model, and is also a 1D vector with one value for each time point. In the case of a square-wave block design, x(t) might be a series of 1s and 0s - for example, 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 etc. a is the parameter estimate for x(t), i.e., the value that the square wave (of height 1) must be multiplied by to fit the square wave component in the data (if one exists). b is a constant, and in this example, would correspond to the baseline (rest) intensity value in the data. e is the error in the model fitting, and ideally is very small.
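This simple model can be fit by least squares in a few lines (synthetic data with known a and b, for illustration only):

```python
import numpy as np

# Minimal sketch of fitting y(t) = a*x(t) + b + e(t) by least squares.
# Synthetic data, illustrative only.
x = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1], dtype=float)  # boxcar model
rng = np.random.default_rng(0)
y = 2.5 * x + 100.0 + rng.normal(0, 0.1, x.size)  # true a=2.5, b=100

X = np.column_stack([x, np.ones_like(x)])          # model plus constant
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a_hat, b_hat)  # close to 2.5 and 100.0
```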
If there are two types of stimulus, the model would be y=a1*x1+a2*x2+b+e. Thus there are now two different model waveforms, corresponding to the two stimulus timecourses. There are also two parameters to estimate, a1 and a2. Thus if a particular voxel responds strongly to model x1, the model-fitting will find a large value for a1; if the data instead looks more like the second model timecourse, x2, then the model-fitting will give a2 a large value.
GLM is normally formulated in matrix
notation. Thus all of the parameters are grouped together into a
vector A, and all of the model timecourses are grouped together
into a matrix X. However, it isn't too important to follow the
matrix notation, except to understand the layout of the design matrix,
which is the matrix X. The main part of the image shown here
contains two such model timecourses. Each column is a different model
timecourse, with time going down the image vertically. Thus the left
column is x1, for example, the timecourse associated with
visual stimulation, and the right column is x2, e.g., auditory
stimulation, which has a different timecourse to the visual
stimulation. Note that each column has two representations of the
model's value - the black->white intensity shows the value, as does
the red line plot. Make sure that you are comfortable with both
representations.
When the model is fit to the data, for each voxel there will be found an estimate of the "goodness of fit" of each column in the model, to that voxel's timecourse. In the visual cortex, the first column will generate a high first parameter estimate, and the second column will generate a low second parameter estimate, as this part of the model will not fit the voxel's timecourse well. Each column will be known as an explanatory variable (EV), and in general represents a different stimulus type.
To convert parameter estimates (PEs) into statistical maps, it is necessary to divide the actual PE value by the error in the estimate of this PE value. This results in a t value. If the PE is high, the fit is not significant if the estimated error in this PE is also high. Thus t is a good measure of whether we can believe the estimate of the PE value. All of this is done separately for each voxel. To convert a t value into a P (probability) or Z statistic requires standard statistical transformations; however, t, P and Z all contain the same information - they tell you how significantly the data is related to a particular EV (part of the model).
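The t-to-Z transformation can be sketched as mapping the t value through its cumulative distribution and back through the inverse normal, so the Z value carries the same cumulative probability (assuming SciPy is available; degrees of freedom here are hypothetical):

```python
from scipy.stats import t as t_dist, norm

# Sketch: standard t -> Z statistical transformation, assuming SciPy.
def t_to_z(t_val, dof):
    p = t_dist.cdf(t_val, dof)   # cumulative probability of the t value
    return norm.ppf(p)           # Z value with the same cumulative probability

z = t_to_z(3.0, 20)  # slightly below 3.0, since the dof is finite
```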
As well as producing images of Z values which tell you how strongly each voxel is related to each EV (one image per EV), you can compare parameter estimates to see if one EV is more "relevant" to the data than another. This is known as contrasting EVs, or producing contrasts. To do this, one PE is subtracted from another, a combined standard error is calculated, and a new Z image is created. All of the above is controlled by you, by setting up contrasts. Each output Z statistic image is generated by setting up a contrast vector; thus set the number of outputs that you want. To convert a single EV into a Z statistic image, set its contrast value to 1 and all others to 0. Thus the simplest design, with one EV only, has just one contrast vector, and only one entry in this contrast vector; 1. To compare two EVs, for example, to subtract one stimulus type (EV 1) from another type (EV 2), set EV 1's contrast value to -1 and EV 2's to 1. A Z statistic image will be generated according to this request, answering the question "where is the response to stimulus 2 significantly greater than the response to stimulus 1?"
The bottom part of the above image shows the requested contrasts; each column refers to the weighting of the relevant EV (often either just 1, 0 or -1), and each row is a different contrast vector. Thus each row will result in its own Z statistic image. Here the contrasts are [1 0] and [0 1]. Thus the first Z stat image produced will show response to stimulus type 1, relative to rest, and the second will show the response to stimulus type 2.
If you want to model
nonlinear interactions between two EVs (for example, when you expect
the response to two different stimuli when applied simultaneously to
give a greater response than predicted by adding up the responses to
the stimuli when applied separately), then an extra EV is
necessary. The simplest way of doing this is to set up the two
original EVs, and then add an interaction term, which will only be
"up" when both of the original EVs are "up", and "down" otherwise. In
the example image, EV 1 could represent the application of drug, and
EV 2 could represent visual stimulation. EV 3 will model the extent to
which drug+visual is greater than the sum of drug-only and
visual-only. The third contrast will show this measure, whilst the
fourth contrast [0 0 -1] shows where negative interaction is
occurring.
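Following the rule FEAT uses when building the design matrix (the interaction model is the minimum of the interacting EVs), an interaction regressor can be sketched like this; the on/off timings are invented for illustration:

```python
import numpy as np

# Invented on/off waveforms, one value per time point
ev1_drug   = np.array([0, 0, 1, 1, 1, 1, 0, 0])   # drug applied
ev2_visual = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # visual stimulation

# Interaction EV: "up" only when both original EVs are "up"
ev3_interaction = np.minimum(ev1_drug, ev2_visual)
print(ev3_interaction)   # 1 only where both inputs are 1
```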
All of the EVs have to be linearly independent of each other. This means that no EV can be a sum (or weighted sum) of other EVs in the design. The reason for this is that the maths used to fit the model to the data do not work properly unless the design matrix is of "full rank", i.e. all EVs are independent. A common mistake is to model both rest and activation waveforms, making one an upside-down version of the other; in this case EV 2 is -1 times EV 1, and therefore linearly dependent on it. It is only necessary to model the activation waveform.
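Linear dependence can be checked by looking at the rank of the design matrix; in this sketch (plain NumPy, not part of FEAT) an EV that is -1 times another makes the matrix rank-deficient:

```python
import numpy as np

activation = np.array([0., 1., 0., 1., 0., 1.])
rest = -activation                    # upside-down copy of the activation EV

bad_design = np.column_stack([activation, rest])   # 2 columns, rank 1
good_design = activation[:, None]                  # activation waveform only

print(np.linalg.matrix_rank(bad_design))   # 1 < 2: not full rank, fit fails
print(np.linalg.matrix_rank(good_design))  # 1 column, rank 1: full rank
```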
With "parametric designs", there might be several different levels of stimulation, and you probably want to find the response to each level separately. Thus you should use a separate EV for each stimulation level. (If, however, you are very confident that you know the form of the response, and are not interested in confirming this, then you can create a custom waveform which will match the different stimulation levels, and only use one EV.) If you want to create different contrasts to ask different questions about these responses, then: [1 0 0] shows the response of level 1 versus rest (likewise [0 1 0] for level 2 vs rest and [0 0 1] for level 3). [-1 1 0] shows where the response to level 2 is greater than that for level 1. [-1 0 1] shows the general linear increase across all three levels. [1 -2 1] shows where the increase across all three levels deviates from being linear (this is derived from (l3-l2)-(l2-l1)=l3-2*l2+l1).
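The deviation-from-linearity contrast can be checked with simple arithmetic; the response values below are invented for illustration:

```python
import numpy as np

c_nonlinear = np.array([1, -2, 1])   # (l3 - l2) - (l2 - l1) = l3 - 2*l2 + l1

linear_responses = np.array([1.0, 2.0, 3.0])      # equal steps between levels
saturating_responses = np.array([1.0, 2.0, 2.2])  # response levels off

print(c_nonlinear @ linear_responses)      # 0: no deviation from linearity
print(c_nonlinear @ saturating_responses)  # -0.8: sublinear increase
```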
This section describes the rules which are followed in order to take the FEAT setup and produce a design matrix, for use in the FILM GLM processing.
Here HTR model means "high temporal resolution" model - a time series of values that is used temporarily to create a model and apply the relevant HRF convolution before resampling down in time to match the temporal sampling of the FMRI data.
Note that it is assumed that every voxel was sampled instantaneously, all at the same time, exactly halfway through a volume's time period, not at the beginning. This minimises timing errors if slice-timing correction has not been applied. (Thus, if slice-timing correction is to be applied before FEAT is run, it should take account of this assumption.)
No constant column is added to the model - instead, each EV is demeaned, and each voxel's time-course is demeaned before the GLM is applied.
for each EV (
    if ( square waveform )
        fill HTR model with 0s or 1s
    else if ( sinusoidal waveform )
        fill HTR model with sinusoid scaled to lie in the range 0:1
    else if ( custom waveform )
        fill HTR model with custom information, with 0s outside of specified periods
    else if ( to model nonlinear interaction between other EVs )
        model = MIN(the other EVs which are interacting)
    create blurring+delaying HRF convolution kernel, renormalised so that the sum of values is 1
    convolve HTR model with HRF convolution kernel (values in HTR model for t<0 are set to 0 to allow simple convolution)
    subsample HTR model to match the temporal resolution of the data; take the value in the centre of each volume's time period
    apply the same high-pass and low-pass temporal filtering that was applied to the data
    create a new EV, calculated as the temporal derivative of the current EV
)
demean all EVs
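The steps above can be sketched for a single square-wave EV. This is only an illustration: the HRF here is a generic gamma-shaped kernel, the timings and resolutions are invented rather than FEAT's defaults, and the temporal filtering step is omitted:

```python
import numpy as np
from scipy import stats

tr = 3.0                 # volume repetition time in seconds (illustrative)
htr_dt = 0.05            # high temporal resolution time step (illustrative)
n_vols = 40

# Square waveform at high temporal resolution: 30s off / 30s on blocks
t_htr = np.arange(0.0, n_vols * tr, htr_dt)
htr_model = ((t_htr // 30.0) % 2).astype(float)

# Blurring+delaying HRF kernel (gamma shape, peaking ~5s here),
# renormalised so that the sum of values is 1
k_t = np.arange(0.0, 20.0, htr_dt)
kernel = stats.gamma.pdf(k_t, a=6.0)
kernel /= kernel.sum()

# Convolve; values before t=0 are implicitly 0, so a plain convolution works
convolved = np.convolve(htr_model, kernel)[: len(htr_model)]

# Subsample: take the value at the centre of each volume's time period
centre_idx = np.round((np.arange(n_vols) + 0.5) * tr / htr_dt).astype(int)
ev = convolved[centre_idx]

# Demean (the high-pass/low-pass filtering step is omitted from this sketch)
ev = ev - ev.mean()
```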