SIENA - Structural Brain Change Analysis - User Guide
(Structural Image Evaluation, using Normalisation, of Atrophy)
SIENA Version 1.5
Introduction
SIENA is a package for both single-time-point ("cross-sectional")
and two-time-point ("longitudinal") analysis of brain change, in
particular, the estimation of atrophy (volumetric loss of brain
tissue). SIENA has already been used in many clinical studies.
siena estimates percentage brain
volume change (PBVC) between two input images of the same
subject, taken at different points in time. It calls a series of image
analysis programs (supplied with FSL) to
strip the non-brain tissue from the two images, register the two
brains (under the constraint that the skulls are used to hold the
scaling constant during the registration) and analyse the brain change
between the two time points.
sienax estimates total brain tissue
volume, from a single image, after registration to standard
(Talairach) space. It calls a series of FSL programs: it first strips
non-brain tissue, and then uses the brain and skull images to estimate
the scaling between the subject's image and Talairach space. It then
runs tissue segmentation to estimate the volume of brain tissue, and
multiplies this by the estimated scaling factor, to reduce
head-size-related variability between subjects.
Contributors: There have been many contributions of various
kinds from members of the FMRIB analysis group and collaborators
mentioned on the FSL page.
For more detail on SIENA and updated journal references, see the SIENA web
page. If you use SIENA in your research, please quote the journal
references listed there.
FSL Tools used
This section briefly describes the generic FSL programs that SIENA
uses.
bet - Brain Extraction Tool. This
automatically removes all non-brain tissue from the image. It can
optionally output the binary brain mask that was derived during this
process, and output an estimate of the external surface of the skull,
for use as a scaling constraint in later registration.
pairreg, a script supplied with FLIRT - FMRIB's Linear Image
Registration Tool. This script calls FLIRT with a special optimisation
schedule, to register two brain images whilst at the same time using
two skull images to hold the scaling constant (in case the brain has
shrunk over time, or the scanner calibration has changed). The script
first calls FLIRT to register the brains as fully as possible. This
registration is then applied to the skull images, but only the scaling
and skew are allowed to change. This is then applied to the brain
images, and a final pass optimally rotates and translates the brains
to get the best final registration.
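The principle of separating the rotation from the scale/skew components of an affine transform can be sketched in two dimensions. This is an illustration of the principle only, not the actual pairreg optimisation schedule, and all numbers are made up:

```python
import math

def compose(theta, sx, sy):
    """2x2 rotation-then-scale matrix: R(theta) applied after diag(sx, sy)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * sx, -s * sy], [s * sx, c * sy]]

# A registration result mixing rotation with anisotropic scaling.
A = compose(0.10, 1.05, 0.97)

# Recover the parts: column norms give the scales, their direction the angle.
sx = math.hypot(A[0][0], A[1][0])       # 1.05
sy = math.hypot(A[0][1], A[1][1])       # 0.97
theta = math.atan2(A[1][0], A[0][0])    # 0.10

# pairreg's constraint amounts to freezing the scales (fixed by the skull
# registration) while rotation/translation is re-optimised on the brains:
theta_refined = 0.12                     # hypothetical re-optimised angle
A_constrained = compose(theta_refined, sx, sy)
```

The key point is that the scale parameters are determined once from the skulls and then held fixed, so apparent brain shrinkage cannot be absorbed into a global rescaling.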
fast - FMRIB's Automated
Segmentation Tool. This program automatically segments a brain-only
image into different tissue types (normally background, grey matter,
white matter, CSF and other). It also corrects for bias field. It is
used in various ways in the SIENA scripts. Note that both siena
and sienax allow you to choose between segmentation of grey
matter and white matter as separate classes or a single class. It is
important to choose the right option here, depending on whether there
is or is not reasonable grey-white contrast in the image.
Two-Time-Point Estimation
The script siena (see usage) is run simply by typing
siena <input1_fileroot> <input2_fileroot>
where the two input fileroots are Analyze images without the .hdr
or .img extensions.
siena carries out the following steps:
Run bet on the two input images, producing as output, for
each input: extracted brain, binary brain mask and skull image. If you
need to call BET with a different threshold than the default of 0.5,
use -f <threshold>.
Run siena_flirt, a separate script, to register the two
brain images. This first calls the FLIRT-based registration script
pairreg (which uses the brain and skull images to carry out
constrained registration). It then deconstructs the final transform
into two half-way transforms which take the two brain images into a
space halfway between the two, so that they both suffer the same
amount of interpolation-related blurring. Finally the script produces
a multi-slice gif picture showing the registration quality, with one
transformed image as the background and edges from the other
transformed image superimposed in red.
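The halfway-transform idea can be illustrated with a toy example: if the full B-to-A transform were a pure per-axis scaling, each halfway transform would scale by the square root of the full factor, so that applying it twice reproduces the full transform and each image is interpolated exactly once. This is a deliberately simplified sketch, not the actual decomposition used (which also handles rotation, translation and skew):

```python
import math

# Full B->A transform, simplified here to a pure per-axis scaling.
full_scale = (1.10, 1.00, 0.90)

# Each halfway transform applies the square root of the full scaling,
# so both images receive the same amount of interpolation blurring.
half_scale = tuple(math.sqrt(s) for s in full_scale)

# Applying the halfway transform twice recovers the full transform.
recovered = tuple(h * h for h in half_scale)
```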
The final step is to carry out change analysis on the registered
brain images. This is done using the program
siena_diff. (However, in order to improve slightly the accuracy
of the siena_diff program, a self-calibration script
siena_cal is run first. This is described later in this
section.) siena_diff carries out the following steps:
- Transforms the original whole-head images and brain masks for each
time point into the space halfway between them, using the two halfway
transforms previously generated.
- Combines the two aligned masks using logical OR (if either is 1
then the output is 1).
- The combined mask is used to mask the two aligned head
images, resulting in aligned brain images.
- The change between the two aligned brain images is now estimated,
using the following method (note that options given to the
siena script are passed on to siena_diff): Apply tissue
segmentation to the first brain image. At all points which are
reported as boundaries between brain and non-brain (including internal
brain-CSF boundaries), compute the distance that the brain surface has
moved between the two time points. This motion of the brain edge
(perpendicular to the local edge) is calculated on the basis of
sub-voxel correlation (matching) of two 1D vectors; these are taken
from the 3D images, a fixed distance either side of the surface point,
and perpendicular to it, and are differentiated before correlation,
allowing some variation in the two original images. Compute mean
perpendicular surface motion and convert to PBVC.
- Other options to siena_diff:
  -2 : don't segment grey+white separately (use when there is poor
  grey-white contrast).
  -e : before applying the joint mask, erode it by a large amount, so
  that most of the reported edges are at the ventricles rather than the
  external brain surface.
  -d : debug; save various images (edge images, flow output image) and
  don't remove temporary images.
  -c <corr> : apply <corr> as a correction factor to the final PBVC
  (see below).
  -i : ignore the flow in Z; this is useful if the Z scaling is
  uncertain (for example, if the top of the head is missing from the
  images), so that only atrophy in X and Y is estimated.
  -t,-b <n> : ignore the <n> top/bottom slices.
- To convert from mean perpendicular edge motion to PBVC, it is
necessary to assume a certain relationship between real brain surface
area, number of estimated edge points and real brain volume. This
number can be estimated for general images, but will vary according to
slice thickness, image sequence type, etc, causing small scaling
errors in the final PBVC. In order to correct for this,
self-calibration is applied, in which siena calls
siena_cal. This script runs siena_diff on one of the
input images relative to a scaled version of itself, with the scaling
pre-determined (and therefore known). Thus the final PBVC is known in
advance and the estimated value can be compared with this to get a
correction factor for the current image. This is done for both input
images and the average taken, to give a correction factor to be fed
into siena_diff.
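The self-calibration arithmetic can be sketched as follows. All the numbers here are hypothetical; in practice the true PBVC comes from the pre-determined scaling and the estimates come from siena_cal running siena_diff on the scaled copies of the inputs:

```python
# A known per-axis scaling is applied to an input image, so the "true"
# volume change is known in advance.
known_scale = 1.002                        # hypothetical per-axis scaling
true_pbvc = 100 * (known_scale ** 3 - 1)   # known percentage volume change

# Hypothetical siena_diff estimates for the two scaled inputs.
estimated_A = 0.55
estimated_B = 0.65

# Correction factor = true / estimated, averaged over the two inputs,
# then fed back into siena_diff via the -c option.
correction = (true_pbvc / estimated_A + true_pbvc / estimated_B) / 2
```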
Note that all output is in the same directory as the input, so this
must be writable by the user. The output files are (assuming the input
images are called "A" and "B"):
- A_brain.hdr and B_brain.hdr the extracted brains from the input
images.
- A_brain_mask.hdr and B_brain_mask.hdr the associated binary brain
masks.
- A_brain_skull.hdr and B_brain_skull.hdr the associated skull
surface images.
- B_to_A.mat the transformation taking B to A, using the brain and
skull images.
- B_to_A.mat_avscale a file derived from the .mat file, including
various information such as the halfway transforms to the central
spatial position.
- B_regto_A.gif a gif image showing the results of the registration,
using one transformed image as the background and the other as the
coloured edges foreground.
- A_halfwayto_B.mat and B_halfwayto_A.mat the transformations taking
the images to the halfway positions.
- A_halfwayto_B.hdr and B_halfwayto_A.hdr the transformed input
images, in the halfway position.
- A_halfwayto_B_mask.hdr and B_halfwayto_A_mask.hdr the transformed
brain mask images, in the halfway position.
- A_halfwayto_B_brain.hdr the transformed image A, masked by the
combined mask, ready for input to tissue segmentation.
- A_halfwayto_B_brain_seg.hdr the output of this segmentation, with a
different intensity for each separate tissue type.
- A_halfwayto_B_brain.vol the file containing text output from the
tissue segmentation.
- A_to_B_flow.hdr the image encoding the perpendicular brain edge
motion at each point, all other points being zero.
- A_to_B_flowneg.hdr the negation of this, used in the colour
rendering (see next entry).
- A_halfwayto_B_render.hdr a colour rendered image of edge motion
superimposed on the halfway A image.
- A_to_B.siena the output information from the siena script.
After completion, many of these files are deleted unless siena
was called with the -d option.
Single-Time-Point Estimation
The script sienax (see usage) is run simply by typing
sienax <input_fileroot>
where the input fileroot is an Analyze image without the .hdr
or .img extension.
sienax carries out the following steps:
- Run bet on the single input image, outputting the
extracted brain and the skull image. If you need to call BET with a
different threshold than the default of 0.5, use -f
<threshold>.
- Run pairreg (which uses the brain and skull images to
carry out constrained registration); the MNI152 standard brain is
the target (reference), using brain and skull images derived from
the MNI152. Thus, as with two-time-point atrophy, the brain is
registered (this time to the standard brain), again using the skull
as the scaling constraint. Hence brain tissue volume (estimated
below) will be relative to a "normalised" skull size. (Ignore the
"WARNING: had difficulty finding robust limits in histogram"
message; this appears because FLIRT isn't too happy with the unusual
histograms of skull images, but is nothing to worry about in this
context.) Note that all later steps are in fact carried out on the
original (but stripped) input image, not the registered input image;
this is so that the original image does not need to be resampled
(which introduces blurring). Instead, to make use of the
normalisation described above, the brain volume (estimated by the
segmentation step described below) is scaled by a scaling factor
derived from the normalising transform, before being reported as the
final normalised brain volume.
- A standard brain image mask (derived from the MNI152 and
slightly dilated) is transformed into the original image space (by
inverting the normalising transform found above) and applied to the
brain image. This helps ensure that the original brain extraction
does not include artefacts such as eyeballs.
- Segmentation is now run on the masked brain using
fast. If there is reasonable grey-white contrast, grey
matter and white matter volumes are reported separately, as well as
total brain volume (this is the default behaviour). Otherwise
(i.e. if sienax was called with the -2 option), just
brain/CSF/background segmentation is carried out, and only brain
volume is reported. Before reporting, all volumes are scaled by the
normalising scaling factor, as described above, so that all
subjects' volumes are reported relative to a normalised skull size.
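The final reported volumes are therefore just the raw segmentation volumes multiplied by the volumetric scaling factor; with hypothetical numbers:

```python
# Hypothetical raw tissue volumes from the segmentation (mm^3) and the
# volumetric scaling factor (VSCALING, from the A2tal.avscale file).
grey = 650000.0
white = 520000.0
vscaling = 1.23     # subject space -> standard space volume ratio

# Normalised volumes, as reported by sienax.
norm_grey = grey * vscaling
norm_white = white * vscaling
norm_brain = (grey + white) * vscaling
```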
Note that all output is in the same directory as the input, so this
must be writable by the user. The output files are (assuming the input image is called "A"):
- A_brain.hdr the main output from BET - all non-brain
tissue should have been removed from the image.
- A_brain_skull.hdr the estimate of the external surface of the
skull produced by BET.
- A2tal.mat the transformation that takes the input image into
standard space.
- A2tal_inv.mat the inverse of this transformation, taking standard
space into that of the original image.
- A2tal.avscale a file (derived from A2tal.mat) containing
transformation information, such as the volumetric scaling factor
between the original image and standard space.
- A_talmask.hdr the (inverse) transformed standard space mask.
- A_talmaskbrain.hdr the input brain image after it has been
additionally masked by the transformed standard space mask.
- A_talmaskbrain_seg.hdr the output of the segmentation, containing
different intensities for the different class types.
- A_talmaskbrain.pve-N.hdr the probability output images for each
class in the segmentation, where N = 0, 1, 2, ... (one per class).
- A_render.hdr a colour rendered image showing the segmentation
output superimposed on top of the original image.
- A.sienax the output information from the sienax script.
After completion, many of these files are deleted unless sienax
was called with the -d option.
Manual Segmentation Correction
If you want to manually edit the segmentation image, to correct for
any segmentation errors, you should follow the steps below. The
following assumes the use of MEDx to edit images. The main steps
described are for correcting sienax, followed by a description
of how to adapt this procedure for siena.
- Load the ...seg.hdr image into MEDx.
- In lightbox mode, make sure the dimensionality is 3D.
- In "look-at-voxel-values" mode rather than the "graphics-edit"
mode (controlled by the button to the right of the 3D/2D button),
check the intensities of the classes of interest - for example,
0=background, 1=CSF, 2=grey, 3=white.
- Using Toolbox->Measurement->Statistics, set the lower and
upper thresholds to give the current count of G+W (eg lower=1.5,
upper=3.5) and measure the Area (in fact the volume). This is
slightly different from (and less accurate than) the volume reported by fast,
as it does not directly take into account the partial volume effect.
- In lightbox mode, change dimensionality from 3D to 2D.
- Using the pencil tool from the toolbox, outline all regions of
one particular type that you wish to change to a different type.
- Use Folder->Graphic->Select All->2D Graphics to highlight
all graphics that you've just drawn. Use
Folder->Graphic->Grouping->Derive 3D Contour to convert these
into one 3D object.
- Bring up the image calculator and enter the intensity of the
class that you wish to set all highlighted voxels to (eg 2). Press
OK. Delete the 3D contour with
Folder->Graphic->Delete, and reset the display range using
Folder->Display->Display Range->Compute.
- Now repeat the last 4 steps if you wish to change other
areas to a class type different from the one you have just set.
- Now re-run the measurement of "Area" using
statistics. Subtract the two volumes, multiply by the VSCALING value
(from the .sienax output file) and amend the BRAIN volume reported
at the end of this file.
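The final amendment amounts to scaling the measured volume difference by VSCALING and adding it to the reported BRAIN volume. A sketch with hypothetical numbers:

```python
# Hypothetical G+W "Area" measurements (mm^3) before and after editing,
# plus VSCALING and the BRAIN volume from the .sienax output file.
vol_before = 1150000.0
vol_after = 1156000.0      # after reclassifying some voxels as grey/white
vscaling = 1.23
brain_reported = 1439100.0

# Amend the reported volume by the scaled difference.
brain_amended = brain_reported + (vol_after - vol_before) * vscaling
```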
If you want to amend the segmentation carried out within siena,
carry out the above (without bothering with any of the statistics
steps) and save the image, overwriting the original segmentation
output. Now if you re-run siena the edited segmentation result
will be used (as siena doesn't re-run segmentation if a
segmentation output image is already in existence).
Copyright © 2000, University of
Oxford. Written by S. Smith.