= WARNING this page is being edited in preparation of a new release and may be in an inconsistent state =

= The eddy executables =

`eddy` is a very computationally intense application, so in order to speed things up it has been parallelised. This has been done in two ways, resulting in different executables:

 * `eddy_openmp`: This executable has been parallelised through OpenMP, which allows `eddy` to use more than one core/CPU when running.
 * `eddy_cuda8.0`: An `eddy` executable that has been parallelised with CUDA v8.0, which allows `eddy` to use an Nvidia GPU if one is available on the system.
 * `eddy_cuda9.1`: An `eddy` executable that has been parallelised with CUDA v9.1, which allows `eddy` to use an Nvidia GPU if one is available on the system.
 * `eddy_cuda`: On Linux, this is a convenience sym-link that points to the version of `eddy_cuda` that has been set up for you, or that you can configure for your system. Within FMRIB, `eddy_cuda` will point to the newest CUDA version installed on the cluster. For all other users, it is not trivial to automatically detect where your system's CUDA installation is located without searching the entire file system. Therefore, some users may need to create the `eddy_cuda` sym-link on their system, for their desired CUDA version. See below for examples. This is a one-time setup step per FSL installation.

To create `eddy_cuda` if you do not have it on your system, simply link it to either `eddy_cuda8.0` or `eddy_cuda9.1`. Your choice will depend on the CUDA version installed on your system. Examples:

For CUDA v8.0

`ln -sf ${FSLDIR}/bin/eddy_cuda8.0 ${FSLDIR}/bin/eddy_cuda`

For CUDA v9.1

`ln -sf ${FSLDIR}/bin/eddy_cuda9.1 ${FSLDIR}/bin/eddy_cuda`

There is no longer an executable named `eddy`, so when I refer to the `eddy` command in the rest of this users guide it is implied that this is either `eddy_openmp` or `eddy_cuda`. The `eddy_cuda` version is potentially '''much''' faster than `eddy_openmp`, and not all new features will be available for the OpenMP version. This is because the slow speed makes it almost impossible to test the more time-consuming options thoroughly. I warmly recommend investing in a couple of CUDA cards.

= Running eddy =

Running `eddy` is a little bit more complicated than running, for example, its predecessor `eddy_correct`. The reasons for this are:

 * `eddy` attempts to combine the correction for susceptibility and eddy currents/movements so that there is only one single resampling. This means we need to "inform" `eddy` of the results from [[topup]] (which is used to calculate the susceptibility distortions).
 * Unlike `eddy_correct`, `eddy` attempts to model the diffusion signal. This means that `eddy` needs to be informed of the diffusion direction/weighting that was used for each volume.
 * `eddy` can utilise the information from different acquisitions that modulate how off-resonance translates into distortions. An example of this would be acquisitions with different polarity of the phase-encoding. Hence we need to inform `eddy` about how each volume was acquired.
 * Some of the options in `eddy`, for example the outlier detection and the slice-to-volume movement model, need to know which slices were acquired together (in the case of multi-band) and the order in which this occurred.

The need to pass more information to `eddy` results in a more complicated command line.
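Relating to the choice of executable described above, the snippet below is a minimal sketch (not part of FSL itself) of how one might select a GPU build when an Nvidia GPU is visible, falling back to the OpenMP build otherwise. It assumes that `FSLDIR` is set, that the `eddy_cuda` sym-link has been created as described above, and that `nvidia-smi` is installed on GPU systems.
{{{
# Minimal sketch (not part of FSL): pick a GPU-enabled eddy if an Nvidia GPU
# is visible via nvidia-smi, otherwise fall back to the OpenMP executable.
if command -v nvidia-smi > /dev/null 2>&1 && nvidia-smi > /dev/null 2>&1; then
  EDDY=${FSLDIR}/bin/eddy_cuda
else
  EDDY=${FSLDIR}/bin/eddy_openmp
fi
echo "Using ${EDDY}"
}}}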
Here I will outline a typical use of [[topup]] and `eddy` (they are really intended to be used together) on a "typical" data set suited for use with `eddy`.

== The data ==

The data for this example consists of one set of volumes acquired with phase-encoding A>>P, comprising 5 b=0 volumes and 59 diffusion weighted volumes

||`data.nii.gz` ||
|| {{attachment:A2P_five_volumes.jpeg||align="top",width="800"}} ||
||First b=0 volume and the first four dwis of the A>>P data ||

and one single b=0 volume with phase-encoding P>>A.

||`P2A_b0.nii.gz` ||
|| {{attachment:P2A_b0.jpeg||align="top",width="160"}} ||
||The P>>A data ||

Note how the shape of the b=0 scan is different for the two different acquisitions. This is what [[topup]] will use in order to calculate the susceptibility induced off-resonance field.

== Running topup on the b=0 volumes ==

The first thing we do is to run [[topup]] to estimate the susceptibility induced off-resonance field. In order to prepare the data for [[topup]] we issue the following commands
{{{
fslroi data A2P_b0 0 1
fslmerge -t A2P_P2A_b0 A2P_b0 P2A_b0
printf "0 -1 0 0.0646\n0 1 0 0.0646" > acqparams.txt
}}}
The first two commands will produce a file called `A2P_P2A_b0.nii.gz` containing the two b=0 volumes, and the third command will create a file (named `acqparams.txt`) that informs [[topup]]/`eddy` of how the data was collected. This file is described [[topup/TopupUsersGuide/#A--datain|here]], [[#A--acqp|here]] and in more detail [[eddy/Faq#How_do_I_know_what_to_put_into_my_--acqp_file|here]].

Now it is time to run [[topup]], which we do with the command

`topup --imain=A2P_P2A_b0 --datain=acqparams.txt --config=b02b0.cnf --out=my_topup_results --iout=my_hifi_b0`

which will give us as our main result a file named `my_topup_results_fieldcoef.nii.gz`, which contains an estimate of the susceptibility induced off-resonance field.

== Running eddy ==

Before we can run `eddy` we need to make a couple more preparations. First of all we need a mask that separates brain from non-brain. This is no different from, for example, the mask that [[FDT/UserGuide#DTIFIT|`dtifit`]] needs. Since `eddy` will work in a non-distorted space we will base the mask on `my_hifi_b0.nii.gz` (the secondary output from our [[topup]] command above). We generate this mask with the commands
{{{
fslmaths my_hifi_b0 -Tmean my_hifi_b0
bet my_hifi_b0 my_hifi_b0_brain -m
}}}
which results in the file `my_hifi_b0_brain_mask.nii.gz`. It may be a good idea to check the result at this stage to ensure `bet` has done a good job of extracting the brain.

The final thing we need to do is to create an index file that tells `eddy` which line (or lines) in the `acqparams.txt` file is relevant for the data passed into `eddy`. In this case all the volumes in `data.nii.gz` are acquired A>>P, which means that the first line of `acqparams.txt` describes the acquisition for all the volumes. We specify that by passing a text file with as many ones as there are volumes in `data.nii.gz`. One way of creating such a file would be to type the following commands
{{{
indx=""
for ((i=1; i<=64; i+=1)); do indx="$indx 1"; done
echo $indx > index.txt
}}}
where 64 is the total number of volumes in `data.nii.gz` and needs to be replaced by the number of volumes in your data.
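If you would rather not hard-code the number of volumes, the sketch below reads it from the image header instead. This is only an illustration; it assumes FSL's `fslnvols` utility is on your path and that the data file is called `data.nii.gz` as in the example above.
{{{
# Minimal sketch: build index.txt with one "1" per volume in data.nii.gz,
# reading the number of volumes from the header rather than hard-coding it.
nvols=$(fslnvols data)
indx=""
for ((i=1; i<=nvols; i+=1)); do indx="$indx 1"; done
echo $indx > index.txt
}}}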
We are now in a position to run `eddy` using the command
{{{
eddy --imain=data --mask=my_hifi_b0_brain_mask --acqp=acqparams.txt --index=index.txt --bvecs=bvecs --bvals=bvals --topup=my_topup_results --out=eddy_corrected_data
}}}
You may be in for quite a long wait as `eddy` is quite CPU intensive and also memory hungry. It has been written using OpenMP to take advantage of multiple processors, and this may or may not be available to you depending on how it was built for your system. A rule of thumb for how much memory `eddy` will use is 8*m*n,,x,,*n,,y,,*n,,z,, bytes, where m is the number of volumes in `--imain`, n,,x,, is the matrix size in the x-direction, n,,y,, is the matrix size in the y-direction and n,,z,, is the number of slices.

== eddy with outlier replacement ==

When (not if) a subject makes a movement that coincides in time with the diffusion encoding part of the sequence, there will be partial or complete signal dropout. The dropout will affect the whole of a slice (the most common case) or parts of it. In the presence of out-of-plane rotations these slices will turn into diagonal bands when `eddy` rotates the volume back. If uncorrected, this will affect any measures derived from the data. As of version 5.0.10 (or the 5.0.9 eddy-patch) `eddy` has a mechanism for detecting these dropout slices and replacing them with Gaussian Process predictions. All one needs to do, in the example above, is to add `--repol` to the command line.
{{{
eddy --imain=data --mask=my_hifi_b0_brain_mask --acqp=acqparams.txt --index=index.txt --bvecs=bvecs --bvals=bvals --topup=my_topup_results --repol --out=eddy_corrected_data
}}}
The exact details of how the outlier replacement is performed can be specified by the user, and in particular if one's data has been acquired with multi-band it can be worth taking a look [[#A--ol_type|here]]. The pertinent reference for when using the `--repol` functionality is at the main [[eddy/#Referencing|eddy]] page.

||Example of before and after outlier replacement ||
|| {{attachment:OLR_before_after.gif||align="left",width="450"}} ||
||This mini-movie flips between before and after outlier replacement<
>for two consecutive planes. In the before case one can<
>clearly see the telltale diagonal bands caused by "missing" slices<
> being rotated out of plane. The after case shows how the<
>missing slices were replaced in the native space before being<
> rotated into the reference space (the first volume of `--imain`). ||

== Correcting slice-to-volume movement with eddy ==

A common assumption for movement correction methods is that the subject remains still during the time it takes to acquire a volume (TR, typically 2-8 seconds for diffusion) and that any movement occurs between volumes. This is of course not true, but as long as the movement is slow relative to the TR it is a surprisingly good approximation. However, for some subjects (for example small children) that assumption no longer holds. The result is a "corrupted" volume where the individual slices no longer stack up to a "valid" volume. This leads to a telltale zig-zag pattern in a coronal or sagittal view when the slices are acquired in an interleaved fashion (that is the typical way to acquire diffusion data).

||Example of zig-zag pattern from within-volume movement ||
|| {{attachment:ExampleOfS2VZigZagPattern.001.jpeg||align="left",width="400"}} ||
||This sagittal slice along the midline demonstrates<
>the typical zig-zag pattern associated with<
>within-volume movement and interleaved acquisition.<
>Note that this is '''not''' signal dropout<
>(except for one of the slices), but a case<
>of signal being rotated in or out of the mid-sagittal plane.<
>Hence, the signal is not lost and only needs to be<
>relocated to its "proper" location. ||

Version 5.0.11 of `eddy` has the ability to correct for such within-volume (or "slice-to-volume") movement. This is done by setting the `--mporder` option to a value greater than 0. If one for example specifies `--mporder=4`, the movement during each volume is modelled by the first 5 terms of a DCT basis-set (and is hence defined by 6*5=30 parameters instead of the usual 6 rigid-body parameters). Note that the DCT-set is a function of time rather than slice, which means that the user needs to specify the relation between time and slice. A command line for correcting intra-volume movement in a data set with lots of movement can look something like
{{{
eddy --imain=data --mask=my_hifi_b0_brain_mask --acqp=acqparams.txt --index=index.txt --bvecs=bvecs --bvals=bvals --topup=my_topup_results --niter=8 --fwhm=10,8,4,2,0,0,0,0 --repol --out=eddy_corrected_data --mporder=6 --slspec=my_slspec.txt --s2v_niter=5 --s2v_lambda=1 --s2v_interp=trilinear
}}}
where the last 5 options are all related to the intra-volume correction.

||Example of before and after intra-volume movement correction ||
|| {{attachment:olr_vs_s2v_movie.gif||align="left",width="500"}} ||
||This example shows a sagittal plane of a volume that was either <
>corrected for "everything" (susceptibility, eddy currents,<
> inter-volume movement and outliers) '''except''' intra-volume movement<
> ''or'' corrected for "everything" including intra-volume movement.<
> The little movie flips back and forth between the two cases. ||

The pertinent reference for when using the slice-to-volume functionality is at the main [[eddy/#Referencing|eddy]] page.

== Correcting susceptibility-by-movement interactions with eddy ==

The off-resonance field that is caused by the subject's head itself disrupting the main field in the scanner is referred to as the ''susceptibility induced'' field. This is the field that we estimate from blip-up-blip-down data using [[topup]], or measure using a dual echo-time gradient echo [[FUGUE|sequence]]. If/when the subject moves, the field will, as a first approximation, move with the subject. To make it concrete, let us say we acquired blip-up-blip-down data and estimated the field with the subject in one location, and that the subject subsequently moved 5mm in the z-direction. To a first approximation the new field is identical to the measured field, but translated 5mm in the z-direction. However, if the movement includes a rotation around an axis that is non-parallel to the magnetic flux (z-axis), the field will change such that it is no longer sufficient to rotate the measured field to the new position. To estimate the field as a function of these rotations (pitch and roll) may seem like an intractable problem. But it has been shown that in all parts of the brain the field dependence on pitch and roll is linear, or very close to linear, over a range of tens of degrees of rotation. That means that the field as a function of pitch and roll can be very well approximated by a Taylor expansion around the measured field. The unknowns in such an expansion are two "derivative fields", one representing the rate of change of the field with respect to pitch and one the rate of change of the field with respect to roll. Both these fields can be scaled to Hz/degree.

`eddy` uses the observed variance in all scans, after correcting for gross movement and eddy currents, along with the estimated movement parameters to estimate the rate-of-change fields. Hence no additional measurements are needed. The estimated rate-of-change fields are then multiplied by the movement parameters to obtain a unique susceptibility induced field for each volume. When combined with the slice-to-volume [[#ACorrecting_slice-to-volume_movement_with_eddy|functionality]], `eddy` only uses volumes with "little" intra-volume movement for the estimation of the rate-of-change fields, assuming a single "position" for each of these volumes. For the application of these fields it will then use the slice-to-volume estimates to calculate a unique field for each slice/MB-group.

The pertinent reference for when using the susceptibility-by-movement functionality is at the main [[eddy/#Referencing|eddy]] page.

== Understanding eddy output ==

The [[#A--out|--out]] parameter specifies the basename for all output files of `eddy`; the individual output files are distinguished by their extensions. If we assume that the user specified `--out=my_eddy_output`, the files that are always written are

* my_eddy_output.nii.gz <
> This is the main output and consists of the input data after correction for eddy currents and subject movement, and for susceptibility if `--topup` or `--field` was specified, and for signal dropout if `--repol` was set. Chances are this is the only output file you will be interested in (in the context of `eddy`). * my_eddy_output.eddy_parameters <
> This is a text file with one row for each volume in `--imain` and one column for each parameter. The first six columns correspond to subject movement, starting with three translations followed by three rotations. The remaining columns pertain to the EC-induced fields, and their number and interpretation will depend on which EC [[#A--flm|model]] was specified. * my_eddy_output.rotated_bvecs <> <
> When subject movement includes a rotation around some axis and the volumes are subsequently reoriented, an inconsistency is created in the relationship between the data and the "bvecs" (directions of diffusion weighting). This can be remedied by using the `my_eddy_output.rotated_bvecs` file for subsequent analysis. For the rotation to work correctly the `bvecs` need to be "correct" for FSL before being fed into `eddy`. The easiest way to check that this is the case for your data is to run [[FDT]] and display the _V1 files in [[http://fsl.fmrib.ox.ac.uk/fsl/fslview/|fslview]] or [[FSLeyes]] to make sure that the eigenvectors line up across voxels. * my_eddy_output.eddy_movement_rms <
> A summary of the "total movement" in each volume is created by calculating the displacement of each voxel, then averaging the squares of those displacements across all intracerebral voxels (as determined by [[#A--mask|--mask]]) and finally taking the square root of that. The file has two columns, where the first contains the RMS movement relative to the first volume and the second column the RMS relative to the previous volume. * my_eddy_output.eddy_restricted_movement_rms <
> There is an inherent ambiguity between any EC component that has a non-zero mean across the FOV and subject movement (translation) in the PE direction. They will affect the data in identical (or close to identical if a susceptibility field is specified) ways. That means that both these parameters are estimated by `eddy` with large uncertainty. This doesn't matter for the correction of the images: it makes no difference if we estimate a large constant EC component and small movement, or a small EC component and large movement. The corrected images will be (close to) identical. ''But'' it matters if one wants to know how much the subject moved. We therefore supply this file, which estimates the movement RMS as above but disregards translation in the PE direction. * my_eddy_output.eddy_post_eddy_shell_alignment_parameters <> <
> This is a text file with the rigid body movement parameters between the different shells as estimated by a post-hoc mutual information based registration (see [[#A--dont_peas|--dont_peas]] for details). These parameters will be estimated even if `--dont_peas` has been set, but in that case they have not been applied to the corrected images in `my_eddy_output.nii.gz`. * my_eddy_output.eddy_post_eddy_shell_PE_translation_parameters <
> This is a text file with the translation along the PE-direction between the different shells as estimated by a post-hoc mutual information based registration (see [[#A--dont_sep_offs_move|--dont_sep_offs_move]] for details). These parameters will be estimated even if `--dont_sep_offs_move` has been set, but in that case they have not been applied to the corrected images in `my_eddy_output.nii.gz`. * my_eddy_output.eddy_outlier_report <
> This is a text-file with a plain language report on what outlier slices `eddy` has found. This file is always created, as are the other `my_eddy_output.eddy_outlier_*` files described below, even if the `--repol` flag has not been set. Internally `eddy` will always detect and replace outliers to make sure they don't affect the estimation of EC/movement, and if `--repol` has not been set it will re-introduce the original slices before writing `my_eddy_output.nii.gz`. * my_eddy_output.eddy_outlier_map * my_eddy_output.eddy_outlier_n_stdev_map * my_eddy_output.eddy_outlier_n_sqr_stdev_map <
> These are numeric matrices in ASCII format that all have the same general layout. They consist of an initial line of text, after which there is one row for each volume and one column for each slice. Each row corresponding to a b=0 volume is all zeros, since `eddy` doesn't consider outliers in these. The meaning of the numbers is different for the three files. * .eddy_outlier_map <
> All numbers are either 0, meaning that scan-slice is not an outlier, or 1, meaning that it is. * .eddy_outlier_n_stdev_map <
> The numbers denote how many standard deviations off the mean difference between observation and prediction is. * .eddy_outlier_n_sqr_stdev_map <
> The numbers denote how many standard deviations off the square root of the mean squared difference between observation and prediction is. The following file is only written if the `--repol` flag was set. * my_eddy_output.eddy_outlier_free_data.nii.gz <
> This is the original data given by `--imain`, ''not'' corrected for susceptibility or EC-induced distortions or subject movement, ''but'' with outlier slices replaced by the Gaussian Process predictions. This file is generated for anyone who might want to use `eddy` for outlier correction but who wants to use some other method to correct for distortions and movement. Though why anyone would want to do that is not clear to us. The following file is only written if the `--mporder` option has been set to a value greater than zero. * my_eddy_output.eddy_movement_over_time <
> This is a text file with one row of six values for each ''excitation''. The values in a row are translation (mm) in the x-, y- and z-directions followed by rotations (radians) around the x-, y- and z-axes. If one for example has a data set of 36 volumes with 66 slices each and an MB-factor (Multi-Band) of 2, the `.eddy_movement_over_time` file will have 36*66/2=1188 lines. The following files are only written if the corresponding flags are set. * my_eddy_output.eddy_cnr_maps <
> Written iff `--cnr_maps` is set. This is a 4D image file with N+1 volumes where N is the number of non-zero ''b''-value shells. The first volume contains the voxelwise SNR for the ''b''=0 shell and the remaining volumes contain the voxelwise CNR (Contrast to Noise Ratio) for the non-zero ''b''-shells in order of ascending ''b''-value. For example if your data consists of 5 ''b''=0, 48 ''b''=1000 and 64 ''b''=2000 volumes, my_eddy_output.eddy_cnr_maps will have three volumes where the first is the SNR for the ''b''=0 volumes, followed by CNR maps for ''b''=1000 and ''b''=2000. The SNR for the ''b''=0 shell is defined as mean(b0)/std(b0). The CNR for the DWI shells is defined as std(GP)/std(res) where std is the standard deviation of the Gaussian Process (GP) predictions and std(res) is the standard deviation of the residuals (the difference between the observations and the GP predictions). The `my_eddy_output.eddy_cnr_maps` can be useful for assessing the overall quality of the data. * my_eddy_output.eddy_residuals <
> Written iff `--residuals` is set. This is a 4D image file with the same number of volumes as the input (`--imain`) data. The order of the volumes is the same as for the input data and each voxel contains the difference between the observation and the prediction. = Categorised list of parameters = * Parameters that specify input files . [[#A--imain|--imain=filename]] <
> Name of a file with input images. E.g. `all_my_images.nii`. Compulsory. . [[#A--mask|--mask=filename]] <
> Name of a file with mask specifying brain vs no-brain. E.g. `my_brain_mask.nii`. Compulsory. . [[#A--acqp|--acqp=filename]] <
> Name of text file with information about the acquisition of the images in `--imain`. E.g. `my_scan_pars.txt`. Compulsory. . [[#A--index|--index=filename]] <
> Name of text file specifying the relationship between the images in --imain and the information in --acqp and --topup. E.g. `index.txt`. Compulsory. . [[#A--bvecs|--bvecs=filename]] <
> Name of text-file with [[FDT/UserGuide#DTIFIT|normalised diffusion gradients]]. Compulsory. . [[#A--bvals|--bvals=filename]] <
> Name of text-file with [[FDT/UserGuide#DTIFIT|b-values]]. Compulsory. . [[#A--topup|--topup=filename]] <
> Name of output from a previous [[topup]] run. Should be the same as the argument given to `topup`'s [[topup/TopupUsersGuide/#A--out|--out]]. Optional. . [[#A--field|--field=filename]] <
> Name of image volume representing the susceptibility field. Should be in Hz. Optional. . [[#A--field_mat|--field_mat=filename]] <
> Name of rigid body matrix specifying the relative positions of `--field` and `--imain`. Optional. * Parameters specifying names of output-files . [[#A--out|--out=basename]] <
> Basename for output-files. The corrected images will be named `"basename".nii.gz`. Compulsory. * Parameters specifying how `eddy` should be run . [[#A--flm|--flm=linear/quadratic/cubic]] <
> Spatial model for the field generated by eddy currents. Default `quadratic`. . [[#A--slm|--slm=none/linear/quadratic]] <
> Model for how diffusion gradients generate eddy currents. Default `none`. . [[#A--fwhm|--fwhm="fwhm in mm"]] <
> Filter width to use for pre-filtering of data for the estimation process. Default 0. . [[#A--niter|--niter="required number of iterations"]] <
> Specifies how many iterations should be run. Default 5. . [[#A--fep|--fep]] <
> Fill Empty Planes. Default false. . [[#A--interp|--interp=spline/trilinear]] <
> Specifies interpolation model during estimation. Default `spline`. . [[#A--resamp|--resamp=jac/lsr]] <
> Specifies final resampling strategy. Default `jac`. . [[#A--nvoxhp|--nvoxhp="number of voxels"]] <
> Specifies number of voxels to use for GP hyperparameter estimation. Default 1000. . [[#A--ff|--ff="number between 1 and 10"]] <
> Fudge factor that imposes Q-space smoothing during estimation. Default 10. . [[#A--dont_sep_offs_move|--dont_sep_offs_move]] <
> Do '''not''' attempt to separate subject movement from field DC component. Default false. . [[#A--dont_peas|--dont_peas]] <
> Do '''not''' end with an alignment of shells to each other. Default false. * Parameters pertaining to outlier replacement . [[#A--repol|--repol]] <
> Replace outliers. Default false. . [[#A--ol_nstd|--ol_nstd]] <
> No. of standard deviations away a slice must be to qualify as an outlier. Default 4. . [[#A--ol_nvox|--ol_nvox]] <
> The minimum no. of intracerebral voxels a slice must contain for it to be considered in outlier detection. Default 250. . [[#A--ol_type|--ol_type]] <
> Base outlier detection on slices (sw), multi-band groups (gw) or both (both). Default sw. . [[#A--ol_pos|--ol_pos]] <
> Consider both positive and negative outliers. Default false. . [[#A--ol_sqr|--ol_sqr]] <
> Consider outliers in sum-of-squared distribution. Default false. . [[#A--mb|--mb]] <
> Specifies multi-band factor (number of simultaneous slices). Default 1. . [[#A--mb_offs|--mb_offs]] <
> Specifies missing slices at top or bottom. Default 0. * Parameters pertaining to intra-volume (slice-to-vol) movement correction . [[#A--mporder|--mporder]] <
> Temporal order of movement. Default 0. . [[#A--s2v_niter|--s2v_niter]] <
> No. iterations for estimating slice-to-vol movement. Default 5, if --mporder > 0. . [[#A--s2v_lambda|--s2v_lambda]] <
> Strength of temporal regularisation of slice-to-vol movement. Default 1. . [[#A--s2v_interp|--s2v_interp]] <
> Interpolation model for estimation of slice-to-vol movement. Default 'trilinear'. . [[#A--slspec|--slspec]] <
> Text-file describing slice acquisition order. * Parameters pertaining to susceptibility-by-movement correction . [[#A--estimate_move_by_susceptibility|--estimate_move_by_susceptibility]] <
> When specified `eddy` estimates how the susceptibility field changes with subject movement. Default false. . [[#A--mbs_niter|--mbs_niter]] <
> Number of iterations for susceptibility-by-movement estimation. Default 10. . [[#A--mbs_lambda|--mbs_lambda]] <
> Specifies the balance between data and smoothness for the susceptibility rate-of-change fields. Default 10. . [[#A--mbs_ksp|--mbs_ksp]] <
> Specifies the spline knot-spacing of the susceptibility rate-of-change fields. Default 10 (mm). * Miscellaneous parameters . [[#A--data_is_shelled|--data_is_shelled]] <
> Do '''not''' check that data is shelled. Trust the user. Default false. . `--verbose` <
> Print progress information to the screen while running. Can be useful to pipe into a file before reporting problems. . `--very_verbose` <
> Print very rich progress information to the screen while running. Can be useful to pipe into a file before reporting problems. . `--help` <
> Take a wild stab.

= Parameters explained =

== --imain ==

Should specify a 4D image file with ''all'' your images acquired as part of a diffusion protocol, ''i.e.'' it should contain both your dwis and your ''b''=0 images. If you have collected your data with reversed phase-encode blips, data for both blip-directions should be in this file.

== --mask ==

Single volume image file with ones and zeros specifying brain (one) and no-brain (zero). Typically obtained by running [[BET/UserGuide|BET]] on the first ''b''=0 image. If you have previously run [[topup]] on your data I suggest you run [[BET/UserGuide|BET]] on the first volume (or the average of all volumes) of the [[topup/TopupUsersGuide#A--iout|--iout]] output and use that.

== --acqp ==

A text-file describing the acquisition parameters for the different images in [[#A--imain|--imain]]. The format of this file is identical to that used by [[topup/TopupUsersGuide/#A--datain|topup]] (though the parameter is called `--datain` there) and is described in detail [[eddy/Faq#How_do_I_know_what_to_put_into_my_--acqp_file|here]].

== --index ==

A text-file that determines the relationship between the images in [[#A--imain|--imain]] on the one hand, and the acquisition parameters in [[#A--acqp|--acqp]] and (optionally) the subject movement information in [[#A--topup|--topup]] on the other. It should be a single column (or row) with one entry per volume in [[#A--imain|--imain]]. We will use a small (simplified) example to make it clear.

{{attachment:eight_original_images.png||align="top",width="800"}}

The image above shows a selected slice from each of the eight volumes in [[#A--imain|--imain]]. The associated [[#A--acqp|--acqp]] file is
{{{
-1 0 0 0.051
1 0 0 0.051
}}}
which specifies that phase-encoding is performed in the ''x''-direction, sometimes traversing ''k''-space left->right (-1) and sometimes right->left (1). Finally the `--index` file is

`1 1 1 1 2 2 2 2`

which specifies that the first four volumes in [[#A--imain|--imain]] were acquired using the acquisition parameters on the first row (index 1) of the [[#A--acqp|--acqp]] file, and that volumes 5--8 were acquired according to the second row (index 2). There are cases when it may be advantageous to have more than two lines in the [[#A--acqp|--acqp]] file, and in these cases there will be more than two different index values in the `--index` file. These cases are explained [[eddy/Faq#Why_do_I_need_more_than_two_rows_in_my_--acqp_file|here]].

== --bvecs ==

A text file with normalised vectors describing the direction of the diffusion weighting. This is the same file that you would use for [[FDT/UserGuide#DTIFIT|FDT]].

== --bvals ==

A text file with ''b''-values describing the "amount of" diffusion weighting. This is the same file that you would use for [[FDT/UserGuide#DTIFIT|FDT]].

== --topup ==

This should only be specified if you have previously run [[topup]] on your data, and should be the same name that you gave as an argument to the [[topup/TopupUsersGuide#A--out|--out]] parameter when you ran `topup`.

== --field ==

If there is no [[topup]] output available for your study you may alternatively use a "traditional" fieldmap in its place. This can for example be a dual echo-time fieldmap that has been prepared using [[FUGUE|PRELUDE]]. Note that, in contrast to for example [[FUGUE]], `eddy` expects the fieldmap to be scaled in Hz. For boring reasons the filename has to be given without an extension. For example `--field=my_field`, '''not''' `--field=my_field.nii.gz`.

/!\ There are two important caveats with `--field`, which is why we strongly recommend using a [[topup]] derived field if at all possible. These are

 * If one uses the same [[eddy/Faq#How_do_I_know_what_to_put_into_my_--acqp_file|--acqp]] file for both `topup` and `eddy` it doesn't matter if one gets the total acquisition time or the PE-polarity wrong. The errors in `topup` and `eddy` will cancel out and the end results will still be correct. Our experience is that getting these two numbers right is the biggest problem with using [[FUGUE]].
 * If the first volume input to `topup` is the same as the first volume in [[#A--imain|--imain]] to `eddy`, the fieldmap will automatically be in the reference space of `eddy`. When using the `--field` option the user is responsible for making sure the fieldmap is registered to the `eddy` data.

== --field_mat ==

Specifies a [[flirt]] style rigid body matrix that specifies the relative locations of the field specified by `--field` and the first volume in the file specified by `--imain`. If `my_field` is the field specified by `--field`, `my_ima` is the first volume of `--imain` and `my_mat` is the matrix specified by `--field_mat`, then `flirt -ref my_ima -in my_field -init my_mat -applyxfm` should be the command that puts `my_field` in the space of `my_ima`.

== --out ==

Specifies the basename of the output. Let us say `--out="basename"`. The output will then consist of a 4D image file named `"basename".nii.gz` containing all the corrected volumes, and a text-file named `"basename".eddy_parameters` with parameters defining the field and movement for each scan.

== --flm ==

This parameter takes the values `linear`, `quadratic` or `cubic`.
It specifies how "complicated" we believe the eddy current-induced fields may be. Setting it to `linear` implies that we think the field caused by eddy currents will be some combination of linear gradients in the x-, y- and z-directions. It is this model that is the basis for the claim that "eddy current distortions are a combination of shears, a zoom and a translation". It is interesting (and surprising) how successful this model has been in describing (and correcting) eddy current distortions, since not even the fields we intend to be linear (''i.e.'' our gradients) are particularly linear on modern scanners. The next model in order of "complication" is `quadratic`, which assumes that the eddy current induced field can be modelled as some combination of linear and quadratic terms (x, y, z, x^2^, y^2^, z^2^, xy, xz and yz). This is almost certainly also a vast oversimplification, but our practical experience has been that this model successfully corrects for example the HCP data (which is not well corrected by the `linear` model). The final model is `cubic`, which in addition to the terms in the `quadratic` model also has cubic terms (x^3^, x^2^y, etc). We have yet to find a data set where the `cubic` model performs significantly better than the `quadratic` one. Note also that the more complicated the model, the longer `eddy` will take to run.

== --slm ==

"Second level model" that specifies the mathematical form for how the diffusion gradients cause eddy currents. For high quality data with 60 directions, or more, sampled on the whole sphere we have not found any advantage of performing second level modelling. Hence our recommendation for such data is to use `none`, and that is also the default. If the data has relatively few directions and/or has not been sampled on the whole sphere, it can be advantageous to specify `--slm=linear`.

== --fwhm ==

Specifies the FWHM of a gaussian filter that is used to pre-condition the data before using it to estimate the distortions. In general the accuracy of the correction is not strongly dependent on the FWHM. Empirical tests have shown that ~1-2mm might be best, but by so little that the default has been left at 0. One exception is when there is substantial subject movement, which may mean that `eddy` fails to converge in 5 iterations if run with `--fwhm=0`. In such cases we have found that `--fwhm=10,0,0,0,0` works well. It means that the first iteration is run with a FWHM of 10mm, which helps the algorithm to take a big step towards the true solution. The remaining iterations are run with a FWHM of 0mm, which offers high accuracy.

== --niter ==

`eddy` does not check for convergence. Instead it runs a fixed number of iterations given by `--niter`, 5 by default. This is not unusual for registration algorithms where each iteration is expensive (''i.e.'' takes a long time). If, on visual inspection, one finds residual movement or EC-induced distortions it is possible that `eddy` has not fully converged. In that case we primarily recommend that one uses `--fwhm=10,0,0,0,0`, as described above, to speed up convergence. Only if that fails do we recommend increasing the number of iterations.

== --fep ==

Stands for "Fill Empty Planes". For reasons that are not completely clear to us the reconstructed EPI images from some manufacturers contain one or more empty "planes". A "plane" in this context does not necessarily mean a "slice".
Instead it can be, for example, the "plane" that constitutes the last voxel along the PE-direction for each "PE-direction column". The presence/absence of these "empty planes" seems to depend on the exact details of the image-encoding part of the sequence. If `--fep` is set, `eddy` will attempt to identify the empty planes and "fill them in". The filling will consist of duplicating the previous plane if the plane is perpendicular to the frequency-encode direction, and of interpolation between the previous plane and the "wrap-around plane" if the plane is perpendicular to the PE-direction.

== --interp ==

Specifies the interpolation model used during the estimation phase, and during the final resampling if `--resamp=jac` is used. We strongly recommend staying with `spline`, which is also the default.

== --resamp ==

Specifies how the final resampling is performed. The options are

 * jac: Stands for Jacobian modulation. This is a "traditional" type of interpolation (spline or trilinear depending on the `--interp` parameter) combined with Jacobian modulation to account for signal pile-up/dilution caused by local stretching/compression. However, in areas of compression (resulting in signal pile-up) there is a loss of resolution that this type of resampling cannot resolve. If acquisitions have not been repeated with opposed PE-directions, this is the only option that can be used.
 * lsr: Stands for Least-Squares Reconstruction. This method attempts to use the complementary information in images acquired with opposing PE-directions, where a compressed area in one of the images will be stretched in the other image. This method can only be used if all acquisitions have been repeated with opposed PE-directions.

== --nvoxhp ==

Specifies how many (randomly selected within the brain mask) voxels are used when estimating the hyperparameters of the Gaussian Process used to make predictions. The default is 1000 voxels, and that is more than sufficient for typical data with a resolution of 2x2x2mm or lower. For very high resolution data, such as for example the HCP 7T data, with relatively low voxel-wise SNR, one may need to increase this number. The only "adverse" effect of increasing this number is an increase in execution time.

== --ff ==

This should be a number between 1 and 10 and determines the level of Q-space smoothing that is used by the prediction maker during the estimation of the movement/distortions. Empirical testing has indicated that any number above 5 gives the best results. We have set the default to 10 to be on the safe side.

== --dont_sep_offs_move ==

All our models for the EC-field contain a component that is constant across the field and that results in a translation of the object in the PE-direction. Depending on how the data has been acquired it can be more or less difficult to distinguish between this constant component and subject movement. It matters because it affects how the diffusion weighted images are aligned with the ''b''=0 images. Therefore `eddy` attempts to distinguish the two by fitting a second level model to the estimated constant component, and everything that is not explained by that model will be attributed to subject movement. As of release [[eddy#What_is_new_in_5.0.11.3F|5.0.11]] it will also perform a Mutual Information based alignment along the PE-direction. If you set this flag `eddy` will '''not''' do that estimation. The option to turn this off is a remnant from when we did not know how well it would work, and it is very unlikely you will ever use this flag.
It will eventually be deprecated.

== --dont_peas ==

The motion correction within `eddy` has its greatest precision within a shell and a larger uncertainty between shells. There is ''no'' estimation of movement between the first ''b''=0 volume and the first diffusion weighted volume. Instead it is assumed that these have been acquired very close in time and that there was no movement between them. If there are multiple shells, or if the assumption of no movement between the first ''b''=0 and the first diffusion weighted volume is not fulfilled, it can be advantageous to perform a "Post Eddy Alignment of Shells" (`peas`). Our testing indicates that `peas` has an accuracy of ~0.2-0.3mm, ''i.e.'' it is associated with some uncertainty. That uncertainty is small enough that `peas` is still performed by default. ''But'', if one has a data set with a single shell (''i.e.'' a single non-zero shell) ''and'' the assumption of no movement between the first ''b''=0 and the first diffusion weighted image is true, it may be better to avoid that uncertainty and turn off `peas` by setting the `--dont_peas` flag.

== --repol ==

When set, this flag instructs `eddy` to remove any slices deemed to be outliers and replace them with predictions made by the Gaussian Process. Exactly what constitutes an outlier is affected by the parameters [[#A--ol_nstd|--ol_nstd]], [[#A--ol_nvox|--ol_nvox]], [[#A--ol_type|--ol_type]], [[#A--ol_pos|--ol_pos]] and [[#A--ol_sqr|--ol_sqr]]. If the defaults are used for all those parameters, an outlier is defined as a slice whose average intensity is at least four standard deviations lower than the expected intensity, where the expectation is given by the Gaussian Process prediction. The default is to ''not'' do outlier replacement since we don't want to risk people using it "unawares". However, our experience and tests indicate that it is always a good idea to use `--repol`.

== --ol_nstd ==

This parameter determines how many standard deviations away a slice needs to be in order to be considered an outlier. The default value of 4 is a good compromise between type 1 and type 2 errors for a "standard" data set of 50-100 directions. Our tests also indicate that the parameter is not terribly critical and that any value between 3 and 5 is good for such data. For data of very high quality, such as for example HCP data with 576 dwi volumes, one can use a higher value (for example 5). Conversely, for data with few directions a lower threshold can be used.

== --ol_nvox ==

This parameter determines the minimum number of intracerebral voxels a slice needs to have in order to be considered in the outlier estimation. Consider for example a slice at the very top of the brain with only ten brain voxels. The average (based on only ten voxels) difference from the prediction will be very poorly estimated, and trying to determine whether the slice is an outlier on that basis would be very uncertain. For that reason there is a minimum number of brain voxels (as determined by the `--mask`) for a slice to be considered. The default is 250.

== --ol_type ==

The normal behaviour for `eddy` is to consider each slice in isolation when assessing outliers. When acquiring multi-band (mb) data, each slice in the group will have a similar signal dropout when it is caused by gross subject movement (as opposed to pulsatile movement of ''e.g.'' the brain stem). It therefore makes sense to consider an mb-group as the unit when assessing outliers. There are three options for the outlier unit.
 * Slice-wise (sw) is the default and most basic way of estimating outliers.
 * Group-wise (gw) considers an mb-group as the outlier unit.
 * Both (both) considers an mb-group as the unit, but additionally looks for slice-wise outliers. This is to find single slices within a group that have been affected by pulsatile movement not affecting the other slices.

Default is `sw`.

== --ol_pos ==

By default `eddy` only considers signal dropout, ''i.e.'' the negative end of the distribution of differences. One could conceivably also have positive outliers, ''i.e.'' slices where the signal is greater than expected. This could for example be caused by spiking or other acquisition related problems. If one wants to detect and remove this type of outliers one can use the `--ol_pos` flag. In general we don't encourage its use since we believe that type of artefact should be detected and corrected at source. We have looked at a large number of FMRIB and HCP data sets and not found a single believable positive outlier.

== --ol_sqr ==

Similarly to the `--ol_pos` flag above, this extends the scope of outliers that `eddy` considers. If the `--ol_sqr` flag is set, `eddy` will also look for outliers in the distribution of "sums of squared differences between observations and predictions". This means it will also detect artefacts that don't cause a change in mean intensity. Similarly to `--ol_pos`, we don't encourage the use of `--ol_sqr`. Artefacts that fall into this category should be identified and the causes corrected at the acquisition stage.

== --mb ==

If `--ol_type=gw` or `--ol_type=both`, `eddy` needs to know how the multi-band groups were acquired, and to that end `--mb` has to be set. If for example the total number of slices is 15 and `mb=3`, it will be assumed that slices 0,5,10 were acquired as one group, 1,6,11 as the next group etc (slices numbered 0,1,...,14).

/!\ The current version of `eddy` allows for a more detailed description of the multi-band structure (see `--slspec` [[#A--slspec|below]]), and if one wants to do slice-to-volume motion correction it is necessary to use that description. It is still possible to use `--mb`, but it is discouraged and it will be deprecated in the next release.

== --mb_offs ==

If a slice has been removed at the top or bottom of the volumes in `--imain`, the group structure given by `--mb` will no longer hold. If the bottom slice has been removed, `--mb_offs` should be set to `--mb_offs=-1`. For the example above it would mean that the first group of slices would consist of slices 4,9 and the second group of 0,5,10, where the slice numbering refers to the new (with missing slice) volume numbered 0,1,...,13. Correspondingly, if the top slice was removed it should be set to 1.

/!\ The current version of `eddy` allows for a more detailed description of the multi-band structure (see `--slspec` [[#A--slspec|below]]), and if one wants to do slice-to-volume motion correction it is necessary to use that description. It is still possible to use `--mb_offs`, but it is discouraged and it will be deprecated in the next release.

== --mporder ==

If one wants to do slice-to-vol motion correction, `--mporder` should be set to an integer value greater than 0 and less than the number of excitations in a volume. Only when `--mporder` > 0 will any of the parameters prefixed by `--s2v_` be considered. The larger the value of `--mporder`, the more degrees of freedom for modelling movement.
If `--mporder` is set to ''N''-1, where ''N'' is the number of excitations in a volume, the location of each slice/MB-group is individually estimated. We don't recommend going that high, and in our tests we have used values of ''N''/4 -- ''N''/2. The underlying temporal model of movement is a DCT basis-set whose order is given by `--mporder`. Slice-to-vol motion correction is computationally very expensive so it is only implemented for the CUDA version.

== --s2v_niter ==

Specifies the number of iterations to run when estimating the slice-to-vol movement parameters. In our tests we have used 5--10 iterations with good results, and possibly a small advantage of 10 over 5. The slice-to-volume alignment is computationally expensive, so expect ''N'' iterations of slice-to-volume movement estimation to take an order of magnitude longer than ''N'' iterations of volumetric movement estimation.

== --s2v_lambda ==

Determines the strength of the temporal regularisation of the estimated movement parameters. This is especially important for single-band data with "empty" slices at the top/bottom of the FOV. We have used values in the range 1--10 with good results.

== --s2v_interp ==

Determines the interpolation model in the slice-direction for the estimation of the slice-to-volume movement parameters. In theory `spline` is a better interpolation method, but in this particular context (interpolation of irregularly spaced data) the computational cost of using `spline` is very large. In our tests we have not been able to see any actual advantage of using `spline`, so we recommend using `trilinear`. For the final re-sampling `spline` is always used, regardless of how `--s2v_interp` is set.

== --slspec ==

Specifies a text-file that describes how the slices/MB-groups were acquired. This information is necessary for `eddy` to know how a temporally continuous movement translates into the locations of individual slices/MB-groups. Let us say a given acquisition has ''N'' slices and that ''m'' is the MB-factor (also known as the Simultaneous Multi-Slice (SMS) factor). Then the file pointed to by `--slspec` will have N/m rows and m columns. Let us for example assume that we have a data-set which has been acquired with an MB-factor of 3, 15 slices and interleaved slice order. The file would then be
{{{
0 5 10
2 7 12
4 9 14
1 6 11
3 8 13
}}}
where the first row "0 5 10" specifies that the first, sixth and 11th slices are acquired first and together, followed by the third, eighth and 13th slices etc. For single-band data and for multi-band data with an odd number of excitations/MB-groups it is trivial to work out the `--slspec` file using the logic of the example. For an even number of excitations/MB-groups it is considerably more difficult, and we recommend using a DICOM->NIfTI converter that writes the exact slice timings into a .JSON file. This can then be used to create the `--slspec` file.

== --estimate_move_by_susceptibility ==

Specifies that `eddy` shall attempt to estimate how the susceptibility-induced field changes when the subject moves in the scanner. The default is not to do it (''i.e.'' the `--estimate_move_by_susceptibility` flag is not set). This is because it increases the execution time, especially when combined with slice-to-volume motion correction. We recommend it is used when scanning populations that move "more than average", such as babies, children or other subjects that have difficulty remaining still.
It can also be needed for studies with long total scan times, such that even in cooperative subjects the total range of movement can become large. The estimation is based on a first order Taylor expansion of the static field with respect to changes in pitch (rotation around the x-axis) and roll (rotation around the y-axis). Theoretically the field should only depend on those parameters, and our experiments with modelling it as a Taylor expansion with respect to all six movement parameters have shown no additional benefits. We have also tested using a second order Taylor expansion with respect to pitch and roll, and not found any additional advantages of that either.

== --mbs_niter ==

Specifies the total number of iterations used for estimating the susceptibility rate-of-change fields. Uncorrected susceptibility-by-movement effects may cause a bias when estimating movement parameters. Likewise, poorly estimated movement parameters may cause a bias when estimating the susceptibility rate-of-change fields. For this reason `eddy` interleaves the estimation of subject movement and susceptibility-by-movement effects, after first doing an "initial" estimation of movement parameters.

== --mbs_lambda ==

Specifies the balance between data fit and smoothness when estimating the susceptibility rate-of-change fields. The default is 10.

== --mbs_ksp ==

Knot-spacing (in mm) of the cubic spline field used to represent the susceptibility rate-of-change fields. The default is 10mm. In practice it is always an integer multiple of the voxel size, and `eddy` will choose the nearest spacing that ensures that the knot-spacing is not larger than the requested one. For example, if the voxel size is 2.2mm and a knot-spacing of 10mm is requested, the "actual" knot-spacing will be 4x2.2=8.8mm. Our experiments indicate that for scanners with a field of 3T or lower there is no need for a higher resolution than 10mm.

== --data_is_shelled ==

At the moment `eddy` works for single- or multi-shell diffusion data, ''i.e.'' it doesn't work for DSI data. In order to ensure that the data is shelled, `eddy` "checks it" and only proceeds if it is happy that it is indeed shelled. The checking is performed through a set of heuristics such as i) how many shells are there? ii) what are the absolute numbers of directions for each shell? iii) what are the relative numbers of directions for each shell? etc. It will for example be suspicious of too many shells, or too few directions for one of the shells. It has emerged that some popular schemes get caught in this test. Some groups will for example acquire a "mini shell" with a low ''b''-value and few directions, and that has failed to pass the "check" even though it turns out `eddy` works perfectly well on the data. For that reason we have introduced the `--data_is_shelled` flag. If set, it will bypass any checking and `eddy` will proceed as if the data was shelled. Please be aware that if you have to use this flag you may be in untested territory, and that it is a good idea to check your data extra carefully after having run `eddy` on it.

== --verbose ==

Turns on printing of information about the algorithm's progress to the screen.

== --help ==