For clusters, the method of Cao (1999), Advances in Applied Probability, 31:577-593, is used.
For our example, suppose
we search the tmap for local maxima inside a region of 1200 cc, which
is on average the volume of a brain acquired with EPI. Also, let us say
that the images were filtered in the preprocessing stage with a 6 mm Gaussian
filter and the voxel volume is 2.34375 mm x 2.34375 mm x 7 mm = 38.4521 mm3.
Therefore, in order to calculate the threshold we have to run tstat_threshold
in MATLAB. The inputs to this function are: search_volume, fwhm, voxel_volume, df, significance, cluster_threshold.
search_volume is the volume of the search region in mm3. Default is 1000000, i.e. 1000 cc. The method works well for any value of SEARCH_VOLUME, even zero, which simply gives the threshold for the tstat image at a single voxel. For our example :
search_volume = 1200000;
fwhm is the fwhm of a smoothing kernel applied to the data. Default is 0.0, i.e. no smoothing, which is roughly the case for raw fMRI data. For PET data, this would be the scanner fwhm of about 6mm. In the example above, we blurred the images, thus:
fwhm = 6;
voxel_volume is the volume of one voxel in mm3. Default is 5*5*5=125. The voxel size depends on the field of view and the slice thickness that you choose when you scan your subject. If you didn't write down the information when you scanned, you can find this information with mincinfo. In this case:
voxel_volume = 38.4521;
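The voxel-volume arithmetic can be checked in a couple of lines (a Python sketch, not part of the MATLAB toolbox):

```python
# Voxel volume for the example acquisition: 2.34375 mm x 2.34375 mm
# in-plane, 7 mm slice thickness (values from the example above).
in_plane = 2.34375   # mm
slice_thickness = 7  # mm

voxel_volume = in_plane * in_plane * slice_thickness
print(round(voxel_volume, 4))  # 38.4521
```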
df is the degrees of freedom of the tstat image (DF), either obtained as the output of fmrilm or printed out by multistat. The degrees of freedom of the individual sdeffect.mnc files is printed out by fmrilm, and for our example it is 113.
df = 113;
significance is the desired significance in the range from 0 to 0.2. Default is 0.05.
significance = 0.05;
cluster_threshold must be less than the calculated threshold, thresh. The cluster size is only accurate for a large cluster_threshold (say, more than 2.5) and a large search_volume.
The outputs of tstat_threshold are thresh and cluster_size.
N.B. If df < 1000 then the program uses random numbers to do certain integrals, so the cluster_size might vary slightly from run to run.
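As a rough, independent sanity check on the magnitude of the voxel-wise threshold (this is NOT the method tstat_threshold uses — it uses random field theory, which also accounts for the smoothing), a Gaussian Bonferroni bound over the roughly 31,000 voxels in the search region lands in the same ballpark:

```python
from statistics import NormalDist

# Values from the example above.
search_volume = 1200000   # mm^3 (1200 cc search region)
voxel_volume = 38.4521    # mm^3
significance = 0.05

n_voxels = search_volume / voxel_volume    # about 31,000 voxels
alpha_per_voxel = significance / n_voxels  # Bonferroni-corrected level

# Gaussian approximation to the t(113) upper quantile; with 113 degrees
# of freedom the t distribution is close to normal, so this is slightly
# liberal compared with the 4.9 returned by tstat_threshold.
z = NormalDist().inv_cdf(1 - alpha_per_voxel)
print(f"Bonferroni voxel threshold: {z:.2f}")
```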
At this point we can compute the threshold:
The threshold is 4.9 and the cluster size is 418.57 mm3 for
this individual case. This means that any cluster of neighbouring voxels above the cluster_threshold of 3 with a volume larger than 418.57 mm3 is significant at P<0.05.
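A sketch of how these two numbers are used together (plain Python on hypothetical toy data, not part of the toolbox): clusters are formed from face-connected voxels above the cluster-forming threshold, and a cluster is reported as significant only if its volume exceeds cluster_size:

```python
from collections import deque

def significant_clusters(tmap, cluster_threshold, cluster_size, voxel_volume):
    """Return the voxel coordinates of each cluster of face-connected
    voxels above cluster_threshold whose volume exceeds cluster_size.
    tmap is a nested list indexed [z][y][x]."""
    nz, ny, nx = len(tmap), len(tmap[0]), len(tmap[0][0])
    seen = set()
    clusters = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if (z, y, x) in seen or tmap[z][y][x] <= cluster_threshold:
                    continue
                # Flood fill over face-connected suprathreshold neighbours.
                cluster, queue = [], deque([(z, y, x)])
                seen.add((z, y, x))
                while queue:
                    cz, cy, cx = queue.popleft()
                    cluster.append((cz, cy, cx))
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        w = (cz + dz, cy + dy, cx + dx)
                        if (0 <= w[0] < nz and 0 <= w[1] < ny
                                and 0 <= w[2] < nx and w not in seen
                                and tmap[w[0]][w[1]][w[2]] > cluster_threshold):
                            seen.add(w)
                            queue.append(w)
                # Keep the cluster only if its volume is large enough.
                if len(cluster) * voxel_volume > cluster_size:
                    clusters.append(cluster)
    return clusters
```

With the example's numbers, a cluster must contain at least 11 voxels (418.57 / 38.4521 is about 10.9) to be significant.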
Now that the individual files are analyzed we can combine sessions and subjects. This can be done by using multistat in MATLAB.
Averaging Data from Multiple Subjects, or Multiple Sessions
If you have acquired dynamic data sets for multiple subjects, you will probably want to analyze the group average; similarly, if your experiment for a single subject consisted of multiple runs, possibly spread over different days, you will want to combine the sessions. This is currently done by resampling the dynamic data sets for all subjects into Talairach space (on a standardized sampling grid), or by resampling all your data into the same space, and then generating an activation map using multistat. It is very important that your data have exactly the same dimensions (slices, rows and columns).
Next, two examples will be discussed:
A. Same subject, multiple sessions.
B. Multiple subjects.
A. Same subject, multiple sessions.
In the example that was described before, for the subject Smith John there are two functional scans, both of them done using the same protocol and the same experimental design. We already analyzed both runs using fmridesign and fmrilm, from which we got the *effect.mnc, the *sdeffect.mnc and the *tstat.mnc files.
To calculate the tstat for the combined sessions, multistat will be used. The inputs to multistat are input_files_effect, input_files_sdeffect, input_files_df, input_files_fwhm, X, contrast, output_file_base, which_stats and fwhm_varatio, and they have to be defined before the function can be used.
As can be seen, the effect files and the sdeffect files are needed as inputs. These must have the same dimensions, so they will have to be resampled into the same space. For this example they will be transformed into Talairach space.
N.B. If the data of all the runs is already in the same space, you don't need to transform it to Talairach space; i.e. if you are averaging the runs of one subject, and all the runs were done on the same day, without moving the subject out of the scanner, you can run multistat in the native space.
To transform the data from native space to the new space, a transformation must be found. mritotal can be used:
mritotal smith_john_19971024_1_2_mri.mnc.gz smith_john_19971024_1_2_mri_total.xfm
smith_john_19971024_1_2_mri_total.xfm, is the transformation file that will be used for subject Smith John. The effect and sdeffect files from both dynamic scans will be resampled:
mincresample smith_john_19971024_1_105235_mri_MC_hmw_effect.mnc \
smith_john_19971024_1_105235_mri_MC_hmw_effect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation smith_john_19971024_1_2_mri_total.xfm
mincresample smith_john_19971024_1_105235_mri_MC_hmw_sdeffect.mnc \
smith_john_19971024_1_105235_mri_MC_hmw_sdeffect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation smith_john_19971024_1_2_mri_total.xfm
mincresample smith_john_19971024_1_105957_mri_MC_hmw_effect.mnc \
smith_john_19971024_1_105957_mri_MC_hmw_effect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation smith_john_19971024_1_2_mri_total.xfm
mincresample smith_john_19971024_1_105957_mri_MC_hmw_sdeffect.mnc \
smith_john_19971024_1_105957_mri_MC_hmw_sdeffect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation smith_john_19971024_1_2_mri_total.xfm
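The four commands above differ only in the run number and the effect/sdeffect suffix, so they can be generated in a loop. A sketch in Python (the file names and paths are the ones from this example; a shell loop would work equally well):

```python
# Build the four mincresample command lines so the transform and the
# model (-like) file are typed only once.
model = "/data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz"
xfm = "smith_john_19971024_1_2_mri_total.xfm"
runs = ["smith_john_19971024_1_105235_mri_MC_hmw",
        "smith_john_19971024_1_105957_mri_MC_hmw"]

commands = []
for run in runs:
    for kind in ("effect", "sdeffect"):
        commands.append(
            f"mincresample {run}_{kind}.mnc {run}_{kind}_tal.mnc "
            f"-like {model} -transformation {xfm}")

for c in commands:
    print(c)
```

The commands could then be run with subprocess, or written to a shell script.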
Instead of using mincresample multiple times for the same subject, you could use resample_tal, provided the same transformation can be used to resample all your images into the new space. Similar to the mincresample command, you will have to provide the transformation and the like file (the model to be used) for the resample_tal command to work. If you type resample_tal -h in a shell, you will see all the options for the command.
N.B. resample_tal is useful only for the case where all the files you are transforming at a time share the same transformation.
Now that your files are in the same space, let's define the inputs for multistat:
input_files_effect is an array whose rows are the effect files, the *_effect.mnc files that were created by fmrilm; it is the dependent variable. The file names can be padded with extra blanks if necessary; the blanks are removed before use.
For our example:
input_files_effect = ['smith_john_19971024_1_105235_mri_MC_hmw_effect_tal.mnc';
                      'smith_john_19971024_1_105957_mri_MC_hmw_effect_tal.mnc'];
input_files_sdeffect = ['smith_john_19971024_1_105235_mri_MC_hmw_sdeffect_tal.mnc';
                        'smith_john_19971024_1_105957_mri_MC_hmw_sdeffect_tal.mnc'];
N.B. The file names must all have the same length; pad the shorter ones with the right number of trailing blanks.
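In Python terms, the padding rule amounts to left-justifying every name to the longest length (a sketch; the second file name is hypothetical, just to show a shorter row):

```python
# MATLAB character arrays need rows of equal length, so shorter file
# names get trailing blanks. The second name here is hypothetical.
names = ["smith_john_19971024_1_105235_mri_MC_hmw_effect_tal.mnc",
         "a_shorter_effect_tal.mnc"]

width = max(len(n) for n in names)
padded = [n.ljust(width) for n in names]  # pad with trailing blanks

# Every padded row now has the same length.
print([len(n) for n in padded])
```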
input_files_df is the average degrees of freedom of the input files, as printed out by fmrilm. It is used to calculate the degrees of freedom that will be printed out at the end.
In the example discussed before, the degrees of freedom printed was 113, so:
input_files_df = 113;
input_files_fwhm is the fwhm in mm of the
original fMRI data. It is only used to calculate the degrees of freedom,
printed out at the beginning.
Default is 6.
input_files_fwhm = 6;
X is the design matrix, whose
rows are the files, and columns are the explanatory (independent) variables
of interest.
Default is X=[1; 1; 1; ..1] which just averages the files. If the rank of X equals the number of files, e.g. if X is square, then only a fixed effects analysis is possible. This is done very quickly.
In this example,
X= [ 1
1];
contrast is a row vector of contrasts for the T statistic image. Default is [1 0 ... 0], i.e. it picks out the first column of X. For our particular situation,
contrast = [1];
output_file_base is the base for the output statistics file names. Default is the first input_files_effect name minus the .mnc extension, or minus the .mnc.gz extension if it is compressed.
output_file_base = 'smith_john_multi';
which_stats is a logical row vector indicating the desired output statistics by 1; its columns correspond to the statistic images that multistat can produce (see the help text of multistat for the exact order).
fwhm_varatio is the fwhm in mm of the Gaussian filter used to smooth the ratio of the random effects variance divided by the fixed effects variance.
- 0 will do no smoothing, and give a purely
random effects analysis;
- Inf will do complete smoothing to a
global ratio of one, giving a purely fixed effects analysis.
The higher the fwhm_varatio, the higher
the ultimate degrees of freedom of the tstat image (printed out before
a pause), and the more sensitive the test. However too much smoothing
will bias the results. The program pauses to allow inspection of the df;
if it is too low, press control c to cancel (no files are created), and
try increasing fwhm_varatio. Default is 15.
For our case, we consider only fixed effects, thus the fwhm_varatio will be
fwhm_varatio = Inf;
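With fwhm_varatio = Inf the combination is purely fixed effects. At a single voxel, the standard fixed-effects combination is an inverse-variance weighted average of the per-run effects (a sketch of the general idea; multistat's exact implementation may differ):

```python
import math

def fixed_effects(effects, sds):
    """Inverse-variance weighted average of per-run effects at one voxel.
    Returns (combined effect, combined sd, t statistic)."""
    weights = [1.0 / sd**2 for sd in sds]
    effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    sd = 1.0 / math.sqrt(sum(weights))
    return effect, sd, effect / sd

# Two runs with equal sd: the combined effect is the plain average,
# and the combined sd shrinks by a factor of sqrt(2).
e, sd, t = fixed_effects([1.0, 3.0], [2.0, 2.0])
print(e, sd, t)
```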
We can now calculate the statistics:
Again, the threshold for the t maps must be calculated. The way to calculate it is similar to the one run, one subject case that we discussed before.
For the multisession example, the only thing that must be taken into account is the degrees of freedom, which are calculated by multistat.
B. Multiple subjects.
Assume that, in addition to John Smith, we had two other subjects perform the same task that he carried out in his first dynamic scan. The names of the two other subjects are `Joe Doe' and `Citizen X', and we have two corresponding 3D T1-weighted anatomic volumes:
doe_joe_19971120_1_2_mri.mnc.gz
citizen_x_19971125_1_2_mri.mnc.gz
and two dynamic functional image series (following motion correction):
doe_joe_19971120_1_165239_mri_MC.mnc
citizen_x_19971125_1_122229_mri_MC.mnc
The way to analyze multiple subjects' data is very similar to the single subject, multiple sessions case. The motion corrected files from each subject should be analyzed with fmridesign and fmrilm, one at a time. The effect files and the sdeffect files should then be resampled to have the same size; for this example, the files will be transformed into Talairach space. To transform the data, a transformation for each subject must be found; it will be used to transform that subject's effect and sdeffect files.
The data for Smith John is already resampled; the data from the other two subjects should be resampled as well.
mritotal doe_joe_19971120_1_2_mri.mnc.gz doe_joe_19971120_1_2_mri_total.xfm
The doe_joe_19971120_1_2_mri_total.xfm file will be used to transform the effect files and the sdeffect files of subject Joe Doe from native into Talairach space.
mincresample doe_joe_19971120_1_165239_mri_MC_hmw_effect.mnc \
doe_joe_19971120_1_165239_mri_MC_hmw_effect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation doe_joe_19971120_1_2_mri_total.xfm
mincresample doe_joe_19971120_1_165239_mri_MC_hmw_sdeffect.mnc \
doe_joe_19971120_1_165239_mri_MC_hmw_sdeffect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation doe_joe_19971120_1_2_mri_total.xfm
The same thing should be done for citizen_x_19971125_1_122229_mri_MC.mnc:
mritotal citizen_x_19971125_1_2_mri.mnc.gz citizen_x_19971125_1_2_mri_total.xfm
mincresample citizen_x_19971125_1_122229_mri_MC_hmw_effect.mnc \
citizen_x_19971125_1_122229_mri_MC_hmw_effect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation citizen_x_19971125_1_2_mri_total.xfm
mincresample citizen_x_19971125_1_122229_mri_MC_hmw_sdeffect.mnc \
citizen_x_19971125_1_122229_mri_MC_hmw_sdeffect_tal.mnc \
-like /data/avgbrain1/brain/images/norm_avg_305_mri_2mm_unfilt.mnc.gz \
-transformation citizen_x_19971125_1_2_mri_total.xfm
After resampling all the files, multistat can be used.
input_files_effect = ['smith_john_19971024_1_105235_mri_MC_hmw_effect_tal.mnc';
                      'smith_john_19971024_1_105957_mri_MC_hmw_effect_tal.mnc';
                      'citizen_x_19971125_1_122229_mri_MC_hmw_effect_tal.mnc ';
                      'doe_joe_19971120_1_165239_mri_MC_hmw_effect_tal.mnc   '];
input_files_sdeffect = ['smith_john_19971024_1_105235_mri_MC_hmw_sdeffect_tal.mnc';
                        'smith_john_19971024_1_105957_mri_MC_hmw_sdeffect_tal.mnc';
                        'citizen_x_19971125_1_122229_mri_MC_hmw_sdeffect_tal.mnc ';
                        'doe_joe_19971120_1_165239_mri_MC_hmw_sdeffect_tal.mnc   '];
input_files_df = 113;
input_files_fwhm = 6;
X = [1
     1
     1
     1];
output_file_base = 'hot_warm_multi';
which_stats = ones(1,4);
fwhm_varatio = 15;
Because we have multiple subjects, the effects may vary from session to session and from subject to subject. To take this extra source of variability into account, we should do a random effects analysis.
The extra variability is measured by the sdfiles
output by multistat. If there is no random effect, then the sdfiles should
be roughly 1. If there is a random effect, the sdfiles will be greater than
1. To allow for the random effects, we multiply the sdeffect by the sdfiles,
and divide the tstat by the sdfiles. The only problem is that the sdfiles are themselves
extremely noisy due to their low degrees of freedom (multistat_df), which
results in a loss of sensitivity. One way around this is to assume that
the underlying ratio is fairly smooth, so it can be better estimated by
smoothing the sdfiles. This will increase their degrees of freedom, thus increasing
sensitivity. Note however that too much smoothing could introduce bias.
As a guess, try 15mm smoothing, which is actually the default.
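The correction described above can be written per voxel as follows (a sketch; the variable names are illustrative, not multistat's):

```python
def random_effects_adjust(effect, sdeffect, sdratio):
    """Per-voxel random-effects correction: inflate the fixed-effects
    standard deviation by the (smoothed) ratio of random to fixed
    effects sd, and deflate the t statistic accordingly."""
    sd_adjusted = sdeffect * sdratio
    t_adjusted = effect / sd_adjusted
    return sd_adjusted, t_adjusted

# A ratio of 1 means no detectable random effect: nothing changes.
sd, t = random_effects_adjust(2.0, 0.5, 1.0)
print(sd, t)  # 0.5 4.0
```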