In the appearance-based matching proposed by Cootes, a model of the grey-level variations was combined with an Active Shape Model (ASM). For the former, PCA was used to reduce the dimensionality of the grey-level data and generate a linear grey variation model [4]:
\[
g = \bar{g} + P_g b_g \tag{1}
\]
where $\bar{g}$ is the mean normalised grey-level vector, $P_g$ is a set of orthogonal modes of variation and $b_g$ is a set of grey-level parameters. In lieu of the 2D ASM, we proposed [6] to use a
3D Warp Model, generated by statistical analysis of a large number of
example deformation fields. To simplify computations, the 3D
deformation vector fields were decomposed into volumes of orthogonal
deformation components x,y,z. PCA was used in a fashion similar to
the grey-level modelling to generate linear warp variation models:
\[
w_x = \bar{w}_x + P_{w_x} b_{w_x} \tag{2}
\]
\[
w_y = \bar{w}_y + P_{w_y} b_{w_y} \tag{3}
\]
\[
w_z = \bar{w}_z + P_{w_z} b_{w_z} \tag{4}
\]
Using the same notation as [4], these linear models allow any new warp instance $w$ to be approximated by $\bar{w}$, the mean warp, $P_w$, the set of orthogonal modes of warp variations, and $b_w$, the set of warp parameters. The space of all possible elements expressed by eqs. 2-4 is called the Allowable Warp Domain.
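As an illustration, the PCA models above can be sketched in a few lines of numpy; this is a minimal reconstruction under our own naming choices, not the original implementation, and the truncation to $t$ modes is made explicit:

```python
import numpy as np

def pca_model(X, t):
    """Linear PCA model x ~ mean + P @ b (eqs. 1-4), with training
    examples stacked as the rows of X."""
    mean = X.mean(axis=0)
    # SVD of the centred data gives the orthogonal modes of variation.
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:t].T                             # first t modes, as columns
    eigvals = s[:t] ** 2 / (X.shape[0] - 1)  # variance along each mode
    return mean, P, eigvals

def project(x, mean, P):
    """Model parameters b of a new instance x."""
    return P.T @ (x - mean)

def reconstruct(mean, P, b):
    """Approximate an instance from its parameters (eqs. 1-4)."""
    return mean + P @ b
```

The same routine serves both models: the grey-level model is built from the normalised grey vectors, and the warp model from each of the flattened x, y, z deformation component volumes.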
Since there might have been correlations between the grey-level and warp variations, the parameters $b_w$ and $b_g$ were concatenated in a common matrix
\[
b = \begin{pmatrix} W b_w \\ b_g \end{pmatrix} \tag{5}
\]
where $W$ is a diagonal matrix of weights accounting for differences in dimensions between grey-level (intensity) and warp (distance) variations. The weights were based on the ratio $r$ of the standard deviations of each model:
\[
W = r I, \qquad r = \sigma_g / \sigma_w \tag{6}
\]
$I$ being the identity matrix, and $\sigma_g$, $\sigma_w$ the standard deviations of the grey and warp models described above. PCA of the concatenated matrix $b$ (eq. 5) yielded a super-set of parameters describing the complete appearance model
\[
b = Q c \tag{7}
\]
where $Q$ are appearance eigenvectors and $c$ is a vector of parameters controlling both the warp and the grey-levels of the model.
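Under the same assumptions as the sketch above, the combined model of eqs. 5-7 amounts to weighting, concatenating and re-running the PCA; `sigma_g` and `sigma_w` below stand for the standard deviations of the two models:

```python
def combined_model(Bw, Bg, sigma_g, sigma_w, t):
    """Appearance model b ~ Q c (eqs. 5-7). Bw and Bg hold, per row,
    the warp and grey-level parameters of one training example."""
    r = sigma_g / sigma_w         # eq. 6, so that W = r * I
    B = np.hstack([r * Bw, Bg])   # eq. 5, weighted concatenation
    return pca_model(B, t)        # mean (~0), modes Q, eigenvalues
```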
In order to know how much each principal direction contributes to the description of the total variance of the system, the ratio of relative importance of the eigenvalue $\lambda_k$ associated with the eigenvector $k$ is used:
\[
r_k = \frac{\lambda_k}{\sum_{i=1}^{p} \lambda_i} \tag{8}
\]
where the fraction $r_k$ is the relative importance of eigenvalue $\lambda_k$ over the sum of all $\lambda_i$, and $p$ is the total number of eigenvectors. In each of the aforementioned PCA models, a percentage $f\%$ of the variance was selected, such that the first $t$ eigenvectors were kept:
\[
\sum_{k=1}^{t} \lambda_k \geq \frac{f}{100} \sum_{k=1}^{p} \lambda_k \tag{9}
\]
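The number of retained modes $t$ follows directly from eqs. 8-9; a small helper, with an illustrative default for $f$:

```python
def modes_for_fraction(eigvals, f=0.95):
    """Smallest t whose eigenvalues explain a fraction f of the
    total variance (eqs. 8-9)."""
    r = eigvals / eigvals.sum()            # relative importances r_k (eq. 8)
    t = np.searchsorted(np.cumsum(r), f) + 1
    return int(min(t, len(eigvals)))       # guard against rounding at f = 1
```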
The core of the segmentation method consisted of matching a new grey-level image to one synthesized by the model using the appearance parameters. The iterative method described by Cootes [4] was used.
The first step in this approach consisted of building a linear relationship between variations in appearance parameters and grey-level synthesized images. Defining $\delta g$ as the difference vector between $g_i$, the vector of grey-level values in the original image, and $g_m$, the vector of grey-level values from the synthesized image (eq. 10), a linear model was constructed between $\delta g$ and the error in the model parameters $\delta c$ (eq. 11), using each image:
\[
\delta g = g_i - g_m \tag{10}
\]
\[
\delta c = A \, \delta g \tag{11}
\]
To derive $A$, a multivariate linear regression was run on a sample of known model displacements. The latter were expressed as a fraction of the standard deviation of the super-parameters $c$.
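The regression that yields $A$ can be sketched as an ordinary least-squares solve, assuming the displacement experiments are stored row-wise; `dC` holds the imposed parameter displacements and `dG` the grey-level residuals they produced:

```python
def estimate_A(dC, dG):
    """Least-squares estimate of A in eq. 11 from n experiments:
    dC is (n, k) imposed displacements, dG is (n, m) residuals."""
    A_T, *_ = np.linalg.lstsq(dG, dC, rcond=None)  # solve dG @ A.T ~ dC
    return A_T.T  # shape (k, m), so that delta_c = A @ delta_g
```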
The second step in appearance-based segmentation was to use an iterative algorithm to generate estimates of the synthesized image that gradually approximated the image to be segmented. Varying the model parameters $c$ along each vector of $A$, the algorithm found the closest match in the least-squares sense by minimizing the magnitude of the difference vector, $\Delta = |\delta g|^2$.
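A bare-bones version of this search might look as follows; `synthesize(c)` is a stand-in for the model rendering of eq. 7, and the simple stop-on-no-improvement rule replaces the damping schedule of [4]:

```python
def match(g_i, c, A, synthesize, n_iter=30):
    """Iteratively refine appearance parameters c so the synthesized
    image approaches the target grey-level vector g_i."""
    best_err = np.inf
    for _ in range(n_iter):
        dg = g_i - synthesize(c)   # residual, eq. 10
        err = dg @ dg              # |delta_g|^2
        if err >= best_err:
            break                  # no further improvement
        best_err = err
        c = c - A @ dg             # subtract the predicted parameter error, eq. 11
    return c
```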
After convergence, the solution explicitly contained warp variation parameters (eq. 7), which could be expressed back into x,y,z components of the warp field and concatenated into ANIMAL vector format. Segmentation of the VOI was then possible using any structure model defined on the ANIMAL reference volume. It was achieved by applying the inverse of the deformation field to structures defined in the standard volume and then mapping those onto the subject.
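The final label mapping can be illustrated by a nearest-neighbour resampling of the reference labels through the inverse field; the array layout and names below are our assumptions, not the ANIMAL interface:

```python
from scipy.ndimage import map_coordinates

def map_labels(labels_ref, inv_field):
    """Map reference-space labels onto the subject. inv_field is
    (3, Z, Y, X): for each subject voxel, its reference coordinate."""
    coords = inv_field.reshape(3, -1)
    out = map_coordinates(labels_ref, coords, order=0)  # order=0 keeps labels discrete
    return out.reshape(inv_field.shape[1:])
```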
In order to quantitatively compare the ANIMAL non-linear registration
and segmentation method and the appearance-based modelling, a
similarity measure, first proposed by Dice [7], was
selected. As shown by Zijdenbos [9], this measure is
a variant of the standard chance-corrected Kappa ($\kappa$) coefficient originally developed by Cohen [8]. This measure is the same as $\kappa$ when the background is infinitely large:
\[
\kappa = \frac{2a}{2a + b + c} \tag{12}
\]
where $a$ is the number of voxels in the intersection of both labellings, $b$ is the number of voxels labelled only automatically, and $c$ the number labelled only manually.
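For two binary label volumes, eq. 12 reduces to a few voxel counts; a small sketch assuming boolean input arrays:

```python
def kappa_similarity(auto, manual):
    """Dice/kappa overlap of eq. 12 for boolean label volumes."""
    a = np.count_nonzero(auto & manual)   # labelled by both
    b = np.count_nonzero(auto & ~manual)  # automatic only
    c = np.count_nonzero(~auto & manual)  # manual only
    return 2 * a / (2 * a + b + c)
```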