
Methods

In the appearance-based matching proposed by Cootes, a model of the grey-level variations was combined with an Active Shape Model (ASM). For the former, PCA was used to reduce the dimensionality of the grey-level data and to generate a linear grey-level variation model [4]:



\begin{displaymath}
{\bf g} = {\bf \bar g} + {\bf P_g B_g}
\end{displaymath} (1)


where ${\bf\bar g}$ is the mean normalised grey-level vector, ${\bf P_g}$ is a set of orthogonal modes of variation and ${\bf B_g}$ is a set of grey-level parameters. In lieu of the 2D ASM, we proposed [6] to use a 3D Warp Model, generated by statistical analysis of a large number of example deformation fields. To simplify computations, the 3D deformation vector fields were decomposed into volumes of orthogonal deformation components x,y,z. PCA was used in a fashion similar to the grey-level modelling to generate linear warp variation models:



\begin{displaymath}
{\bf x} = {\bf \bar x} + {\bf P_x B_x}
\end{displaymath} (2)
\begin{displaymath}
{\bf y} = {\bf \bar y} + {\bf P_y B_y}
\end{displaymath} (3)
\begin{displaymath}
{\bf z} = {\bf \bar z} + {\bf P_z B_z}
\end{displaymath} (4)


Using the same notation as [4], these linear models allow any new warp instance ${\bf w(x,y,z)}$ to be approximated by ${\bf \bar w}$, the mean warp, ${\bf P_w}$, the set of orthogonal modes of warp variations, and ${\bf B_w}$, the set of warp parameters. The space of all possible elements expressed by eqs. 2-4 is called the Allowable Warp Domain.
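
For concreteness, the construction of one such linear model (eq. 1, and analogously eqs. 2-4) may be sketched as a standard PCA over the training vectors; the NumPy implementation, function and array names below are illustrative assumptions rather than the original code:

\begin{verbatim}
import numpy as np

def build_linear_model(samples):
    """PCA of an (n_samples, n_voxels) data matrix: returns the mean
    vector, the orthogonal modes of variation P (one column per mode)
    and the per-sample parameters B, so that sample ~ mean + P @ b."""
    mean = samples.mean(axis=0)
    X = samples - mean                        # centre the training data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = S**2 / (samples.shape[0] - 1)   # eigenvalues of the covariance
    P = Vt.T                                  # orthogonal modes (columns)
    B = X @ P                                 # parameters of each sample (rows)
    return mean, P, B, eigvals

# One model per data type (names are illustrative):
# g_mean, P_g, B_g, lam_g = build_linear_model(grey_samples)
# x_mean, P_x, B_x, lam_x = build_linear_model(warp_x_samples)  # likewise y, z
\end{verbatim}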

Since there might have been correlations between the grey-level and warp variations, the parameters ${\bf B_g}$ and ${\bf B_{x,y,z}}$ were concatenated into a common matrix ${\bf B}$:



\begin{displaymath}
{\bf B} = \left( \begin{array}{c}
{\bf W_g' B_g'} \\
{\bf B_x'} \\
{\bf B_y'} \\
{\bf B_z'}
\end{array} \right)
\end{displaymath} (5)


where ${\bf W_g}$ is a diagonal matrix of weights accounting for the difference in dimensions between grey-level (intensity) and warp (distance) variations. The weights were based on the ratio r of the standard deviations of variation in each model:



\begin{displaymath}
{\bf W_g} = r\,{\bf I}, \qquad
r = \sqrt {\frac {\left({\bar \sigma_x}^2 + {\bar \sigma_y}^2 + {\bar \sigma_z}^2\right)^{1/2}} {\bar \sigma_{grey}}}
\end{displaymath} (6)


with ${\bf I}$ the identity matrix and $\bar\sigma_{grey}$, $\bar\sigma_{x,y,z}$ the standard deviations of the grey-level and warp models described above. PCA of the concatenated matrix ${\bf B}$ (eq. 5) yielded a super-set of parameters describing the complete appearance model



\begin{displaymath}
{\bf B} = {\bf Q C}
\end{displaymath} (7)


where ${\bf Q}$ are appearance eigenvectors and ${\bf C}$ is a vector of parameters controlling both the warp and the grey-levels of the model.
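
Under the assumption that the parameter matrices above hold one row per training example (and that $\bar\sigma$ denotes the mean per-mode standard deviation of each model), the concatenation and second PCA of eqs. 5-7 may be sketched as follows; all names are illustrative:

\begin{verbatim}
import numpy as np

def build_appearance_model(B_g, B_x, B_y, B_z, lam_g, lam_x, lam_y, lam_z):
    """Weight the grey-level parameters (eq. 6), concatenate them with the
    warp parameters (eq. 5) and run a second PCA (eq. 7): B = Q C."""
    # mean per-mode standard deviation of each model (sigma-bar in eq. 6)
    s_g = np.sqrt(lam_g).mean()
    s_x, s_y, s_z = (np.sqrt(l).mean() for l in (lam_x, lam_y, lam_z))
    r = np.sqrt(np.sqrt(s_x**2 + s_y**2 + s_z**2) / s_g)
    W_g = r * np.eye(B_g.shape[1])            # diagonal weight matrix (eq. 6)

    # concatenated parameter matrix, one row per training example (eq. 5)
    B = np.hstack([B_g @ W_g, B_x, B_y, B_z])

    # second PCA: B = Q C (eq. 7)
    Bc = B - B.mean(axis=0)
    _, _, Qt = np.linalg.svd(Bc, full_matrices=False)
    Q = Qt.T                                  # appearance eigenvectors
    C = Bc @ Q                                # appearance parameters per example
    return W_g, Q, C
\end{verbatim}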

In order to know how much each principal direction contributes to the total variance of the system, the relative importance $r_k$ of the eigenvalue $\lambda_k$ associated with the $k^{th}$ eigenvector is used



\begin{displaymath}
r_k = \frac {\lambda_k} {\sum_{j=1}^{p} \lambda_j}
\end{displaymath} (8)


where $r_k$ is the relative importance of eigenvalue $\lambda_k$ over the sum of all eigenvalues and p is the total number of eigenvectors. In each of the aforementioned PCA models, a percentage f% of the variance was retained, such that t eigenvectors were kept



\begin{displaymath}
r_1 + r_2 + \ldots + r_t > \frac {f}{100}
\end{displaymath} (9)
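
In practice, eqs. 8-9 reduce to a cumulative-sum truncation of the eigenvalue spectrum; a minimal sketch (illustrative, reusing the eigenvalues returned by the model-building step above):

\begin{verbatim}
import numpy as np

def select_modes(eigvals, f=98.0):
    """Return the smallest t such that r_1 + ... + r_t > f/100 (eqs. 8-9)."""
    r = eigvals / eigvals.sum()               # relative importance r_k (eq. 8)
    return int(np.searchsorted(np.cumsum(r), f / 100.0) + 1)

# e.g. truncating the grey-level model (illustrative):
# t_g = select_modes(lam_g)
# P_g, B_g = P_g[:, :t_g], B_g[:, :t_g]
\end{verbatim}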


The core of the segmentation method consisted in matching a new grey-level image to one synthesized by the model using the appearance parameters. The iterative method described by Cootes [4] was used.

The first step in this approach consisted in building a linear relationship between variations in the appearance parameters and the synthesized grey-level images. Defining ${\bf\delta V}$ as the difference vector between ${\bf V_i}$, the vector of grey-level values in the original image, and ${\bf V_m}$, the vector of grey-level values from the synthesized image (eq. 10), a linear model was constructed between ${\bf\delta V}$ and the error in the model parameters ${\bf\delta C}$ (eq. 11), using each image:



\begin{displaymath}
{\bf \delta V} = {\bf V_i} - {\bf V_m}
\end{displaymath} (10)
\begin{displaymath}
{\bf \delta C} = {\bf A\,\delta V}
\end{displaymath} (11)


To derive ${\bf A}$, a multivariate linear regression was run on a sample of known model displacements. The latter were expressed as a fraction of the standard deviation of the super-parameters ${\bf C}$.
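
A possible sketch of this regression step, assuming the training displacements and the grey-level differences they generate have been collected row-wise into two matrices (names and the use of a least-squares solve are assumptions):

\begin{verbatim}
import numpy as np

def estimate_A(dC_samples, dV_samples):
    """Multivariate linear regression dC = A dV (eq. 11) from matched pairs.
    dC_samples: (n, n_params) known displacements of the parameters C
    dV_samples: (n, n_voxels) resulting difference vectors (eq. 10)."""
    # least-squares solution of dV_samples @ A.T = dC_samples
    A_T, *_ = np.linalg.lstsq(dV_samples, dC_samples, rcond=None)
    return A_T.T                              # A maps dV -> dC
\end{verbatim}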

The second step in appearance-based segmentation was to use an iterative algorithm to generate estimates of the synthesized image ${\bf V_m}$ that gradually approximated the image to be segmented. Varying the model parameters ${\bf C}$ along each vector of ${\bf A}$, the algorithm found the closest match in the least-squares sense by minimizing the magnitude of the difference vector, ${\bf\Delta} = \vert {\bf\delta V} \vert ^2$.
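
An outline of this iterative search is sketched below; synthesize(C) and sample(image, C), which respectively render the model grey-levels and extract the corresponding grey-level vector from the image, are placeholders for machinery not detailed here:

\begin{verbatim}
import numpy as np

def match(image, C0, A, synthesize, sample,
          max_iter=30, steps=(1.0, 0.5, 0.25)):
    """Iteratively update the appearance parameters C to minimise |dV|^2."""
    C = C0.copy()
    dV = sample(image, C) - synthesize(C)     # difference vector (eq. 10)
    best = float(dV @ dV)                     # current |dV|^2
    for _ in range(max_iter):
        dC = A @ dV                           # predicted parameter error (eq. 11)
        improved = False
        for k in steps:                       # try progressively damped updates
            C_new = C - k * dC
            dV_new = sample(image, C_new) - synthesize(C_new)
            err = float(dV_new @ dV_new)
            if err < best:
                C, dV, best = C_new, dV_new, err
                improved = True
                break
        if not improved:                      # no step improves the fit: stop
            break
    return C
\end{verbatim}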

After convergence, the solution explicitly contained warp variation parameters (eq. 7), which could be expressed back into x,y,z components of the warp field and concatenated into ANIMAL vector format. Segmentation of the VOI was then possible using any structure model defined on the ANIMAL reference volume. It was achieved by applying the inverse of the deformation field to structures defined in the standard volume and then mapping those onto the subject.
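
The label resampling step could look like the following sketch, which applies a dense displacement field to an atlas label volume with nearest-neighbour interpolation; the use of scipy.ndimage.map_coordinates and the voxel-displacement convention are assumptions, not the ANIMAL format itself:

\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(labels, disp):
    """Resample an integer label volume through a dense displacement field.
    disp: array of shape (3,) + labels.shape, giving the displacement (in
    voxels) along each array axis at every voxel of the reference grid."""
    grids = np.meshgrid(*[np.arange(s) for s in labels.shape], indexing='ij')
    coords = np.array(grids) + disp           # where to sample each voxel
    # order=0 (nearest neighbour) keeps the labels integer-valued
    return map_coordinates(labels, coords, order=0, mode='nearest')
\end{verbatim}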

In order to quantitatively compare the ANIMAL non-linear registration and segmentation method with the appearance-based modelling, a similarity measure first proposed by Dice [7] was selected. As shown by Zijdenbos [9], this measure is a variant of the standard chance-corrected Kappa ($\kappa$) coefficient originally developed by Cohen [8], and is identical to $\kappa$ when the background is infinitely large:


\begin{displaymath}
\kappa_\infty = \frac{2a}{2a + b + c}
\end{displaymath} (12)

where a is the number of voxels in the intersection of both labellings, b the number of voxels labelled only automatically, and c the number of voxels labelled only manually.
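
The overlap measure of eq. 12 is straightforward to compute from two binary label volumes; a minimal illustration:

\begin{verbatim}
import numpy as np

def kappa_inf(auto, manual):
    """Dice / kappa-infinity overlap between two binary labellings (eq. 12)."""
    auto, manual = auto.astype(bool), manual.astype(bool)
    a = np.count_nonzero(auto & manual)       # labelled by both methods
    b = np.count_nonzero(auto & ~manual)      # labelled only automatically
    c = np.count_nonzero(~auto & manual)      # labelled only manually
    return 2.0 * a / (2.0 * a + b + c)
\end{verbatim}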

