Introduction

Since the advent of digital processing, the number of techniques for digitally acquiring measurements of physical objects has grown steadily, based on modalities such as visible and invisible light, heat, magnetic fields, and radio waves, spanning almost the entire energy spectrum. The vast quantity of information generated often precludes all but automated methods for processing these data. In general, the raw data from the various sensors must be transformed into more convenient representations and structured in ways that facilitate advanced digital processing and analysis. This general problem of information transformation and representation manifests itself in many different applications, each with its own specific variations. One particular case is the large set of applications that require the transformation of three-dimensional density information into structural representations of three-dimensional objects.

Within the domain of medical imaging, magnetic resonance imaging (MRI, or MR imaging) is a prominent method of acquiring structural information about organisms non-destructively. However, the raw data acquired constitute a very low-level representation of that information, subject to various sources of error in the signal. In order to perform advanced processing, the MR image must be transformed into a digital representation that can be related to the wealth of anatomical information available from other sources, notably textbooks and experienced anatomists.

This document constitutes a doctoral dissertation in the field of computer science, with specific application to the domain of computational neuroscience. The problem addressed is that of automatically generating digital models of neuroanatomical structures from three-dimensional images such as MRI. One of the outstanding problems in this large research area is how to cope with input data that are noisy, under-sampled, and incomplete. Some contemporary methods of creating digital models are predominantly data-driven, which makes them unsuitable for fully automatic use given the inaccuracy of the input data. Other, more successful methods use model-based constraints to fill in the information missing from the data, but the facilities they provide for incorporating model information are rather limited. A new method is presented here which is shown to provide improved neuroanatomical modeling of medical images, principally by integrating two model-based approaches.

The first approach addresses topological errors due to noise, under-sampling, and other imaging artifacts. Because of these factors, image data from magnetic resonance imaging typically do not have a topology consistent with the knowledge gleaned from studies of actual human brains. As a result, purely data-driven methods, as well as many model-based approaches, can be confounded by the incorrect topology in the data. Furthermore, most contemporary model-based methods are susceptible to producing models which are incorrect relative to medical knowledge, in particular, objects which intersect themselves or other objects. In response to these limitations, the first approach presented here is intersection avoidance during the creation of digital models. The resulting models are guaranteed not to intersect themselves or each other, making them more consistent with the real-world objects they are meant to represent. Although intersection avoidance is explored here within the context of neuroanatomical modeling, the idea applies generally to a broad array of image recognition tasks. It is shown here that adding such constraints to the construction of digital models can improve the correctness of the resulting solution without prohibitive computational cost.
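The flavour of such an intersection-avoidance guarantee can be illustrated with a minimal sketch. This is not the dissertation's actual formulation; the function names, the point-based distance query, and the fixed safety fraction are illustrative assumptions. The idea shown is that each proposed vertex displacement is clamped to a fraction of the distance to the nearest non-neighbouring surface point, so a single deformation step can never carry a vertex through another surface.

```python
import math

def nearest_nonneighbour_distance(vertex, others):
    """Distance from `vertex` to the closest point among `others`,
    the surface points it is not topologically adjacent to."""
    return min(math.dist(vertex, p) for p in others)

def clamp_step(vertex, step, others, safety=0.5):
    """Scale the proposed displacement `step` so the vertex moves at
    most `safety` times the distance to the nearest other surface
    point, preventing the move from crossing that surface."""
    limit = safety * nearest_nonneighbour_distance(vertex, others)
    length = math.hypot(*step)
    if length <= limit:
        return step
    scale = limit / length
    return tuple(c * scale for c in step)
```

For example, a vertex at the origin proposing a step of length 2.0 toward a surface point 1.0 away would have its step shortened to length 0.5. A real implementation would query distances to surface elements (triangles rather than points) and use spatial data structures to keep the cost manageable.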

The second approach addresses an aspect of imaging errors in the data which is particularly problematic in the domain of neuroanatomy. The human brain contains large surfaces which are very tightly folded, resulting in highly convoluted areas where under-sampling and noise degrade or remove the appearance of boundaries between two touching surfaces. The approach chosen to reduce the effects of such edge degradation uses anatomical knowledge about the relative position of different types of tissues in the human brain. The identification of one surface boundary is improved by constraining it with the position of another boundary and a priori knowledge of the relationship between the two. A general method of creating multiple component models with inter-component constraints is shown to improve the identification of cortical surfaces in the face of sub-optimal data. Again, this idea can be carried over into other domains, where sets of inter-connected objects provide a better model of the data than single objects.
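The inter-component constraint described above can likewise be sketched as a penalty term. The quadratic form and the thickness bounds below are illustrative assumptions, not the dissertation's objective function; the point is only that paired vertices on two coupled surfaces, such as the white-matter and pial boundaries of the cortex, are penalized whenever their separation falls outside an anatomically plausible range, so a boundary weakened by noise is held in place by its better-defined partner.

```python
import math

def thickness_penalty(inner, outer, d_min=1.0, d_max=5.0):
    """Penalty on paired vertices of two coupled surfaces: zero while
    each pair's separation lies in [d_min, d_max] (e.g. a plausible
    cortical thickness range in mm), growing quadratically outside it."""
    total = 0.0
    for a, b in zip(inner, outer):
        d = math.dist(a, b)
        if d < d_min:
            total += (d_min - d) ** 2
        elif d > d_max:
            total += (d - d_max) ** 2
    return total
```

Minimizing an objective that includes such a term pulls an ill-defined boundary toward positions consistent with its companion surface, rather than letting it drift wherever the degraded image evidence happens to lead.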

The following chapter details the problem being addressed and provides a computational neuroanatomical context for dealing with the problem and its particular challenges. Relevant techniques from the literature are presented with a discussion of the advantages and disadvantages of each as it relates to the problem domain. A separate short chapter is devoted to a survey of general edge detection methods, concluding with a description of established feature-based tissue classification algorithms which can successfully provide edge detection in medical images. After laying this groundwork, the proposed solution is presented: the object representation, the objective function with its various components, and the minimization method used to find a solution are described in detail. A separate chapter investigates the effect of each of the objective function constraints on very simple phantom data. The following chapter discusses relevant implementation details of the surface deformation method.

The discussion of the validation of this method is divided into two chapters. The first presents results of tests on simulated data and small phantoms, for which the correct answer is in some way known or assumed, so that quantifiable error estimates can be computed. The second validation chapter demonstrates results on real data, where the true answer is not readily available; validation instead consists of showing that the method produces results qualitatively and quantitatively consistent with other neuroanatomical analyses. Figure [*] (not available here) shows a photograph of a human brain and a computer-generated image of a brain surface created by the algorithm described in this dissertation. The final chapter summarizes the results of this work, discusses the weaknesses of the method, and presents some ideas for further work.

David MACDONALD
1998-06-18