Image-Guided Neurosurgery (IGNS):

Tal Arbel, Xavier Morandi, Roch M. Comeau, D. Louis Collins


Abstract


Movements of brain tissue during neurosurgical procedures reduce the effectiveness of using pre-operative images for intra-operative surgical guidance. In this work, we explore the acquisition of intra-operative ultrasound (US) images for the quantification of, and correction for, non-linear brain deformations. We present a multi-modal, automatic registration strategy that matches pre-operative images (e.g. MRI) to intra-operative ultrasound to correct for non-linear brain deformations. The strategy uses the predicted appearance of neuroanatomical structures in ultrasound images to build ``pseudo ultrasound'' images from pre-operative segmented MRI. These images can then be registered to intra-operative US using cross-correlation measurements generated by the ANIMAL [1] registration package. The feasibility of the approach is demonstrated through its application to clinical patient data acquired during 12 neurosurgical procedures. Qualitative examination of the results indicates that the system is able to correct for non-linear brain deformations.


Introduction

For an introduction to IGNS, see IGNS and Ultrasound in IGNS.


Methods

The goal of this work is to update the patient's pre-operative MR images based on US images acquired intra-operatively, given the presence of non-linear brain deformations. Our strategy is to first acquire pre-operative patient MRI and store the result as a 3D volume. During the surgical procedure, a series of US images are acquired. The specific aims of this work are: (i) to compute an initial linear transformation that maps the position and orientation of the US images into the coordinate space defined by the pre-operative MR volume, (ii) to construct a full 3D composite US volume from the images acquired and (iii) to compute the non-linear deformation field from the US volume to the corresponding MR volume in order to correct for non-linear brain deformations as well as errors in the linear registration stage. The resulting deformation field will be used to provide the surgeon with an update of the patient's MR images during the procedure.

Linear Registration

Pre-operative patient MRI are acquired and stored as a 3D volume in MINC format, a publicly available medical image file format developed at the Montreal Neurological Institute that was designed as a multi-modal, N-dimensional, cross-platform format. In order to perform comparisons between the two data sets, US images acquired intra-operatively must then be mapped to the same space as the corresponding pre-operative MRI. The transformations required to perform the mapping are computed using a Polaris tracking system (Northern Digital Inc.) and a procedure described in US in IGNS. As US images are acquired during surgery, the final transformation can be used to extract a corresponding oblique slice from the MRI volume, permitting the simultaneous display and comparison of the pre-operative MRI and intra-operative US.
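The mapping above amounts to composing a chain of homogeneous transformations: US pixel coordinates to probe coordinates, probe to tracker (world) coordinates, and world to MRI coordinates. The sketch below illustrates the idea with invented 4x4 matrices; the real transforms come from the tracker calibration and patient-to-image registration, and the probe pose changes with every frame:

```python
import numpy as np

# Hypothetical 4x4 homogeneous transforms (values for illustration only):
# T_img2probe:   US pixel coords -> probe coords (pixel spacing, here 0.5 mm)
# T_probe2world: tracked probe pose reported by the Polaris system
# T_world2mri:   patient-to-MRI registration from the pre-operative setup
T_img2probe = np.eye(4)
T_img2probe[0, 0] = T_img2probe[1, 1] = 0.5     # 0.5 mm pixel spacing
T_probe2world = np.eye(4)
T_probe2world[:3, 3] = [10.0, -5.0, 30.0]       # example probe translation (mm)
T_world2mri = np.eye(4)

# Compose the chain once, then map each US pixel into MRI space.
T_us2mri = T_world2mri @ T_probe2world @ T_img2probe

def us_pixel_to_mri(i, j):
    """Map US pixel (i, j) (depth row, lateral column) to MRI coordinates."""
    p = np.array([j, i, 0.0, 1.0])   # the US image lies in its own z = 0 plane
    return (T_us2mri @ p)[:3]
```

With the example matrices above, pixel (100, 200) is scaled by the 0.5 mm spacing and then translated by the probe pose, landing at (110, 45, 30) in MRI coordinates.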

3D Composite US Volume

Once the sequence of acquired US images are stored in the appropriate coordinate space, a composite 3D US is created by superimposing and averaging the slices into a full 3D volume (originally zero-valued) in the same coordinate frame as the MR volume acquired earlier. The methodology chosen uses a nearest neighbour approach to place each intensity pixel into the nearest voxel in the volume image, in a strategy similar to that in [2]. Because the images are stored in MINC format, tools are available to average the slices in the areas where they intersect. A blurring operator is applied to the volume to remove the effects of acoustic artifacts such as speckle noise. An example of an US volume superimposed onto a patient's MRI can be seen in Figure 1.
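As a rough illustration of this nearest-neighbour compositing step, the sketch below splats each US pixel into its nearest voxel and averages where slices overlap. It assumes pixel positions have already been transformed into MRI voxel coordinates; the actual pipeline performs the averaging with MINC tools:

```python
import numpy as np

def splat_us_slices(slices, coords, shape):
    """Build a composite 3D US volume by nearest-neighbour splatting.

    slices: list of 2D US images (float arrays)
    coords: list of matching (N, M, 3) arrays giving each pixel's position
            in MRI voxel coordinates (already transformed by tracking)
    shape:  output volume shape
    Overlapping pixels are averaged.
    """
    acc = np.zeros(shape)   # accumulated intensities
    cnt = np.zeros(shape)   # number of contributions per voxel
    for img, xyz in zip(slices, coords):
        idx = np.rint(xyz).astype(int)                              # nearest voxel
        valid = np.all((idx >= 0) & (idx < np.array(shape)), axis=-1)
        ii = idx[valid]
        np.add.at(acc, (ii[:, 0], ii[:, 1], ii[:, 2]), img[valid])
        np.add.at(cnt, (ii[:, 0], ii[:, 1], ii[:, 2]), 1)
    # Average where slices intersect; leave untouched voxels at zero.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

A subsequent Gaussian blur of the resulting volume would correspond to the speckle-reduction step described above.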

Figure 1: 3D Composite US Volume: MRI in grey with US overlayed in a hot-metal color scale.


Non-linear Registration

In previous work, we developed a versatile 3D volume registration and segmentation package, termed ANIMAL (Automatic Non-linear Image Matching and Anatomical Labeling) [1,3]. In this project, ANIMAL is primarily used to compute the non-linear spatial transformation required to map intra-operative US to pre-operative MRI. This transformation is estimated as a dense field of 3D deformation vectors, mapping each voxel of the source image volume onto the corresponding voxel of the target image volume. The algorithm is described in ANIMAL. However, ANIMAL requires similar features in the source and target volumes in order to perform either cross-correlation or optical flow computations between the two data sets. Since US and MRI have very different characteristics, we generate pseudo US images - images whose appearance closely matches the predicted appearance of the real US images that will be acquired during surgery - from data derived from the pre-operative MRI (the current strategy does not take acoustic properties or physics into account). In this manner, the ANIMAL routine can be used to correlate the pseudo US with the real, intra-operative US images.
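The similarity measure driving such a matching can be sketched as a local normalized cross-correlation evaluated over a neighbourhood of each voxel. This is a generic illustration only; ANIMAL's actual feature extraction, multi-scale optimization, and regularization are described in [1,3]:

```python
import numpy as np

def local_ncc(a, b, centre, radius):
    """Normalized cross-correlation of two volumes over a cubic neighbourhood.

    a, b:   3D volumes (e.g. pseudo US and composite intra-operative US)
    centre: voxel index at which to evaluate the measure
    radius: half-width of the cubic window
    Returns a value in [-1, 1]; 1 means perfect linear agreement.
    """
    sl = tuple(slice(max(c - radius, 0), c + radius + 1) for c in centre)
    pa, pb = a[sl].ravel(), b[sl].ravel()
    pa = pa - pa.mean()
    pb = pb - pb.mean()
    denom = np.sqrt((pa * pa).sum() * (pb * pb).sum())
    return float((pa * pb).sum() / denom) if denom > 0 else 0.0
```

A deformation estimator would evaluate this measure at candidate displacements of each node and keep the displacement that maximizes it, yielding the dense vector field described above.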

To compute the pseudo US volumes, the ANIMAL segmentation package is used to segment major brain structures from the MRI volume. The segmentation is then further refined to create a volume that includes only those structures that are clearly visible in US. The resulting volume of anatomical structures is then submitted to a radial gradient operator in order to generate gradient magnitude data that reflects the appearance (in terms of intensity values) of acoustic boundaries visible in the US image. In future work, other structures prevalent in the US image (such as the cerebral falx) will be added to the pseudo US images, and more realistic physical simulations will be developed. However, the current technique allows for a proof-of-principle to be demonstrated here. For the time being, pre-operative segmentation of pathologies is performed manually (the automatic segmentation of pathologies is currently an open research topic). Figure 2 shows an example of each of the steps involved in creating a pseudo US image (0.5 mm^2 pixel size).
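A minimal sketch of such a radial gradient operator is given below. It assumes a hypothetical probe origin in voxel coordinates and a scalar segmentation volume, and projects the image gradient onto the beam direction so that boundaries roughly perpendicular to the beam appear bright, as they would in US; the operator actually used may differ in its details:

```python
import numpy as np

def radial_gradient(seg, origin, spacing=0.5):
    """Gradient magnitude along radial directions from a probe origin.

    seg:     3D segmented volume from the MRI (labels or intensities)
    origin:  hypothetical probe position in voxel coordinates
    spacing: voxel size in mm (0.5 mm as in the pseudo US images)
    """
    gz, gy, gx = np.gradient(seg.astype(float), spacing)
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in seg.shape], indexing="ij")
    rz, ry, rx = zz - origin[0], yy - origin[1], xx - origin[2]
    norm = np.sqrt(rz**2 + ry**2 + rx**2)
    norm[norm == 0] = 1.0   # avoid division by zero at the origin itself
    # Magnitude of the gradient component along the beam (radial) direction:
    return np.abs((gz * rz + gy * ry + gx * rx) / norm)
```

A homogeneous region produces no response, while interfaces between segmented structures yield bright edges whose strength depends on their orientation relative to the beam.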

Figure 2: Generating Pseudo US. The original MRI is segmented using ANIMAL, a radial gradient is then applied, and these are merged to create a pseudo US image

ANIMAL then estimates the non-linear spatial transformation required to match a pseudo US image volume to real US images acquired during surgery. This same transformation can be used to update the patient's MRI during surgery, thus permitting the neurosurgeon to make use of the pre-operative images during the intervention, even in the presence of a brain deformation (and errors in the linear registration from the patient to the pre-operative images).


Clinical Results



The method was applied to 12 surgical cases, including brain tumor resections (n=8) and selective amygdalo-hippocampectomies (n=4). Pre-operative MRI were acquired on a Philips 1.5 T Gyroscan (The Netherlands). Intra-operative images were acquired using an Ultramark 9 machine (Advanced Technology Laboratories Inc., Bothell, WA) with an ATL P7-4 multi-frequency probe, and a Capsure frame grabber on a Macintosh computer (Apple Computer, Cupertino, CA). Tracking was achieved with a Polaris tracking system and a passive probe.

The feasibility of the approach is demonstrated through examination of the qualitative results from two clinical cases. Figure 3 shows the case of an amygdalo-hippocampectomy for intractable epilepsy. US images were acquired at two different stages of the operation: before and after the opening of the dura. The figure illustrates how the system is able to correct for brain deformations at these two stages of surgery. The extent of the correction can be seen in Figure 4. Figure 5 illustrates the case of a tumor resection. Here, US images were acquired after the dura opening, when significant brain deformation (on the order of 8 mm) had occurred. The strategy was able to correct for the deformations of both pathological and anatomical structures. On average, approximately 30 seconds of processing time were required to produce a corrected MRI volume.

Top row: Pseudo US; US before dura opening; US after dura opening.
Bottom row: Original MRI; Corrected MRI; Corrected MRI.
Figure 3: Left selective amygdalo-hippocampectomy for intractable epilepsy: zoom of transverse images through the lateral ventricles. The patient was in the supine position with the head turned to the right side. A slight brain deformation is visible before the dura opening (column 2). A larger gravitational displacement (towards the right of the image) of the median structures is observed after the dura opening (column 3). The deformation mainly involves the anterior horn of the left lateral ventricle (white arrow), whereas the falx (arrowhead) and septum pellucidum (double arrowhead) do not move. Correction of the deformation is demonstrated at these two surgical steps.

Figure 4: Case illustrating US (in green) after dura opening over original MRI (left) and over corrected MRI (right). Notice the distinct collapse of the left lateral ventricle.



Figure 5: Case with a right frontal recurrent malignant tumor. These near-transverse images show the tumor (top) and ventricles (bottom), with the front of the head towards the right. After the dura opening, the sinking of the entire tumor, as well as of the deeply seated median structures, is clearly visible as a displacement towards the bottom of the image. The MRI is corrected for deformations of both pathological and anatomical structures (e.g. the ventricle is displaced and slightly compressed, and the tumor is displaced as well). The posterior part of the septum drops, but the anterior part does not, as the registration system confuses it with the choroid plexus; this will be fixed with a proper representation of the septum and choroid plexus in the simulations. Note that the falx does not move.

References


  1. D.L. Collins and A.C. Evans, ``ANIMAL: validation and applications of non-linear registration-based segmentation,'' International Journal of Pattern Recognition and Artificial Intelligence, vol. 11, pp. 1271-1294, Dec. 1997.
  2. A. King, J. Blackall, G. Penney et al., ``An estimation of intra-operative deformation for image-guided surgery using 3-D ultrasound,'' in MICCAI 2000 (Pittsburgh, PA, USA), pp. 588-597, Oct. 2000.
  3. D.L. Collins, P. Neelin, T.M. Peters, and A.C. Evans, ``Automatic 3D inter-subject registration of MR volumetric data in standardized Talairach space,'' Journal of Computer Assisted Tomography, vol. 18, pp. 192-205, 1994.