
STEREOSCOPIC 3-D IMAGE DISPLAY

Advances in technology have made possible the acquisition, manipulation and display of 3-D data sets. The display of such data is achieved either by projecting surfaces extracted from the volume, or the entire volume itself, onto a viewing screen; these two approaches are known as surface rendering and volume rendering, respectively [12,13,14]. Currently, the 3-D display of medical images is most commonly achieved with surface rendering. Volume rendering also finds application, however: MRA data sets, for example, can be usefully visualized with a ray-tracing approach in which each point in the projected image represents the maximum intensity that a ray has encountered along its path through the volume to that point. This technique is known as Maximum Intensity Projection (MIP), and examples of its use are given later in this paper.
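For concreteness, a parallel-ray MIP can be sketched in a few lines of Python/NumPy: under the parallel-ray assumption each ray corresponds to a line of voxels along one array axis, so the projection reduces to a maximum along that axis. The array shape and axis chosen here are illustrative only, not taken from our implementation.

    import numpy as np

    def maximum_intensity_projection(volume, axis=0):
        # Parallel-ray MIP: each projected pixel holds the maximum
        # intensity encountered by a ray travelling along `axis`.
        return volume.max(axis=axis)

    # Illustrative use on a synthetic 64 x 64 x 64 volume.
    volume = np.random.rand(64, 64, 64)
    mip_image = maximum_intensity_projection(volume, axis=0)   # 64 x 64 image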

Surface- and volume-rendered representations of a 3-D image on a 2-D screen allow the incorporation of visual cues (for example, colour, shading and occlusion) that give the viewer an impression of depth. However, there is often ambiguity which can be resolved with a true binocular, or stereoscopic, representation. In the operating room at the Montreal Neurological Hospital we have installed a stereo display system (CrystalEyes, StereoGraphics Corp., San Rafael, CA) on an IRIS Indigo workstation (Silicon Graphics Inc., CA). This system displays alternating left- and right-eye views at a frame rate of 60 images per second per eye. Glasses worn by the surgeon incorporate active liquid-crystal shutters that are synchronized to the display by means of an infrared beam, so that each image is presented only to its appropriate eye.

Figure 3: Stereoscopic pair of surface-rendered images showing the probe, brain stem, hippocampus, cortical surface, skin and MRI contrast markers.

In our applications, 3-D objects representing structures of interest in the patient (e.g. the skin, the cortex, the ventricles, the hippocampus) are generated by `segmenting' the MRI or CT slice data. Stereoscopic views are then obtained by surface-rendering these objects from two view-points with an angular separation of between 2 and 7 degrees, the degree of disparity being operator-selectable. A 3-D representation of the probe is also displayed in the correct orientation and position. When displayed stereoscopically, these elements appear to float in space, allowing unambiguous location of structures critical to a particular surgical procedure. An example of a stereoscopic pair of surface-rendered images is shown in Fig. 3. The segmented structures visible in these views are: the probe (violet), the brain stem (green), the hippocampus (yellow), the surface of the cortex (red), the skin (pink) and the MRI contrast markers (blue) that were attached to the stereotactic frame affixed to the patient's head when the scan was made. Additionally, to improve the realism and clinical usefulness of this display, the cut surfaces are coloured according to the value of the MRI data set at the corresponding 3-D location.
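The following sketch illustrates how such a stereo pair might be constructed, assuming (for illustration only) that the two view-points are obtained by rotating the object's vertices by plus and minus half the angular separation about the vertical axis and then applying a parallel-ray projection; the function and variable names are ours, not those of our rendering software.

    import numpy as np

    def rotation_about_vertical(angle_deg):
        # Rotation matrix about the vertical (y) axis.
        a = np.radians(angle_deg)
        return np.array([[ np.cos(a), 0.0, np.sin(a)],
                         [ 0.0,       1.0, 0.0      ],
                         [-np.sin(a), 0.0, np.cos(a)]])

    def stereo_pair(vertices, separation_deg=5.0):
        # vertices: (N, 3) array of object coordinates.
        # Render from two view-points separated by `separation_deg`:
        # rotate by +/- half the separation, then drop the depth
        # coordinate (parallel-ray projection) for each eye's view.
        half = separation_deg / 2.0
        left  = (rotation_about_vertical(-half) @ vertices.T).T[:, :2]
        right = (rotation_about_vertical(+half) @ vertices.T).T[:, :2]
        return left, right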

Perspective provides an additional depth cue when viewing 3-D objects naturally. Unfortunately, incorporating perspective when rendering 3-D objects often significantly increases the computational effort required. We have found that the parallel-ray projections employed to generate the images displayed in this paper give stereo views that are readily understandable and unambiguous. However, as the computer hardware available in the OR becomes more powerful, it may prove useful to incorporate perspective into the stereo visualization.
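The difference in cost is easiest to see per vertex: a parallel-ray (orthographic) projection simply discards the depth coordinate, whereas a perspective projection adds a division by depth for every projected point. The sketch below contrasts the two under a simple pinhole assumption; the focal length f is illustrative.

    import numpy as np

    def project_parallel(vertices):
        # Orthographic projection: keep x and y, discard depth.
        return vertices[:, :2]

    def project_perspective(vertices, f=1.0):
        # Pinhole perspective: scale x and y by f / z for each vertex
        # (assumes all z > 0), adding a per-vertex division to the cost.
        z = vertices[:, 2:3]
        return f * vertices[:, :2] / z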


