A major limitation of interactive image-guided systems to date has been the lack of live feedback indicating to the surgeon the actual state of the brain, or the position of the probe and other surgical instruments relative to the current position of brain structures, during the procedure. To address these shortcomings we have integrated live video to provide such feedback to the surgeon during the operation. In our development laboratory a video camera is attached to an SGI Indy (Silicon Graphics Inc., CA) workstation, and a miniature video camera will soon be installed in the OR.
In the current implementation we capture a video image of the surgical scene to computer memory. This image is displayed on the computer screen simultaneously with a mono representation of the surface of the cortex and related vasculature, obtained by surface rendering the appropriately segmented MRI and MRA data sets. By choosing an appropriate orientation and scaling of the rendered data, this superimposed or 'fused' display provides the surgeon with feedback on significant morphological changes that have occurred in the cortical surface during surgery. Displaying the probe in near-real-time on both data sets helps highlight any tissue distortions. Additionally, the spatial orientation of other surgical instruments visible in the video images is established relative to anatomical structures shown in the MRI data. An example of the fusion of the live video and the cortical surface generated by segmenting an MRI signal is shown in Fig. 7. Here an excised brain has been scanned in the MRI scanner and the cortical surface segmented. Manual selection of the orientation and scaling of this rendering has enabled the two images to be superimposed. Note that the two probes visible in this image are captured in the video signal. Video integration in this context has already been demonstrated by others, but only by manually aligning the camera so that the video image could be superimposed on the workstation image using a video mixer.
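The fused display described above amounts to compositing the scaled, oriented rendering onto the captured video frame. The following is a minimal sketch of such an overlay as a simple alpha blend; the function name and the assumption that both images have already been brought into the same geometry are illustrative, not part of the system described here.

```python
import numpy as np

def fuse_overlay(video_frame, rendering, alpha=0.5):
    """Alpha-blend a rendered surface onto a live video frame.

    video_frame, rendering: float arrays in [0, 1] with identical
    shape (the rendering is assumed to have been scaled and oriented
    to match the camera view beforehand). alpha weights the rendering
    against the video.
    """
    if video_frame.shape != rendering.shape:
        raise ValueError("images must share the same geometry")
    return (1.0 - alpha) * video_frame + alpha * rendering

# Toy example: blend two uniform 4x4 grayscale "images".
video = np.full((4, 4), 0.8)   # bright video frame
mri = np.full((4, 4), 0.2)     # dark rendered surface
fused = fuse_overlay(video, mri, alpha=0.5)
print(fused[0, 0])             # 0.5
```

In practice the blend would run per video field on colour images, but the arithmetic is the same per channel.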
Live video provides more sophisticated possibilities that we are currently investigating through the use of a stereo pair of video images of a scene, typically of the patient's skin or cortex. Identifying eight or more object points in both stereo video signals determines the relative geometry of the two projections, from which the 3-D coordinates of the identified points can be calculated. To facilitate point correspondence for 3-D surface reconstruction, a structured light pattern is projected onto the object while the stereo video pair is captured. Siebert and Urquhart have shown that projecting a random noise field increases the accuracy of the depth map derived from the stereo views of a given 3-D surface. The 3-D depth map from the stereo pair can be registered to the MRI data set using a least-squares fit. Once this is achieved, the geometric information relating the stereo views to the depth map can be used to obtain the appropriate rendering of the MRI data. In addition, both the rendering of the MRI data and the video signal contain representations of the probe. Once registration is established, the positions of these two probe representations relative to the patient and to each other (they should coincide) provide a continual check of registration accuracy throughout the surgical procedure.
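For corresponding point sets, the least-squares rigid fit between the stereo depth map and the MRI surface has a closed-form SVD solution (the Kabsch method). The sketch below illustrates that solution under the assumption of known point correspondences; the function name and synthetic data are illustrative and not taken from the system described here.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3-D points, e.g. stereo
    depth-map points and matched MRI surface points. Solves
    min ||R @ src_i + t - dst_i||^2 via the SVD (Kabsch) method.
    """
    src_c = src - src.mean(axis=0)          # centre both clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate and translate a point cloud, then recover
# the known motion.
rng = np.random.default_rng(0)
pts = rng.standard_normal((20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_fit(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With real data the correspondences carry noise, so the recovered transform minimises, rather than zeroes, the residual; the same residual is what the probe-position comparison monitors during surgery.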
Figure 7: Fusion of live video with the cortical surface rendering segmented from MRI of an excised brain; two probes are visible in the video image.