We conclude that the integration of intracranial topographic and vascular anatomy, together with functional activity maps, provides the surgeon with a comprehensive image of brain structures during the operation. In both traditional stereotactic and open craniotomy procedures, such a multi-faceted approach allows the surgeon to proceed with confidence that all of the relevant parameters describing the state of the brain are available, both before and during the surgical procedure. We believe that stereoscopic display of these data significantly enhances the surgeon's ability to relate the 3-D images presented on the workstation to the brain of the patient, and also provides an unambiguous view of the spatial relationships between the probe and the anatomy.
The success and clinical acceptance of this system is encouraging. However, the work described here is just a first step towards the goal of providing the neurosurgeon with full image-guided capability during surgery. A major limitation at present is the lack of real-time feedback to indicate to the surgeon the actual state of the brain, or the position of the probe relative to the current position of brain structures, during the procedure. As a first step toward addressing this problem we have implemented the simultaneous display of live video and the matching cortical surface rendering to provide qualitative information. However, more intuitive methods of displaying the data will be required. As an extension of the live video approach, we are currently investigating the use of stereoscopic video registered to the cortical surface during procedures. Other modalities also hold promise: we plan to employ intra-operative ultrasound, correlated with the MRI data, to provide real-time feedback of the probe position with respect to the actual anatomy during surgery. Another example of the use of integrated imaging during surgery is the inclusion of anatomical and functional atlases in the surgical imaging environment. We plan to adapt an existing anatomical atlas so that it may be treated as an additional data set to guide the surgeon. An overview of the modalities that we currently employ, and those we plan to implement in the near future, is shown in Fig. 8.
Figure 8: Overview of the imaging modalities currently employed and those planned for implementation in the near future.
To enhance user interaction we have also added CrystalEyes VR (the standard CrystalEyes incorporating an ultrasonic head tracker) to the surgery planning system. With this technology, an array of three ultrasonic transducers emitting pulses of ultrasound is placed on the computer monitor. Three microphones mounted on the CrystalEyes glasses detect the ultrasound, allowing the position and orientation of the viewer's head to be sensed. This information is used to update the viewpoint from which the stereo renderings are generated. Thus, for example, when the viewer's head moves up, the images displayed on the computer screen show the object from a more superior orientation. We have found this method of manipulating the viewpoint to be intuitive and useful during surgical planning, especially when visualizing complex 3-D objects such as the vasculature.
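The geometry underlying this kind of tracking can be illustrated with a simple sketch. Each microphone's distance to the three monitor-mounted emitters is obtained from the ultrasonic time of flight, and its 3-D position then follows by trilateration; the three microphone positions together give the head pose. The code below is only an illustration of the trilateration step under assumed emitter coordinates, not the actual CrystalEyes VR implementation, whose internals are not described here.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature (approximate)

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover a microphone position from its distances r1..r3 to three
    emitters p1..p3 (which must not be collinear).

    Of the two mirror-image solutions, the one in front of the emitter
    plane is returned, since the viewer faces the monitor."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    add = lambda a, b: tuple(x + y for x, y in zip(a, b))
    scale = lambda a, s: tuple(x * s for x in a)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])
    norm = lambda a: math.sqrt(dot(a, a))

    # Build an orthonormal frame with p1 at the origin and p2 on the x axis.
    ex = scale(sub(p2, p1), 1.0 / norm(sub(p2, p1)))
    i = dot(ex, sub(p3, p1))
    ey_raw = sub(sub(p3, p1), scale(ex, i))
    ey = scale(ey_raw, 1.0 / norm(ey_raw))
    ez = cross(ex, ey)
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))

    # Intersect the three spheres in this frame.
    x = (r1**2 - r2**2 + d**2) / (2.0 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2.0 * j) - (i / j) * x
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))  # in-front solution

    return add(p1, add(scale(ex, x), add(scale(ey, y), scale(ez, z))))
```

As a usage example with hypothetical coordinates (metres), emitters at the corners of a monitor and a microphone 0.6 m in front of it: the distances would in practice come from `SPEED_OF_SOUND` times the measured pulse delays, and repeating the computation for all three microphones on the glasses yields the head position and orientation that drive the stereo rendering viewpoint.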
We are also considering adding an autostereoscopic display to the OR workstation. Colour flat panel displays capable of presenting 3-D data stereoscopically without special viewing glasses are just becoming available. Installing such a display in the OR would eliminate the need for the surgeon to wear shutter glasses in order to perceive stereo. An additional advantage of a small flat panel display is that it can be conveniently positioned close to the surgeon in the OR, unlike the large CRT monitors that we currently employ.