Augmented Reality in Surgery



Fig. 6.1
Google Glass became available to the general public in May 2014. It offers a natural-language voice-command interface for a wearable, hands-free computing experience





Devices


Augmented reality is the more modern term for mediated reality, in which our perception of the world is modified in some way. The simplest such device could be a car's rear-view mirror; however, cutting-edge augmented reality as we think of it today began in 1958 with heads-up displays (HUDs) for fighter pilots [9]. This overlay of computer-generated imagery has progressed since that time and has become infused into our vehicles, our smartphones, and now our eyeglasses.


Smart Phones


Augmented reality on the smartphone became practical with the integration of a camera, which allowed app-based information to be overlaid on a live image of the real world. Apps for locating nearby restaurants and stores were among the first to take advantage of this form of AR. Satellite AR (Analytical Graphics, Inc., Exton PA) even allows for the AR visualization of satellites as they orbit the earth (Fig. 6.2).
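
Under the hood, such an overlay reduces to projecting the target's direction (its azimuth and elevation, computed from the phone's GPS fix and the object's known position) into the camera frame using the device's compass and tilt sensors. The following is a minimal sketch of that projection step in Python; the field-of-view values, frame size, and example angles are illustrative assumptions rather than figures from any particular app.

    import math

    def overlay_pixel(target_az, target_el, cam_heading, cam_pitch,
                      h_fov=66.0, v_fov=50.0, width=1920, height=1080):
        """Project a target's azimuth/elevation (degrees) into camera pixel
        coordinates, given the phone's compass heading and pitch (degrees).
        Returns None when the target lies outside the field of view."""
        # Signed angular offset of the target from the camera's optical axis
        d_az = (target_az - cam_heading + 180.0) % 360.0 - 180.0
        d_el = target_el - cam_pitch
        if abs(d_az) > h_fov / 2 or abs(d_el) > v_fov / 2:
            return None  # off screen; an app might draw an edge arrow instead
        # Simple linear angle-to-pixel mapping (a small-angle approximation)
        x = width / 2 + d_az / (h_fov / 2) * (width / 2)
        y = height / 2 - d_el / (v_fov / 2) * (height / 2)
        return int(x), int(y)

    # Example: a satellite 12 degrees east of and 5 degrees above the camera axis
    print(overlay_pixel(target_az=102.0, target_el=35.0,
                        cam_heading=90.0, cam_pitch=30.0))  # -> (1309, 432)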



Fig. 6.2
A screen shot from Satellite AR (Analytical Graphics, Inc., Exton PA) shows the digital overlay of a satellite’s current position and trajectory about the earth. (Source: https://play.google.com/store/apps/details?id=com.agi.android.augmentedreality. Permission granted under the terms of the GNU Free Documentation License, Version 1.2.)


Head Mounted Displays


As previously stated, the original head-mounted displays (HMDs) were pioneered by the military for use by fighter pilots (Fig. 6.3). Though many companies have entered the race to engineer the ideal heads-up display, not all will be applicable to medical or surgical applications.



Fig. 6.3
The heads-up display in the cockpit of an F-16 fighter plane shows altitude, artificial horizon, and magnetic heading, along with other information, without the pilot ever losing sight of the external environment. (Source: http://www.defenseindustrydaily.com/39m-to-keep-f16-huds-aok-02131/. Permission granted under the terms of the GNU Free Documentation License, Version 1.2.)

For example, Sony's Glasstron™, released in 1997, had no transmissivity and was intended for viewing multimedia such as movies. Since then, Sony and most of its competitors have moved toward see-through eyewear with varying degrees of AR compatibility.

Though many companies have thrown their hats into the head-mounted display (HMD) arena, not all seek the same goal. Canon (Jamesburg, NJ) has developed its Mixed Reality eyewear, priced at $125,000, in the hope of providing professional-grade AR for use across multiple industries. Civil engineers, for example, could visualize gas, water, and electrical lines as they exist, or even as proposed, before ever breaking ground.

Physicians and surgeons may be able to use the same device clinically and fully integrate the smart glass into the electronic medical record (EMR) of their choice. This integration may allow for documentation, chart review, intraoperative communication, and information augmentation.

Not all groundbreaking work is being performed by technology companies, though. In 2007, the 3D Visualization and Imaging System Lab at the University of Arizona developed a unique polarized HMD and was subsequently awarded Army Phase I and II Small Business Innovation Research (SBIR) grants (www.sbir.gov) for its work in the field.


User Interface


As some authors have suggested, an HMD may cause information overload [10]. The user interface (UI) therefore becomes important in tailoring the content the device provides. Google Glass uses a touchpad on the side of the frame, along with voice commands, to interact with on-screen content. Most newer devices have Bluetooth® (Bluetooth SIG, Kirkland, WA) connectivity that allows a paired smartphone or computer to serve as the UI. The Technology Partnership (Melbourn, UK) has integrated electrodes capable of interpreting extraocular muscle activity as a proxy for eye tracking, which serves as the UI. Olympus Optical (Shinjuku, Japan) is one of many manufacturers that have experimented with hand-gesture recognition as the UI.
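
Whatever the input modality, the UI layer ultimately maps a small vocabulary of recognized events (a spoken phrase, a touchpad tap, a gaze dwell, a hand gesture) onto device actions. Below is a hypothetical dispatcher sketch in Python; the command phrases and actions are invented for illustration and do not reflect any vendor's actual API.

    # Hypothetical command dispatcher: a recognizer (speech, touchpad,
    # eye tracking, or gesture) emits a normalized event string, and the
    # UI maps it to an action. All names here are illustrative only.
    ACTIONS = {}

    def command(phrase):
        """Register a handler for a recognized command phrase."""
        def register(fn):
            ACTIONS[phrase] = fn
            return fn
        return register

    @command("take a picture")
    def take_picture():
        print("capturing still image...")

    @command("record a video")
    def record_video():
        print("recording video...")

    def dispatch(recognized_phrase):
        handler = ACTIONS.get(recognized_phrase.lower().strip())
        if handler is None:
            print(f"unrecognized command: {recognized_phrase!r}")
        else:
            handler()

    dispatch("Take a picture")  # -> capturing still image...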

Google recently received recognition for its consumer smart glass, Google Glass, which does not require linkage to another device in order to display information. Google Glass is a head-worn computer with an optical head-mounted display comparable to a 25-inch HD television viewed from 8 ft. It uses bone conduction for audio and offers 16 GB of storage, a 5-megapixel camera, Wi-Fi, Bluetooth, voice recognition, and roughly 4 h of continuous, active battery life, all incorporated into a familiar eyeglass form factor (Fig. 6.4).
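
The "25-inch HD television viewed from 8 ft" comparison translates into a fairly modest angular size, as a quick calculation shows (assuming a 16:9 panel):

    import math

    # Angular width of a 25-inch 16:9 display seen from 8 ft (96 in)
    diag, aspect = 25.0, 16 / 9
    width_in = diag * aspect / math.hypot(aspect, 1)   # about 21.8 in wide
    angle = 2 * math.degrees(math.atan(width_in / 2 / 96))
    print(f"virtual screen spans about {angle:.0f} degrees")  # about 13 degrees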



Fig. 6.4
Google Glass. A see-through monocle and integrated camera sit over one eye, with audio delivered via bone conduction or an optional in-ear earpiece. It connects to the Internet via Bluetooth and a mobile phone, or directly via Wi-Fi

Inventors continue to push the envelope. Babak Parviz, an affiliate professor of electrical engineering at the University of Washington, has already created a contact lens with a single LED, powered wirelessly with radio-frequency (RF) energy (Fig. 6.5) [11]. This is just the beginning: more recent work has demonstrated contact lenses capable of sensing tear glucose concentrations [12].



Fig. 6.5
A mock-up of an augmented reality contact lens pioneered by Babak A. Parviz, PhD, a bionanotechnologist at the University of Washington. (Source: B. Parviz. 2009, September. Augmented Reality in a Contact Lens [Online]. Illustration by Emily Cooper. Available at: http://spectrum.ieee.org/biomedical/bionics/augmented-reality-in-a-contact-lens/eyesb1. Permission granted under the terms of the GNU Free Documentation License, Version 1.2.)

Within our own group, the smartphone changed the way we performed telemedicine [6]. With the advent of Google Glass, we were able to integrate telemedicine further into our practice.


Intraoperative Case Example


As a surgical limb salvage group, we consulted with a surgical colleague to discuss requirements for resection and for the admixture of an antimicrobial bioactive implant to be delivered into a previously infected bony defect following a combined vascular-soft tissue reconstructive limb salvage procedure (Fig. 6.6a–d). The Google Hangouts™ application was managed entirely by the operating surgeon using hands-free voice control. Additionally, real-time diagrams and MRI measurements were displayed picture-in-picture, permitting the two colleagues to consult effectively, and to view intraoperative imaging alongside the surgical anatomy, without the surgeon's eyes ever leaving the operative field.



Fig. 6.6
(a–d) Visible defect on main screen with consultant clinician in lower right (a); real-time photograph of admixture of antibiotic-demineralized bone matrix (b); and measured delivery into the defect (c and d). (a–d: Used with permission from Armstrong DG, Rankin TM, Giovinco NA, Mills JL, Matsuoka Y. A Heads-Up Display for Diabetic Limb Salvage Surgery: A View Through the Google Looking Glass. J Diabetes Sci Technol September 2014; 8: 951–956.)

The consultation for this patient continued in clinic the next day: the clinician performed the first postoperative dressing change with the consultant virtually present, satisfactorily managing postoperative care.


Intraoperative Education


In a different patient, one at high risk for wound complications, we utilized Google Glass as an educational adjunct. A junior resident donned Google Glass during a scheduled delayed primary closure of a plantar defect and engaged an interactive “screen share” feature that fed detailed descriptions of retention suture technique, allowing real-time visual instruction in collaboration with a senior attending surgeon (Fig. 6.7a, b). This approach maximized hands-on experience and autonomy for the resident. The session ran over a standard 3G CDMA connection, with Glass tethered to a phone via Bluetooth. A similar procedure followed, demonstrating compartmental anatomy and assisting in planning surgical decompression of a limb-threatening infection (Fig. 6.8a, b). Similarly, investigators have shown that AR education can shorten the learning curve during the acquisition of laparoscopic skills [13].
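
That a live screen share ran acceptably over a tethered 3G link is plausible on back-of-the-envelope numbers, since a modest video stream fits within typical EV-DO uplink rates. A rough check follows; every figure is an order-of-magnitude assumption rather than a measurement from our sessions.

    # Back-of-the-envelope check that a modest live video stream fits a
    # 3G uplink. All numbers are illustrative assumptions, not measurements.
    width, height = 640, 360      # streamed frame size (pixels)
    fps = 15                      # frames per second
    bits_per_pixel = 0.1          # a typical H.264 compression budget

    video_kbps = width * height * fps * bits_per_pixel / 1000
    evdo_uplink_kbps = 1800       # nominal EV-DO Rev. A peak uplink

    print(f"stream needs ~{video_kbps:.0f} kbps of a "
          f"~{evdo_uplink_kbps} kbps peak uplink")
    # stream needs ~346 kbps of a ~1800 kbps peak uplink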



Fig. 6.7
(a and b) During a delayed primary closure of a high-risk plantar wound, real-time descriptions with instant “screen share” were fed through Glass to a junior resident during the surgical procedure to assist instruction by the senior attending surgeon. (a and b: Used with permission from Armstrong DG, Rankin TM, Giovinco NA, Mills JL, Matsuoka Y. A Heads-Up Display for Diabetic Limb Salvage Surgery: A View Through the Google Looking Glass. J Diabetes Sci Technol September 2014; 8: 951–956.)




Fig. 6.8
(a and b) View during intraoperative consultation for a plantar deep space infection, to assist in incision planning, exploration, and decompression, using figures from the senior author’s manuscript as the case example. (a and b: Used with permission from Armstrong DG, Rankin TM, Giovinco NA, Mills JL, Matsuoka Y. A Heads-Up Display for Diabetic Limb Salvage Surgery: A View Through the Google Looking Glass. J Diabetes Sci Technol September 2014; 8: 951–956.)

The above example is merely one of the many ways in which physicians are beginning to scratch the surface of what is possible.


Current Work


Surgeons are constantly faced with the task of mentally integrating two-dimensional radiographs with the three-dimensional surgical field, which is the very reason augmented reality is so attractive. Researchers have been attempting to overcome this complexity for the last two decades [14].


Trauma


Trauma is the leading cause of mortality in patients aged 1–44 in the USA (CDC.gov). The ability to save lives is predicated upon a well-established system for responding to trauma patients and transporting them for definitive treatment. First responders are constantly faced with navigating unknown neighborhoods and homes in order to reach injured patients. AR navigation could not only facilitate transportation to the scene of a distress call; researchers in France have also shown that it can make navigation within low-visibility environments, such as fires, safer [15]. The technology could be further extended to make the cellular signal of the 911 caller a beacon, accelerating localization of the patient.
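
Turning a caller's coordinates into such a beacon requires little more than the great-circle distance and initial bearing from the responder's own GPS fix, which a HUD could render as a direction arrow. A sketch using the standard haversine formulas follows; the coordinates are made up for illustration.

    import math

    def distance_and_bearing(lat1, lon1, lat2, lon2):
        """Great-circle distance (m) and initial bearing (degrees) from
        point 1 (responder) to point 2 (caller), via the haversine formula."""
        R = 6371000.0  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        dist = 2 * R * math.asin(math.sqrt(a))
        y = math.sin(dl) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
        bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
        return dist, bearing

    # Illustrative fix: responder versus a 911 caller a few blocks away
    d, b = distance_and_bearing(32.2319, -110.9501, 32.2365, -110.9442)
    print(f"caller is {d:.0f} m away, bearing {b:.0f} degrees")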


Orthopedic Surgery


Investigators have also successfully applied AR to orthopedic trauma surgery by overlaying live fluoroscopic radiographs onto the patient in a geometrically appropriate fashion, so that the surgeon can visualize the underlying osseous structures during surgical intervention and manipulation (Fig. 6.9) [16]. Trauma surgeons have also utilized hybrid navigation [17].
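
At its core, a geometrically correct overlay of this kind is an image-registration problem: estimate the transform that maps radiograph pixels to the corresponding points in the camera view, then warp and blend. Below is a minimal sketch with OpenCV that assumes four fiducial correspondences are already known; the images and point coordinates are placeholders, and the actual CamC calibration pipeline described in [16] is considerably more involved.

    import cv2
    import numpy as np

    # Placeholder frames standing in for the C-arm radiograph and the
    # surgeon's camera view; a real system would supply live images.
    xray = np.zeros((480, 640), dtype=np.uint8)
    camera = np.zeros((720, 1280, 3), dtype=np.uint8)

    # Four matched fiducial points (radiograph -> camera), placeholders here
    src = np.float32([[50, 50], [590, 50], [590, 430], [50, 430]])
    dst = np.float32([[300, 200], [980, 220], [960, 640], [320, 620]])

    # Homography mapping radiograph pixels into the camera frame
    H, _ = cv2.findHomography(src, dst)
    warped = cv2.warpPerspective(xray, H, (camera.shape[1], camera.shape[0]))

    # Alpha-blend the registered radiograph over the live view
    overlay = cv2.addWeighted(camera, 0.6,
                              cv2.cvtColor(warped, cv2.COLOR_GRAY2BGR), 0.4, 0)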



Fig. 6.9
The correctly oriented radiograph from a C-arm is displayed over the surgical field. (Used with permission from Weidert, S., L. Wang, A. von der Heide, et al., [Intraoperative augmented reality visualization. Current state of development and initial experiences with the CamC]. Unfallchirurg 2012; 115(3): 209–13.)
