Computer-Assisted Fluoroscopic Navigation for Orthopaedic Surgery

Abstract

In the absence of computer assistance, orthopaedic surgeons frequently rely on a challenging mental interpretation of 2D fluoroscopy for intraoperative guidance. Existing computer-assisted navigation systems forgo this mental process and obtain accurate information about visually obstructed objects through the use of 3D imaging and additional intraoperative sensing hardware. This information comes at the expense of increased invasiveness to patients and surgical workflows. Patients are exposed to large amounts of ionizing radiation during 3D imaging and undergo additional, larger incisions to accommodate navigational hardware. Non-standard equipment must be present in the operating room, and time-consuming data collection must be conducted intraoperatively. Using periacetabular osteotomy (PAO) as the motivating clinical application, this dissertation introduces methods for computer-assisted fluoroscopic navigation of orthopaedic surgery that remain minimally invasive to both patients and surgical workflows. Partial computed tomography (CT) of the pelvis is obtained preoperatively, and surface models of the entire pelvis are reconstructed using a combination of thin plate splines and a statistical model of pelvis anatomy. Intraoperative navigation is implemented through a 2D/3D registration pipeline between 2D fluoroscopy and the 3D patient models. This pipeline relies solely on patient anatomy to recover the relative motion of the fluoroscopic imager, without introducing any external objects. PAO bone fragment poses are computed with respect to an anatomical coordinate frame and are used to intraoperatively assess acetabular coverage of the femoral head. Convolutional neural networks perform semantic segmentation and detect anatomical landmarks in fluoroscopy, allowing the registration pipeline to be automated. Real-time tracking of PAO fragments is enabled through the intraoperative injection of BBs into the pelvis; fragment poses are automatically estimated from a single view in less than one second. A combination of simulated and cadaveric surgeries was used to design and evaluate the proposed methods. Compared to existing systems, the techniques developed in this thesis are designed to be less invasive to patients, compatible with minimally invasive approaches, and minimally disruptive to surgical workflows.
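For context, intensity-based 2D/3D registration between fluoroscopy and a preoperative CT is commonly posed as a pose optimization over an image similarity measure. The formulation below is a general sketch of that idea, not necessarily the exact objective used in this dissertation; the choice of similarity measure and the regularization term are assumptions made here for illustration. The rigid pose $\theta \in SE(3)$ of the patient model relative to the fluoroscopic imager is estimated by comparing the measured fluoroscopic image $I$ against a digitally reconstructed radiograph (DRR) computed from the CT volume $V$:

$$\hat{\theta} \;=\; \arg\max_{\theta \in SE(3)} \; \mathcal{S}\big(I,\; \mathrm{DRR}(V;\theta)\big) \;-\; \lambda\,\mathcal{R}(\theta),$$

where $\mathcal{S}$ is an image similarity measure (e.g., normalized cross-correlation or gradient correlation), $\mathcal{R}$ is an optional regularizer penalizing implausible poses, and $\lambda$ is its weight. Because this objective depends only on patient anatomy rendered from the CT, it requires no external fiducials in the field of view, which is consistent with the minimally invasive goal stated in the abstract.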

Publication
Johns Hopkins Libraries