Matthew Hickey
Doctor of Philosophy in Biomedical Engineering (PhD)
Research Topic
Investigating Mechanical Causes of Failure in Total Knee Replacement
Computer-assisted orthopaedic surgery
Ultrasound image acquisition and processing
Surgical navigation systems
Background in mechanical, electrical or biomedical engineering, or computer science
Understanding of spatial relationships (transforms, etc.)
Experience with one or more of robotics, biomechanics, ergonomics, image acquisition and processing, machine learning, computer vision
G+PS regularly provides virtual sessions that focus on admission requirements and procedures and tips on how to improve your application.
These videos contain some general advice from faculty across UBC on finding and reaching out to a potential thesis supervisor.
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay before the latest dissertations are added.
Ultrasound imaging is an effective and affordable tool for visualizing anatomical structures. Conventional ultrasound probes have limitations in size, shape, and conformability due to their rigid construction. Flexible ultrasound arrays could better conform to patient anatomy, potentially improving acoustic coupling, and provide a larger field of view with a single acquisition event. However, their performance under bending and varying shape during use poses new challenges. This thesis focuses on strategies to design, fabricate, and implement a new type of flexible transducer array to enable conformal sonography.

A new fabrication process for flexible capacitive micromachined ultrasound transducer (CMUT) arrays is developed. Polymers are used as the structural materials, with the CMUT membranes and support structures built from SU-8 photoresist on a polyimide substrate. Electromechanical characterization shows good fabrication yield and uniformity across arrays. Acoustic tests demonstrate wide bandwidth and mechanical durability under repeated bending. The proposed technology enables low-cost batch production of flexible CMUT arrays in different shapes and configurations at frequencies up to 15 MHz, including small and large form factors and 1D and 2D arrays.

Two computational methods, based on image sharpness and spatial coherence, are introduced to estimate the unknown shape of flexible arrays. Both methods are evaluated using simulation. Additionally, the coherence-based method is tested with tissue-mimicking phantoms and in vivo experiments. Compared to state-of-the-art methods, the spatial coherence approach demonstrates improved generalizability for imaging complex anatomical targets, while maintaining comparable estimation accuracy.

Finally, the longest reported monolithic flexible CMUT array, with 128 elements over a 9 cm aperture, is fabricated and used to capture ultrasound scans. This represents the first preliminary implementation of flexible CMUT arrays for in vivo scanning.

In conclusion, this thesis presents advances in the field of flexible ultrasound array technology in three areas: a polymer-based fabrication process for flexible CMUT arrays; computational methods for predicting the shape of (flexible) ultrasound arrays without external hardware; and preliminary imaging with a flexible CMUT array under bending conditions. While limitations remain before clinical viability is achieved, this work provides key contributions that could pave the way for future integration of flexible ultrasound arrays into practical clinical systems.
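The shape-estimation idea in the second contribution can be illustrated with a small sketch: assuming a hypothetical beamform() routine that reconstructs an image for a candidate set of element positions, a sharpness metric is evaluated over candidate shapes and the best-scoring curvature is kept. The beamform() function, the single-parameter circular-arc shape model, and the Brenner sharpness metric are illustrative assumptions, not the methods developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def arc_element_positions(n_elements, pitch_mm, curvature_per_mm):
    """Circular-arc shape model: element (x, z) positions for a given curvature."""
    s = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch_mm  # arc-length coordinate
    if abs(curvature_per_mm) < 1e-9:
        return np.stack([s, np.zeros_like(s)], axis=1)
    r = 1.0 / curvature_per_mm
    return np.stack([r * np.sin(s / r), r * (1 - np.cos(s / r))], axis=1)

def brenner_sharpness(img):
    """Simple focus/sharpness metric: mean squared intensity difference at a 2-pixel lag."""
    return np.mean((img[:, 2:] - img[:, :-2]) ** 2)

def estimate_curvature(channel_data, beamform, n_elements=128, pitch_mm=0.3):
    """Search for the array curvature whose beamformed image is sharpest."""
    def cost(curvature):
        elems = arc_element_positions(n_elements, pitch_mm, curvature)
        img = beamform(channel_data, elems)   # hypothetical delay-and-sum beamformer
        return -brenner_sharpness(img)        # maximize sharpness
    res = minimize_scalar(cost, bounds=(-0.02, 0.02), method="bounded")
    return res.x
```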
Pelvic fractures require a complex surgery to realign the fracture and stabilize the pelvic ring. The conventional method for placing iliosacral screws (ISS) in the pelvis relies heavily on intraoperative fluoroscopy. However, due to the narrow sacral passage the screw must pass through, this technique is susceptible to high rates of screw malplacement which can lead to iatrogenic complications. Additionally, fluoroscopy requires ionizing radiation and therefore exposes the surgical staff to harm. This thesis proposes and evaluates an alternative surgical protocol using a navigation system based on ultrasound (US) imaging, which enables accurate bone surface imaging without requiring ionizing radiation.

This system's development involves four contributions, the first two of which focus on the requirement of automatically identifying bone in US. First, we developed a multi-institutional US bone imaging dataset and corresponding evaluation framework, allowing for more systematic evaluation of US bone segmentation algorithms. Using this, we benchmarked six segmentation algorithms on thousands of US images and found deep convolutional neural networks are the most accurate. Second, we characterized a wide range of uncertainty estimation techniques and novel loss functions for US bone segmentation, and found that deep ensembling used with versions of the binary cross entropy loss can significantly improve segmentation (mean Dice: 0.75) and calibration errors (mean expected calibration error: 0.24%). Third, we validated multiple graphical visualizations, and found that bullseye visualizations achieve the best ISS targeting with mean distance and angulation errors of 0.51 mm and 0.55°.

Finally, we combined these components to develop an integrated US-based surgical navigation system which we call NOFUSS (Navigated Orthopaedic Fixation using Ultrasound System). We surgically inserted ISSs in human cadaver specimens using NOFUSS, and compared its accuracy and efficiency to the conventional fluoroscopy-based surgery. We found that with NOFUSS, ISSs can be placed with accuracy comparable to fluoroscopy guidance, while requiring no intraoperative radiation and with a 60% reduction in median insertion times. This work demonstrates that combining US imaging with surgical navigation techniques is feasible and can likely enable surgeons to perform ISS insertions with accuracy and efficiency comparable to conventional procedures, but with little or no radiation exposure.
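As a rough illustration of the evaluation quantities mentioned above (Dice overlap and expected calibration error) and of probability averaging across a deep ensemble, here is a minimal NumPy sketch; the binning scheme, threshold, and toy data are illustrative assumptions rather than the thesis' exact protocol.

```python
import numpy as np

def dice_score(pred_mask, gt_mask, eps=1e-8):
    """Dice = 2|A∩B| / (|A|+|B|) for binary masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + gt_mask.sum() + eps)

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: weighted gap between predicted foreground probability and observed frequency."""
    probs, labels = probs.ravel(), labels.ravel().astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(labels[in_bin].mean() - probs[in_bin].mean())
    return ece

def ensemble_probability(member_probs):
    """Deep ensembling: average the per-pixel foreground probabilities of K models."""
    return np.mean(np.stack(member_probs, axis=0), axis=0)

# Toy example: combine three model outputs, threshold, then evaluate.
rng = np.random.default_rng(0)
gt = rng.random((64, 64)) > 0.7
members = [np.clip(gt + 0.3 * rng.standard_normal(gt.shape), 0, 1) for _ in range(3)]
p = ensemble_probability(members)
print(dice_score(p > 0.5, gt), expected_calibration_error(p, gt))
```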
Developmental dysplasia of the hip (DDH) is the most common pediatric hip condition, representing a spectrum of hip abnormalities ranging from mild dysplasia to irreducible hip dislocation. Thirty-three years ago, the introduction of the Graf method revolutionized the use of ultrasound (US) and replaced radiography for DDH diagnoses. However, it has been shown that current US-based assessments suffer from large inter-rater and intra-rater variabilities which can lead to misdiagnosis and inappropriate treatment for DDH. In this thesis, we propose an automatic dysplasia metric estimator based on US and hypothesize that it significantly reduces the subjective variability inherent in the manual measurement of dysplasia metrics. To this end, we have developed an intensity invariant feature to accurately extract bone boundaries in US images, and have further developed an image processing pipeline to automatically discard US images which are inadequate for measuring dysplasia metrics, as defined by expert radiologists. If found adequate, our method automatically measures clinical dysplasia metrics from the US image. We validated our method on US images of 165 hips acquired through clinical examinations, and found that automatic extraction of dysplasia metrics improved the repeatability of diagnoses by 20%. We extended our automatic metric extraction method to three-dimensional (3D) US to increase robustness against operator dependent transducer placement and to better capture the 3D morphology of an infant hip. We present a new random forests-based method for segmenting the femoral head from a 3D US volume, and a method for automatically estimating a 3D femoral head coverage measurement from the segmented head. We propose an additional 3D hip morphology-derived dysplasia metric for identifying an unstable acetabulum. On 40 clinical hip examinations, we found our methods significantly improved the reproducibility of diagnosing femoral head coverage by 65% and acetabular abnormalities by 75% when compared to current standard methods.
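For context, the Graf-style dysplasia metrics referred to here are angles measured between anatomical lines identified on the US image. A minimal sketch of the underlying geometry, assuming the baseline and bony-roof line are each given by two annotated 2D points (the landmark coordinates are illustrative, not the thesis' automated pipeline):

```python
import numpy as np

def line_angle_deg(p1, p2, q1, q2):
    """Angle (degrees) between the line through p1-p2 and the line through q1-q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Alpha angle: between the iliac baseline and the bony acetabular roof line.
baseline = [(120, 40), (120, 200)]        # illustrative pixel coordinates
bony_roof = [(120, 150), (180, 210)]
alpha = line_angle_deg(*baseline, *bony_roof)
print(f"alpha angle = {alpha:.1f} deg")   # Graf: alpha >= 60 deg is typically considered normal
```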
Current practice in orthopaedic surgery relies on intra-operative two dimensional (2D) fluoroscopy as the main imaging modality for localization and visualization of bone tissue, fractures, implants, and surgical tool positions. However, with such projection imaging, surgeons typically face considerable difficulties in accurately localizing bone fragments in three dimensional (3D) space and assessing the adequacy and accuracy of reduced fractures. Furthermore, fluoroscopy involves significant radiation exposure. Ultrasound (US) has recently emerged as a potential non-ionizing imaging alternative that promises safer operation while remaining relatively cheap and widely available. US image data, however, is typically characterized by high levels of speckle noise, reverberation, anisotropy and signal dropout which introduce significant difficulties in interpretation of captured data, automatic detection and segmentation of image features and accurate localization of imaged bone surfaces.

In this thesis we propose a novel technique for automatic bone surface and surgical tool localization in US that employs local phase image information to derive symmetry-based features corresponding to tissue/bone or tissue/surgical tool interfaces through the use of 2D Log-Gabor filters. We extend the proposed method to 3D in order to take advantage of correlations between adjacent images. We validate the performance of the proposed approach quantitatively using realistic phantom and in-vitro experiments as well as qualitatively on in-vivo and ex-vivo data. Furthermore, we evaluate the ability of the proposed method in detecting gaps between fractured bone fragments. The current study is therefore the first to show that bone surfaces, surgical tools and fractures can be accurately localized using local phase features computed directly from 3D ultrasound image volumes. Log-Gabor filters have a strong dependence on the chosen filter parameters, the values of which significantly affect the outcome of the features being extracted. We present a novel method for contextual parameter selection that is autonomously adaptive to image content. Finally, we investigate the hypothesis that 3D US can be used to detect fractures reliably in the emergency room with three clinical studies. We believe that the results presented in this work will be invaluable for all future imaging studies with US in orthopaedics.
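As a sketch of the local-phase idea described above, the following computes a single-orientation phase-symmetry map from 2D Log-Gabor filter responses (even/odd parts of the filtered image). The filter wavelengths, the orientation, and the fixed noise threshold are illustrative choices, not the contextual parameter-selection scheme proposed in the thesis.

```python
import numpy as np

def phase_symmetry(img, wavelengths=(16, 32, 64), sigma_on_f=0.55,
                   theta0=np.pi / 2, sigma_theta=np.pi / 6, noise_t=0.05, eps=1e-6):
    """Single-orientation phase symmetry from 2D Log-Gabor filter responses."""
    img = img.astype(float)
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                                  # avoid log(0) at DC
    theta = np.arctan2(fy, fx)
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta**2 / (2 * sigma_theta**2))  # one-sided angular window
    F = np.fft.fft2(img)
    num = np.zeros_like(img)
    den = np.full_like(img, eps)
    for wl in wavelengths:
        radial = np.exp(-(np.log(radius * wl))**2 / (2 * np.log(sigma_on_f)**2))
        radial[0, 0] = 0.0                              # remove DC component
        resp = np.fft.ifft2(F * (radial * angular))
        even, odd = resp.real, resp.imag                # even/odd symmetric responses
        num += np.maximum(np.abs(even) - np.abs(odd) - noise_t, 0.0)
        den += np.hypot(even, odd)
    return num / den                                    # high at symmetric (bone-like) ridges
```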
The goal of this research was to establish a methodology for quantifying performance of surgeons and distinguishing skill levels during live surgeries. We integrated three physical measures (kinematics, time and movement transitions) into a modeling technique for quantifying performance of surgical trainees. We first defined a new hierarchical representation called the Motor and Cognitive Modeling Diagram for laparoscopic procedures, which: (1) decomposes ‘tasks’ into ‘subtasks’ and, at the most detailed level, into individual movements (‘actions’); and (2) includes an explicit cognitive/motor diagrammatic representation that enables us to account for operative variability, as most intraoperative assessments are conducted at the ‘whole procedure’ level and do not distinguish between performance of trivial and complicated aspects of the procedure. Then, at each level of surgical complexity, we implemented specific mathematical techniques for providing a quantitative sense of how far a performance is located from a reference level:
(1) the Kolmogorov-Smirnov statistic, to describe the similarity between two empirical cumulative distribution functions (e.g., speed profiles);
(2) the symmetric normalized Jensen-Shannon Divergence, to compare transition probability matrices;
(3) Principal Component Analysis, to identify the directions of greatest variability in a multidimensional space and to reduce the dimensionality of the data using a weight space.
Two experimental studies were completed in order to show the feasibility of our proposed assessment methodology by monitoring movements of surgical tools while: (1) dissecting mandarin oranges, and (2) performing laparoscopic cholecystectomy procedures in the operating room to compare residents and expert surgeons when executing two surgical tasks: exposing Calot’s Triangle and dissecting the cystic duct and artery. Results demonstrated the ability of our methodology to represent selected tasks using the Motor and Cognitive Modeling Diagram and to differentiate skill levels. We aim to use our approach in future studies to establish correspondences between specific surgical tasks and the corresponding simulations of these tasks, which may ultimately enable us to do validated assessments in a simulated setting, and to test its reliability in differentiating skill levels in the operating room as the number of subjects and procedures increases.
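A minimal sketch of how the three quantitative comparisons listed above can be computed with standard scientific-Python tools (the toy data, state count, and feature dimensions are illustrative; the thesis' feature definitions are not reproduced here):

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import jensenshannon
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# (1) Kolmogorov-Smirnov statistic between two empirical speed distributions.
speed_trainee = np.abs(rng.normal(12, 5, 500))   # toy tool-tip speed samples (mm/s)
speed_expert = np.abs(rng.normal(9, 3, 500))
ks_stat, p_value = ks_2samp(speed_trainee, speed_expert)

# (2) Jensen-Shannon divergence between movement-transition probability matrices.
P_trainee = rng.dirichlet(np.ones(5), size=5)    # toy 5-state transition matrices
P_expert = rng.dirichlet(np.ones(5), size=5)
js_div = jensenshannon(P_trainee.ravel(), P_expert.ravel(), base=2) ** 2  # distance^2 = divergence

# (3) PCA on a multidimensional performance-feature matrix (trials x features).
features = rng.normal(size=(40, 12))
pca = PCA(n_components=3)
weights = pca.fit_transform(features)            # low-dimensional "weight space"
print(ks_stat, js_div, pca.explained_variance_ratio_)
```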
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay before the latest theses are added.
Treatment of head and neck cancers in the mandible and surrounding tissues may necessitate oncological resection. This leaves a loss of continuity in the mandible, creating deficiencies in function and aesthetics. Reconstruction of the mandible is often completed with an autologous bone and tissue transplant from the fibula. The fibula is harvested to maintain viability, cut into segments, and secured into the defect. Traditionally, this is completed freehand, with surgeons using a trial-and-error approach to shaping and positioning the segments. This technique is time-intensive, and its accuracy is dependent on surgeon skill. To overcome these issues, a technique was developed in which 3D printed cutting guides for the mandible and fibula are created based on CT scans taken up to 4 weeks prior, creating potential for the guides to become ineffective due to cancer growth prior to surgery. To overcome the challenges of the 3D guides, the ISTAR group at UBC developed a system to facilitate day-of intraoperative surgical planning and guidance. However, the positioning device in this system utilized an ’all or nothing’ locking mechanism which did not allow for precise or iterative segment positioning. Therefore, the main aims of this thesis are to develop and validate a micropositioning surgical device for positioning of the fibula segments in the reconstruction. After fabricating this device, we conducted verification testing, demonstrating 0.48 ± 0.51, 0.15 ± 0.63, and 0.02 ± 0.71 mm segment positioning accuracy in each translational DoF. The rotational accuracy was shown to be -0.13 ± 4.53, 0.82 ± 2.41, and -0.51 ± 1.64 deg in each rotational DoF. In collaboration with Melissa Yu, testing with 3 ENT surgeons was conducted on anatomical models. The results of this test demonstrated a 0.77 ± 0.05 full reconstruction Dice score, a 1.79 ± 0.79 mm HD95, and a 1.16 ± 0.26 mm overlap deviation. When isolating the fibula reconstruction from the mandible, the Dice score and HD95 improved to 0.83 ± 0.03 and 0.86 ± 0.29 mm respectively. The accuracy of reconstruction is comparable to literature for 3D printed guides, with the addition of day-of-surgery planning and improved intraoperative flexibility.
Mandibular reconstruction with a fibula free flap is a common method for restoring form and function to patients with segmental mandible defects. The conventional strategy for reconstruction is a freehand technique, but this makes it difficult to achieve adequate contact between bone segments. More recent technology uses virtual surgical planning to 3D print cutting guides. However, this process can take days to weeks of preparation, during which the tumour can progress beyond the resection margins, rendering the initial plan obsolete. A day-of-surgery approach combats this limitation using image guidance to bring surgical planning into the operating room. This provides surgeons with the flexibility to respond to intraoperative changes while maintaining the benefits of guided approaches. A proof-of-concept system was developed by previous graduate students that demonstrated comparable accuracy to 3D printed guides but significantly prolonged the operative time. Therefore, the objective of this work is to prepare the day-of-surgery approach for clinical implementation by addressing the key limitations of the existing system.

This thesis details our efforts to reduce its intraoperative time, update its virtual surgical planning (VSP) algorithm, and introduce guided navigation to support segment positioning. We used a motor cognitive modelling diagram to identify areas of inefficiency within the integrated procedure and inform usability improvements to the software workflow. We also updated the VSP algorithm to align with the version developed by the ISTAR group to automate surgical planning and support future dental implant considerations. We then used this VSP to guide a surgeon through the precise micromanipulator adjustments required to move a segment into its target position. These changes were evaluated by ENT surgeons through a simulated mandibular reconstruction procedure on anatomical models. We generated five reconstructions with comparable accuracy to the previous cadaver study, and reduced operative time to an average of only 215 minutes. This represents a key step towards its clinical implementation by demonstrating that the day-of-surgery system can be implemented into mandibular reconstruction surgery without introducing significant delays to the operation.
Developmental Dysplasia of the Hip (DDH) is a painful orthopaedic malformation diagnosed at birth in 1-3% of all newborns. Left untreated, DDH can lead to significant morbidity including long term disability. Currently the condition is clinically diagnosed using 2D ultrasound (US) imaging acquired between 0-6 months of age. DDH metrics are manually extracted by highly trained radiologists through manual measurements of relevant anatomy from the 2D US data, which remains a time consuming and highly error prone process. Recently, it was shown that combining 3D US imaging with deep learning (DL)-based automated diagnostic tools may significantly improve accuracy and reduce variability in measuring DDH metrics. However, the robustness of current techniques remains insufficient for reliable deployment into real life clinical workflows. In this thesis, we present a quantitative robustness evaluation of state-of-the-art (SOTA) DL models in bone segmentation for 3D US and demonstrate examples of failed or implausible segmentations with SOTA models under common data variations, e.g., small changes in image resolution or anatomical field of view (FOV) from those encountered in the training data. We propose a 3D extension of the SegFormer architecture, a lightweight transformer-based model with hierarchically structured encoders producing multi-scale features, which we show to concurrently improve accuracy and robustness. Specifically, we show a 3% increase in Dice score over the previous SOTA models for 3D US segmentation. To allow researchers, collaborators, clinicians, and doctors access to our DL models, we develop a prototype web-based application that allows users to upload three dimensional US data and visualize it before selecting from various DL models to run on their data. The DL models run in the background, segmenting the hip anatomical structures, and return the calculated DDH metrics as well as relevant visualizations of the segmentation and a 3D rendered mesh of the hip. We also investigate the use of learnable Gabor filter banks as a preprocessing layer in DL models to mimic the human visual system.
Degenerative Lumbar Spondylolisthesis (DLS) is a frequently diagnosed spine pathology, presenting as an anterior displacement of one vertebra over the subjacent one, and can require surgical treatment. One of the important factors in determining surgical treatment for DLS patients is clinical spinal instability. Unfortunately, clinical spinal instability is not well understood and current clinical methods for evaluating it are considered rather limited as they do not account for vertebral motion in six Degrees of Freedom (DOF). As a result, determining the most appropriate surgical intervention can be challenging. Our research goal is therefore to develop a more accurate method for measuring clinical spinal instability in six DOF, with the overall aim of improving surgical management of DLS patients. In this thesis, we developed and implemented a vertebral 2D/3D registration procedure, which spatially aligns a patient’s preoperative CT to two biplanar X-ray images, to measure intervertebral motion at the level of spondylolisthesis. Our intensity-based registration method uses X-ray images captured using an EOS System, a relatively new clinical biplanar scanner that delivers significantly less radiation to subjects compared to conventional X-ray systems. We validated our registration approach with phantom models and found that our process has accuracies ranging from 0.12 to 0.67 mm for translations and from -0.02 to 0.74 deg for rotations, which is below the magnitude at which instability may be occurring. We also found that our process is repeatable with sub-millimetre (0.06 to 0.70 mm) and sub-degree (0.01 to 0.51 deg) variability. We compared the micro and normal radiation dose settings of the EOS System, and the results indicate that the microdose and normal dose settings are equivalent (within the bounds of +/- 0.5 mm and +/- 0.5 deg) for most position parameters. In preparation for evaluating our system with clinical data, we have designed a clinical pilot study and have received the necessary hospital approvals to move the project forward. Overall, the EOS biplanar X-ray registration approach presented here appears to have sufficient accuracy and repeatability to be useful in investigating intervertebral motion patterns in DLS patients.
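A minimal sketch of an intensity-based 2D/3D registration loop of the kind described above: a 6-DOF pose is optimized so that digitally reconstructed radiographs of the CT best match the two biplanar views under a normalized cross-correlation metric. The render_drr() projector is a hypothetical placeholder, and the optimizer settings are illustrative, not the thesis' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))

def register_vertebra(ct_volume, xray_pa, xray_lat, render_drr, pose0=None):
    """Intensity-based 2D/3D registration: find the 6-DOF vertebra pose whose DRRs
    best match the two biplanar views. render_drr(ct, pose, view) is a hypothetical
    projector returning a DRR for the given pose and view geometry."""
    pose0 = np.zeros(6) if pose0 is None else np.asarray(pose0, float)  # tx,ty,tz,rx,ry,rz

    def cost(pose):
        drr_pa = render_drr(ct_volume, pose, view="pa")
        drr_lat = render_drr(ct_volume, pose, view="lateral")
        return -(ncc(drr_pa, xray_pa) + ncc(drr_lat, xray_lat))

    res = minimize(cost, pose0, method="Powell",
                   options={"xtol": 1e-3, "ftol": 1e-4, "maxiter": 200})
    return res.x  # estimated translations (mm) and rotations (deg)
```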
Mandible reconstruction surgery requires a surgeon to remove a section of the mandible bone affected by a tumor and recreate the curved contour of the jaw using a donor bone, since the mandible is fundamental in helping the patient to eat, talk and breathe. The most common donor bone used is the fibula, and the surgery requires small fibula segments of a specific length and cut angle to be created which are then aligned to one another to form the desired curve. Currently, surgeons rely on 3D-printed cutting guides that are designed and manufactured from preoperative imaging weeks before surgery. During this lead time the tumor can grow, and pre-printed guides cannot be changed during surgery to accommodate this. Our research has addressed these issues through the development of a fully image-guided mandible reconstruction workflow. The scope of this thesis was to develop the software required to integrate image guidance into this procedure, investigate methods to guide the fibula cuts, and perform proof-of-concept testing on the fully-integrated system. A user study on the fibular cutting process highlighted that using an image-guided cutting guide would be the most feasible method for guiding the fibula cuts in a surgical environment. This method was able to replicate the planned fibula cuts with an average deviation of -0.68±2.66 mm in segment length and 3.68±2.59° in cut angle. Bench testing of the integrated system's workflow demonstrated that we could successfully perform an on-the-fly simulation of the surgery without requiring any pre-surgical planning, as the virtual surgical plan (VSP) is generated during the surgery and can be changed as required throughout. Proof-of-concept testing performed on five cadaver specimens further demonstrated successful execution of the workflow in a more realistic surgical setting. These tests resulted in accuracy metrics that are comparable to existing state-of-the-art systems using 3D-printed cutting guides, such as an average Dice score of 0.81 and Hausdorff distance of 0.94 mm when compared to the VSP. By utilizing image guidance during all stages of the surgical workflow, including to guide the fibula cuts, this system demonstrates that an on-the-fly surgical workflow is possible, which, once transferred to the operating room, would eliminate the lead time and inflexibility of the physical cutting guides.
Glenoid implantation accuracy in total shoulder arthroplasty (TSA) has been significantly improved by the use of surgical navigation systems. Despite its benefits, surgical navigation has not been widely implemented into TSA procedures. Based on feedback from orthopaedic surgeons, we believe this lack of adoption is due to the obtrusiveness of the optical trackers that protrude into the surgeon’s workspace. Furthermore, these systems require time to calibrate during surgery and offer limited options for camera placement, which may cause inconvenience intra-operatively. To tackle these challenges, we developed and assessed a new TSA protocol based on a less-intrusive dental navigation system developed by Navigate Surgical Technologies (NST). Our proposed system consists of laser-engraved surgical drills which are calibrated once when manufactured, and do not require calibration in the operating room. Similarly, we present a design for a substantially smaller bone tracker that can be tracked from almost all directions due to its curved pattern design. To assess our system’s performance, we modified the NST software to support guidance of a TSA procedure. We then conducted a user study in which three participants used the system to drill multiple holes in a glenoid model. Using a CMM (coordinate measuring machine), we determined the resulting trajectory of the surgical drill and compared this to the pre-planned trajectory. Since we used a model glenoid rather than anatomical specimens, we were unable to test a realistic registration process, so were limited to reporting precision only and not accuracy. We found that our system’s targeting precision was markedly lower than the end-to-end precision achieved by the main commercially-available TSA navigation system (ExactechGPS) -
Advanced head and neck cancer in the mandible and accompanying soft tissue requires aggressive resection of large segments of the oral cavity including the mandible bone. Accurate reconstruction is essential after these resections to rebuild the proper geometry of the mandible to restore form and function. To reconstruct the proper geometry of the mandible, vascularized donor bone and tissue, commonly from the fibula, needs to be harvested and accurately segmented. The conventional approach is to do this in an iterative, unguided process, which is difficult, time intensive and can produce insufficient accuracy. Virtual planning and 3D printed cutting guides have been developed to improve accuracy, decrease operative time and decrease difficulty. However, these 3D printed guides take a substantial amount of time to create and need to be designed and manufactured 2-4 weeks prior to surgery. Aggressive tumours can grow in that time period, making the plan obsolete. Therefore, we aim to develop an intraoperative optical tracking navigation system and associated tracked surgical tools to guide the execution of virtual reconstruction plans in an accurate and time-efficient manner.

This thesis will specifically cover the aims of developing and verifying a registration protocol, developing the tracked surgical tools and testing the system on cadaver specimens. Using a combined paired-point and fiducial-model registration method we found we could achieve a Target Registration Error of 1.25 ± 0.06 mm on the mandible and 1.86 ± 0.07 mm on the fibula on a porcine model. Two optically tracked devices were developed: a mandible fixator device to maintain alignment of mandible fragments and a fibula cutting and placement device to guide the surgeon in executing fibula cuts and subsequently placing those fibula segments into position according to the virtual plan. We successfully used the developed system to conduct 5 guided reconstructions on cadaver specimens with an average accuracy of 1.15 ± 1.17 mm in width, 1.47 ± 1.61 mm in projection and a Dice score of 0.80. This is comparable to the current 3D printed guide approach, indicating the feasibility of utilizing image guided technology in this surgery.
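A minimal sketch of the paired-point component of such a registration, solving for the rigid transform between corresponding fiducial points with the SVD-based (Kabsch) method and reporting target registration error at held-out target points; the fiducial-model part of the thesis' combined method is not shown.

```python
import numpy as np

def paired_point_registration(moving, fixed):
    """Least-squares rigid transform (R, t) mapping `moving` points onto `fixed` points."""
    moving, fixed = np.asarray(moving, float), np.asarray(fixed, float)
    cm, cf = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cf - R @ cm
    return R, t

def target_registration_error(R, t, targets_moving, targets_fixed):
    """Mean distance between transformed targets and their true fixed-space positions."""
    mapped = (R @ np.asarray(targets_moving, float).T).T + t
    return float(np.mean(np.linalg.norm(mapped - np.asarray(targets_fixed, float), axis=1)))
```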
Knee osteoarthritis is the most common form of lower-limb osteoarthritis, where cartilage wears away and causes pain. Unicompartmental knee arthroplasty (UKA) and total knee arthroplasty (TKA) are common treatment options. UKA replaces the knee articulation surfaces with a prosthesis only at the degenerated tibial-femoral compartment, while TKA replaces the entire knee joint surfaces. UKA could lead to better functional results and faster recovery, but the technique may be under-utilized due to higher risk of revision surgery. In previous work, our group has developed a lower-cost bone-mounted robot for TKA surgeries. In this project, our goal was to adapt this platform for use in UKA procedures. Our new robot design implements a guidance concept called dynamic physical constraint (DPC), which mechanically emulates rigid contact with a virtual fixture, a 3D surface stored in computer space, while allowing smooth motion parallel to that surface. The robot consists of a rotary-prismatic-prismatic joint configuration, followed by a remote center of motion mechanism that holds a hand mill at the end-effector. During an operation, the robot is mounted to the patient’s femur, establishing a robust robot-bone relative position, and the robot imposes accuracy and safety for the surgeon who operates it with both hands. We built a functional robot prototype and performed medial-femoral milling tests on an experimental platform which uses femoral condyle models made of medium density fibreboard as milling targets. Two types of virtual fixture geometries – combined-curved and tri-planar – were tested. Analysis of the laser-scanned post-milling surfaces revealed that the average RMS deviation was 0.33 mm (SD = 0.06 mm) for the combined-curved surface and 0.41 mm (SD = 0.05 mm) for the tri-planar surface. We also conducted inter-specimen surface comparisons and found an average RMS deviation of 0.07 mm (SD = 0.01 mm) for the combined-curved surface and 0.07 mm (SD = 0.02 mm) for the tri-planar surface. In this controlled experimental scenario, our UKA robot successfully achieved the goal of sub-millimetric milling accuracy, and the repeatability of milled surface geometry between different milling attempts appears high. We thus conclude that this robot design should be advanced to the next stage of development.
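To illustrate the surface-deviation analysis described above, here is a small sketch that computes the RMS of nearest-neighbour distances from a laser-scanned milled surface to the planned virtual-fixture surface, with both given as point clouds (the toy data and point-cloud preparation are assumptions, not the thesis' processing pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_deviation(scanned_points, planned_points):
    """RMS of distances from each scanned point to its nearest planned-surface point (mm)."""
    tree = cKDTree(np.asarray(planned_points, float))
    d, _ = tree.query(np.asarray(scanned_points, float))
    return float(np.sqrt(np.mean(d ** 2)))

# Toy example: a planned plane vs. a scan with ~0.3 mm of depth noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 20, size=(5000, 2))
planned = np.column_stack([xy, np.zeros(len(xy))])
scanned = planned + np.column_stack([np.zeros((len(xy), 2)), 0.3 * rng.standard_normal(len(xy))])
print(rms_surface_deviation(scanned, planned))
```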
C-Arms, mobile X-ray machines with an emitter and detector on opposite ends of a ‘C’, are used in many surgeries conducted within hospitals, especially in orthopaedic applications such as trauma repairs. The C-Arm, a 350 kg unit, must be manually moved between storage, operating rooms, and various positions around the operating table, often requiring considerable physical exertion from the radiology technologists (RTs) and putting them at high risk of musculoskeletal injuries (MSIs). A powered robotic base was developed to alleviate this high risk of injury, with the added potential to allow for more precise movement and lower radiation dosage. This base, named the Easy-C, is retrofittable underneath existing C-Arms so that they retain their certifications while allowing “easy” movement by the RTs. The Easy-C was designed with a rear set of wheels on a main structure and a separately driven nose wheel. Omni wheels were utilized to give holonomic motion; the Easy-C platform can move freely in the X-Y plane of the floor, unlike a shopping cart or other standard wheeled vehicle. This Easy-C system was verified on a C-Arm through open loop movement along the three major axes available: X, Y, and in-plane rotation ω, all of which replicate clinically relevant movements. Low relative error was seen in X and Y movements, both at only 1.7% relative error on the main movement axis and 7.7% or less on the off axes. In-plane rotation had a larger relative error of 6.2%, with 7.1% or less on the off axes. For open loop control, the Easy-C performed as expected across these movements, allowing for minimal effort from the operator to move the C-Arm and greatly reducing the MSI risk. While several limitations were identified, with future development the Easy-C could provide a new and effective tool to the healthcare industry.
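The holonomic-motion idea can be illustrated with the standard inverse-kinematics mapping for an omni-wheel base: a desired planar velocity (vx, vy, ω) is mapped to individual wheel speeds. The three-wheel layout, wheel radius, and base geometry below are generic illustrative values, not the Easy-C's actual configuration.

```python
import numpy as np

def omni_wheel_speeds(vx, vy, omega, wheel_angles_deg=(0.0, 120.0, 240.0),
                      base_radius_m=0.35, wheel_radius_m=0.06):
    """Inverse kinematics of a generic 3-omni-wheel holonomic base: each wheel's
    angular speed (rad/s) needed to produce body velocity (vx, vy) in m/s and
    rotation omega in rad/s."""
    speeds = []
    for a in np.radians(wheel_angles_deg):
        # component of the body velocity along each wheel's rolling direction
        v_wheel = -np.sin(a) * vx + np.cos(a) * vy + base_radius_m * omega
        speeds.append(v_wheel / wheel_radius_m)
    return np.array(speeds)

# Translate at 0.2 m/s along x with no rotation: prints the three wheel speeds (rad/s).
print(omni_wheel_speeds(vx=0.2, vy=0.0, omega=0.0))
```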
Developmental Dysplasia of the Hip is one of the most common congenital disorders. Misdiagnosis leads to financial consequences and reduced quality of life. The current standard diagnostic technique involves imaging the hip with ultrasound and extracting metrics such as the α angle. This has been shown to be unreliable due to human error in probe positioning, leading to misdiagnosis. 3D ultrasound, being more robust to errors in probe positioning, has been introduced as a more reliable alternative. In this thesis, we aim to further improve the image processing techniques of the 3D ultrasound-based system, addressing three components: segmentation, metrics extraction, and adequacy classification. Segmentation in 3D is prohibitively slow when performed manually and introduces human error. Previous work introduced automatic segmentation techniques, but our observations indicate a lack of accuracy and robustness with these techniques. We propose to use deep convolutional neural networks (CNNs) to improve the segmentation accuracy and consequently the reproducibility and robustness of dysplasia measurement. We show that 3D-U-Net achieves higher agreement with human labels compared to the state-of-the-art. For pelvis bone surface segmentation, we report a mean DSC of 85% with 3D-U-Net vs. 26% with CSPS. For femoral head segmentation, we report a mean CED error of 1.42 mm with 3D-U-Net vs. 3.90 mm with the Random Forest Classifier. We implement methods for extracting the α₃D, FHC₃D, and OCR dysplasia metrics using the improved segmentation. On a clinical set of 42 hips, we report inter-exam, intra-sonographer intraclass correlation coefficients of 87%, 84%, and 74% for these three metrics, respectively, beating the state-of-the-art. Qualitative observations show improved robustness and reduced failure rates. Previous work had explored automatic adequacy classification of hip 3D ultrasound, to provide clinicians with rapid point-of-care feedback on the quality of the scan. We revisit the originally proposed adequacy criteria and show that these criteria can be improved. Further, we show that 3D CNNs can be used to automate this task. Our best model shows good agreement with human labels, achieving an AROC of 84%. Ultimately, we aim to incorporate these models into a fully automatic, accurate, reliable, and robust system for hip dysplasia diagnosis.
Developmental dysplasia of the hip (DDH) is the most common pediatric hip disorder, representing a spectrum of hip instabilities from mild to complete dislocation. Routine DDH clinical examinations consist of two parts: static assessment, for evaluating acetabular morphologies with ultrasound (US), and dynamic assessment, for detecting abnormal hip instabilities by applying stress to the joint and feeling the resulting movement. Several recent works have shown that 3D US computer-aided methods significantly reduce dysplasia metrics’ variability by 70% compared to standard 2D approaches. However, identifying adequate diagnostic US volumes is a challenging task and dynamic assessment has been shown to be relatively unreliable. In this thesis, we propose automated techniques to classify 3D US scan adequacy and a repeatable method for quantifying femoral head displacement observed during dynamic assessment with 3D US. To automatically classify scan adequacy, we developed and evaluated three near real-time deep learning techniques that build upon each other, from individual slice-by-slice categorization with a convolutional neural network to long range inter-slice analysis with a recurrent neural network. Our contributions include developing effective criteria that define the features required for DDH diagnosis in an adequate 3D US volume, proposing an efficient architecture for robust classification, and validating our model's agreement with expert radiologist labels. We achieved 80% per volume accuracy on a test set of 20 difficult-to-interpret volumes and a runtime of two seconds.

To quantify dynamic assessment, we propose an automatic method of calculating the observed degree of movement through a novel 3D femoral head coverage displacement metric. We designed and conducted a clinical study to record dynamic assessment manoeuvres with 3D US on a cohort of 40 pediatric patients. We evaluated our 3D femoral head coverage displacement metric and found a good degree of repeatability with a test-retest ICC measure of 0.70 (95% CI: 0.51 to 0.83, p
In Canada, plain X-ray machines are operated by medical radiation technologists (MRTs). During their training, students learn how to produce radiographs on models rather than human subjects to avoid radiation exposure, and learn patient positioning with human subjects, but are not permitted to produce X-ray images of these subjects. In the latter case, instructors evaluate students’ patient positioning by visual inspection only, since no images are produced for evaluation. Therefore, students do not receive visual feedback on the correlations between patient positioning and the resulting radiographs and are limited to learning to evaluate radiographs primarily through studying reference images in textbooks. Our goal was to develop a training system that would generate estimated radiographs based on real-time measurements of joint positions of live human subjects, to better prepare students for their initial clinical experiences. Our system combines real-time tracking of a live patient stand-in with CT modeling and virtual radiographs generated on-the-fly as the patient stand-ins are repositioned by MRTs. We performed a 16-participant user study to determine if this Virtual Patient X-ray (VPX) system would improve novice students’ ability to learn and perform proper patient positions. Participants were trained to position elbow and knee radiographic views using both VPX and conventional methods in which no visual feedback was provided. Patient positioning, expert evaluation scores, training time and survey results were measured to evaluate if training with VPX improved participants’ learning compared to conventional methods. We successfully designed a system generating near-real-time virtual patient X-rays with a joint accuracy of 10-15°. Our user study showed a non-significant difference in evaluation scores for the elbow imaging tasks and significantly lower evaluation scores for the knee imaging tasks when using our VPX system. We also found significantly higher training times and higher confidence scores with VPX training, which leads us to believe there was positive engagement and user-learning interaction with the system. We conclude VPX shows promise for use in medical radiology classrooms for improving patient positioning skills. VPX could also serve as an effective visualization tool to complement the instructor’s feedback during in-class lessons.
Surgically repairing pelvic fractures is complex and includes intensive use of ionizing X-rays. The surgery is prone to screw insertion errors, which can harm the patient. We aim to alleviate these issues by proposing ultrasound (US), instead of X-rays, to guide the surgery. However, US images are noisy, which makes them difficult to use intraoperatively. This thesis presents segmentation and registration methods aimed at making a US-based pelvic fracture repair technique possible.

We perform a scoping review of US bone segmentation literature to better understand the current state of the field. We find a lack of consistency in validation practices, especially in quantifying segmentation accuracy. We also recommend techniques based on clinical requirements. We then develop a three-dimensional (3D) US bone segmentation technique, 'Shadow Peak' (SP), which uses simplified analysis of shadow and peak intensity information in US. In a full-sized pelvic phantom study and on pilot in-vivo pediatric data, we demonstrate SP to be more accurate than two state-of-the-art segmentation methods: phase symmetry (PS) and confidence-weighted structured phase symmetry (CSPS). SP achieves a mean F-score of 63% compared to 54% for PS and 34% for CSPS on phantom data, and 94% on in-vivo data compared to 70% and 72% for PS and CSPS. SP is real-time, with a mean runtime of 0.48 s per volume, compared to 18.1 s and 21.95 s for PS and CSPS.

Thirdly, we develop a registration pipeline for aligning tracked US and preoperative CT, using the normalized cross-correlation (NCC) similarity metric. We find NCC-based registration is more accurate and robust than Gaussian Mixture Model (GMM) and Coherent Point Drift (CPD) point-set registrations, two methods previously used for US-CT bone registration. SP segmentation with NCC registration achieves a mean target registration error of 3.22 mm, compared to 3.89 mm with CPD, while GMM registration typically fails. All methods are evaluated on a full-sized pelvic phantom containing soft-tissue details.

Our proposed methods are fast and accurate as tested on phantom and in-vivo datasets, and have the potential to make ultrasound-based pelvic fracture guidance practically feasible. Moreover, our US bone segmentation review is useful for guiding future studies in the field.
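A rough, simplified sketch of the shadow-and-peak idea (an interpretation for illustration, not the thesis' exact algorithm): for each scanline, the bone surface is taken as the brightest reflection that is followed by a dark acoustic shadow, scored as peak intensity weighted by the darkness below it.

```python
import numpy as np

def shadow_peak_surface(bmode, intensity_thresh=0.3):
    """For each image column (scanline), return the row index of the most likely bone
    surface: a bright peak with a low mean intensity (acoustic shadow) beneath it.
    Returns -1 for columns with no confident detection. Simplified illustration only."""
    img = bmode.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    n_rows, n_cols = img.shape
    surface = np.full(n_cols, -1, dtype=int)
    for c in range(n_cols):
        col = img[:, c]
        # mean intensity below each depth (last row has nothing below it)
        below_mean = np.array([col[r + 1:].mean() if r < n_rows - 1 else 1.0
                               for r in range(n_rows)])
        score = col * (1.0 - below_mean)       # bright AND shadowed underneath
        r_best = int(np.argmax(score))
        if col[r_best] > intensity_thresh:
            surface[c] = r_best
    return surface
```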
Stroke is the leading cause of disability in North America. Fifty-four percent of stroke survivors suffer from upper body hemiparesis, a weakness that limits the client’s ability to perform functional tasks with the affected side of the body. Stroke rehabilitation aims to recover limb mobility through thousands of repeated functional movements that lead to neural regeneration. However, time constraints in clinical rehabilitation lead to an average of 32 arm repetitions per session, which is insufficient for optimal recovery. Accurate monitoring of client activity outside of the clinical setting could enable therapists to track what clients do at home, improving recovery. To address this problem, we have designed the Arm Rehabilitation Monitor (ARM), a wrist-worn device that collects movement data in unconstrained environments, and processes it offline to identify reach actions. Reach actions were identified as functionally meaningful tasks that lead to better rehabilitation. We enrolled 15 participants with mild to moderate hemiparesis due to stroke to perform two activities: (1) a functional assessment of the arm, and (2) an activity of daily living (ADL) task that consisted of making a pizza. The data recorded by the ARM's inertial measurement unit (IMU) during both activities was used to train three different machine learning algorithms (Random Forest, Convolutional Neural Networks and Shapelets) to detect reaching gestures.

We found that the ARM obtained the best results with the Random Forest (RF) and CNN algorithms. The CNN algorithm had the best F1-score (0.523) for the Clinic-Home inter-subject tests, while the RF algorithm obtained the best score (0.486) in the Clinic-Home intra-subject configuration. We used the ARM to estimate the time spent reaching and the number of reach counts. The CNN algorithm predicted the reach time for the Clinic-Home inter-subject tests to be 1.07x (± 0.55x) the true reach time and the reach counts to be 1.28x (± 0.40x) the true number of reach gestures. In turn, the RF algorithm predicted the reach time for the Clinic-Home intra-subject configuration to be 1.16x (± 0.84x) and the reach counts to be 1.26x (± 0.40x). Both results have a smaller standard deviation when estimating reach counts than a comparable commercial accelerometer worn on the wrist.
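As a sketch of one way such a detector can be built (illustrative windowing, features, and labels; not the thesis' exact pipeline), sliding windows of the IMU signals are summarized with simple statistics and fed to a random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(imu, window=100, step=50):
    """Summarize each sliding window of a (samples x channels) IMU recording with
    per-channel mean, standard deviation, and peak-to-peak range."""
    feats = []
    for start in range(0, len(imu) - window + 1, step):
        w = imu[start:start + window]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.max(0) - w.min(0)]))
    return np.array(feats)

# Toy example: 6-channel IMU (3-axis accelerometer + 3-axis gyroscope) at 50 Hz.
rng = np.random.default_rng(0)
imu = rng.standard_normal((5000, 6))
X = window_features(imu)
y = rng.integers(0, 2, size=len(X))          # illustrative reach / no-reach labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```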
Mobile C-arm X-ray machines are commonly used for imaging during orthopaedic surgeries to visualize internal anatomy during procedures. However, there is evidence indicating that excess operating time and radiation exposure result from the use of scouting images to aid C-arm positioning during surgery. Additionally, C-arms are currently used primarily as a qualitative tool. Several techniques have been proposed to improve positioning, reduce radiation exposure, and increase quantitative utility, but they require accurate C-arm position tracking. There have been attempts by other research groups to develop C-arm tracking systems, but there are currently no solutions suitable for use in an operating room. The objective of this thesis is therefore to present the development and verification of a real-time C-arm base-tracking system called OPTIX (On-board Position Tracking for Intraoperative X-rays).

The proposed tracking system uses a single floor-facing camera mounted to the base of a C-arm. A computer vision algorithm was developed that tracks motion relative to the operating room floor. This system is capable of relative motion tracking as well as absolute position recovery for previous positions.

The accuracy of the system was evaluated on a real C-arm in a simulated operating room. The experimental results demonstrated that the relative tracking algorithm can measure C-arm translation with errors of less than 0.75% of the total distance travelled, and orientation with errors better than 5% of the cumulative rotation. With the incorporated loop closure step, OPTIX can be used to achieve C-arm repositioning with translation errors of less than 1.10 ± 0.07 mm and rotation errors of less than 0.17 ± 0.02°. These results are well within the desired system requirements of 5 mm and 3.1°.

The system has shown promising results for use as a C-arm base-tracking system. The system has clinically acceptable accuracies and should lead to a reduced need for scouting images when re-obtaining a previous position. The base-tracking system can be integrated with a C-arm joint tracking system, or implemented on its own for steering guidance. When implemented in an operating room, OPTIX has the potential to lead to a reduction in operating time and harmful radiation exposure to surgical staff.
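One common way to build such relative floor tracking is feature-based frame-to-frame matching; the sketch below (using OpenCV, with an assumed millimetres-per-pixel scale factor) estimates the 2D rigid motion between consecutive floor images. This is an illustrative approach, not necessarily the algorithm implemented in OPTIX.

```python
import cv2
import numpy as np

def frame_to_frame_motion(prev_gray, curr_gray, mm_per_pixel=0.2):
    """Estimate in-plane translation (mm) and rotation (deg) of the camera between two
    floor images using ORB features and a RANSAC partial-affine fit."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 10:
        return None
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(p1, p2, method=cv2.RANSAC)
    if M is None:
        return None
    dtheta = np.degrees(np.arctan2(M[1, 0], M[0, 0]))      # rotation about the floor normal
    dx, dy = M[0, 2] * mm_per_pixel, M[1, 2] * mm_per_pixel
    return dx, dy, dtheta   # accumulate these per frame to track the C-arm base pose
```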
Fluoroscopic C-arms are operated by medical radiography technologists (MRTs) in Canadian operating rooms (ORs). Newly trained MRTs often experience most of their practical learning curve with C-arms in the OR, where achieving the radiographic views requested by surgeons can be challenging. New MRTs often require several scout X-rays during C-arm positioning, resulting in unnecessary radiation exposure and added OR time. To address this problem we have designed an Artificial X-ray Imaging System (AXIS) in order to assess the utility of artificial X-rays in improving the C-arm positioning performance by inexperienced users. AXIS is designed to generate Digitally Reconstructed Radiographs (DRRs), or artificial X-ray images, based on the relative position of a C-arm and manikin. We enrolled 30 participants into our user study, each of whom performed four activities: an introduction session, an AXIS-guided evaluation, a non-AXIS-guided evaluation, and a questionnaire. The main goal of the study was to compare C-arm positioning performance with and without AXIS guidance. For each evaluation, the participants had to replicate a set of target X-ray images by taking real radiographs of the manikin with the C-arm. During the AXIS evaluation, artificial X-rays were generated at 2 Hz for guidance, while in the non-AXIS evaluation, the participants had to acquire real X-rays to guide them toward the correct view. We recorded the number of real X-rays and time required per task, as well as tracked the C-arm’s pose and compared it to the target pose to determine positioning accuracy. We found that users required 53% fewer scout X-rays and achieved 10% better C-arm displacement accuracies when guided by AXIS, without requiring more time to complete the imaging tasks. From the questionnaires we found that, on average, participants felt significantly more confident in their ability to capture correct anatomical views when they were guided by AXIS. Moreover, the participants found the usefulness of AXIS in guiding them to the desired view to be ‘very good’. Overall, we are encouraged by these findings and plan to further develop this system with the goal of deploying it both for training and intraoperative uses.
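A minimal sketch of the kind of digitally reconstructed radiograph (DRR) generation described above, using a simple parallel-projection approximation: rotate the CT attenuation volume to the current C-arm orientation, integrate along the beam axis, and apply Beer-Lambert attenuation. The real-time AXIS renderer is not reproduced here; the geometry, units, and attenuation value are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def parallel_drr(ct_mu, gantry_deg=0.0, voxel_mm=1.0):
    """Parallel-beam DRR: rotate the attenuation volume (1/mm) about the cranio-caudal
    axis by the C-arm angle, integrate along the beam axis, and convert line integrals
    to image intensities with an exponential attenuation model."""
    vol = rotate(ct_mu, angle=gantry_deg, axes=(1, 2), reshape=False, order=1)
    line_integrals = vol.sum(axis=1) * voxel_mm      # integrate along the beam direction
    return np.exp(-line_integrals)                   # brighter where less attenuation

# Toy example: a water-like cube inside air, viewed at 30 degrees.
mu = np.zeros((64, 64, 64))
mu[20:44, 20:44, 20:44] = 0.02                       # roughly water attenuation (1/mm)
drr = parallel_drr(mu, gantry_deg=30.0)
print(drr.shape, drr.min(), drr.max())
```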
Purpose: This thesis comprised two main phases. Initial work focused on clarifying the need and use case for a novel device to measure drilled bore depth in bone during osteosynthesis surgery. Next, I demonstrated the feasibility and reliability of an optical sensing device for automatic measurement of drilled bore depth in bone during surgery compared with conventional methods.
Methods: I completed a structured Needs Assessment followed by an Engineering Design process to develop a series of prototypes using laser displacement sensors mounted on a surgical drill to determine drilled bore depth in bone. In all versions of the prototypes, bore depth was computed based on a characteristic pattern of drilling velocity in bicortical bone. Prototypes consisted of one or more laser displacement sensors sending displacement and time data to a microprocessor and then a personal computer. After data filtering with a second-order Butterworth filter, velocity and acceleration were calculated using differentiation and double differentiation. Characteristic spikes in velocity and acceleration indicated cortical breach and allowed identification of bore depth. Exploratory experiments were done with multiple sensor arrangement concepts in porcine long bones, and more rigorous final evaluation experiments were done with the lead designs in pig hind limbs with comparison to CT scan as the ‘gold standard’.
Results: In exploratory experiments, a design involving two laser displacement sensors angled towards the drilling axis, measuring distance from a mock drill guide, performed better than alternative designs. In final evaluation experiments this design showed superior performance to the conventional depth gauge under three clinically relevant drilling conditions (standard deviation 0.70 mm vs. 1.38 mm, 0.86 mm vs. 3.79 mm, 0.80 mm vs. 3.19 mm). A positive bias was present in all drilling conditions.
Conclusions: An optical sensing device can be used to measure bore depth in bone during surgery.
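A rough sketch of the signal-processing chain described in the Methods (filter, differentiate, find the breach-related spikes); the cutoff frequency, sampling rate, and peak-detection heuristic are illustrative values rather than the thesis' tuned parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bore_depth_from_displacement(displacement_mm, fs_hz=1000.0, cutoff_hz=20.0):
    """Estimate bore depth from drill displacement: low-pass filter (2nd-order
    Butterworth), differentiate to velocity/acceleration, and take the displacement
    at the far-cortex breach, detected here as the last large acceleration spike."""
    b, a = butter(2, cutoff_hz, btype="low", fs=fs_hz)
    x = filtfilt(b, a, displacement_mm)
    v = np.gradient(x) * fs_hz                 # mm/s
    acc = np.gradient(v) * fs_hz               # mm/s^2
    peaks, _ = find_peaks(np.abs(acc), height=5 * np.std(np.abs(acc)))
    if len(peaks) == 0:
        return None
    breach_idx = peaks[-1]                     # far-cortex breach (illustrative heuristic)
    return float(x[breach_idx] - x[0])         # depth drilled at breach (mm)
```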
“You can’t improve it unless you can measure it” is a common sentiment in engineering. For total knee replacement patients, failed implants requiring revision surgery are a significant risk. Our long-term goal, therefore, is to develop and evaluate a protocol that will allow us to accurately measure the full 3D position of an implant in the early post-surgical period in order to detect signs of relative motion occurring between implant and bone. By doing this, we will be able to gain insights into the failure mechanisms behind total knee replacement implants. The 'gold standard' method for measuring relative motion is known as Roentgen Stereophotogrammetric Analysis (RSA) – a technique which extracts 3D information about the implant and bone positions from two roughly orthogonal radiographs. This information can be used to quantify the migration of an implant over time to submillimeter accuracy, a metric that has been shown to reliably predict implant longevity in patients (Pijls 2012). Unfortunately, commercial RSA systems are expensive, which has limited their use in clinical settings. Our goal in this project was to develop an RSA protocol based on C-arm fluoroscopy machines, many of which already exist in most hospitals. We successfully developed such a protocol and evaluated its accuracies and precisions through a series of phantom-based verifications. Results were highly promising: accuracies ranged from -39 to 11 μm for translations and from -0.025 to 0.029° for rotations, while system precisions ranged from 16 to 27 μm and from 0.041 to 0.059°. This performance was comparable to RSA systems in the literature, where traditional and more expensive radiographic equipment was typically employed. In addition, inter-rater reliability tests also showed a high degree of correlation (ICC > 0.999) between two raters who were trained to use the protocol. We conclude that we have developed an RSA protocol appropriate for measuring relative motion of knee replacement implants in phantoms and cadaveric specimens by leveraging the use of existing C-arm technology. This research places us in position to further develop the protocol for use in extensive prospective clinical assessments – research that can potentially drive future improvements in surgical technique and implant design.
Despite being demonstrably better than conventional surgical techniques with regards to implant alignment and outlier reduction, computer navigation systems have not seen widespread adoption in surgical operating rooms. We believe that one of the reasons for the low uptake stems from the bulky design of the optical tracker assemblies. These trackers must be rigidly fixed to a patient’s bone and they occupy a significant portion of the surgical workspace, which makes them difficult to use. In this thesis we introduce the design for a new optical tracker system, and subsequently we evaluate the tracker’s performance.

The novel tracker consists of a set of low-profile flexible pins that can be placed into a rigid body and individually deflect without greatly affecting the pose estimation. By relying on a pin’s stiff axial direction while neglecting lateral deviations, we gain sufficient constraint over the underlying body. We used an unscented Kalman filter based algorithm as a recursive body pose estimator that can account for relative marker displacements.

We assessed our tracker’s performance through a series of simulations and experiments inspired by a total knee arthroplasty. We found that the flexible tracker performs comparably to conventional trackers with regards to accuracy and precision, with tracking errors under 0.3 mm for typical operating conditions. The tracking error remained below 0.5 mm during pin deflections of up to 40 mm. Our algorithm ran faster than real time at 30 Hz, which means that it would be suitable for use in real-time applications.

We conclude that this flexible pin concept provides sufficient accuracy to be used as a replacement for rigid trackers in applications where its lower profile, its reduced invasiveness and its robustness to deflection are desirable characteristics.
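A compact sketch of a recursive pose estimator of the kind described above, using the filterpy unscented Kalman filter with a simplified rigid-marker measurement model; the flexible-pin axial-constraint model from the thesis is not reproduced, and the marker layout, noise levels, and small-angle rotation model are illustrative assumptions.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# Marker (pin tip) positions in the body frame, in mm (illustrative layout).
BODY_MARKERS = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)

def small_angle_rotation(rx, ry, rz):
    """First-order rotation matrix for small angles (radians)."""
    return np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]], float)

def hx(x):
    """Measurement model: predicted camera-frame marker positions for pose x."""
    t, r = x[:3], x[3:]
    return ((small_angle_rotation(*r) @ BODY_MARKERS.T).T + t).ravel()

def fx(x, dt):
    """Process model: constant pose between frames (random-walk motion)."""
    return x

points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=BODY_MARKERS.size, dt=1 / 30,
                            hx=hx, fx=fx, points=points)
ukf.x = np.zeros(6)
ukf.P *= 10.0
ukf.Q = np.eye(6) * 0.01                      # process noise: slow drift of the tracked body
ukf.R = np.eye(BODY_MARKERS.size) * 0.1**2    # ~0.1 mm marker localization noise

# Each camera frame: predict, then update with the measured marker positions.
true_pose = np.array([1.0, -2.0, 0.5, 0.01, 0.0, -0.02])
measured = hx(true_pose) + np.random.normal(0, 0.1, BODY_MARKERS.size)
ukf.predict()
ukf.update(measured)
print(ukf.x)   # estimated [tx, ty, tz, rx, ry, rz]
```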
The success of many orthopaedic procedures relies on the accurate and timely machining of bone, which can be difficult to achieve. Errors during machining can negatively affect implant placement or cause neurovascular injury. Bracing can improve the performance of both humans and machines during a variety of interactive tasks such as writing and grinding. The purpose of this thesis was to assess the feasibility of braced computer assisted orthopaedic surgery by testing the influence of bracing on the performance of a surgically relevant task.

We developed a computer assisted orthopaedic surgery research system and experimental bracing devices for two surgical drilling tasks: navigated targeting and cortical drilling. The performance of each device was tested in a user study with 25 (13 male, 12 female) non-expert subjects.

In the navigated targeting task, subjects aligned a drill bit with a randomly generated trajectory while using a rigid brace to support the forearm and two different versions of guidance displays to provide visual feedback: a 2D axial display and a 3D-perspective display. Bracing reduced variation within- and between-trials, but did not affect final accuracy or targeting speed. There was a significant increase in final radial (170 %, 95% CI: 140–210 %) and angular error (350 %, 95% CI: 300–400 %) with the 3D-perspective display.

In the cortical drilling task, subjects attempted to minimize plunge of the drill bit after breakthrough. An experimental damper-based bracing device was designed by developing a numerical model to predict drill plunge, extending the model to predict the behaviour with bracing, and estimating an optimal brace damping range. Subjects drilled through oak workpieces using a standard high speed steel drill bit and a brad point drill bit at 4 damping levels. At a level of 10 Ns/mm, there was a significant decrease in plunge depth of 74% (95% CI: 71–76 %) and no significant difference in drilling duration.

This thesis provides experimental evidence that a simple bracing strategy can improve the performance of a clinically relevant task; applying bracing to computer assisted orthopaedic surgery may be an effective way to improve performance and warrants further investigation.
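As an illustration of the kind of plunge model mentioned above, here is a minimal lumped-parameter sketch: after breakthrough the drill is modelled as a mass driven by the residual feed force and resisted by viscous brace damping, integrated until an assumed operator reaction time. All parameter values are illustrative assumptions, not those identified in the thesis.

```python
def plunge_depth(feed_force_n=15.0, moving_mass_kg=2.0, damping_ns_per_mm=10.0,
                 initial_velocity_mm_s=2.0, reaction_time_s=0.05, dt=1e-4):
    """Euler integration of m*a = F - c*v after breakthrough, until the operator reacts.
    Returns plunge depth in mm."""
    c = damping_ns_per_mm * 1e3            # N*s/mm -> N*s/m
    x, v = 0.0, initial_velocity_mm_s / 1e3
    for _ in range(int(reaction_time_s / dt)):
        a = (feed_force_n - c * v) / moving_mass_kg
        v += a * dt
        x += v * dt
    return x * 1e3                          # metres -> mm

# Compare an unbraced (undamped) condition against two brace damping levels.
for c in (0.0, 1.0, 10.0):
    print(f"damping {c:4.1f} Ns/mm -> plunge = {plunge_depth(damping_ns_per_mm=c):.2f} mm")
```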
Bracing is defined as a parallel mechanical link between a tool user, the environment, and/or the workpiece that alters the mechanical impedance between the tool and workpiece with the goal of improving task performance. Bracing is used in a variety of settings including robotics/automation and more recently in medicine/dentistry; however, it remains relatively understudied in formal ways. This thesis explored whether bracing could be beneficial in a current orthopaedic problem. We selected a candidate orthopaedic procedure based on selection criteria that included three degrees of freedom, and the ability to abstract/simulate the surgical task using phantom tests.

Femoral head-neck osteochondroplasty is used to treat a deformity of the anterosuperior femoral head-neck region called cam-type femoroacetabular impingement. During this procedure a surgeon uses a spherical burr to remove the cam lesion and restore the normal contour of the femoral head-neck. The goal of this thesis was to evaluate whether a proposed bracing technique could enable a user to perform a cam resection more accurately and quickly than a currently employed arthroscopic technique.

We first performed a pilot study with 4 subjects to examine the impact of bracing on simulated bone milling and found that bracing could reduce errors on the order of 7-14% and procedure length on the order of 30-50%, but these findings were limited by a small sample and effect size. Workspace issues with the brace indicated the need for a redesign, which we combined with the creation of a higher fidelity surgical simulation. We showed that the most effective brace design projected a remote center of motion combined with a spring for axial stiffness.

This improved brace design was tested using 20 non-surgeons and 5 surgeons. While bracing had no detectable effect on the surgeon population, bracing reduced procedure length and error by 37% and 27% respectively in the non-surgeon population when compared to the unbraced condition. Unfortunately, when compared to the surgical simulation condition, there was no detectable effect of bracing. This finding suggests that an optimal level of bracing may exist but how to experimentally determine this level remains a topic for future study.
This research presents a new biologically motivated robotic model of the human eye. The model incorporates aspects of the anatomy that are functionally important for understanding biological oculomotor systems. The 3-DOF robotic eye is driven by 6 DC motors through low-friction Dyneema cables. The DC motors represent muscle actuation while the Dyneema cables represent the 6 extraocular muscles (EOMs). The globe’s natural orbital support is emulated by a low-friction gimbal structure that supports the eye on the anteroposterior axis at the back of the globe, where there is no tendon interference. Moreover, we have used Buckingham Π dimensional analysis to scale the geometric and dynamic properties of the biological eye according to the model’s specified dimensions and inertia. Lastly, to confirm the functionality of the eye and to verify that the initial design requirements have been satisfied, we have implemented a controller design to drive this redundant (6 actuators, 3 DOF) system.

The presented robotic eye model is to be employed as a test bed for testing theories about oculomotor control. Furthermore, this system could also be used to assess proposed surgical corrections for various oculomotor diseases.
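As a brief, generic illustration of the dimensional-analysis step (the symbols and the specific scaling below are illustrative, not the thesis' derivation): the moment of inertia of a geometrically similar body scales with material density and the fifth power of length, so a model built at geometric scale factor λ satisfies

\[
\lambda = \frac{L_{\text{model}}}{L_{\text{eye}}}, \qquad
I_{\text{model}} = I_{\text{eye}}\,\frac{\rho_{\text{model}}}{\rho_{\text{eye}}}\,\lambda^{5}, \qquad
\tau_{\text{model}} = \tau_{\text{eye}}\,\frac{I_{\text{model}}}{I_{\text{eye}}},
\]

where the torque relation follows from τ = Iα if the model is required to reproduce the same angular accelerations as the biological eye.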
Pelvic fractures are serious injuries that are most commonly caused by motor vehicle accidents and affect people of all ages. Surgeries to realign the pelvis and fix the bone fragments with screws have inherent risks and rely on cumbersome intra-operative radioscopic imaging methods. Ultrasound (US) is emerging as a desirable imaging modality to replace fluoroscopy as an intra-operative tool for pelvic fracture surgery because it is safe, portable and inexpensive. Despite the many advantages of US, it suffers from speckle noise, a limited field of view and a low signal-to-noise ratio. Therefore, we must find a way to efficiently process and utilize ultrasound data so that it can be used to effectively visualize bone. In the past decade, there has been much research focused on fusing US with pre-operative Computed Tomography (CT) to be used in an intra-operative guidance system; however, current methods are either too slow or not robust enough to use in a clinical setting. We propose a method to automatically extract bone features in US and CT volumes and register them using a fast point-based method. We use local phase features to estimate the bone surfaces from B-mode US volumes. We simplify the bone surface using particle simulation, which we optimize using the hierarchical Barnes-Hut algorithm. To ensure the point cloud best represents the bone surface, we reinforce them with high curvature features. We then represent the point clouds using Gaussian Mixture Models (GMMs) and find the correspondence between them by minimizing a measure of distance between the GMMs. We have validated our proposed algorithm on a phantom pelvis and clinical data acquired from pelvic fracture patients. We demonstrate a registration runtime of 1.4 seconds and registration error of 0.769 mm.
Member of G+PS