Date of Award

2019-01-01

Degree Name

Doctor of Philosophy

Department

Metallurgical and Materials Engineering

Advisor(s)

Wei Qian

Second Advisor

Tzu-Liang (Bill) Tseng

Abstract

With the development of image analysis techniques, a flurry of associated applications has emerged. In this dissertation, image analysis at both the organ and cellular levels is demonstrated. For organ-level images, a deep learning based computer-aided lung cancer diagnosis system using computed tomography (CT) images is studied. Deep learning techniques have been used extensively in computerized pulmonary nodule analysis in recent years. Many reported studies still rely on hybrid methods for diagnosis, in which convolutional neural networks (CNNs) serve as only one part of the pipeline, and the overall system still needs either traditional image processing modules or human intervention to obtain the final results. In this work, we introduce a fast and fully automated end-to-end system that can efficiently segment precise lung nodule contours from raw thoracic CT scans. Our proposed system has three major modules: candidate nodule detection with a Faster regional CNN (Faster R-CNN), candidate merging and false positive (FP) reduction with a CNN, and nodule segmentation with a customized fully convolutional network (FCN). The entire system requires no human interaction or database-specific design. The average runtime is about 16 seconds per scan on a standard workstation. The nodule detection accuracy is 91.4% and 94.6% at an average of 1 and 4 false positives (FPs) per scan, respectively. The average Dice coefficient of nodule segmentation compared to the ground truth is 0.793.
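The minimal sketch below illustrates only the three-module structure described above (candidate detection, false-positive reduction, segmentation), assuming PyTorch/torchvision. The backbone choice, 64x64 patch size, and 0.5 thresholds are illustrative assumptions, not the dissertation's actual architectures or parameters.

```python
# Structural sketch of a three-stage nodule pipeline (illustrative, not the
# dissertation's exact networks): detect candidates on a CT slice, reject
# false positives with a small CNN, then segment the surviving patches.
import torch
import torch.nn as nn
import torchvision


class NodulePipeline(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: candidate nodule detection on 2-D CT slices (Faster R-CNN).
        self.detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
            weights=None, num_classes=2)
        # Stage 2: small CNN scoring each candidate patch as nodule vs. false positive.
        self.fp_reducer = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )
        # Stage 3: fully convolutional head producing a per-pixel nodule mask.
        self.segmenter = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    @torch.no_grad()
    def forward(self, slices):
        # slices: list of (1, H, W) float tensors, one per CT slice, values in [0, 1].
        results = []
        for s in slices:
            dets = self.detector([s.repeat(3, 1, 1)])[0]  # detector expects 3 channels
            for box in dets["boxes"]:
                x0, y0, x1, y1 = (int(v) for v in box)
                patch = s[:, y0:y1, x0:x1]
                if patch.numel() == 0:
                    continue
                patch = nn.functional.interpolate(patch[None], size=(64, 64))
                keep = torch.sigmoid(self.fp_reducer(patch)) > 0.5  # FP reduction
                if keep:
                    mask = torch.sigmoid(self.segmenter(patch)) > 0.5
                    results.append((box, mask))
        return results


pipeline = NodulePipeline().eval()
candidates = pipeline([torch.rand(1, 128, 128)])
print(len(candidates), "candidate nodules kept")
```

Chaining the stages this way keeps inference end-to-end: every detector box is either rejected by the FP-reduction CNN or passed directly to the segmentation head, with no manual step in between.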

For cellular-level images, we studied localization algorithms for super-resolution localization microscopy to improve resolution. Localization algorithms play a significant role in determining the accuracy of super-resolution fluorescence imaging. A primary challenge is that choosing the right algorithm depends on the user's prior knowledge of their specific imaging system. We introduce a Deep Matching method that combines convolutional neural networks, which process the raw images, with several conventional localization algorithms that calculate the fluorophore positions. This method not only improves the localization accuracy but also removes the dependence of that accuracy on the algorithm chosen by the user. Our results also indicate the possibility of overcoming the practical limit of the Cramér-Rao lower bound in the low signal-to-noise ratio regime with Deep Matching processed images. Furthermore, because the point spread function (PSF) in defocused images has a ring structure, such images can be used to localize the 3D position of single particles by calculating the ring center (x and y) and radius (z). Since there is no well-developed mathematical model for a defocused PSF, it is difficult to apply fitting-based algorithms to such images. A new particle localization algorithm based on radial symmetry and ellipse fitting is developed to localize the centers and radii of defocused PSFs. Our method can localize the 3D position of a fluorophore within 20 nm precision in all three dimensions over a range of 40 µm in the z dimension from defocused 2D images.
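As a concrete illustration of ring-center and radius estimation for a defocused PSF, the sketch below uses an algebraic (Kåsa) circle fit to the brightest ring pixels. The dissertation's radial-symmetry and ellipse-fitting algorithm is more general; the function name, threshold quantile, and synthetic test here are assumptions for demonstration only.

```python
# Simplified sketch: estimate the center (x, y) and radius of a ring-shaped
# defocused PSF by an algebraic (Kasa) circle fit to bright ring pixels.
import numpy as np


def fit_ring(image, quantile=0.95):
    """Return (cx, cy, radius) of the bright ring in a 2-D image."""
    ys, xs = np.nonzero(image > np.quantile(image, quantile))  # bright ring pixels
    # Kasa fit: solve x^2 + y^2 + D*x + E*y + F = 0 in least squares.
    A = np.column_stack([xs, ys, np.ones_like(xs)]).astype(float)
    b = -(xs.astype(float) ** 2 + ys.astype(float) ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, radius


# Synthetic test: a noisy ring of radius 12 px centered at (40, 30).
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 40, yy - 30)
img = np.exp(-((r - 12.0) ** 2) / 4.0) + 0.05 * np.random.rand(64, 64)
print(fit_ring(img))  # approximately (40, 30, 12)
```

In practice the fitted center gives the lateral (x, y) position, while the recovered radius would be mapped to z through a calibration curve measured for the specific imaging system.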

Language

en

Provenance

Received from ProQuest

File Size

116 pages

File Format

application/pdf

Rights Holder

Xia Huang

Included in

Biomedical Commons
