Date of Award
5-2020
Document Type
Thesis
Degree Name
Master of Science (MS)
College/School
College of Science and Mathematics
Department/Program
Mathematical Sciences
Thesis Sponsor/Dissertation Chair/Project Chair
Amir H. Golnabi
Committee Member
Bogdan Nita
Committee Member
Ashwin Vaidya
Abstract
Data-driven modeling has attracted considerable attention in recent years. In most cases, such models use a large collection of inputs and their corresponding outputs to learn patterns in the data, so the availability of a large dataset is a prerequisite for applying them. Data-driven modeling has been employed for many tasks over the years; in medical imaging, however, its advancement has been relatively limited, mainly because of the challenges involved in collecting and analyzing clinical data. This is particularly true in ultrasound imaging for assessing fetal health and growth.
The present thesis is part of a larger project that aims to create a fully automated method for segmenting fetal structures from 3D ultrasound images. Our main goal in this project is to synthesize virtual samples that can be used to increase the learning accuracy of an automated segmentation model, an effort motivated primarily by the scarcity of collected and annotated clinical data. The work presented here consists of two parts. The first part implements a semi-automated method for segmenting the placenta from 3D fetal ultrasound images using a graph-based technique called the Random Walker (RW). The random walker method can in turn be used to annotate both synthetic and real images to establish the ground truth.
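To illustrate how such a semi-automated, seed-based random walker segmentation might be set up (a minimal sketch, not the exact pipeline developed in the thesis), the following Python example uses scikit-image's random_walker on a hypothetical 3D ultrasound volume; the file name, seed coordinates, and beta value are assumptions chosen for illustration.

    import numpy as np
    from skimage.segmentation import random_walker

    # Hypothetical 3D ultrasound volume, intensities normalized to [0, 1]
    volume = np.load("fetal_volume.npy")      # assumed shape: (depth, height, width)

    # User-provided seed labels: 0 = unlabeled, 1 = placenta, 2 = background
    seeds = np.zeros_like(volume, dtype=np.int32)
    seeds[40:45, 100:110, 100:110] = 1        # a few voxels marked as placenta (example seeds)
    seeds[0:5, 0:10, 0:10] = 2                # a few voxels marked as background (example seeds)

    # The random walker assigns each unlabeled voxel to the seed class a random
    # walk from that voxel is most likely to reach first; beta controls how
    # strongly intensity edges block the walk.
    labels = random_walker(volume, seeds, beta=130, mode="cg")
    placenta_mask = labels == 1

In practice the seeds would be placed interactively on a few slices, which is what makes the method semi-automated: only sparse user input is needed, and the graph-based propagation labels the rest of the volume.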
In the second part, we propose a 2D virtual sample generator to synthesize data that can be used to increase the learning accuracy of an automated segmentation model. This part of the project includes three main steps. First, we adopt a deep neural network pre-trained on the ImageNet dataset to encode a 2D sample (an image) into a fixed-length vector. Next, we feed these extracted vectors to a virtual sample generator to synthesize virtual 1D vectors. Finally, we build a conditional generative adversarial network (cGAN) to create a 2D synthetic sample for each of these generated vectors.
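The following Python sketch, assuming PyTorch and torchvision, illustrates two of the steps described above: encoding a 2D image into a fixed-length vector with an ImageNet-pre-trained network (here a ResNet-18 with its classifier head removed, standing in for the encoder mentioned in the abstract) and a toy conditional generator that maps a noise vector concatenated with a condition vector to a 2D image. The architecture, vector dimensions, and layer sizes are illustrative assumptions, not the networks used in the thesis.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Step 1 (illustrative): encode a 2D sample into a fixed-length 512-d vector
    # using a ResNet-18 pre-trained on ImageNet, with the final classifier dropped.
    resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    encoder = nn.Sequential(*list(resnet.children())[:-1])
    encoder.eval()

    with torch.no_grad():
        image = torch.rand(1, 3, 224, 224)    # placeholder for a preprocessed 2D sample
        code = encoder(image).flatten(1)      # fixed-length code, shape (1, 512)

    # Step 3 (illustrative): a minimal cGAN generator mapping (noise, condition)
    # to a 2D synthetic image; the condition could be a real or virtual 1D vector.
    class ConditionalGenerator(nn.Module):
        def __init__(self, noise_dim=100, cond_dim=512, img_size=64):
            super().__init__()
            self.img_size = img_size
            self.net = nn.Sequential(
                nn.Linear(noise_dim + cond_dim, 256),
                nn.ReLU(inplace=True),
                nn.Linear(256, img_size * img_size),
                nn.Tanh(),
            )

        def forward(self, noise, condition):
            x = torch.cat([noise, condition], dim=1)
            return self.net(x).view(-1, 1, self.img_size, self.img_size)

    generator = ConditionalGenerator()
    fake_sample = generator(torch.randn(1, 100), code)   # one synthetic 2D sample

A full cGAN would pair this generator with a discriminator that also receives the condition vector, so that each synthesized image remains consistent with the 1D code it was generated from.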
Recommended Citation
Vafaee, Reza, "Semi-Automated Image Segmentation and Synthesis of Virtual Samples Using Generative Adversarial Networks and Fuzzy Sets" (2020). Theses, Dissertations and Culminating Projects. 476.
https://digitalcommons.montclair.edu/etd/476