Master of Science
Machine learning continues to evolve as applications become more complex. Neural networks, or deep networks, are integral to machine learning and the broader taxonomy of artificial intelligence [Sze17]. Intelligent structures and algorithms continue to advance, keeping pace with the complexity of data. Changes in architecture, algorithms, and parameters are necessary to keep up with computational complexity and the data available. This study focuses on how changes in the depth of an architecture affect performance on three distinct datasets, including one on Heart Disease. An adaptable network is created in original code, trained, and tested on these datasets. Its performance parameters are observed in order to better understand when it is necessary to add depth, and how much depth to add, for each dataset. The comparison indicates that there are tradeoffs, and that users must understand the balance between the complexity of the problem and the complexity of the architecture in order to use these architectures effectively.
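The abstract describes a network whose depth is an adjustable parameter. As a minimal sketch of that idea (not the thesis's actual code; all function names, layer widths, and the 13-feature input size are illustrative assumptions), a multilayer perceptron can be built with a `depth` argument that controls the number of hidden layers:

```python
import numpy as np

def build_mlp(n_in, n_hidden, depth, n_out, seed=0):
    """Return weight matrices for an MLP with `depth` hidden layers.

    Hypothetical depth-adaptable network; layer sizes are illustrative.
    """
    rng = np.random.default_rng(seed)
    sizes = [n_in] + [n_hidden] * depth + [n_out]
    # He-style scaling for the ReLU hidden layers
    return [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """Forward pass: ReLU on hidden layers, linear output."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ weights[-1]

# Compare a shallow and a deeper variant on the same batch of inputs;
# only the depth argument changes, so the two can be trained and
# evaluated on identical data to study the depth/performance tradeoff.
x = np.ones((4, 13))  # 13 input features, chosen here for illustration
shallow = build_mlp(13, 16, depth=1, n_out=2)
deep = build_mlp(13, 16, depth=4, n_out=2)
print(forward(shallow, x).shape)  # (4, 2)
print(forward(deep, x).shape)     # (4, 2)
```

Holding everything fixed except `depth` is what makes a comparison like the one described meaningful: any performance difference can be attributed to the added layers rather than to other architectural changes.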
Received from ProQuest
Byers, Kirsten, "A Comparative Study of the Impact of Depth in Deep Learning Architectures" (2020). Open Access Theses & Dissertations. 2937.