Doctor of Philosophy
Patricia A. Nava
Artificial Neural Networks (ANNs) have been developed in an attempt to emulate the information processing capabilities of the biological brain. They offer an alternative computing approach to problems for which mathematical modeling is difficult, such as pattern recognition and pattern classification.
Since ANNs were proposed in the early 1940s, a great deal of research effort has been dedicated to developing new models that improve performance. Consequently, different architectures, a variety of activation functions, and distinct learning algorithms have been developed and applied in disciplines such as medicine, engineering, and the sciences. In addition, ANNs have been combined with other alternative computing approaches, such as fuzzy logic, genetic algorithms, and quantum computing, to create hybrid systems that improve performance at the cost of greater system complexity. However, the majority of these efforts target the network level and do not focus on the individual neuron.
This investigation focuses on the individual neuron and introduces the Divcon Neuron (DN) model, which increases the computing power of an ANN compared to the commonly used perceptron. Two additional facets derive from this research: first, a hardware implementation of the proposed model, used to compare its resource utilization against that of perceptrons; and second, a study of the model's simulation time, used to assess its computational benefits relative to the perceptron.
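For context, the conventional perceptron that serves as the baseline for comparison can be sketched as follows. This is only the standard textbook model, not the Divcon Neuron, whose internal structure is not described in this abstract; all names here are illustrative.

```python
def perceptron(inputs, weights, bias):
    """Standard perceptron: weighted sum of inputs plus a bias,
    passed through a hard-threshold activation."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0

# Example: weights and bias chosen so the unit computes logical AND.
and_weights, and_bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], and_weights, and_bias))  # fires (1) only when both inputs are 1
print(perceptron([0, 1], and_weights, and_bias))  # 0
```

A feed-forward multi-layer perceptron stacks layers of such units (typically with smooth activations such as the sigmoid in place of the hard threshold); the dissertation's proposal replaces the individual unit rather than the network topology.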
The DN proves to be an asset to performance, improving accuracy, simulation time, and efficiency.
Received from ProQuest
Saenz, Jovan, "Investigation of the Divcon Neuron to Increase the Performance of a Traditional Feed Forward Multi-Layer Perceptron and its Hardware Implementation" (2012). Open Access Theses & Dissertations. 2386.