Adaptive Microphone Array Systems with Neural Network Applications
A microphone array integrated with a neural network framework is proposed to enhance speech signals captured in environments degraded by noise and by room reflections that cause reverberation. Microphone arrays capture spatial acoustic information that helps separate voice input from ambient noise. In this study, we combine established signal processing methods with different neural network architectures to produce denoised and dereverberated speech signals comparable to their clean, anechoic versions. The first stage of the proposed system builds the datasets: anechoic recordings of speech utterances are convolved with a collection of room impulse responses to simulate reverberation, and noise signals are added to the clean speech to simulate noisy environments. These datasets are used for training and testing the speech enhancement methods of the combined system. The goal of this work is to remove reverberation from voice signals and achieve an adaptable, optimal signal-to-noise ratio under both stationary and non-stationary interference, including white noise, impulse noise, and reverberation. The convolutional neural network (CNN) and fully connected neural network (FCNN) architectures are structured to optimize learning and other performance metrics through choices such as layer connections, layer types, number of neurons per layer, activation functions, and cost functions. While preserving signal integrity, this work aims to enhance the quality and intelligibility of the extracted speech. Our experimental results show that, while more computationally intensive, the CNN outperformed the FCNN in training performance and audible quality.
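The dataset-construction stage described above (convolving anechoic speech with a room impulse response, then mixing in noise at a target signal-to-noise ratio) can be sketched in plain Python. This is a minimal illustration, not the thesis's actual pipeline: the function names, the two-tap impulse response, and the 10 dB target are all hypothetical, and a real system would use measured or simulated room impulse responses and recorded noise at audio sample rates.

```python
import random

def convolve(signal, impulse_response):
    """Full linear convolution of two sequences (models room reverberation)."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def add_noise(clean, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR in dB, then add it."""
    p_clean = sum(x * x for x in clean) / len(clean)
    p_noise = sum(n * n for n in noise) / len(noise)
    scale = (p_clean / (p_noise * 10 ** (snr_db / 10))) ** 0.5
    return [x + scale * n for x, n in zip(clean, noise)]

# Hypothetical example: a short "utterance", a toy two-tap impulse response
# (direct path plus one reflection), and white Gaussian noise at 10 dB SNR.
clean = [1.0, 0.0, -1.0, 0.5]
rir = [1.0, 0.3]
reverberant = convolve(clean, rir)
rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in reverberant]
noisy_reverberant = add_noise(reverberant, noise, snr_db=10.0)
```

The pair (`noisy_reverberant`, `clean`) is exactly the kind of input/target example a speech enhancement network trains on.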
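One structural difference behind the CNN/FCNN comparison is weight sharing: a fully connected layer assigns a weight to every input/output pair, while a convolutional layer reuses one small kernel across all positions. The sketch below counts per-layer parameters for hypothetical sizes (a 512-sample frame, a 9-tap kernel, 16 filters) chosen only for illustration; note that fewer parameters does not imply less computation, since the shared kernel is applied at every output position.

```python
def fc_layer_params(n_in, n_out):
    """Weights plus biases for a fully connected layer:
    every input connects to every output neuron."""
    return n_in * n_out + n_out

def conv1d_layer_params(kernel_size, n_filters, n_in_channels=1):
    """Weights plus biases for a 1-D convolutional layer:
    one kernel per filter, shared across all time steps."""
    return kernel_size * n_in_channels * n_filters + n_filters

# Hypothetical sizes for a frame-based speech enhancement layer.
fc = fc_layer_params(512, 512)      # 262,656 parameters
conv = conv1d_layer_params(9, 16)   # 160 parameters
```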
Computer Engineering | Artificial Intelligence
Covarrubias, Jazmine Marisol, "Adaptive Microphone Array Systems with Neural Network Applications" (2019). ETD Collection for University of Texas, El Paso. AAI27671627.