An Approach to Predicting Performance of Sparse Computations on NVIDIA GPUs
Abstract
Sparse problems arise in a variety of applications, from scientific simulations to graph analytics. Traditional HPC systems have failed to deliver high effective bandwidth for sparse problems, primarily because of the irregular memory access patterns inherent in sparse computations. We predict the performance of sparse computations given an input matrix and GPU hardware characteristics. This prediction is done by identifying hardware bottlenecks in modern NVIDIA GPUs using roofline trajectory models, which give insight into performance by simultaneously showing the effects of strong and weak scaling. We then create regression models for our benchmarks to model performance metrics, and compare the outputs of these models against empirical results. We expect our results to be useful to application developers in understanding the performance of their sparse algorithms on GPUs, and to hardware designers in fine-tuning GPU features to better meet the requirements of sparse applications.
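The roofline model the abstract refers to bounds attainable performance by the lesser of the machine's peak compute rate and the rate at which memory bandwidth can feed the kernel. A minimal sketch of that bound (the hardware numbers below are illustrative placeholders, not figures from the dissertation):

```python
def roofline_bound(arithmetic_intensity, peak_flops, peak_bandwidth):
    """Attainable performance (FLOP/s) under the classic roofline model.

    arithmetic_intensity: FLOPs performed per byte moved from memory.
    peak_flops: peak compute rate of the device (FLOP/s).
    peak_bandwidth: peak memory bandwidth (bytes/s).
    """
    return min(peak_flops, arithmetic_intensity * peak_bandwidth)

# Hypothetical GPU: 7 TFLOP/s peak, 900 GB/s bandwidth. An SpMV-like
# kernel with an arithmetic intensity near 0.25 FLOP/byte sits far
# below the compute roof, i.e. it is memory-bandwidth-bound.
bound = roofline_bound(0.25, 7e12, 900e9)  # 2.25e11 FLOP/s, well under 7e12
```

Sparse kernels typically have low arithmetic intensity, which is why they land on the bandwidth-limited slope of the roofline rather than the compute ceiling.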
Subject Area
Computer science|Information Technology|Statistical physics
Recommended Citation
Long, Rogelio, "An Approach to Predicting Performance of Sparse Computations on NVIDIA GPUs" (2021). ETD Collection for University of Texas, El Paso. AAI28715116.
https://scholarworks.utep.edu/dissertations/AAI28715116