An evaluation framework for scientific programming productivity
Abstract
Substantial time is spent building, optimizing, and maintaining large-scale software that runs on supercomputers. However, little has been done to use overall resources efficiently once expensive human resources are taken into account. The community is beginning to acknowledge that optimizing hardware performance, such as eliminating speed and memory bottlenecks, contributes less to overall productivity than the development lifecycle of high-performance scientific applications does. Researchers are beginning to look at overall scientific workflows for high-performance computing. Scientific programming productivity is measured by the time and effort required to develop, configure, and maintain a simulation experiment and its constituent parts, together with the time to solution when the programs are executed. There is no systematic framework by which the scientific programming productivity of the available tools can be evaluated. We propose an evaluation approach that compares recorded novice programming workflows to an expert workflow in order to identify productivity bottlenecks and suboptimal paths. Based on a set of predefined criteria, we can evaluate both short-term and long-term productivity. We use these results to suggest improvements to the programming environment or tools. We give preliminary results from applying this approach to two case studies involving the use of numerical libraries.
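To make the comparison idea concrete, the following is a minimal sketch, not the dissertation's implementation: it assumes each recorded workflow is an ordered list of (action, seconds_spent) pairs, aligns the novice's action sequence against the expert's, and flags divergent spans as candidate bottlenecks or suboptimal paths. All action names and timing values here are hypothetical.

    # Minimal sketch (assumed representation, not the framework's actual one):
    # a workflow is an ordered list of (action, seconds_spent) pairs.
    from difflib import SequenceMatcher

    expert = [("read_docs", 300), ("write_solver", 1200),
              ("link_library", 120), ("run_tests", 240)]
    novice = [("write_solver", 2400), ("debug_link_error", 1800),
              ("link_library", 300), ("run_tests", 600), ("debug_output", 900)]

    def divergences(expert, novice):
        """Yield (expert_actions, novice_actions, extra_seconds) for each
        span where the novice's action sequence departs from the expert's."""
        matcher = SequenceMatcher(a=[a for a, _ in expert],
                                  b=[a for a, _ in novice])
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag == "equal":
                # Same actions in both workflows; a fuller analysis would
                # also compare time spent on these shared steps.
                continue
            expert_span = expert[i1:i2]
            novice_span = novice[j1:j2]
            extra = (sum(t for _, t in novice_span)
                     - sum(t for _, t in expert_span))
            yield ([a for a, _ in expert_span],
                   [a for a, _ in novice_span], extra)

    for exp_acts, nov_acts, extra in divergences(expert, novice):
        print(f"expert did {exp_acts or 'nothing'}, novice did {nov_acts}; "
              f"time difference: {extra:+d} s")

In an actual study the alignment would presumably run over much richer recorded event logs (editor, compiler, and shell events), and the predefined short-term and long-term criteria would be used to score each divergence.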
Subject Area
Computer science
Recommended Citation
Munipala, W. K. Umayanganie, "An evaluation framework for scientific programming productivity" (2016). ETD Collection for University of Texas, El Paso. AAI10118828.
https://scholarworks.utep.edu/dissertations/AAI10118828