Date of Award

2025-08-01

Degree Name

Master of Science

Department

Computer Science

Advisor(s)

Christoph Lauter

Abstract

Artificial intelligence has become part of our daily lives, helping us solve complicated problems. Some of these problems are large and complex, requiring large models. As models grow in complexity, they demand more computation and energy to train and test. Their execution relies on floating-point arithmetic, which imposes constraints due to its finite precision. Because of these limitations, many computations are not exact, forcing computers to round or approximate. Several number formats can mitigate this issue: single precision provides 24 binary digits of precision, double precision provides 53 bits, and small formats such as FP8 [26] may offer only 3 or 4 bits. Choosing the right format can drastically reduce the resources needed and allows precision to be increased or decreased depending on the model's performance. As it propagates through the model, rounding error compounds across the layers and may affect the model's final prediction. By analyzing these rounding errors, we can raise or lower the model's precision to better balance resources and prediction quality; if we observe almost no error, we can reduce the precision, saving time and memory. In this work, we contribute software that uses the PyTorch C++ API to load models and analyze the impact of the rounding error produced. We tested our software not only with standard feed-forward models but with deep learning models as well. We built it on our own tensor implementation, which allows custom floating-point operations to be performed. With this class, we can report the relative error, the absolute error, and upper and lower bounds on where the final answer may lie.
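The core idea the abstract describes, rounding values to a chosen number of significand bits and tracking the resulting error, can be illustrated with a minimal sketch. This is not the thesis's PyTorch C++ tensor class; it is a standalone Python simulation, and the names `round_to_precision` and `error_report` are hypothetical. It uses the standard-model bound u = 2^-p for round-to-nearest with a p-bit significand.

```python
import math

def round_to_precision(x, p):
    """Round x to p significand bits (round-to-nearest), simulating a
    reduced-precision format such as single precision (p = 24)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** p
    return math.ldexp(round(m * scale) / scale, e)

def error_report(exact, p):
    """Return the rounded value, absolute error, relative error, and the
    unit roundoff u = 2**-p that bounds the relative error."""
    approx = round_to_precision(exact, p)
    abs_err = abs(approx - exact)
    rel_err = abs_err / abs(exact) if exact != 0.0 else 0.0
    u = 2.0 ** (-p)
    return approx, abs_err, rel_err, u
```

For example, rounding 1/3 to 24 bits reproduces the single-precision value, and the relative error stays below u = 2^-24; rerunning with a smaller p (say 4, in the spirit of FP8) shows how the bound, and the error, grow as precision shrinks.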

Language

en

Provenance

Received from ProQuest

File Size

71 p.

File Format

application/pdf

Rights Holder

Johnatan Garcia
