Date of Award

2024-12-01

Degree Name

Master of Science

Department

Computational Science

Advisor(s)

Saeid S. Tizpaz-Niari

Abstract

The recent advances in training deep neural networks (DNNs) have revolutionized the development of data-driven decision support software. As a result, fairness testing and verification approaches for DNNs have received considerable attention. Testing approaches, based on statistical analyses, aim to provide counterexamples to fairness, while verification approaches attempt to offer a proof of correctness. Individual fairness is a well-accepted notion that characterizes discrimination as the existence of a counterfactual individual who differs only in protected features yet receives a better algorithmic outcome. DNNs may encode many such counterfactual instances, and in extreme scenarios these can overwhelm the analyst and hide critical instances. Moreover, the mere existence of such counterfactuals provides little actionable information on the root cause of discrimination. We study a quantitative generalization of individual fairness, called k-unfairness, where counterexamples include the presence of k ≥ 2 counterfactual instances. We show that this quantitative notion of individual fairness allows us to prioritize discriminatory instances, measure the sensitivity of DNNs to the protected attributes, and debug fairness bugs with rich diagnostic information. On the technical side, we propose a hybrid method that combines formal symbolic analysis (SMT and MILP solvers), to certify individual fairness, with randomized search (random walks and simulated annealing), to find instances with diverse explanations. This method brings the advantages of both techniques: it certifies the fairness requirements if no counterexample is found, and it quantifies discrimination, which is computationally challenging for symbolic analysis alone. We use random-walk and simulated-annealing strategies to guide the search toward inputs that maximize objectives such as the sensitivity of DNNs to protected attributes. Our experiments show that some benchmarks manifest the maximum sensitivity, while others show some or no sensitivity to the protected attributes. We also find that decision trees provide intuitive explanations for the circumstances under which DNNs significantly discriminate against protected groups.
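
The sketch below illustrates the randomized-search half of the hybrid method described in the abstract: a simulated-annealing walk over non-protected features that seeks inputs with a maximal k-unfairness score. It is a minimal illustration, not the thesis's implementation; the classifier `predict`, the protected-feature index in `PROTECTED`, and the value domain in `DOMAINS` are all hypothetical placeholders.

```python
import itertools
import math
import random

PROTECTED = [8]              # hypothetical index of the protected feature
DOMAINS = {8: [0, 1, 2, 3]}  # hypothetical domain of protected values

def predict(x):
    # Stand-in for a trained DNN's binary decision; replace with a real model.
    return int(sum(x) % 2)

def unfairness_k(x):
    """Count counterfactuals of x, differing only in protected features,
    that receive a better outcome than x (the k in k-unfairness)."""
    base = predict(x)
    k = 0
    for vals in itertools.product(*(DOMAINS[i] for i in PROTECTED)):
        cf = list(x)
        for i, v in zip(PROTECTED, vals):
            cf[i] = v
        if cf != list(x) and predict(cf) > base:
            k += 1
    return k

def anneal(x0, steps=1000, temp=1.0, cooling=0.995):
    """Simulated annealing over non-protected features, maximizing k."""
    x, best = list(x0), list(x0)
    for _ in range(steps):
        cand = list(x)
        i = random.choice([j for j in range(len(x)) if j not in PROTECTED])
        cand[i] += random.choice([-1, 1])  # random-walk perturbation
        delta = unfairness_k(cand) - unfairness_k(x)
        # Accept improvements outright; accept worse moves with a
        # temperature-dependent probability to escape local optima.
        if delta >= 0 or random.random() < math.exp(delta / temp):
            x = cand
        if unfairness_k(x) > unfairness_k(best):
            best = list(x)
        temp *= cooling
    return best, unfairness_k(best)

seed = [random.randint(0, 3) for _ in range(10)]
witness, k = anneal(seed)
print(f"most sensitive input found: {witness} with k = {k}")
```

In the full method, a run of this search that finds no counterfactual (k = 0) is not a proof; the symbolic SMT/MILP analysis supplies the certificate of fairness in that case.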

Language

en

Provenance

Received from ProQuest

File Size

64 p.

File Format

application/pdf

Rights Holder

Ranit Debnath Akash

Available for download on Friday, January 08, 2027
