Machine learning has been actively and successfully used to make financial decisions, and in general, such systems work reasonably well. However, in some cases, these systems exhibit unexpected bias against minority groups -- a bias that is sometimes much larger than the bias present in the data on which they were trained. A recent paper analyzed whether a proper selection of hyperparameters can decrease this bias. It turned out that while the selection of hyperparameters indeed affects the system's fairness, only two hyperparameters lead to a consistent improvement in fairness: the number of features used for training and the number of training iterations. In this paper, we provide a theoretical explanation for these empirical results.