Millions of lines of code are written every day, and it is not practically possible to thoroughly test all this code in all possible situations. In practice, we need to be able to separate code that is more likely to contain bugs -- and thus needs to be tested more thoroughly -- from code that is less likely to contain flaws. Several numerical characteristics -- known as code quality metrics -- have been proposed for this separation. Recently, a new efficient class of code quality metrics has been proposed, based on the idea of assigning consecutive integers to different levels of complexity and vulnerability: we assign 1 to the simplest level, 2 to the next simplest level, etc. The resulting numbers are then combined -- if needed, with appropriate weights. In this paper, we provide a theoretical explanation for this idea.
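To make the idea concrete, here is a minimal sketch (not taken from any specific metric in the literature) of how such a level-based metric could be computed: each measured aspect of the code is rated by a consecutive-integer level (1 for the simplest), and the levels are combined into a single score, optionally with weights. The function name and the example aspects are illustrative assumptions.

```python
def level_metric(levels, weights=None):
    """Combine per-aspect integer levels (1 = simplest level,
    2 = next simplest, etc.) into a single quality score.

    levels  -- list of integer levels, one per measured aspect
    weights -- optional list of non-negative weights, one per aspect;
               by default all weights are 1, i.e., a plain sum
    """
    if weights is None:
        weights = [1] * len(levels)
    if len(weights) != len(levels):
        raise ValueError("need exactly one weight per level")
    # weighted combination of the assigned levels
    return sum(w * l for w, l in zip(weights, levels))

# Example: three hypothetical aspects (say, control-flow complexity,
# data complexity, and vulnerability exposure) rated at levels 2, 1, 3.
print(level_metric([2, 1, 3]))            # unweighted sum: 2 + 1 + 3 = 6
print(level_metric([2, 1, 3], [3, 1, 2]))  # weighted: 3*2 + 1*1 + 2*3 = 13
```

A higher score flags code that deserves more thorough testing; the weights let one emphasize aspects (e.g., vulnerability) considered more critical.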