Technical Report: UTEP-CS-17-85


Probabilistic graphical models are a highly efficient machine learning technique. However, their only known justification rests on heuristic ideas that do not explain why exactly these models are empirically successful. It is therefore desirable to find a theoretical explanation for their empirical efficiency. At present, the only such explanation is that these models naturally emerge if we maximize the relative entropy; why the relative entropy should be maximized, however, is not clear. In this paper, we show that these models can also be obtained from a more natural -- and well-justified -- idea of maximizing the (absolute) entropy.
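For context, the entropy-maximization idea mentioned above can be sketched via the standard Lagrange-multiplier derivation (the notation p, f_c, \lambda_c below is illustrative, not the report's): maximizing the Shannon entropy subject to constraints on clique statistics yields exactly the product-of-factors form of a graphical model.

```latex
% Standard maximum-entropy sketch (notation illustrative):
% maximize the (absolute) Shannon entropy subject to
% normalization and fixed expectations of clique features f_c.
\[
  S(p) \;=\; -\sum_x p(x)\,\ln p(x) \;\to\; \max
  \quad\text{s.t.}\quad
  \sum_x p(x) = 1, \qquad
  \sum_x p(x)\, f_c(x_c) = a_c .
\]
% Introducing Lagrange multipliers \mu and \lambda_c and setting
% the derivative with respect to each p(x) to zero:
\[
  \frac{\partial}{\partial p(x)}
  \Bigl[\, S(p) \;+\; \mu \sum_x p(x)
        \;+\; \sum_c \lambda_c \sum_x p(x)\, f_c(x_c) \Bigr] \;=\; 0 ,
\]
% whose solution is the familiar factorized (Gibbs) form of a
% probabilistic graphical model:
\[
  p(x) \;=\; \frac{1}{Z}\,
  \exp\Bigl(\sum_c \lambda_c\, f_c(x_c)\Bigr)
  \;=\; \frac{1}{Z}\prod_c \varphi_c(x_c),
  \qquad
  \varphi_c(x_c) \;=\; e^{\lambda_c f_c(x_c)} .
\]
```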