In expert systems, we elicit the probabilities of different statements from the experts. However, to adequately use the expert system, we also need to know the probabilities of different propositional combinations of the experts' statements -- i.e., we need to know the corresponding joint distribution. The problem is that there are exponentially many such combinations, and it is not practically possible to elicit all their probabilities from the experts. So, we need to estimate this joint distribution based on the available information. For this purpose, many practitioners use heuristic approaches -- e.g., the t-norm approach of fuzzy logic. However, this is a particular case of a situation for which the maximum entropy approach was invented, so why not use the maximum entropy approach? The problem is that in this case, the usual formulation of the maximum entropy approach requires maximizing a function with exponentially many unknowns -- a task which is, in general, not practically feasible. In this paper, we show that in many reasonable cases, the corresponding maximum entropy problem can be reduced to an equivalent problem with a much smaller (and feasible) number of unknowns -- a problem which is, therefore, much easier to solve.
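As a small illustration of the underlying idea (with hypothetical elicited numbers, not the paper's own reduction): for two statements A and B with elicited marginal probabilities, every joint distribution consistent with those marginals forms a one-parameter family, and sweeping that single parameter shows that entropy is maximized at the independent product -- so the maximum-entropy joint is determined by the marginals alone, without eliciting all propositional combinations.

```python
import math

# Hypothetical elicited marginal probabilities for two expert statements A and B.
p1, p2 = 0.7, 0.4

def entropy(dist):
    """Shannon entropy of a finite distribution (with 0 * log 0 = 0)."""
    return -sum(p * math.log(p) for p in dist if p > 0)

# All joints consistent with the marginals form a one-parameter family:
#   P(A & B) = p1 * p2 + d,
# where d ranges so that all four atomic probabilities stay non-negative.
d_lo = max(-p1 * p2, -(1 - p1) * (1 - p2))
d_hi = min(p1 * (1 - p2), (1 - p1) * p2)

best_d, best_h = None, -1.0
steps = 2000
for i in range(steps + 1):
    d = d_lo + (d_hi - d_lo) * i / steps
    p11 = p1 * p2 + d          # P(A & B)
    p10 = p1 - p11             # P(A & not B)
    p01 = p2 - p11             # P(not A & B)
    p00 = 1 - p11 - p10 - p01  # P(not A & not B)
    h = entropy([p11, p10, p01, p00])
    if h > best_h:
        best_d, best_h = d, h

# The entropy-maximizing joint occurs at d close to 0, i.e., at the
# independent product distribution determined by the marginals alone.
print(best_d)
```

For n statements the same comparison would involve 2^n atomic probabilities, which is exactly the exponential blow-up that motivates reducing the maximum entropy problem to fewer unknowns.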