Fuzzy information processing systems start with expert knowledge, which is usually formulated in terms of words from natural language. This knowledge is then reformulated in computer-friendly terms of membership functions, and the system transforms these input membership functions into membership functions that describe the result of fuzzy data processing. It is then desirable to translate this fuzzy information back from the computer-friendly language of membership functions into human-friendly natural language. In general, this is difficult even in the 1-D case, when we are interested in a single quantity y; however, the fuzzy research community has accumulated some experience in describing the resulting 1-D membership functions by words from natural language. The problem becomes even more complicated in 2-D and multi-D cases, when we are interested in several quantities y1,...,ym, because there are fewer words describing relations between several quantities than words describing a single quantity. To reduce this more complicated multi-D problem to a simpler (although still difficult) 1-D case, L. Zadeh proposed, in 1966, to use words to describe fuzzy information about different combinations y = f(y1,...,ym) of the desired variables. This idea is similar to the use of marginal distributions in probability theory. The corresponding 1-D fuzzy sets are called shadows of the original fuzzy set. The main question is: do we lose any information in this translation? Zadeh showed that, under certain conditions, the original fuzzy set can be uniquely reconstructed from its shadows. In this paper, we prove that for appropriately chosen shadows, the reconstruction is always unique. Thus, if we manage to describe the original membership function by linguistic terms describing different combinations y, this description is lossless.
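To make the notion of a shadow concrete, here is a minimal sketch on a discrete domain, following Zadeh's definition mu_shadow(y) = sup { mu(y1,...,ym) : f(y1,...,ym) = y }; the function name `shadow` and the toy 2-D fuzzy set are illustrative assumptions, not part of the paper.

```python
def shadow(mu, f):
    """Shadow of a discrete fuzzy set under a combination function f.

    mu: dict mapping tuples (y1, ..., ym) to membership degrees in [0, 1];
    f:  combination function taking the components y1, ..., ym.
    Returns a dict mapping each value y = f(y1, ..., ym) to the
    supremum (here: max) of the membership degrees that map to it.
    """
    out = {}
    for point, degree in mu.items():
        y = f(*point)
        # sup over all points with the same combined value y
        out[y] = max(out.get(y, 0.0), degree)
    return out


# Toy 2-D fuzzy set on a small grid (illustrative values)
mu = {(0, 0): 0.2, (0, 1): 0.8, (1, 0): 0.5, (1, 1): 1.0}

# Shadow along the combination y = y1 + y2
s = shadow(mu, lambda y1, y2: y1 + y2)
# -> {0: 0.2, 1: 0.8, 2: 1.0}
```

This is the fuzzy analogue of a marginal distribution: instead of summing probabilities over all points with the same combined value, one takes the supremum of membership degrees.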