It is well known that not all algorithms are feasible; whether an algorithm is feasible or not depends on how many computational steps it requires. The problem with the existing definitions of feasibility is that they are rather ad hoc. Our goal is to use the maximum entropy (MaxEnt) approach to derive better-motivated definitions.
If an algorithm is feasible, then, intuitively, we would expect the following to be true:
If we have a flow of problems with finite average length $L$, then we expect the average time $T$ to be finite as well.
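In precise terms (the notation here is ours, introduced for readability): for a probability distribution $p(w)$ on the set of all possible inputs $w$, these averages are the expected values
\[
  L = \sum_w p(w)\,|w|, \qquad T = \sum_w p(w)\,t(w),
\]
where $|w|$ is the length of the input $w$ and $t(w)$ is the number of computational steps that the algorithm takes on $w$.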
Thus, we can say that an algorithm is necessarily feasible if $T$ is finite for every probability distribution for which $L$ is finite, and possibly feasible if $T$ is finite for some probability distribution for which $L$ is finite.
If we consider all possible probability distributions, then these definitions trivialize: every algorithm is possibly feasible, and only linear-time algorithms are necessarily feasible.
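Here is a sketch of the standard argument behind both claims (the specific construction, with probabilities $p_k$ and lengths $n_k$, is ours). Every algorithm is possibly feasible because the distribution concentrated on a single input $w_0$ has finite $L = |w_0|$ and finite $T = t(w_0)$. Conversely, if the average running time $t(n)$ is not linearly bounded, we can pick lengths $n_k$ with $t(n_k) \ge 2^k\,n_k$ and assign probability $p_k = c/(k\,t(n_k))$ (with $c$ a normalizing constant) to one input of each length $n_k$ on which the algorithm runs for at least $t(n_k)$ steps; then
\[
  L = \sum_k p_k\,n_k \le c \sum_k \frac{1}{k\,2^k} < \infty,
  \qquad
  T = \sum_k p_k\,t(n_k) = c \sum_k \frac{1}{k} = \infty,
\]
so any super-linear algorithm fails the "for every distribution" test.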
To make the definitions less trivial, we will use the main idea of MaxEnt and consider only distributions for which the entropy is the largest possible. Since we are interested in distributions for which the average length is finite, it is reasonable to define MaxEnt distributions as follows: we fix a number $L_0$ and consider the distributions whose entropy is the largest among all distributions with average length $L = L_0$.
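To see what these MaxEnt distributions look like, assume for concreteness that inputs are binary strings, so that there are $2^n$ inputs of each length $n$ (this choice of alphabet is our assumption; any fixed alphabet works similarly). Maximizing the entropy $-\sum_w p(w)\,\ln p(w)$ under the constraints $\sum_w p(w) = 1$ and $\sum_w p(w)\,|w| = L_0$ via Lagrange multipliers yields $p(w) = \exp(-\alpha - \beta\,|w|)$, a distribution that depends only on the length of the input. Summing over all $2^n$ inputs of length $n$, the probability of encountering an input of length $n$ is geometric:
\[
  P(n) = (1 - r)\,r^n, \qquad r = 2 e^{-\beta} \in (0, 1),
\]
where $r$ is determined by the constraint $L_0 = \sum_n n\,P(n) = r/(1-r)$, i.e., $r = L_0/(1 + L_0)$.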
If, in the above definitions, we only allow such "MaxEnt" distributions, then these feasibility notions become non-trivial: an algorithm is possibly feasible if it takes at most exponential time (to be more precise, if and only if its average running time $t(n)$ over all inputs of length $n$ grows slower than some exponential function $C^n$), and necessarily feasible if it is sub-exponential (i.e., if $t(n)$ grows slower than any exponential function $C^n$ with $C > 1$).
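Under the geometric MaxEnt distribution derived above, $T = \sum_n P(n)\,t(n) = (1-r)\sum_n r^n\,t(n)$, which converges exactly when $t(n)$ grows slower than $(1/r)^n$; since $r = L_0/(1+L_0)$ sweeps over $(0,1)$ as $L_0$ grows, this is where the two characterizations come from. The following Python snippet is a minimal numerical sketch of this dichotomy (the function names, the truncation bound N, and the sample running times are our illustrative assumptions):

```python
def maxent_ratio(L0):
    """Ratio r of the geometric MaxEnt length distribution
    P(n) = (1 - r) * r**n whose average length is L0."""
    return L0 / (1.0 + L0)

def average_time(t, L0, N=500):
    """Truncated estimate of T = sum_n P(n) * t(n) under the
    MaxEnt distribution with average length L0 (first N terms)."""
    r = maxent_ratio(L0)
    return sum((1.0 - r) * r**n * t(n) for n in range(N))

# Polynomial (hence sub-exponential) time: T is finite for every L0.
print(average_time(lambda n: n**2, L0=10.0))          # ~ 210, stable in N

# Exponential time 2^n: T converges only while 2*r < 1, i.e., L0 < 1.
print(average_time(lambda n: 2.0**n, L0=0.5))         # converges (to 2)
print(average_time(lambda n: 2.0**n, L0=2.0, N=100))  # partial sums keep
print(average_time(lambda n: 2.0**n, L0=2.0, N=200))  # growing: divergence
```

Increasing N leaves the first two values essentially unchanged but makes the last two blow up, matching the claim that an exponential-time algorithm is possibly feasible (some $L_0$ works) but not necessarily feasible (large $L_0$ fails).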