Some Concrete Factorial Facts
November 29, 2024
It is well-known that Stirling-like bounds on the factorial [1] are the tightest ones. However, it is sometimes convenient to have looser but simpler bounds on \(n!\), for example, in order to simplify proofs. Assuming \(n\ge 1\), if we choose for instance some \(v\in [1,n)\) then the \(\lfloor v\rfloor \) integers from \(1\) to \(\lfloor v\rfloor \) trivially lie between \(1\) and \(v\), whereas the \(n-\lfloor v\rfloor \) integers from \(\lfloor v\rfloor +1\) to \(n\) lie between \(v\) and \(n\). Using these basic lower and upper bounds we thus have that
\(\seteqnumber{0}{}{0}\)\begin{equation*} 1^{\lfloor v\rfloor } v^{n-\lfloor v\rfloor }\le n! \le v^{\lfloor v\rfloor } n^{n-\lfloor v\rfloor }. \end{equation*}
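As a quick sanity check (not part of the original derivation), the floor-based bounds can be verified numerically for a range of \(n\) and several choices of \(v\in [1,n)\):

```python
import math

# Check: 1^floor(v) * v^(n - floor(v)) <= n! <= v^floor(v) * n^(n - floor(v))
# for n >= 2 and a few representative choices of v in [1, n).
for n in range(2, 30):
    fact = math.factorial(n)
    for v in [1.0, 1.5, n / 2, math.sqrt(n), n - 0.5]:
        fv = math.floor(v)
        lower = v ** (n - fv)               # the factor 1^floor(v) is just 1
        upper = v ** fv * n ** (n - fv)
        assert lower <= fact <= upper, (n, v)
```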
We can get rid of those pesky floor functions by using \(v-1<\lfloor v\rfloor \le v\) to get the somewhat looser bounds
\(\seteqnumber{0}{}{0}\)\begin{equation*} v^{n-v}\le n! \le \left (\frac {v}{n}\right )^{v-1} n^{n}, \end{equation*}
where equality in the upper bound can only happen when \(v=1\) and \(n=1\). In general, different values of \(v\) maximise the lower bound and minimise the upper bound, respectively, and neither can be obtained in closed form; both can, however, easily be computed numerically by using the Lambert W function. In order to make the bounds depend only on \(n\) we can simply choose \(v\) to be some arbitrary function of \(n\), even if this may not be optimal in general. For example, setting \(v=\sqrt {n}\) yields
\(\seteqnumber{0}{}{0}\)\begin{equation} \label {eq:sqrt_n} \sqrt {n}^{n-\sqrt {n}}\le n! \le \left (\frac {1}{\sqrt {n}}\right )^{\sqrt {n}-1} n^{n}, \end{equation}
and setting \(v=n/2\) (which in principle requires \(n>1\)) yields
\(\seteqnumber{0}{}{1}\)\begin{equation} \label {eq:half_n} \left (\frac {n}{2}\right )^{\frac {n}{2}}\le n! \le \left (\frac {1}{2}\right )^{\frac {n}{2}-1} n^n, \end{equation}
where both bounds incidentally also work for \(n=1\).
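Both (1) and (2) are easy to confirm numerically. As an illustration (the optimisation step is our addition, not in the original text), the \(v\) that maximises the lower bound \(v^{n-v}\) solves \(n=v(1+\ln v)\), i.e. \(v=n/W(ne)\) with the Lambert W function; a few Newton iterations suffice in place of a Lambert W routine:

```python
import math

def optimal_v(n):
    """Solve v*(1 + ln v) = n by Newton's method; equals n / W(n*e)."""
    v = math.sqrt(n)                      # reasonable starting guess
    for _ in range(50):
        v -= (v * (1 + math.log(v)) - n) / (2 + math.log(v))
    return v

for n in range(1, 40):
    fact = math.factorial(n)
    s = math.sqrt(n)
    assert s ** (n - s) <= fact <= s ** (1 - s) * n ** n              # bound (1)
    assert (n / 2) ** (n / 2) <= fact <= 0.5 ** (n / 2 - 1) * n ** n  # bound (2)
    v = optimal_v(n)
    assert v ** (n - v) >= s ** (n - s) - 1e-9   # optimal v beats v = sqrt(n)
```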
What is the use of (1) or (2)? After all, these bounds are “on the loose”… Well, for one thing, they show that \(n!\) grows exponentially. Of course \(n!\le n^n\), but the key to showing exponential growth of \(n!\) is it being sandwiched between two exponential bounds. Now, in their classic Concrete Mathematics book [2], Graham et al. give a nicer derivation of simple (and tighter) bounds for \(n!=\prod _{k=1}^n k\) by mimicking the method that Gauss is said to have used as a child to quickly compute \(S_n=\sum _{k=1}^{n} k\) for \(n=100\) (an apocryphal story by all accounts). Just like the summands in \(S_n+S_n\) can be conveniently paired to write \(2S_n=\sum _{k=1}^n \left (k+(n-k+1)\right )=n(n+1)\), the idea in [2] is to pair the multiplicands in \((n!)(n!)\) in the same way to write \((n!)^2=\prod _{k=1}^n k (n-k+1)\). There is no closed-form computation in the multiplicative case, but because the polynomial \(f(k)=k(n-k+1)\) has maximum value \(f((n+1)/2)=((n+1)/2)^2\) and minimum value \(f(1)=f(n)=n\) (when \(k\in \{1,\dots ,n\}\)), the following bounds are obtained:
\(\seteqnumber{0}{}{2}\)\begin{equation} \label {eq:concrete} n^{\frac {n}{2}}\le n! \le \frac {(n+1)^n}{2^n} \end{equation}
These bounds are tighter than both (1) and (2), and equally show the exponential nature of \(n!\), with an added touch of elegance: a “very cute” fact (in the words of D. Knuth) is that the two bounds in (3) are the \(n\)th powers of the geometric and arithmetic means of \(1\) and \(n\), respectively.
And here comes our other modest observation. The bounds (3) in Concrete Mathematics can be tightened, without completely betraying their original “smells like Gaussian spirit” vibe, by instead doing the pairings as follows: \((n!)^2=\prod _{k=2}^n k (n-k+2)\). In this case, the polynomial \(g(k)=k(n-k+2)\) has maximum \(g((n+2)/2)=((n+2)/2)^2\) and minimum \(g(2)=g(n)=2n\) (when \(k\in \{2,\dots ,n\}\)), which leads to the following bounds:
\(\seteqnumber{0}{}{3}\)\begin{equation} \label {eq:reinforced_concrete} \left (2n\right )^{\frac {n-1}{2}}\le n! \le \left (\frac {2+n}{2}\right )^{n-1}. \end{equation}
These bounds, which are at least as tight as (3) for all \(n\ge 1\), have an equally “cute” interpretation: they are the \((n-1)\)th powers of the geometric and arithmetic means of \(2\) and \(n\), respectively. Now for the hiccups: the upper bound in (4) still works for \(n=0\), but the lower bound is spectacularly wrong in this case. In contrast, the upper bound in (3) also works for \(n=0\), while the lower bound works after defining \(0^0=1\).
In any case, both (3) and (4) are really easy to commit to memory.
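A final sanity check (again, ours, not the book's) confirms both (4) and the claim that it is at least as tight as (3):

```python
import math

# Bounds (4): (2n)^((n-1)/2) <= n! <= ((2+n)/2)^(n-1), the (n-1)-th powers
# of the geometric and arithmetic means of 2 and n; also check they are at
# least as tight as the bounds in (3).
for n in range(1, 40):
    fact = math.factorial(n)
    lo4 = (2 * n) ** ((n - 1) / 2)
    hi4 = ((2 + n) / 2) ** (n - 1)
    assert lo4 <= fact <= hi4
    assert lo4 >= n ** (n / 2) - 1e-9           # tighter lower bound than (3)
    assert hi4 <= ((n + 1) / 2) ** n + 1e-9     # tighter upper bound than (3)
```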
References
[1] H. Robbins. A remark on Stirling’s formula. The American Mathematical Monthly, 62(1):26–29, January 1955.
[2] R.L. Graham, D.E. Knuth, and O. Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, Harlow, UK, 2nd edition, 1994.
Last updated: 2024/12/03