## A little background to Stirling’s Formula ##

Stirling’s approximation is vital to a manageable formulation of statistical physics and thermodynamics. It vastly simplifies calculations involving logarithms of factorials of huge numbers. In statistical physics we are typically discussing systems of around \(10^{22}\) particles, and at such orders of magnitude the approximation is extremely accurate as well as incredibly useful.

There are numerous ways of deriving the result, and further refinements of the approximation can be found elsewhere. Here is a simple derivation using an analogy with the Gaussian distribution:

## The Formula ##

Derive the Stirling formula:\[\ln(n!) \approx (n+\frac{1}{2})\ln{n} - n + \frac{1}{2}\ln{2\pi}\]
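Before diving in, here is a quick sanity check (not part of the derivation): a small Python sketch comparing the exact \(\ln(n!)\), computed with `math.lgamma`, against the approximation for a few values of \(n\).

```python
import math

def stirling_log_factorial(n):
    """Stirling approximation: (n + 1/2)*ln(n) - n + (1/2)*ln(2*pi)."""
    return (n + 0.5) * math.log(n) - n + 0.5 * math.log(2 * math.pi)

for n in (5, 50, 500, 10**6):
    exact = math.lgamma(n + 1)          # ln(n!) via the log-gamma function
    approx = stirling_log_factorial(n)
    print(f"n={n:>8}  ln(n!)={exact:.4f}  Stirling={approx:.4f}  diff={exact - approx:.2e}")
```

The difference shrinks rapidly as \(n\) grows, which is exactly the regime statistical physics cares about.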

## Let’s Go ##

We begin by calculating the integral \(\int_{0}^{\infty} e^{-x} x^n\,dx\) (for integer \(n \ge 0\)) using integration by parts.

The formula for integration by parts is:\[\int u(x)v'(x)\,dx=u(x)v(x)-\int u'(x)v(x)\,dx\]

In this situation it is better to take the exponential as \(v'(x)\), since it integrates to itself (up to a sign), while differentiating \(x^n\) lowers its power:

\(u(x)=x^n\)          \(u'(x)=nx^{n-1}\)

\(v(x)=-e^{-x}\)     \(v'(x)=e^{-x}\)

Thus we have:

\begin{equation} \int_{0}^{\infty} x^ne^{-x}\,dx=\left[-x^n e^{-x}\right]_{0}^{\infty} + n\int_{0}^{\infty}x^{n-1}e^{-x}\,dx\label{eq:int1} \end{equation}

We note that \(\left[-x^n e^{-x}\right]_{0}^{\infty}\) vanishes for \(n \ge 1\): at \(x=0\) the power \(x^n\) is zero, and as \(x \rightarrow \infty\) the exponential wins. Writing the original integral as \(g(n)=\int_{0}^{\infty} x^n e^{-x}\,dx\), the remaining integral is nothing more than \(n\cdot g(n-1)\). Integrating by parts again yields a similar expression with \(n\) reduced to \(n-2\), and so on, until we reach \(g(0)=\int_{0}^{\infty} e^{-x}\,dx=1\), so that \(g(n)=n\cdot(n-1)\cdot(n-2)\cdots 1=n!\).

\begin{equation} \int_{0}^{\infty} x^ne^{-x}dx=n! \label{eq:eqeq} \end{equation}
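As an aside, this identity is easy to confirm numerically. Here is a minimal sketch, assuming NumPy and SciPy are available for the quadrature:

```python
import math
import numpy as np
from scipy.integrate import quad

# Numerically confirm that the integral of x^n * e^(-x) over [0, inf) equals n!
for n in range(1, 8):
    value, _err = quad(lambda x, n=n: x**n * np.exp(-x), 0, np.inf)
    print(f"n={n}  integral={value:.6f}  n!={math.factorial(n)}")
```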

Take the logarithm of the integrand in \eqref{eq:int1}:\[f(x) := \ln\left(e^{-x}x^n\right) = -x + n\ln{x}\]\[f'(x) = -1 + \frac{n}{x} \stackrel{!}{=} 0\]\[\Rightarrow x_0=n\]

Since \(f''(x)=-n/x^2<0\), this stationary point is indeed a maximum, located at \(x_0=n\).
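If you want to see this numerically, here is a rough sketch (the values of \(n\) are arbitrary) that locates the peak of \(f(x)\) on a grid:

```python
import numpy as np

def f(x, n):
    """Logarithm of the integrand: f(x) = -x + n*ln(x)."""
    return -x + n * np.log(x)

for n in (10, 100, 1000):
    x = np.linspace(0.01, 5 * n, 200_000)   # dense grid extending well past the expected peak
    x_peak = x[np.argmax(f(x, n))]
    print(f"n={n:>5}  numerical peak at x ≈ {x_peak:.2f}  (expected x_0 = n)")
```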

Expanding \(f(x)\) in a Taylor series around \(x_0=n\) (the first-order term drops out because \(f'(n)=0\)):

\begin{eqnarray} f(x)&=&-n+n \ln{n}+\left(-1+\frac{n}{x}\right)\Big|_{x=n}(x-n) + \frac{1}{2!} \left(-\frac{n}{x^2}\right)\Big|_{x=n}(x-n)^2 + \ldots \nonumber \\ &\approx&-n+n\ln{n}-\frac{1}{2n}(x-n)^2 \nonumber \end{eqnarray}

Therefore, \(f(x)=\ln\left(e^{-x} x^n\right) \approx -n + n\ln n - \frac{1}{2n}(x-n)^2\).
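As an optional check of how good this quadratic approximation is near the peak, here is a short sketch with an arbitrarily chosen \(n=100\):

```python
import numpy as np

n = 100  # arbitrary example value

def f(x):
    return -x + n * np.log(x)                         # exact log-integrand

def f_quad(x):
    return -n + n * np.log(n) - (x - n)**2 / (2 * n)  # quadratic (Gaussian) approximation

# Compare the two within a couple of "standard deviations" sqrt(n) of the peak
for x in (n - 2 * np.sqrt(n), n, n + 2 * np.sqrt(n)):
    print(f"x={x:7.2f}  f(x)={f(x):10.4f}  quadratic={f_quad(x):10.4f}")
```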

If we take the exponential of \(f(x)\) and integrate, we recover the integral we just evaluated in \eqref{eq:eqeq}, but in a more useful form:\[\int \exp(f(x))\,dx=\int e^{-x}x^n\,dx\]\[\simeq\int \exp\left(-n + n\ln{n} - \frac{1}{2n}(x-n)^2\right)dx\]

\begin{equation} =\exp(-n+n\ln{n})\int_{0}^{\infty} \exp(-\frac{(x-n)^2}{2n})dx \stackrel{\eqref{eq:eqeq}}{=} n! \label{eq:nbang} \end{equation}

This is calculable by analogy with the Gaussian distribution, where\[P(x)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right).\]

Since the total probability must satisfy \(\int_{-\infty}^{\infty} P(x)\,dx = 1\), it follows that

\[\sqrt{2\pi}\sigma = \int_{-\infty}^{\infty} \exp\left(-\frac{(x-\bar{x})^2}{2\sigma^2}\right)dx.\]
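A quick numerical confirmation of this normalisation, with an arbitrary choice of \(\bar{x}\) and \(\sigma\) and SciPy doing the quadrature:

```python
import numpy as np
from scipy.integrate import quad

x_bar, sigma = 3.0, 2.0   # arbitrary mean and standard deviation for this check

integral, _err = quad(lambda x: np.exp(-(x - x_bar)**2 / (2 * sigma**2)), -np.inf, np.inf)
print(f"integral         = {integral:.6f}")
print(f"sqrt(2*pi)*sigma = {np.sqrt(2 * np.pi) * sigma:.6f}")
```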

If we compare this with \eqref{eq:nbang}, we note that \(\sigma^2\) and \(\bar{x}\) both translate to \(n\) in our analogy:

\begin{eqnarray} n!&\simeq&\exp(-n+n\ln{n})\int_{0}^{\infty} \exp(-\frac{(x-n)^2}{2n})\,dx \nonumber \\ &=& \exp(-n+n\ln{n})\sqrt{2\pi n} \label{eq:final} \end{eqnarray}

Note that the lower bound on the integral has changed from \(-\infty\) to 0. This is justified because the peak of the Gaussian sits at \(x=n\) with standard deviation \(\sigma=\sqrt{n}\): for large \(n\) the peak lies \(\sqrt{n}\) standard deviations above zero, so the portion of the distribution below zero is negligible.
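To get a feel for just how little weight is lost, here is a rough sketch computing the fraction of such a Gaussian (mean \(n\), standard deviation \(\sqrt{n}\)) that lies below zero:

```python
import math

# Fraction of a Gaussian with mean n and standard deviation sqrt(n) lying below zero:
# P(x < 0) = 0.5 * erfc(sqrt(n/2))
for n in (5, 10, 50, 100):
    tail = 0.5 * math.erfc(math.sqrt(n / 2))
    print(f"n={n:>4}  fraction of the distribution below zero ≈ {tail:.3e}")
```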

Finally, taking the logarithm of \eqref{eq:final},

\begin{eqnarray} \ln{n!} &\simeq& -n + n\ln{n} + \frac{1}{2}\ln{2\pi n} \nonumber \\ &=&(n+\frac{1}{2})\ln{n}-n+\frac{1}{2}\ln{2\pi}, \nonumber \end{eqnarray}

we recover the Stirling formula we set out to derive.
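Exponentiating \eqref{eq:final} gives the more familiar form \(n! \simeq \sqrt{2\pi n}\,(n/e)^n\); a final sketch comparing this with exact factorials shows the relative error shrinking as \(n\) grows:

```python
import math

def stirling_factorial(n):
    """Exponentiated Stirling formula: sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e)**n

for n in (1, 5, 10, 20, 50):
    exact = math.factorial(n)
    approx = stirling_factorial(n)
    print(f"n={n:>3}  relative error = {(exact - approx) / exact:.4%}")
```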