Yesterday I ran into an equation that was a sum of an exponential and a linear term:

$$\alpha x + \beta = \delta e^x$$
It doesn’t take long to figure out that there is no analytical solution, and so I set out to write some crappy numerical code. After wasting some time with a fixed-point iteration that did not really work, it occurred to me that I most probably wasn’t the first person out there trying to solve such a simple equation. Indeed not.
The equation above has a solution in terms of a special function called Lambert’s W, and a nicer-looking one in terms of its cousin the generalised log (introduced by D. Kalman here).
Just like $\log$ is the inverse of $x \mapsto e^x$, $\mathrm{glog}$ is the inverse of $x \mapsto e^x / x$, and Lambert’s $W$ is the inverse of $x \mapsto x e^x$. Neither glog nor W can be computed analytically, but fast implementations for $W$ are available (for R, it’s in the gsl package), and:

$$\mathrm{glog}(x) = -W\left(-\frac{1}{x}\right)$$
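As a quick sanity check, one can define glog through that identity and verify numerically that it inverts $x \mapsto e^x / x$ (a minimal sketch; note that the argument of lambert_W0 must be at least $-1/e$, so this branch of glog is only defined for $x \ge e$ or $x < 0$):

require(gsl)
## glog defined via the identity glog(x) = -W(-1/x)
glog <- function(x) -lambert_W0(-1 / x)
y <- glog(3)
exp(y) / y  # should print 3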
In terms of the generalised log function the solution to the equation is:

$$x = \mathrm{glog}\left(\frac{\alpha}{\delta} e^{\beta/\alpha}\right) - \frac{\beta}{\alpha}$$
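Spelling out the intermediate steps (this is just Kalman’s substitution argument, using the definitions above): divide both sides by $\delta$ and substitute $u = x + \beta/\alpha$, so that

$$e^x = \frac{\alpha}{\delta}\left(x + \frac{\beta}{\alpha}\right) \;\Longrightarrow\; e^{u - \beta/\alpha} = \frac{\alpha}{\delta} u \;\Longrightarrow\; \frac{e^u}{u} = \frac{\alpha}{\delta} e^{\beta/\alpha}.$$

Since glog inverts $u \mapsto e^u / u$, the last equation gives $u = \mathrm{glog}\left(\frac{\alpha}{\delta} e^{\beta/\alpha}\right)$, and subtracting $\beta/\alpha$ recovers $x$.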
The (easy) proof is on page 5 of Kalman’s article. Here’s some R code:
require(gsl)
solve.lexpeq <- function(alpha, beta, delta) {
  ## solves alpha*x + beta = delta*exp(x) via the principal branch of Lambert's W
  v <- beta / alpha
  -lambert_W0(-(delta / alpha) * exp(-v)) - v
}
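A quick check, with arbitrarily chosen coefficients (both sides of the equation should agree at the returned root):

## alpha = -2, beta = 5, delta = 1: solve -2*x + 5 = exp(x)
x <- solve.lexpeq(-2, 5, 1)
c(-2 * x + 5, exp(x))  # both approx 2.88

One caveat: lambert_W0 is the principal branch, so when the argument $-(\delta/\alpha)e^{-\beta/\alpha}$ falls in $(-1/e, 0)$ the equation has two real roots and the code above only returns one of them; the gsl package also provides lambert_Wm1 for the other branch.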
So where does this turn up in statistics? Well, one example is finding the Maximum A Posteriori estimate of a Poisson mean, if you put a Gaussian prior on the log of the mean.
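To make that concrete: with a single observation $y \sim \mathrm{Poisson}(e^\eta)$ and a prior $\eta \sim N(\mu, \sigma^2)$, the log-posterior is, up to constants, $y\eta - e^\eta - (\eta - \mu)^2 / (2\sigma^2)$, and setting its derivative to zero gives $e^\eta = -\eta/\sigma^2 + (y + \mu/\sigma^2)$, an equation of exactly the above form. A minimal sketch (the values of y, mu and sigma2 below are arbitrary):

## MAP of a Poisson mean with a Gaussian prior on eta = log(mean):
## the stationarity condition is exp(eta) = -(1/sigma2)*eta + (y + mu/sigma2)
y <- 7; mu <- 0; sigma2 <- 1
eta.map <- solve.lexpeq(-1 / sigma2, y + mu / sigma2, 1)
exp(eta.map)  # MAP estimate of the Poisson mean, approx 5.3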
Why not simply use Newton’s root-finding algorithm? (Starting, for instance, at the solution of the equation where $e^x$ is replaced by $1 + x$.)
That’s what most implementations do internally, but a benefit is that someone’s already figured out that headache for you (for example, guaranteeing convergence). Also, there are some series expansions of Lambert’s W which are very fast to compute and quite accurate in certain regions, which is useful, among other things, in an optimisation context when you don’t need to optimise exactly, but just roughly. You could use that to implement, e.g., a very fast coordinate ascent method.
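For instance, around zero the principal branch has the Taylor series $W(z) = \sum_{n \ge 1} \frac{(-n)^{n-1}}{n!} z^n = z - z^2 + \frac{3}{2}z^3 - \frac{8}{3}z^4 + \cdots$ (convergent for $|z| < 1/e$), so a short truncation is essentially free to evaluate and quite accurate near the origin:

require(gsl)
## four-term Taylor truncation of W0 around z = 0 (decent for |z| well below 1/e)
lambert_W0_approx <- function(z) z - z^2 + 1.5 * z^3 - (8 / 3) * z^4
lambert_W0_approx(0.1)  # approx 0.091233
lambert_W0(0.1)         # approx 0.091277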
I don’t get how you got the 3rd equation from the first using the 2nd equation. Can you post some more steps in between? Thank you.