# Berndt–Hall–Hall–Hausman algorithm

The Berndt–Hall–Hall–Hausman (BHHH) algorithm is a numerical optimization algorithm similar to the Newton–Raphson algorithm, but it replaces the observed negative Hessian matrix with the outer product of the gradient. It is named after the four originators: Ernst R. Berndt, Bronwyn Hall, Robert Hall, and Jerry Hausman.[1]

## Usage

If a nonlinear model is fitted to the data, one often needs to estimate its coefficients through optimization. A number of optimization algorithms have the following general structure. Suppose that the function to be optimized is Q(β). Then the algorithms are iterative, defining a sequence of approximations, ${\displaystyle \beta _{k}}$, given by

${\displaystyle \beta _{k+1}=\beta _{k}-\lambda _{k}A_{k}{\frac {\partial Q}{\partial \beta }}(\beta _{k}),}$

where ${\displaystyle \beta _{k}}$ is the parameter estimate at step k, and ${\displaystyle \lambda _{k}}$ is a parameter (called the step size) which partly determines the particular algorithm. For the BHHH algorithm ${\displaystyle \lambda _{k}}$ is determined by calculations within a given iterative step, involving a line search until a point ${\displaystyle \beta _{k+1}}$ is found satisfying certain criteria.
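A minimal sketch of one such iteration, treating Q as an objective to be minimized (which matches the minus sign in the update above), might look as follows. The names `Q`, `grad_Q` and `A_k` are placeholders for problem-specific inputs, and the step-halving rule here merely stands in for whatever line-search criteria a particular implementation uses.

```python
import numpy as np

def generic_step(beta_k, Q, grad_Q, A_k, max_halvings=30):
    """One iteration beta_{k+1} = beta_k - lambda_k * A_k @ grad_Q(beta_k),
    with lambda_k chosen by step-halving until Q decreases (a stand-in line search)."""
    direction = A_k @ grad_Q(beta_k)
    q_old = Q(beta_k)
    lam = 1.0
    for _ in range(max_halvings):
        beta_next = beta_k - lam * direction
        if Q(beta_next) < q_old:   # accept the first step size that improves Q
            return beta_next
        lam *= 0.5                 # otherwise shrink lambda_k and try again
    return beta_k                  # no improving step found within the budget
```

In addition, for the BHHH algorithm, Q has the form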

${\displaystyle Q=\sum _{i=1}^{N}Q_{i}}$

and ${\displaystyle A_{k}}$ is calculated using

${\displaystyle A_{k}=\left[\sum _{i=1}^{N}{\frac {\partial Q_{i}}{\partial \beta }}(\beta _{k}){\frac {\partial Q_{i}}{\partial \beta }}(\beta _{k})'\right]^{-1},}$

where ${\displaystyle Q_{i}}$ is the contribution of the i-th observation to the overall objective.
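As a concrete illustration (not taken from the original reference), the sketch below applies this update to an assumed Gaussian linear model with unit error variance, so that each Q_i reduces to half a squared residual. A_k is built from the per-observation score vectors, and the step size λ_k is simply fixed at 1 rather than chosen by a line search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Gaussian linear model y = X @ beta_true + noise with unit error variance,
# so each per-observation objective Q_i reduces to half the squared residual.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

beta = np.zeros(2)                         # starting value beta_0
for _ in range(50):
    residuals = y - X @ beta
    scores = -residuals[:, None] * X       # rows are dQ_i/dbeta, one per observation
    A = np.linalg.inv(scores.T @ scores)   # A_k = [sum_i (dQ_i/dbeta)(dQ_i/dbeta)']^{-1}
    grad = scores.sum(axis=0)              # dQ/dbeta = sum_i dQ_i/dbeta
    beta = beta - A @ grad                 # the BHHH update with lambda_k = 1

print(beta)  # close to the least-squares estimate, hence to beta_true
```

In this setting only first derivatives of the per-observation objectives are needed; the outer-product matrix plays the role that the Hessian would play in a Newton–Raphson step.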

In other cases, e.g. Newton–Raphson, ${\displaystyle A_{k}}$ can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.[citation needed]