# Powell's method

The function must be a real-valued function of a fixed number of real-valued inputs. The caller passes in an initial point together with a set of initial search vectors: typically N search vectors ${\textstyle \{s_{1},\dots ,s_{N}\}}$ are passed in, which are simply the unit vectors aligned with each coordinate axis.[1]
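As a concrete illustration of these inputs, the following sketch sets up a hypothetical 2-D objective function, a starting point, and the default axis-aligned search vectors (the rows of the identity matrix); the objective and starting point here are arbitrary examples, not part of the method.

```python
import numpy as np

# Hypothetical example objective: a separable 2-D quadratic.
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

x0 = np.array([0.0, 0.0])          # initial point chosen by the caller
N = x0.size
search_vectors = [np.eye(N)[i] for i in range(N)]   # s_1, ..., s_N
```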
The method minimises the function by a bi-directional search along each search vector in turn. The bi-directional line search along each search vector can be done by golden-section search or Brent's method. Let the minima found during the successive line searches be ${\textstyle \{x_{0}+\alpha _{1}s_{1},\;x_{0}+\sum _{i=1}^{2}\alpha _{i}s_{i},\;\dots ,\;x_{0}+\sum _{i=1}^{N}\alpha _{i}s_{i}\}}$, where ${\textstyle x_{0}}$ is the initial starting point and ${\textstyle \alpha _{i}}$ is the scalar determined during the bi-directional search along ${\textstyle s_{i}}$. The new position ${\textstyle x_{1}}$ can then be expressed as a linear combination of the search vectors, i.e. ${\textstyle x_{1}=x_{0}+\sum _{i=1}^{N}\alpha _{i}s_{i}}$.

The new displacement vector ${\textstyle \sum _{i=1}^{N}\alpha _{i}s_{i}}$ becomes a new search vector and is appended to the end of the search-vector list. Meanwhile, the search vector that contributed most to the new direction, i.e. the most successful one (${\textstyle i_{d}=\arg \max _{i=1}^{N}|\alpha _{i}|\,\|s_{i}\|}$), is deleted from the list. The new set of N search vectors is ${\textstyle \{s_{1},\dots ,s_{i_{d}-1},s_{i_{d}+1},\dots ,s_{N},\sum _{i=1}^{N}\alpha _{i}s_{i}\}}$. The algorithm iterates until no significant improvement is made.[1]
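The iteration described above can be sketched as follows. This is a minimal, non-authoritative sketch: the bi-directional line search is done here by a golden-section search over a fixed interval $[-\text{bracket}, \text{bracket}]$, and the bracket width, tolerances, and iteration cap are illustrative assumptions, not part of Powell's method itself.

```python
import numpy as np

def golden_section(phi, a, b, tol=1e-8):
    """Golden-section search for a minimiser of phi on [a, b]."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0        # 1 / golden ratio
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

def powell(f, x0, bracket=10.0, tol=1e-10, max_iter=100):
    """Sketch of Powell's method; bracket/tol/max_iter are assumed defaults."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    S = [np.eye(n)[i] for i in range(n)]       # initial axis-aligned vectors
    f_prev = f(x)
    for _ in range(max_iter):
        x_start = x.copy()
        alphas = []
        # Bi-directional line search along each search vector in turn.
        for s in S:
            alpha = golden_section(lambda a: f(x + a * s), -bracket, bracket)
            x = x + alpha * s
            alphas.append(alpha)
        d = x - x_start                        # new displacement vector
        # Delete the vector that contributed most to the new direction,
        # then append the displacement as a new search vector.
        i_d = int(np.argmax([abs(a) * np.linalg.norm(s)
                             for a, s in zip(alphas, S)]))
        S.pop(i_d)
        S.append(d)
        f_new = f(x)
        if f_prev - f_new < tol:               # no significant improvement
            break
        f_prev = f_new
    return x
```

For a separable quadratic such as `f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2`, calling `powell(f, [0.0, 0.0])` converges to the minimiser near `[1.0, -2.0]` after the first pass of axis-aligned line searches.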