# Divided differences


In mathematics, **divided differences** is an algorithm, historically used for computing tables of logarithms and trigonometric functions.^{[citation needed]} Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.^{[1]}

Divided differences is a recursive division process. The method can be used to calculate the coefficients of the interpolation polynomial in Newton form.


## Definition

Given *k* + 1 data points

$$(x_0, y_0), \ldots, (x_k, y_k),$$

where the $x_j$ are pairwise distinct, the **forward divided differences** are defined as:

$$[y_j] := y_j, \qquad j \in \{0, \ldots, k\},$$

$$[y_j, \ldots, y_{j+\nu}] := \frac{[y_{j+1}, \ldots, y_{j+\nu}] - [y_j, \ldots, y_{j+\nu-1}]}{x_{j+\nu} - x_j}, \qquad \nu \in \{1, \ldots, k\},\ j \in \{0, \ldots, k-\nu\}.$$

The **backward divided differences** are defined as:

$$[y_j] := y_j, \qquad j \in \{0, \ldots, k\},$$

$$[y_j, \ldots, y_{j-\nu}] := \frac{[y_j, \ldots, y_{j-\nu+1}] - [y_{j-1}, \ldots, y_{j-\nu}]}{x_j - x_{j-\nu}}, \qquad \nu \in \{1, \ldots, k\},\ j \in \{\nu, \ldots, k\}.$$
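As an illustration of the forward recursion, here is a minimal Python sketch (the function name `newton_coefficients` is ours, not from any particular library) that computes $[y_0], [y_0, y_1], \ldots, [y_0, \ldots, y_k]$, i.e. the coefficients of the Newton form of the interpolation polynomial:

```python
def newton_coefficients(x, y):
    """Forward divided differences [y_0], [y_0, y_1], ..., [y_0, ..., y_k]
    for pairwise distinct nodes x, computed with the recursion above.
    These are the coefficients of the Newton form of the interpolation polynomial."""
    coef = list(y)                      # order-0 differences: coef[j] = [y_j]
    result = [coef[0]]
    for nu in range(1, len(x)):         # raise the order one step at a time
        for j in range(len(x) - nu):    # coef[j] becomes [y_j, ..., y_{j+nu}]
            coef[j] = (coef[j + 1] - coef[j]) / (x[j + nu] - x[j])
        result.append(coef[0])
    return result

print(newton_coefficients([1.0, 2.0, 4.0], [1.0, 4.0, 16.0]))  # f(x) = x^2 -> [1.0, 3.0, 1.0]
```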

## Notation

If the data points are given as a function *ƒ*,

$$(x_0, f(x_0)), \ldots, (x_k, f(x_k)),$$

one sometimes writes

$$f[x_0, \ldots, x_k]$$

for the divided difference $[f(x_0), \ldots, f(x_k)]$.

Several notations for the divided difference of the function *ƒ* on the nodes *x*_{0}, ..., *x*_{n} are used:

$$[x_0, \ldots, x_n]f, \qquad [x_0, \ldots, x_n; f], \qquad D[x_0, \ldots, x_n]f,$$

etc.

## Example

Divided differences for $\nu = 0, 1, 2$ and the first few values of $j$:

$$\begin{aligned}
{}[y_0] &= y_0 \\
{}[y_1] &= y_1 \\
{}[y_2] &= y_2 \\
{}[y_0, y_1] &= \frac{y_1 - y_0}{x_1 - x_0} \\
{}[y_1, y_2] &= \frac{y_2 - y_1}{x_2 - x_1} \\
{}[y_0, y_1, y_2] &= \frac{[y_1, y_2] - [y_0, y_1]}{x_2 - x_0}
\end{aligned}$$

To make the recursive process clearer, the divided differences can be put in tabular form, where each entry is computed from the two entries to its upper left and lower left:

$$\begin{array}{cccc}
x_0 & y_0 = [y_0] & & \\
 & & [y_0, y_1] & \\
x_1 & y_1 = [y_1] & & [y_0, y_1, y_2] \\
 & & [y_1, y_2] & \\
x_2 & y_2 = [y_2] & &
\end{array}$$
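The triangular scheme can be generated directly; the following sketch (the helper name `divided_difference_table` is ours) prints one row per node, where entry $\nu$ of row $j$ is $[y_j, \ldots, y_{j+\nu}]$:

```python
def divided_difference_table(x, y):
    """table[j][nu] = [y_j, ..., y_{j+nu}]; row j has len(x) - j meaningful entries."""
    n = len(x)
    table = [[yj] for yj in y]          # order 0: [y_j]
    for nu in range(1, n):
        for j in range(n - nu):
            table[j].append((table[j + 1][nu - 1] - table[j][nu - 1]) / (x[j + nu] - x[j]))
    return table

xs = [0.0, 1.0, 3.0]
ys = [1.0, 3.0, 13.0]                    # values of f(x) = x^2 + x + 1
for xj, row in zip(xs, divided_difference_table(xs, ys)):
    print(xj, row)
# 0.0 [1.0, 2.0, 1.0]
# 1.0 [3.0, 5.0]
# 3.0 [13.0]
```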

## Properties

- Divided differences are symmetric: if $\sigma$ is a permutation of $\{0, \ldots, n\}$, then

  $$f[x_0, \ldots, x_n] = f[x_{\sigma(0)}, \ldots, x_{\sigma(n)}].$$

- From the mean value theorem for divided differences it follows that

  $$f[x_0, \ldots, x_n] = \frac{f^{(n)}(\xi)}{n!},$$

  where $\xi$ is in the open interval determined by the smallest and largest of the $x_k$'s.

### Matrix form

The divided difference scheme can be put into an upper triangular matrix. Let

$$T_f(x_0, \ldots, x_n) = \begin{pmatrix}
f[x_0] & f[x_0, x_1] & f[x_0, x_1, x_2] & \cdots & f[x_0, \ldots, x_n] \\
0 & f[x_1] & f[x_1, x_2] & \cdots & f[x_1, \ldots, x_n] \\
0 & 0 & f[x_2] & \cdots & f[x_2, \ldots, x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & f[x_n]
\end{pmatrix}.$$

Then it holds

- $T_{f+g}(x) = T_f(x) + T_g(x)$
- $T_{\lambda f}(x) = \lambda\, T_f(x)$ for a scalar $\lambda$
- $T_{f \cdot g}(x) = T_f(x) \cdot T_g(x)$

  This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes form a commutative ring.
- Since $T_f(x)$ is a triangular matrix, its eigenvalues are obviously $f(x_0), \ldots, f(x_n)$.
- Let $\delta_\xi$ be a Kronecker delta-like function, that is

  $$\delta_\xi(t) = \begin{cases} 1 & \text{if } t = \xi, \\ 0 & \text{else.} \end{cases}$$

- Obviously $f \cdot \delta_\xi = f(\xi) \cdot \delta_\xi$, thus $\delta_\xi$ is an eigenfunction of pointwise function multiplication. That is, $T_{\delta_{x_i}}(x)$ is in a sense an "eigenmatrix" of $T_f(x)$: $T_f(x) \cdot T_{\delta_{x_i}}(x) = f(x_i) \cdot T_{\delta_{x_i}}(x)$. However, all columns of $T_{\delta_{x_i}}(x)$ are multiples of each other; the matrix rank of $T_{\delta_{x_i}}(x)$ is 1. So the matrix of all eigenvectors of $T_f(x)$ can be composed from the $i$-th column of each $T_{\delta_{x_i}}(x)$. Denote the matrix of eigenvectors by $U(x)$.
- The diagonalization of $T_f(x)$ can be written as

  $$T_f(x) \cdot U(x) = U(x) \cdot \operatorname{diag}\bigl(f(x_0), \ldots, f(x_n)\bigr).$$
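A NumPy sketch of the matrix form (the helper name `dd_matrix` is ours): it fills $T_f$ with the recursion $f[x_i, \ldots, x_j] = \bigl(f[x_{i+1}, \ldots, x_j] - f[x_i, \ldots, x_{j-1}]\bigr)/(x_j - x_i)$ and checks the product rule and commutativity numerically:

```python
import numpy as np

def dd_matrix(f, x):
    """Upper triangular matrix T_f(x): entry (i, j) with i <= j is f[x_i, ..., x_j]."""
    n = len(x)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, i] = f(x[i])
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            T[i, j] = (T[i + 1, j] - T[i, j - 1]) / (x[j] - x[i])
    return T

x = np.array([0.0, 1.0, 2.0, 4.0])
Tf = dd_matrix(np.sin, x)
Tg = dd_matrix(np.cos, x)
Tfg = dd_matrix(lambda t: np.sin(t) * np.cos(t), x)
print(np.allclose(Tfg, Tf @ Tg))      # True: T_{f*g} = T_f * T_g (Leibniz rule)
print(np.allclose(Tf @ Tg, Tg @ Tf))  # True: such matrices commute
```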

## Alternative definitions

### Expanded form

When the data points are given as values of a function *ƒ*, the divided difference can be written in expanded form as

$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k \in \{0, \ldots, n\} \setminus \{j\}} (x_j - x_k)}.$$

With the help of the polynomial function $\omega(\xi) = (\xi - x_0) \cdots (\xi - x_n)$ this can be written as

$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.$$

Alternatively, we can allow counting backwards from the start of the sequence by defining $x_k \equiv x_{k \bmod (n+1)}$ whenever $k < 0$ or $k > n$. This definition allows $x_{-1}$ to be interpreted as $x_n$, $x_{-n}$ to be interpreted as $x_1$, $x_{-n-1}$ to be interpreted as $x_0$, etc. The expanded form of the divided difference thus becomes

$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k = j - n}^{j - 1} (x_j - x_k)},$$

where the product runs over the $n$ indices preceding $j$ in this cyclic numbering, i.e. over all indices except $j$ itself.

Yet another characterization utilizes limits:

$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \lim_{x \to x_j} \frac{f(x)\,(x - x_j)}{\omega(x)}.$$
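The expanded form translates directly into code; a small sketch (the function name `dd_expanded` is ours), assuming pairwise distinct nodes:

```python
import math

def dd_expanded(f, x):
    """Expanded form: f[x_0, ..., x_n] = sum_j f(x_j) / prod_{k != j} (x_j - x_k)."""
    total = 0.0
    for j, xj in enumerate(x):
        denom = math.prod(xj - xk for k, xk in enumerate(x) if k != j)
        total += f(xj) / denom
    return total

# Same value as the recursive scheme yields for exp[x_0, ..., x_3]:
print(dd_expanded(math.exp, [0.0, 0.5, 1.0, 2.0]))
```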

#### Partial fractions

You can represent partial fractions using the expanded form of divided differences. (This does not simplify computation, but is interesting in itself.) If $p$ and $q$ are polynomial functions, where $\deg p < \deg q$ and $q$ is given in terms of linear factors by $q(\xi) = (\xi - x_1) \cdots (\xi - x_n)$, then it follows from partial fraction decomposition that

$$\frac{p(\xi)}{q(\xi)} = \left( t \mapsto \frac{p(t)}{\xi - t} \right)[x_1, \ldots, x_n].$$

If limits of the divided differences are accepted, then this connection also holds if some of the $x_j$ coincide.

If $f$ is a polynomial function of arbitrary degree and it is decomposed as $f(x) = p(x) + q(x) \cdot d(x)$ using polynomial division of $f$ by $q$, then

$$\frac{p(\xi)}{q(\xi)} = \left( t \mapsto \frac{f(t)}{\xi - t} \right)[x_1, \ldots, x_n].$$
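A quick numerical check of the first identity, using a hypothetical pair of polynomials $p$, $q$ (with $\deg p < \deg q$ and simple roots) and an evaluation point $\xi$ away from the roots:

```python
def dd_from_values(values, nodes):
    """Divided difference over the nodes, given precomputed values[j] = g(x_j)."""
    vals = list(values)
    for nu in range(1, len(nodes)):
        for j in range(len(nodes) - nu):
            vals[j] = (vals[j + 1] - vals[j]) / (nodes[j + nu] - nodes[j])
    return vals[0]

roots = [1.0, 2.0, 4.0]                      # q(t) = (t - 1)(t - 2)(t - 4)
p = lambda t: 3 * t**2 + 1                   # deg p < deg q
q = lambda t: (t - 1) * (t - 2) * (t - 4)
xi = 0.5
lhs = p(xi) / q(xi)
rhs = dd_from_values([p(t) / (xi - t) for t in roots], roots)
print(lhs, rhs)                              # both are -0.666...
```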

### Peano form

The divided differences can be expressed as

$$f[x_0, \ldots, x_n] = \frac{1}{n!} \int_{x_0}^{x_n} f^{(n)}(t)\, B_{n-1}(t)\, dt,$$

where $B_{n-1}$ is a B-spline of degree $n-1$ for the data points $x_0, \ldots, x_n$, normalized to have unit integral, and $f^{(n)}$ is the $n$-th derivative of the function $f$.

This is called the **Peano form** of the divided differences, and $B_{n-1}$ is called the Peano kernel for the divided differences; both are named after Giuseppe Peano.

### Taylor form

#### First order

If nodes are cumulated, then the numerical computation of the divided differences is inaccurate, because you divide almost two zeros, each of which has a high relative error due to differences of similar values. However, we know that difference quotients approximate the derivative and vice versa:

$$\frac{f(y) - f(x)}{y - x} \approx f'(x) \qquad \text{for } x \approx y.$$

This approximation can be turned into an identity whenever Taylor's theorem applies:

$$f(y) = f(x) + f'(x) (y - x) + f''(x) \frac{(y - x)^2}{2!} + \dotsb
\quad \Longrightarrow \quad
\frac{f(y) - f(x)}{y - x} = f'(x) + f''(x) \frac{y - x}{2!} + \dotsb$$

You can eliminate the odd powers of $y - x$ by expanding the Taylor series at the center between $x$ and $y$:

$$x = m - h, \quad y = m + h, \quad \text{that is} \quad m = \frac{x + y}{2}, \quad h = \frac{y - x}{2},$$

$$\frac{f(y) - f(x)}{y - x} = \frac{f(m + h) - f(m - h)}{2h} = f'(m) + f'''(m) \frac{h^2}{3!} + \dotsb$$
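A small numerical illustration with $f = \sin$ and two nearly coincident nodes: the raw difference quotient loses several digits to cancellation, while the midpoint Taylor form $f'(m) + f'''(m) h^2/3! + \dotsb$ does not:

```python
import math

x, y = 1.0, 1.0 + 1e-9
m, h = (x + y) / 2, (y - x) / 2

quotient = (math.sin(y) - math.sin(x)) / (y - x)   # suffers from cancellation
taylor = math.cos(m) - math.cos(m) * h**2 / 6      # f'(m) + f'''(m) h^2 / 3! for f = sin

print(quotient)  # accurate to only about half the digits
print(taylor)    # essentially full accuracy
```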

#### Higher order

The Taylor series or any other representation with function series can in principle be used to approximate divided differences. Taylor series are infinite sums of power functions. The mapping from a function to a divided difference is a linear functional, so we can apply this functional to the function summands individually.

Express power notation with an ordinary function:

$$p_n(x) = x^n.$$

A regular Taylor series is a weighted sum of power functions:

$$f = f(0) \cdot p_0 + f'(0) \cdot p_1 + \frac{f''(0)}{2!} \cdot p_2 + \frac{f'''(0)}{3!} \cdot p_3 + \dotsb$$

The Taylor series for divided differences:

$$f[x_0, \ldots, x_n] = f(0) \cdot p_0[x_0, \ldots, x_n] + f'(0) \cdot p_1[x_0, \ldots, x_n] + \frac{f''(0)}{2!} \cdot p_2[x_0, \ldots, x_n] + \dotsb$$

We know that the first $n$ terms vanish, because we have a higher difference order than polynomial order, and in the following term the divided difference is one:

$$\begin{aligned}
p_j[x_0, \ldots, x_n] &= 0 \quad \text{for } j < n, \\
p_n[x_0, \ldots, x_n] &= 1.
\end{aligned}$$

It follows that the Taylor series for the divided difference essentially starts with

$$\frac{f^{(n)}(0)}{n!},$$

which, by the mean value theorem for divided differences, is also a simple approximation of the divided difference.

If we had to compute the divided differences for the power functions in the usual way, we would encounter the same numerical problems that we had when computing the divided differences of $f$. The nice thing is that there is a simpler way. It holds

$$\sum_{k=0}^{\infty} p_k[x_0, \ldots, x_m] \cdot t^k = \frac{t^m}{(1 - x_0 t)(1 - x_1 t) \dotsm (1 - x_m t)}.$$

Consequently, we can compute the divided differences of $p_n$ by a division of formal power series. See how this reduces to the successive computation of powers when we compute $p_n[x_0]$ for a single node $x_0$ and several $n$.
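A sketch of that power series division (the function name is ours): the coefficients of $t^m / \bigl((1 - x_0 t) \dotsm (1 - x_m t)\bigr)$ are the divided differences $p_n[x_0, \ldots, x_m]$, and each factor $1/(1 - x_j t)$ is divided in with one multiply-and-add pass:

```python
def power_divided_differences(nodes, max_n):
    """p_n[x_0, ..., x_m] for n = 0, ..., max_n, where p_n(x) = x**n,
    computed by formal power series division rather than by the recursive scheme."""
    m = len(nodes) - 1
    # series[k] = coefficient of t**k in prod_j 1/(1 - x_j t); dividing by
    # (1 - x_j t) is the recurrence new[k] = old[k] + x_j * new[k - 1].
    series = [1.0] + [0.0] * (max_n - m)
    for xj in nodes:
        for k in range(1, len(series)):
            series[k] += xj * series[k - 1]
    # multiplying by t**m shifts the coefficients; p_n vanishes for n < m
    return [0.0] * m + series

print(power_divided_differences([2.0, 3.0], 4))
# [0.0, 1.0, 5.0, 19.0, 65.0]  ==  [p_0[2,3], ..., p_4[2,3]] with p_n[2,3] = 3^n - 2^n
```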

If you need to compute a whole divided difference scheme with respect to a Taylor series, see the section on polynomials and power series below.

## Polynomials and power series

Divided differences of polynomials are particularly interesting because they can benefit from the Leibniz rule. The matrix $J$ with

$$J = \begin{pmatrix}
x_0 & 1 & 0 & \cdots & 0 \\
0 & x_1 & 1 & \cdots & 0 \\
0 & 0 & x_2 & \ddots & \vdots \\
\vdots & & & \ddots & 1 \\
0 & 0 & 0 & \cdots & x_n
\end{pmatrix}$$

contains the divided difference scheme for the identity function with respect to the nodes $x_0, \ldots, x_n$; thus $J^m$ contains the divided differences for the power function with exponent $m$. Consequently, you can obtain the divided differences for a polynomial function $p$ by applying $p$ (more precisely: its corresponding matrix polynomial function $p_{\mathrm{M}}$) to the matrix $J$:

$$T_p(x_0, \ldots, x_n) = p_{\mathrm{M}}(J).$$

This is known as *Opitz' formula*.^{[2]}^{[3]}

Now consider increasing the degree of $p$ to infinity, i.e. turn the Taylor polynomial into a Taylor series. Let $f$ be a function which corresponds to a power series. You can compute a divided difference scheme for $f$ by computing the corresponding matrix series applied to $J$. If the nodes $x_0, \ldots, x_n$ are all equal, then $J$ is a Jordan block and the computation boils down to generalizing a scalar function to a matrix function using Jordan decomposition.
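A NumPy sketch of Opitz' formula with a hypothetical cubic $p(t) = 2t^3 - t + 5$: the first row of $p_{\mathrm{M}}(J)$ reproduces $p[x_0], p[x_0, x_1], \ldots$ as computed by the ordinary recursion:

```python
import numpy as np

def opitz_matrix(x):
    """J: nodes on the diagonal, ones on the superdiagonal -- the divided
    difference scheme of the identity function on the nodes x."""
    return np.diag(np.asarray(x, dtype=float)) + np.diag(np.ones(len(x) - 1), k=1)

def dd_row(f, x):
    """First row of the divided difference scheme: f[x_0], f[x_0, x_1], ..."""
    vals = [f(xi) for xi in x]
    row = [vals[0]]
    for nu in range(1, len(x)):
        vals = [(vals[j + 1] - vals[j]) / (x[j + nu] - x[j]) for j in range(len(x) - nu)]
        row.append(vals[0])
    return np.array(row)

x = [0.0, 1.0, 3.0, 4.0]
J = opitz_matrix(x)
p = lambda t: 2 * t**3 - t + 5
P = 2 * np.linalg.matrix_power(J, 3) - J + 5 * np.eye(len(x))   # p applied to J
print(np.allclose(P[0], dd_row(p, x)))                          # True
```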

## Forward differences

When the data points are equidistantly distributed we get the special case called **forward differences**. They are easier to calculate than the more general divided differences.

Note that the "divided portion" of the **forward divided difference** must still be computed to recover the **forward divided difference** from the **forward difference**.

### Definition

Given *n* data points

$$(x_0, y_0), \ldots, (x_{n-1}, y_{n-1})$$

with

$$x_k = x_0 + k h, \qquad k = 0, \ldots, n - 1, \quad \text{for some fixed } h > 0,$$

the divided differences can be calculated via **forward differences** defined as

$$\begin{aligned}
\Delta^{(0)} y_i &:= y_i, \\
\Delta^{(k)} y_i &:= \Delta^{(k-1)} y_{i+1} - \Delta^{(k-1)} y_i, \qquad k \ge 1.
\end{aligned}$$

The relationship between divided differences and forward differences is^{[4]}

$$[y_j, y_{j+1}, \ldots, y_{j+k}] = \frac{1}{k!\, h^k}\, \Delta^{(k)} y_j.$$

### Example
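A small numerical illustration in Python, using hypothetical equidistant data $x_k = 1 + 0.5k$ and $y_k = x_k^3$: the forward difference triangle is built by repeated subtraction only, and the divided differences are then recovered by dividing $\Delta^{(k)} y_0$ by $k!\, h^k$:

```python
from math import factorial

def forward_differences(y):
    """Triangle of forward differences: diffs[k][i] = Delta^(k) y_i."""
    diffs = [list(y)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return diffs

h = 0.5
x = [1.0 + k * h for k in range(5)]
y = [xi**3 for xi in x]                    # f(x) = x^3

diffs = forward_differences(y)
divided = [diffs[k][0] / (factorial(k) * h**k) for k in range(len(y))]
print(divided)   # [y_0], [y_0,y_1], ..., [y_0,...,y_4]  ->  [1.0, 4.75, 4.5, 1.0, 0.0]
```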

## See also

- Difference quotient
- Neville's algorithm
- Polynomial interpolation
- Mean value theorem for divided differences
- Nörlund–Rice integral
- Pascal's triangle

## References

1. Isaacson, Walter (2014). *The Innovators*. Simon & Schuster. p. 20. ISBN 978-1-4767-0869-0.
2. de Boor, Carl (2005). "Divided Differences". *Surv. Approx. Theory* 1: 46–69.
3. Opitz, G. (1964). "Steigungsmatrizen". *Z. Angew. Math. Mech.* 44: T52–T54.
4. Burden, Richard L.; Faires, J. Douglas (2011). *Numerical Analysis* (9th ed.). p. 129.

- Louis Melville Milne-Thomson (2000) [1933]. *The Calculus of Finite Differences*. American Mathematical Soc. Chapter 1: Divided Differences. ISBN 978-0-8218-2107-7.
- Myron B. Allen; Eli L. Isaacson (1998). *Numerical Analysis for Applied Science*. John Wiley & Sons. Appendix A. ISBN 978-1-118-03027-1.
- Ron Goldman (2002). *Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling*. Morgan Kaufmann. Chapter 4: Newton Interpolation and Difference Triangles. ISBN 978-0-08-051547-2.

## External links

- An implementation in Haskell.