Nowadays, Taylor's theorem refers to any of several variants of the
following expansion of a smooth function f about
a regular point a, in terms of a polynomial whose coefficients are
determined by the successive derivatives of the function at that point:
f (a + x)  =  f (a) + f '(a) x + f ''(a) x^2/2 + ... + f^(n)(a) x^n/n! + R_n(x)
A Taylor expansion about the origin (a = 0)
is often called a Taylor-Maclaurin expansion, in honor of
Colin Maclaurin
(1698-1746) who focused on that special case in 1742.
Other variants of Taylor's theorem differ in the explicit expressions which can be
given for the so-called remainder R_n(x).
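As a quick illustration (not part of the original text), the quality of the polynomial approximation can be checked numerically. A minimal sketch, assuming f = exp and a = 0 (the Maclaurin case), shows the remainder R_n(x) shrinking as the degree n grows:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about a = 0 (Maclaurin case)."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The remainder R_n(x) = exp(x) - polynomial shrinks rapidly as n grows:
for n in (2, 5, 10):
    print(n, abs(math.exp(1.0) - taylor_exp(1.0, n)))
```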
Taylor published two versions of his theorem in 1715. In a letter to his friend
John
Machin (1680-1751) dated July 26, 1712, Taylor gave Machin credit for the idea.
Several variants or precursors of the theorem had also been discovered independently by
James Gregory (1638-1675),
Isaac Newton (1643-1727),
Gottfried Leibniz (1646-1716),
Abraham de Moivre (1667-1754) and
Johann Bernoulli (1667-1748).
The term Taylor series was apparently coined by
Simon
Lhuilier (1785).
(2015-04-19) Basing calculus on Taylor's expansions (1772)
Lagrange's strict algebraic interpretation
of differential calculus.
Taylor's theorem was brought to great prominence in 1772 by
Joseph-Louis Lagrange (1736-1813)
who declared it the basis for differential calculus
(he made this part of his own lectures at Polytechnique in 1797).
Arguably, this was a rebuttal to religious concerns which had been raised in 1734
(The Analyst) by
George Berkeley (1685-1753)
Bishop of Cloyne (1734-1753)
about the infinitesimal foundations of Calculus.
The mathematical concepts behind differentiation and/or integration are so
pervasive that they can be introduced or discussed outside of the historical context
which originally gave birth to them, one century before Lagrange.
Lagrange's starting point is an exact expression, valid for any polynomial f
of degree n or less, in any commutative ring :
f (a + x)  =  f (a) + D_1 f (a) x + D_2 f (a) x^2 + ... + D_n f (a) x^n
In the ordinary interpretation of Calculus [over any field
of characteristic zero] the following relation holds, for any polynomial f :
D_0 f (a)  =  f (a)
D_k f (a)  =  f^(k)(a) / k!
However, the expressions below remain true over any commutative ring, even when
neither the reciprocal of k! nor higher-order derivatives are defined.
Lagrange's definitions of D_k f (a) are based solely on the
binomial theorem:
D_k f is simply a polynomial of degree n-k. No divisions are needed.
The following manipulations are limited to the case when f
is a polynomial of degree at most n,
so that only finitely many terms are involved in the data and in the results.
With infinitely many terms, neither would be guaranteed to converge.
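The point can be made concrete with a small sketch (the helper name lagrange_D is ours): over the integers, D_k f (a) is just the coefficient of x^k in f (a + x), obtained from integer binomial coefficients, so the reciprocal of k! is never needed:

```python
from math import comb

def lagrange_D(coeffs, a, k):
    """D_k f(a) = coefficient of x^k in f(a+x), for a polynomial f
    given by its coefficient list [c_0, c_1, ..., c_n].
    Binomial coefficients comb(m, k) are integers, so only ring
    operations (sums and products) are used -- no division by k!."""
    return sum(c * comb(m, k) * a**(m - k)
               for m, c in enumerate(coeffs) if m >= k)

# Example: f(x) = x^3 over the integers, expanded about a = 2:
# f(2 + x) = 8 + 12 x + 6 x^2 + x^3
print([lagrange_D([0, 0, 0, 1], 2, k) for k in range(4)])  # [8, 12, 6, 1]
```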
"Théorie des fonctions analytiques contenant les principes du calcul différentiel,
dégagés de toute considération d'infiniment petits ou d'évanouissants,
de limites ou de fluxions et réduits à l'analyse algébrique des quantités finies"
[Theory of analytic functions, containing the principles of differential calculus,
freed from any consideration of infinitely small or vanishing quantities,
of limits or of fluxions, and reduced to the algebraic analysis of finite quantities]
by Joseph-Louis Lagrange (1797)
Journal de l'École polytechnique, 9, III, 52, p. 49
(2008-12-23) Radius of Convergence of a Complex Power Series
A complex power series converges inside
a disk and diverges outside of it (the situation at different points of
the boundary circle may vary).
That disk is called the disk of convergence.
Its radius is the radius of convergence
and its boundary is the circle of convergence.
The result advertised above is often called
Abel's power series theorem.
Although it was known well before him, Abel is
credited for making this part of a general discussion which includes the status
of points on the circumference of the circle of convergence.
The main tool for that is another theorem due to Abel,
discussed in the next section.
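The dichotomy is easy to observe numerically. A minimal sketch, using the geometric series Σ z^n (whose radius of convergence is 1) and two sample points of our own choosing:

```python
# Partial sums of the geometric series sum of z^n (radius of convergence 1).
# Inside the disk of convergence they settle down; outside they blow up.
def partial_sum(z, n):
    return sum(z**k for k in range(n))

inside = 0.5 + 0.5j   # |z| < 1 : partial sums approach 1/(1-z)
outside = 1.1j        # |z| > 1 : partial sums grow without bound
print(abs(partial_sum(inside, 200) - 1 / (1 - inside)))  # tiny
print(abs(partial_sum(outside, 200)))                    # huge
```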
(2018-06-01) Stolz Sector
Slice of the disk of convergence with its apex on the boundary.
Stolz angle
by Andrzej Kozlowski (Wolfram Demonstrations Project).
Stolz region.
Question of Daniel answered by Robert Israel (StackExchange, 2012-09-02).
Brian Keiffer (Yahoo! 2011-08-07)
Formal properties of exp series.
Defining exp (x) = Σ_n x^n/n! and e = exp (1), prove that exp (x) = e^x.
In their open disk of convergence
(i.e., circular boundary excluded, unless it's at infinity)
power series are absolutely convergent series.
So, in that domain, the sum of the series is unchanged by modifying the
order of the terms (commutativity) and/or grouping them
together (associativity).
This allows us to establish directly the following fundamental property
(using the binomial theorem):
exp (x) exp (y) = exp (x+y)
Such manipulations are disallowed for convergent series that are not
absolutely convergent (which is to say that the series consisting of
the absolute values of the terms diverges).
Rearranging the terms of any such real series can make it converge
to any arbitrary limit !
exp (x) exp (y)
  =  ( Σ_{n=0..∞} x^n/n! ) ( Σ_{m=0..∞} y^m/m! )
  =  Σ_{n=0..∞} Σ_{m=0..∞} (x^n/n!) (y^m/m!)
  =  Σ_{n=0..∞} Σ_{k=0..n} x^k y^(n-k) / [ k! (n-k)! ]
  =  Σ_{n=0..∞} (x+y)^n / n!
  =  exp (x+y)
This lemma shows immediately that exp (-x) = (exp x)^(-1).
Then, by induction on the absolute value of the integer n, we can establish that:
exp (n x) = (exp x)^n
With m = n and y = n x, this gives
exp (y) = (exp (y/m))^m. So :
exp (y/m) = (exp y)^(1/m)
Chaining those two results, we obtain, for any rational q = n/m :
exp (q y) = (exp y)^q
By continuity, the result holds for any real q = x. In particular, with y = 1:
exp (x) = (exp 1)^x = e^x
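The regrouping argument above can be mirrored numerically with truncated series (the truncation order 40 and the helper name exp_series are our own choices, not part of the original text):

```python
from math import factorial, isclose, exp

def exp_series(x, terms=40):
    """Truncated power series for exp."""
    return sum(x**n / factorial(n) for n in range(terms))

x, y = 0.7, -1.3
# Regrouped double sum: sum over n of sum over k of x^k y^(n-k) / (k! (n-k)!)
cauchy = sum(sum(x**k * y**(n - k) / (factorial(k) * factorial(n - k))
                 for k in range(n + 1))
             for n in range(40))
print(isclose(cauchy, exp_series(x) * exp_series(y)))  # True
print(isclose(cauchy, exp(x + y)))                     # True
```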
(2008-12-23) Analytic Continuation (Weierstrass, 1842)
Power series that coincide wherever their disks of convergence overlap.
In the realm of real or complex numbers,
two polynomials which coincide at infinitely many distinct points are necessarily equal
(HINT: as a polynomial with infinitely many roots,
their difference must be zero).
This result on polynomials doesn't have an immediate generalization to
analytic functions for the simple reason that
there are analytic functions with infinitely many zeroes.
The sine function is one example of an analytic function
with infinitely many discrete zeroes.
However, an analytic function defined on a nonempty open domain
can be extended in only one way to a larger open domain of definition
which doesn't encircle any point outside the previous one.
Such an extension of an analytic function is called an
analytic continuation thereof.
Divergent Series :
Loosely speaking, analytic continuations
can make sense of divergent series in a consistent way.
Consider, for example, the classic summation formula for the
geometric series, which converges when
|z| < 1 :
1 + z + z^2 + z^3 + z^4 + ... + z^n + ...  =  1 / (1-z)
The right-hand side always makes sense, unless z = 1.
It's thus tempting to equate it formally
to the left-hand side, even when the latter diverges!
This viewpoint has been shown to be consistent.
It makes perfect sense of the following "sums" of
divergent series which may otherwise look like monstrosities
(respectively obtained for z = -1, 2, 3) :
1 - 1 + 1 - 1 + 1 - ...  =  1/2
1 + 2 + 4 + 8 + 16 + ...  =  -1
1 + 3 + 9 + 27 + 81 + ...  =  -1/2
(2021-07-22) Periodic Decomposition of a Power Series
A slight generalization of the technique introduced above.
Instead of retaining only the terms of a power series whose indices are multiples of
a given modulus k, we may wish to keep only the indices whose
residue modulo k equals a prescribed remainder r.
Thus, we're now after:
f_{k,r}(z)  =  Σ_n a_{kn+r} z^(kn+r)
That can be worked out with our previous result
(the special case r = 0) by applying it to the function z^(k-r) f (z).
Using ω = exp(2πi/k), we have:
z^(k-r) f (z) + (ω z)^(k-r) f (ω z) + ... + (ω^(k-1) z)^(k-r) f (ω^(k-1) z)  =  k z^(k-r) f_{k,r}(z)
Dividing both sides by k z^(k-r), using
ω^k = 1, we obtain the desired result:
f_{k,r}(z)  =  (1/k) [ f (z) + ω^(-r) f (ω z) + ω^(-2r) f (ω^2 z) + ... + ω^(-(k-1)r) f (ω^(k-1) z) ]
In the example k = 4 for f = exp,
we have ω = i and, therefore:
f_{4,r}(z)  =  ¼ [ e^z + (-i)^r e^(iz) + (-1)^r e^(-z) + i^r e^(-iz) ]
That translates into four equations, for r = 0, 1, 2 or 3:
f_{4,0}(z)  =  ¼ [ e^z + e^(iz) + e^(-z) + e^(-iz) ]  =  ½ [ ch z + cos z ]
f_{4,1}(z)  =  ¼ [ e^z - i e^(iz) - e^(-z) + i e^(-iz) ]  =  ½ [ sh z + sin z ]
f_{4,2}(z)  =  ¼ [ e^z - e^(iz) + e^(-z) - e^(-iz) ]  =  ½ [ ch z - cos z ]
f_{4,3}(z)  =  ¼ [ e^z + i e^(iz) - e^(-z) - i e^(-iz) ]  =  ½ [ sh z - sin z ]
The whole machinery may be overkill in this case, where the above four relations
are fairly easy to obtain directly from the expansions of cos, ch, sin and sh.
However, it's a good opportunity to introduce the methodology needed in less trivial cases,
with other roots of unity...
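A numerical sanity check of the general formula, using Python's complex arithmetic (the helper name f_component is ours): with k = 4, r = 1 and f = exp, it should reproduce ½ [ sh z + sin z ]:

```python
import cmath

def f_component(f, k, r, z):
    """Keep only the terms of f's power series whose index is r mod k,
    via the k-th roots of unity (the formula derived above)."""
    w = cmath.exp(2j * cmath.pi / k)
    return sum(w**(-j * r) * f(w**j * z) for j in range(k)) / k

z = 0.8
lhs = f_component(cmath.exp, 4, 1, z)
rhs = (cmath.sinh(z) + cmath.sin(z)) / 2
print(abs(lhs - rhs) < 1e-12)   # True
```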
(2021-10-05) Finite-Difference Calculus (FDC)
Applying the methods of calculus to discrete sequences.
Difference Operator Δ (discrete derivative) :
Δ f (n)  =  f (n+1) - f (n)
Like the usual differential operator (d) this is a
linear operator, as are all
iterated difference operators Δ^k, recursively defined for k ≥ 0 :
Δ^0 f  =  f
Δ^(k+1) f  =  Δ^k (Δ f )  =  Δ (Δ^k f )
Unlike the differential operator d of infinitesimal calculus, the above difference operator
Δ yields ordinary finite quantities whose products can't be neglected; there's
a third term in the corresponding product rule :
Δ (uv)  =  (Δu) v + u (Δv) + (Δu) (Δv)
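A minimal sketch verifying the three-term product rule on sample sequences of our own choosing:

```python
def delta(f):
    """Forward difference operator: (delta f)(n) = f(n+1) - f(n)."""
    return lambda n: f(n + 1) - f(n)

u = lambda n: n * n        # u(n) = n^2
v = lambda n: 2 * n + 1    # v(n) = 2n + 1
uv = lambda n: u(n) * v(n)

# Check the three-term product rule at a few points:
for n in range(5):
    lhs = delta(uv)(n)
    rhs = delta(u)(n) * v(n) + u(n) * delta(v)(n) + delta(u)(n) * delta(v)(n)
    assert lhs == rhs
print("product rule verified")
```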
Falling Powers (falling factorials) :
The number of ways to pick a sequence of m objects out of n possible choices (allowing repetitions)
is n^m, pronounced n to the power m.
When objects already picked are disallowed, the result is denoted (n)_m
(traditionally typeset as n with an underlined exponent m) and called
n to the falling power of m. It's the product of m decreasing factors:
(n)_m  =  n (n-1) (n-2) ... (n+1-m)
As usual, that's 1 when m = 0, because it's the product of zero factors.
Falling powers are closely related to choice numbers:
C(n,m)  =  nCm  =  n! / [ (n-m)! m! ]  =  (n)_m / m!
Falling powers are to FDC what powers are to infinitesimal calculus since:
Δ (n)_m  =  m (n)_(m-1)
Iterating this relation yields the pretty formula:
Δ^k (n)_m  =  (m)_k (n)_(m-k)
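A small check of the first relation (the helper names falling and delta are ours):

```python
def falling(n, m):
    """Falling power (n)_m : product of m decreasing factors."""
    result = 1
    for i in range(m):
        result *= n - i
    return result

def delta(f):
    """Forward difference operator."""
    return lambda n: f(n + 1) - f(n)

# Delta of (n)_m is m * (n)_(m-1), the discrete analogue of d(x^m)/dx = m x^(m-1)
m = 4
for n in range(6):
    assert delta(lambda n: falling(n, m))(n) == m * falling(n, m - 1)
print("Delta (n)_4 = 4 (n)_3 verified")
```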
James Gregory (1638-1675)
Gregory-Newton forward-difference formula :
f (n)  =  Σ_{k=0..∞} C(n,k) Δ^k f (0)
When n is a natural integer,
the right-hand side is a finite sum,
as all binomial coefficients with k > n vanish.
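A sketch checking the formula for integer n (it computes Δ^k f (0) with the standard alternating-sum expansion of iterated differences, an identity not derived in the text):

```python
from math import comb

def delta_k_at_zero(f, k):
    """k-th iterated forward difference of f, evaluated at 0.
    Uses the standard expansion: Delta^k f(0) = sum_j (-1)^(k-j) C(k,j) f(j)."""
    return sum((-1)**(k - j) * comb(k, j) * f(j) for j in range(k + 1))

def gregory_newton(f, n):
    """Reconstruct f(n) from the differences of f at 0.
    For a natural number n the sum is finite: C(n,k) = 0 for k > n."""
    return sum(comb(n, k) * delta_k_at_zero(f, k) for k in range(n + 1))

f = lambda n: n**3 - 2 * n + 5
print(all(gregory_newton(f, n) == f(n) for n in range(10)))  # True
```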
Proof :
By induction on n (the case n = 0 being trivial):
Assuming the formula holds for a given n, we apply it to
Δ f and obtain:
f (n+1) - f (n)  =  Δ f (n)  =  Σ_{k=0..∞} C(n,k) Δ^(k+1) f (0)  =  Σ_{k=0..∞} C(n,k-1) Δ^k f (0)
Note the zero leading term (k = 0) in the re-indexed rightmost sum. We may add this finite sum
termwise to the previous expansion of f (n) to obtain: