The scientist knows very well that he is approaching ultimate truth only in
an asymptotic curve and is barred from ever reaching it. Konrad Lorenz (1903-1989)
in "On Agression" (1963)
(2012-08-01) Fundamentals of Asymptotics
Only zero is asymptotic to zero.
Let's first consider numerical functions
(where division makes sense):
Numerical Functions :
Two numerical functions f and g
are called asymptotic (or equivalent)
to each other in the neighborhood of some limit point L
(possibly at infinity) when the
ratio f (x) / g (x) tends to 1
as x tends to L. In other words, the following two notations
are equivalent, by definition:
f (x) ~ g (x)     ( x → L )
or
f (x) / g (x)  →  1     as  x → L
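For instance (a routine illustration, not taken from this page):  as n tends to infinity,

n² + n  ~  n²     ( n → ∞ )

since the ratio  (n² + n) / n²  =  1 + 1/n  tends to 1.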
Likewise, the statement "
f (x) is negligible  compared to
g (x) as x tends to L "
is denoted or defined as follows:
f (x) << g (x)     ( x → L )
or
f (x) / g (x)  →  0     as  x → L
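A minimal numerical sketch of those two definitions (in Python, with arbitrary sample functions):

    import math

    # As x tends to 0:  sin x ~ x  (the ratio tends to 1),
    # while  x**2 << x  (the ratio tends to 0).
    for x in (0.1, 0.01, 0.001):
        print(x, math.sin(x) / x, (x ** 2) / x)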
In the US, this is sometimes read
f (x) is a lot less than
g (x) (as x tends to L).
That reading is misleading, because the relation is unrelated to the ordering of the real line.
For example, both of the following relations hold as x tends to 0:
-1 < x²     but     x² << -1
(indeed, the ratio  x² / (-1)  tends to 0 as x tends to 0).
You can manipulate an asymptotic equivalence algebraically exactly as you would
an ordinary equation, except that you're not allowed
to transpose everything to one side of the equation!
Nothing (but zero itself) is asymptotic to zero...
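To illustrate this (a routine sanity check, not from the original text):  as x tends to 0 we have  x + x² ~ x, since the ratio  1 + x  tends to 1.  Multiplying both sides by the equivalence  sin x ~ x  legitimately yields

( x + x² ) sin x  ~  x²     ( x → 0 )

but subtracting x from both sides of  x + x² ~ x  would yield  x² ~ 0, which is false.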
Extension to Vectorial Functions :
For vectorial functions, the symmetry in the above definitions must be broken.
Negligibility is not difficult to define in a
normed vector space:
One quantity is negligible compared to another when the norm of the first
is negligible compared to the norm of the other.
With this in mind, we can promote to a definition among vectors what's
a simple characteristic theorem for equivalent scalar quantities
(with the definitions given above):
Definitions for Vectorial Asymptotics
As x tends to L, one vectorial quantity
f (x) is said to be negligible compared to
another quantity g (x) when the ratio
|| f (x) || / || g (x) ||
has a limit of zero.
Two quantities are asymptotically equivalent to each other
("asymptotic to" or "equivalent to", for short)
if their difference is negligible compared to their sum.
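Here is a minimal numerical sketch of that vectorial definition (in Python; the two vector functions and their names are arbitrary illustrations):

    import numpy as np

    # Two illustrative vector-valued functions, considered as x tends to 0.
    def f(x):
        return np.array([x, x ** 2])

    def g(x):
        return np.array([x, x ** 3])

    # f ~ g  when  ||f - g|| / ||f + g||  tends to 0.
    for x in (0.1, 0.01, 0.001):
        ratio = np.linalg.norm(f(x) - g(x)) / np.linalg.norm(f(x) + g(x))
        print(x, ratio)   # shrinks roughly like x/2, so f ~ g near 0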
Is zero asymptotic to zero?
In asymptotics, "zero" is any function which is identically equal to 0
(the null vector) in some
neighborhood of the relevant limit point.
The following relations are valid whenever f is a nonzero quantity:
0 << f     and     f ~ f
By convention, we retain the validity of those two for zero quantities:
Only zero is negligible compared to zero. Only zero is equivalent to zero.
(2017-11-25) Bachmann-Landau symbols: Big-O (and relatives).
The most common symbol in a system of four asymptotic notations.
The big-O symbol was introduced by Paul Bachmann in 1894.
In 1909, Edmund Landau adopted that notation.
At the same time, Landau introduced, with the same syntax, the distinct
little-o notation, which pertains more to pure theoretical asymptotic analysis.
In fact, it just expresses negligibility in the above sense.
Thus, as x tends to L, we have three equivalent notations:
f (x) << g (x)     or     f (x) = o ( g (x) )     or     f (x) / g (x)  →  0
The third one is the defining relation for scalar quantities only
(where division is defined) but the first two are well-defined for
normed vector spaces as well, with the understanding
that a vector function is negligible compared to another exactly when the norm of the first
is negligible compared to the norm of the second.
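A couple of routine little-o statements (given here only as illustrations):

x²  =  o (x)     ( x → 0 )          whereas          x  =  o (x²)     ( x → ∞ )
Log x  =  o (x^ε)     ( x → ∞ ) ,   for any fixed  ε > 0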
(2012-10-04) Asymptotic expansions about a limit point.
Asymptotic expansions may or may not be convergent.
Against proper mathematical usage,
the term asymptotic series is used
exclusively for divergent series by
several leading authors (including R.B. Dingle and
Gradshteyn & Ryzhik ). I beg to differ.
It makes a lot more sense to work out an asymptotic expansion first and only
then worry whether it converges or not (which is usually far from obvious).
Likewise, asymptotic expansions are best defined without concerns about possible convergence:
Definition :
An asymptotic expansion of f about the limit point L is a formal sum of terms,
each of which is negligible compared to the previous one (as x tends to L),
such that the difference between f and the sum of its first n terms
is negligible compared to the n-th term, for every n.
Bob Dingle has investigated how the exact values of a function
can be extracted from the latent information contained in its asymptotic expansion, even if it's not convergent.
Well before the more general notion of distributions was devised
(in 1944, by my late teacher Laurent Schwartz)
the Dutch mathematician
Thomas Stieltjes considered measures as generalized
derivatives of functions of bounded variation of a real variable.
Such functions are differences of two bounded monotonic functions;
they need not be differentiable or continuous.
(Stieltjes got his doctorate in Paris,
under Hermite and Darboux.)
Let's define a weight function ρ
as a nonnegative function of a nonnegative
variable which has a moment of order n,
expressed by the following convergent integral,
for any nonnegative integer n :

aₙ  =  ∫₀^∞  ρ(t) tⁿ dt
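For instance, with the exponential weight  ρ(t) = exp(-t)  (a standard choice, used here purely as an illustration) the moments are the factorials:

aₙ  =  ∫₀^∞  exp(-t) tⁿ dt  =  n!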
To any such weight function is associated a Stieltjes function defined by:
f (x)  =  ∫₀^∞  ρ(t) dt / (1 + x t)
A Stieltjes function f has four properties:
It's analytic on the cut plane (i.e., outside the negative real axis).
It tends to zero at infinity along any direction of the cut plane.
Its asymptotic series about 0 in the cut plane is  Σ (-1)ⁿ aₙ zⁿ
Its opposite is Herglotz (i.e., sign Im ( -f (z) ) = sign Im z).
Surprisingly, the converse is true (those 4 properties imply that f is Stieltjes).
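A small numerical sketch (in Python, using the exponential weight from the earlier example and scipy's quadrature purely for convenience; the helper names are ad-hoc): the asymptotic series  Σ (-1)ⁿ n! xⁿ  diverges for every nonzero x, yet its partial sums approximate the Stieltjes function well, with the error shrinking up to the optimal truncation order near n ≈ 1/x and growing afterwards.

    import math
    from scipy.integrate import quad

    def stieltjes_f(x):
        # Stieltjes function for the weight rho(t) = exp(-t)
        return quad(lambda t: math.exp(-t) / (1 + x * t), 0, math.inf)[0]

    def truncated_series(x, n):
        # Partial sum of the divergent asymptotic series  sum (-1)^k k! x^k
        return sum((-1) ** k * math.factorial(k) * x ** k for k in range(n + 1))

    x = 0.1
    exact = stieltjes_f(x)
    for n in (2, 5, 10, 15, 20):
        print(n, abs(truncated_series(x, n) - exact))   # smallest error near n = 10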
His younger friend James Stirling (1692-1770) immediately refined that
by finding that the actual limit of the last term is  Log √(2π).
That's just enough to give a proper asymptotic equivalence
for n! , namely:

n!  ~  √(2πn) (n/e)ⁿ     ( n → ∞ )
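A quick numerical check of that equivalence (a minimal sketch in Python; the helper name is ad-hoc):

    import math

    def stirling(n):
        # Stirling's asymptotic equivalent of n!
        return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

    for n in (5, 10, 20, 50):
        print(n, math.factorial(n) / stirling(n))   # the ratio tends to 1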