It can be of no practical use to know that Pi
is irrational, but if we can know, it surely would be intolerable
not to know. Ted
Titchmarsh
(1899-1963)
(2003-07-26) 0
Zero is a number like any other, only more so...
Zero is probably the most misunderstood number.
Even the imaginary number i is probably better understood
(because it's usually introduced only to comparatively sophisticated audiences).
It took humanity thousands of years to realize what a great mathematical
simplification it was to have an ordinary number used to indicate "nothing",
the absence of anything to count...
The momentous introduction of zero metamorphosed the ancient
Indian system of numeration
into the familiar decimal system we use today.
The counting numbers start with 1,
but the natural integers start with 0...
Most mathematicians prefer to start the indexing
of the terms in a sequence with zero, if at all possible.
Physicists do that too, in order to mark the origin
of a continuous quantity:
If you want to measure 10 periods of a pendulum,
say "0" when you
see it cross a given point from left to right (say) and start your stopwatch.
Keep counting each time the same event happens again and stop your timepiece when you
reach "10", for this will mark the passing of 10 periods.
If you don't want to use zero in that context, just say something like "Umpf"
when you first press your stopwatch; many do...
A universal tradition, which probably predates the introduction of zero by a few millennia,
is to use counting numbers (1,2,3,4...) to name successive intervals of time;
a newborn baby is "in its first year", whereas a 24-year-old is in his 25th.
When applied to calendars, this unambiguous
tradition seems to disturb more people than it should.
Since the years of the first century are numbered 1 to 100, the second century goes
from 101 to 200, and the twentieth century consists of the years 1901 to 2000.
The third millennium starts on January 1, 2001.
Quantum mechanics was born in the nineteenth century
(with Planck's explanation for the blackbody law, on 1900-12-14).
For some obscure reason, many people seem to have a mental block about some
ordinary mathematics applied to zero.
A number of journalists, who should have known better,
once questioned the simple fact that zero is even.
Of course it is:
Zero certainly qualifies as a multiple of two
(it's zero times two).
Also, in the integer sequence, any even number is surrounded by two
odd ones, just like zero is surrounded by the odd integers
-1 and +1...
Nevertheless, we keep hearing things like:
"Zero, should be an exception, an integer
that's neither even nor odd."
Well, why on Earth would anyone
want to introduce such unnatural exceptions where none is needed?
What about 0⁰ ?
Well, anything raised to the power of zero is equal to unity and
a closer examination would reveal that there's
no need to make an exception for zero in this case either:
Zero to the power of zero is equal to one!
Any other "convention" would invalidate a substantial portion of the mathematical
literature (especially concerning common notations for polynomials and/or power series).
A related discussion involves the factorial of zero (0!)
which is also equal to 1.
However, most people seem less reluctant to accept this one, because the generalization
of the factorial function
(involving the Gamma function)
happens to be continuous about the origin...
(2003-07-26) 1
The unit number to which all nonzero numbers refer.
(2003-07-26) π =
3.141592653589793238462643383279502884+
Pi is the ratio of the perimeter of a circle to its diameter.
The symbol π
for the
most famous transcendental number was introduced in a 1706 textbook by
William
Jones (1675-1749) reportedly because it's the first letter of the
Greek verb perimetrein ("to measure around") from which
the word "perimeter" is derived.
Euler popularized the notation after 1736.
It's not clear whether Euler knew of the previous usage pioneered by Jones.
Historically, ancient mathematicians did convince themselves that
LR/2 was the area of the surface
generated by a segment of length R when one of its extremities (the "apex")
is fixed and the other extremity has a trajectory of length L
(which remains perpendicular to that segment).
The record shows that they did this for planar geometry
(in which case the trajectory is
a circle) but the same reasoning would apply
to nonplanar trajectories as well (any curve
drawn on the surface of sphere centered on the apex will do).
They reasoned that the trajectory (the circle) could be approximated
by a polygonal line with many small sides.
The surface could then be seen as consisting of many thin triangles whose heights
were very nearly equal to R, whereas the base was very nearly a portion of the
trajectory.
As the area of each triangle is R/2 times such a portion, the area of the
whole surface is R/2 times the length of the entire
trajectory [QED?].
Of course, this type of reasoning was made fully rigorous only with the advent of
infinitesimal calculus, but it did convince everyone of the
existence of a single number π which would
give both the perimeter (2πR) and the
surface area (πR²)
of a circle of radius R...
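To see the ancient argument numerically, here is a minimal Python sketch (our addition, not part of the original discussion): it approximates a circle of radius R by an inscribed regular polygon and sums the areas of the thin triangles; both estimates converge to 2πR and πR², as expected.

```python
from math import sin, cos, pi

def polygon_estimates(R, n):
    """Approximate a circle of radius R by an inscribed regular n-gon."""
    side = 2 * R * sin(pi / n)        # chord subtending an angle of 2*pi/n
    apothem = R * cos(pi / n)         # height of each thin triangle (close to R for large n)
    perimeter = n * side
    area = n * 0.5 * side * apothem   # sum of the n triangle areas
    return perimeter, area

R = 1.0
for n in (6, 96, 10_000):             # 96 sides was Archimedes' choice
    p, a = polygon_estimates(R, n)
    print(n, p, a)
print(2 * pi * R, pi * R**2)          # the limits: perimeter 2*pi*R and area pi*R^2
```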
The ancient problem of squaring the circle asked for a
ruler and compass construction
of a square having the same area as a circle of given diameter.
Such a thing would constitute a proof that π
is constructible, which it's not.
Therefore, it's not possible to square the circle...
π isn't even algebraic (i.e.,
it's not the root of any polynomial with integer coefficients).
All constructible numbers are algebraic but the converse doesn't hold.
For example, the cube root of two is algebraic
but not constructible, which is to say that there's no solution
to another ancient puzzle known as the Delian problem
(or duplication of the cube).
A number which is not algebraic is called transcendental.
In 1882,
π was shown to be transcendental by
C.L. Ferdinand von Lindemann (1852-1939)
using little more than the tools devised 9 years earlier by
Charles Hermite to prove the transcendence
of e (1873).
π was proved irrational much earlier (1761) by
Lambert (1728-1777).
Since 1988, Pi Day has been celebrated worldwide on March 14 (3-14 is the beginning
of the decimal expansion of Pi and it's also the birthday of
Albert Einstein, 1879-1955).
This geeky celebration was the brainchild of the physicist
Larry Shaw (1939-2017).
The thirtieth Pi Day
was celebrated by Google with the above
Doodle on their home page, on 2018-03-14.
On that fateful Day,
Stephen Hawking (1942-2018) died at the age of 76.
(2003-07-26) √2 =
1.414213562373095048801688724209698+
Root 2. The diagonal of a square of unit side. Pythagoras' Constant.
He is unworthy of the name of man who is ignorant of the fact that
the diagonal of a square is incommensurable
with its side. Plato (427-347 BC)
When they learned about the irrationality of √2,
the Pythagoreans sacrificed 100 oxen to the gods
(a so-called hecatomb)...
The followers of
Pythagoras (c. 569-475 BC)
kept this sensational discovery a secret to be revealed
to the initiated mathematikoi only.
At least one version of a dubious legend says that the man who
disclosed that dark secret was thrown overboard and perished at sea.
The martyr may have been
Hippasus of Metapontum
and the death sentence—reportedly handed out by Pythagoras himself—may
have been a political retribution for starting a rival sect,
whether or not the schism revolved around the newly discovered concept of irrationality.
Eight centuries later,
Iamblichus reported
that Hippasus had drowned because of his publication of the construction
of a dodecahedron inside a sphere (something construed as a sort of community secret).
Hippasus of Metapontum is credited with the classical proof
(ca. 500 BC) which is summarized below. It is based on the
fundamental theorem of arithmetic
(i.e., the unique factorization of any integer into primes).
In the square of any fraction, every prime factor appears an even number
of times in the numerator and an even number of times in the denominator.
After cancellation, such a square can therefore never
reduce to a single prime, like 2.
The irrationality of the square root of 2 may also be
proved very nicely
using the method of infinite descent, without
any notion of divisibility!
(2013-07-17) √3 =
1.732050807568877293527446341505872+
Root 3. Diagonal of a cube of unit side. Constant of Theodorus.
Theodorus taught mathematics to Plato, who reported (before 399 BC)
that Theodorus was teaching the irrationality of the square roots of all
nonsquare integers "up to 17".
Of course, the theorem of Theodorus
is true without that artificial restriction (which Theodorus
probably imposed for pedagogical purposes only).
Once the conjecture is made, the truth of the general theorem
is fairly easy to establish.
Elsewhere on this site, we
give a very elegant short modern proof of the general theorem,
by the method of infinite descent. A more pedestrian
approach, probably used by Theodorus, is suggested below...
There's also a partial proof which settles only the
cases below 17. Some students of the history of mathematics
jumped to the conclusion that this must have been the (lost)
reasoning of Theodorus (although this guess flies in the face of the fact that
the Greek words used by Plato do mean "up to 17" and not "up to 16").
Let's present that weak argument, anachronistically, in the vocabulary
of congruences, for the sake of brevity:
If q is an odd integer with a rational square root expressed
in lowest terms as x/y, then:
q y² = x²
Because q is odd, x and y must both be odd (if exactly one of them were even,
the two sides of the equation would have different parities; if both were even,
the fraction would not be in lowest terms). Therefore, the two odd squares
x² and y² are congruent to 1 modulo 8, and q must be too.
Below 17, the only possible values of q are
1 and 9 (both of which are perfect squares).
This particular argument doesn't settle the case of q = 17
(which Theodorus was presenting in class as solved) and it's not
much simpler (if at all) than a discussion
based on a full factorization of both sides
(leading to a complete proof by
mere generalization of the method which had
established the irrationality of the square root
of 2, one century earlier).
Therefore, my firm opinion is that Theodorus himself knew very
well that his theorem was perfectly general, because he had proved it so...
The judgement of history that the square root of 3
was the second number proved to be irrational seems fair.
So does the naming of that constant and the related
theorem after
Theodorus
of Cyrene (465-398 BC).
(2003-07-26) φ =
1.61803398874989484820458683436563811772+
The diagonal of a regular pentagon of unit side:
φ = (1+√5) / 2
φ² = 1 + φ
This ubiquitous number is variously known as the
Golden Number, the Golden Section, the Golden Mean,
the Divine Proportion or the Fibonacci Ratio
(because it's the limit of the ratio of consecutive terms in the
Fibonacci sequence).
It's the aspect ratio
of a rectangle whose
semiperimeter is to the larger side what the larger side is to the smaller one.
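As a quick illustration of the Fibonacci connection (our addition), the following Python sketch watches the ratio of consecutive Fibonacci numbers approach φ = (1+√5)/2 and checks the defining relation φ² = 1 + φ:

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2               # the golden ratio

a, b = 1, 1                           # two consecutive Fibonacci numbers
for _ in range(30):
    a, b = b, a + b
print(b / a, phi)                     # the ratio of consecutive terms approaches phi
print(isclose(phi * phi, 1 + phi))    # the defining relation phi^2 = 1 + phi
```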
(2014-05-15)
1-1/e = 0.632120558828557678404476229838539...
Rise time and fixed-point probability: 1/1! - 1/2! + 1/3! - 1/4! + 1/5! - ...
Every electrical engineer knows that the time constant of a first-order
linear filter is the time it takes to reach 63.2%
of a sudden level change.
For example, to measure a capacitor C with an oscilloscope, use a known resistor R
and feed a square wave to the input of the basic
first-order filter formed by R and C.
Assuming the period of the wave is much larger than RC, the value of RC is equal
to the time it takes the output to change by 63.2% of the peak-to-peak
amplitude on every transition.
The "rise-time" which can be given automatically by modern oscillosopes
is defined as the time it takes a signal to rise from 10% to 90% of
its peak-to-peak amplitude.
It's good to know that the RC time constant is about 45.5% of that
for the above signal (it sure beats messing around with cursors
just to measure a capacitor).
For example, the rise time of a sinewave is 29.52% of its period
(the reader may want to check that the exact number is asin(0.8)/π).
Proof :
If the time constant of a first-order lowpass filter is taken as the unit of time, then
its response to a unit step will be 1-exp(-t)
at time t.
That's 10% at time ln(10/9) and 90% at time ln(10).
The rise time is the interval between those two times,
namely ln(9), or nearly 2.2.
The reciprocal of that is about 45.512%. More precisely:
The time constant (RC) of a first-order lowpass filter is 45.5%
of its rise time.
Rise times, expressed in the stated unit of time:

Waveform : Time unit : 0 to 63.212% : 10% to 90% : 0 to 100%
RC-filtered long-period squarewave : RC : 1 : ln 9 ≈ 2.1972 : n/a
Sinewave : Period : ¼ + asin(1−1/e)/2π ≈ 0.3589 : asin(0.8)/π ≈ 0.2952 : 0.5
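The entries of this table are easy to double-check. A small Python sketch (ours) reproduces ln 9, its reciprocal (the 45.5% figure) and the two sinewave entries:

```python
from math import log, asin, pi, e

t10, t90 = log(10 / 9), log(10)        # 1 - exp(-t) reaches 10% and 90% at these times
rise = t90 - t10                       # = ln 9 = 2.1972...
print(rise, 1 / rise)                  # 1/ln 9 = 0.45512... (RC as a fraction of the rise time)
print(asin(0.8) / pi)                  # 0.2952... : 10%-90% rise time of a sinewave, in periods
print(0.25 + asin(1 - 1 / e) / (2 * pi))   # 0.3589... : the sinewave's "0 to 63.212%" entry
```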
Probability of existence of a fixed point :
The number 63.212...% is also famously known as the probability that a
permutation of many elements will
have at least one fixed point
(i.e., an element equal to its image). Technically, it's only the
limit of that as the number of elements tends to infinity. However, the
convergence is so rapid that the difference is negligible.
The exact probability for n elements is:
1/1! − 1/2! + 1/3! − 1/4! + ... + (−1)^(n−1)/n!
With n = 10, for example, this is
28319 / 44800 = 0.63212053571428...
(which is within 37 ppb of the limit).
A random self-mapping (not necessarily bijective)
of a set of n points will have at least one fixed point
with a probability that tends slowly to that same limit
when n tends to infinity. The exact probability is:
1 − (1 − 1/n)ⁿ
For n = 10,
this is 0.6513215599, which exceeds the limit by about 0.0192 (roughly 3% in relative terms).
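Both exact expressions are easy to evaluate. A minimal Python sketch (ours) for n = 10:

```python
from math import factorial, e
from fractions import Fraction

n = 10

# Probability that a random permutation of n elements has at least one fixed point:
p_perm = sum(Fraction((-1) ** (k - 1), factorial(k)) for k in range(1, n + 1))
print(p_perm, float(p_perm))           # 28319/44800 = 0.632120535...

# Probability that a random self-mapping of n points has at least one fixed point:
print(1 - (1 - 1 / n) ** n)            # 0.6513215599

print(1 - 1 / e)                       # the common limit, 0.632120558...
```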
Back when many actual computations used decimal logarithms,
every engineer memorized the 5-digit value of log 2
(0.30103), which happens to be accurate to nearly 8-digit precision.
If decibels (dB) are used, a power factor of 2
thus corresponds to 3 dB or, more precisely, 3.0103 dB.
To a filter designer, the attenuation of a first-order
filter is quoted as 6 dB per octave which
means that amplitudes change by a factor of 2 when frequencies change by an
octave (which is a factor of 2 in frequency).
A second-order low-pass filter would have an ultimate slope of 12 dB per octave, etc.
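For the record, a two-line Python check (ours) of the decibel figures quoted above:

```python
from math import log10

print(10 * log10(2))    # 3.0102999... dB : a power ratio of 2
print(20 * log10(2))    # 6.0205999... dB : an amplitude ratio of 2 (the "6 dB per octave" slope)
```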
(2003-07-26) γ =
0.577215664901532860606512090082402431+
The limit of
[1 + 1/2 + 1/3 + 1/4 + ... + 1/n] − ln(n) ,
as n → ∞
The previous sum can be recast as the partial sum of a convergent series, by introducing
telescoping terms. The general term of that series (for n ≥ 2) is:
1/n − ln(n) + ln(n−1)  =  1/n + ln(1 − 1/n)  =  − Σ p≥2  1/(p n^p)
Therefore, since terms in absolutely convergent series can be reordered:
1 − γ  =  Σ n≥2  Σ p≥2  1/(p n^p)  =  Σ p≥2  Σ n≥2  1/(p n^p)
Therefore, using the zeta function:
1 − γ  =  Σ p≥2  (ζ(p) − 1) / p
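The last series converges quickly (the terms shrink roughly like 2⁻ᵖ/p), so it's a practical way to compute γ. Here is a short Python sketch (ours); it relies on the third-party mpmath library for the zeta function and for a reference value of γ:

```python
from mpmath import mp, zeta, euler     # third-party mpmath library

mp.dps = 30                            # working precision, in decimal digits

# 1 - gamma  =  sum over p >= 2 of (zeta(p) - 1)/p
s = sum((zeta(p) - 1) / p for p in range(2, 120))
print(1 - s)                           # agrees with gamma to roughly the working precision
print(+euler)                          # mpmath's built-in Euler-Mascheroni constant
```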
The constant γ was calculated to 16 digits by Euler
in 1781. The symbol γ
is due to Mascheroni, who gave 32 digits in 1790
(his other claim to fame is the Mohr-Mascheroni theorem).
Only the first 19 of Mascheroni's digits were correct.
The mistake was only spotted in 1809 by
Johann von Soldner
(the eponym of another constant) who obtained 24 correct decimals...
In 1878, the thing was worked out to 263 decimal places by the astronomer
John Couch Adams (1819-1892)
who had almost
discovered Neptune as a young man (in 1846).
In 1962, gamma
was computed electronically to 1271 digits by
D.E. Knuth,
then to 3566 digits by Dura W. Sweeney
(1922-1999)
with a new approach.
7000 digits were obtained in 1974 (W.A. Beyer & M.S. Waterman)
and 20 000 digits in 1977 (by
R.P. Brent,
using Sweeney's method). Teaming up with
Edwin McMillan
(1907-1991; Nobel
1951) Brent would produce more than 30 000 digits in 1980.
Alexander J. Yee, a 19-year old freshman at Northwestern University,
made UPI news (on 2007-04-09) for
his computation
of 116 580 041 decimal places in 38½ hours
on a laptop computer, in December 2006.
Reportedly, this broke a previous record of 108 million digits,
set in 47 hours and 36 minutes of computation (from September 23 to 26, 1999)
by the Frenchmen Xavier Gourdon (X1989)
and Patrick Demichel.
Unbeknownst to Alex Yee and
the
record books (kept by Gourdon and Sebah)
that record had been shattered earlier (with 2 billion digits) by
Shigeru Kondo and Steve Pagliarulo.
Competing against that team, Alexander J. Yee and Raymond Chan have since computed about 30 billion
digits of γ (and also of Log 2)
as of 2009-03-13.
Kondo and Yee then collaborated to produce 1 trillion digits of
√2 in 2010.
Later
that year, they computed 5 trillion digits of
π, breaking the previous record of
2.7 trillion digits of π
(2009-12-31) held by the
Frenchman Fabrice Bellard
(X1993, born in 1972).
Everybody's guess is that γ is transcendental
but this constant has not even been proven irrational yet...
Charles
de la Vallée-Poussin (1866-1962) is best known for
having given an independent proof of the
Prime Number Theorem in 1896, at the same time as
Jacques
Hadamard (1865-1963).
In 1898, he investigated the average fraction by which
the quotient of a positive integer n by a lesser prime
falls short of an integer.
Vallée-Poussin proved that this tends to
γ for large values of
n (and not to ½, as might have been guessed).
What earned the admiration of Alf van der Poorten was the proof of the irrationality of
ζ(3) by the French mathematician
Roger Apéry (1916-1994) in 1977.
That proof
is based on an equation featuring a rapidly-converging series:
ζ(3)  =  (5/2)  Σ k≥1  (−1)^(k−1) / [ k³ C(2k,k) ]
where C(2k,k) denotes the central binomial coefficient.
The reciprocal of Apéry's constant 1/ζ(3)
is equally important:
(A088453)
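Apéry's series converges fast enough (roughly one digit per term) to be evaluated directly. A short Python sketch (ours) using exact rational arithmetic:

```python
from math import comb
from fractions import Fraction

# zeta(3) = (5/2) * sum_{k>=1} (-1)^(k-1) / (k^3 * C(2k,k))
s = sum(Fraction((-1) ** (k - 1), k ** 3 * comb(2 * k, k)) for k in range(1, 40))
apery = Fraction(5, 2) * s
print(float(apery))        # 1.2020569031595942...  (Apery's constant)
print(float(1 / apery))    # its reciprocal, 0.8319073725807...
```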
Such a shortcut (writing i = √−1) must be avoided
unless one is prepared to give up the most trusted properties of the
square root function, including:
√(xy) = √x √y
If you are not convinced that the square root function
(and its familiar symbol) should be strictly limited to
nonnegative real numbers, just consider what the above relation
would mean with x = y = -1.
Neither of the two complex numbers (i and -i)
whose square is -1 can be described as the "square root of -1".
The square root function cannot be defined
as a continuous function
over the domain of complex numbers.
Continuity can be rescued if the domain of the function is changed to a
strange beast consisting of two properly connected copies
(Riemann sheets) of the complex plane sharing the same origin.
Such considerations do not belong in an introduction to complex numbers.
Neither does the deceptive square-root symbol
(√).
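A one-line numerical illustration (ours) of why the identity fails outside the nonnegative reals, using Python's principal complex square root:

```python
import cmath

x = y = -1
print(cmath.sqrt(x * y))                # sqrt(1)  =  (1+0j)
print(cmath.sqrt(x) * cmath.sqrt(y))    # i * i    =  (-1+0j) : so sqrt(xy) = sqrt(x)*sqrt(y) fails here
```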
These important mathematical constants
are much less pervasive than the above ones...
(2008-04-13) 2^(1/3) =
1.25992104989487316476721060727822835+
The Delian constant is the scaling factor which doubles a volume.
The cube root of 2 is much less commonly encountered than
its square root (1.414...).
There's little need to remember that it's roughly equal to 1.26
but it can be useful
(e.g., a 5/8" steel ball weighs almost twice as much as a 1/2" one).
The fact that this quantity cannot be constructed "classically"
(i.e., with ruler and compass alone)
shows that there's no "classical" solution to the so-called
Delian problem
whereby the Athenians were asked by the
Oracle of Apollo at Delos to resize the altar of Apollo
to make it "twice as large".
The Delian constant has also grown to be a favorite
example of an algebraic number of degree 3
(arguably, it's the simplest such number).
Thus, its continued fraction expansion
(CFE) has been under considerable scrutiny...
There does not seem to be anything special about it, but the question remains
theoretically open whether it's truly
normal or not (by contrast,
the CFE of any algebraic number of degree 2 is periodic ).
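For the curious reader, here is a small Python sketch (ours, relying on the third-party mpmath library for extra precision) producing the first partial quotients of that continued fraction expansion:

```python
from mpmath import mp, cbrt, floor      # third-party mpmath library

mp.dps = 60                             # enough precision for a few dozen partial quotients

x = cbrt(2)
terms = []
for _ in range(20):
    a = int(floor(x))
    terms.append(a)
    x = 1 / (x - a)
print(terms)    # starts 1, 3, 1, 5, 1, 1, 4, 1, 1, 8, ... with no apparent pattern (OEIS A002945)
```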
In Western music theory, the chromatic octave (the interval which doubles
the frequency of a tone) is subdivided into 12 equal intervals
(semitones). An interval of four semitones is known as a major third,
and three consecutive major thirds (twelve semitones) amount to
a doubling of the frequency. Thus,
the Delian constant (1.259921...)
is the frequency ratio corresponding to an equal-tempered major third.
A Delian brick is a cuboid with sides proportional to
1, 2^(1/3) and 2^(2/3).
That term was coined by Ed Pegg on
2018-06-19.
A planar cut across the middle of its longest side splits a Delian brick into
two Delian bricks.
That's the 3-D equivalent of a
√2 aspect ratio for rectangles,
on which is based the common A-series of paper sizes (as are the B-series,
used for some playing cards, and the C-series for envelopes.)
(2015-07-12)
Rayleigh factor: 1.219669891266504454926538847465+
Conventional coefficient pertaining to the diffraction limit on resolution.
This is equal to the first zero of the J₁ Bessel function
divided by π.
Commonly approximated as 1.22 or 1.220.
This coefficient appears in the formula which gives the limit
θ of the
angular resolution
of a perfect lens of diameter D for light of wavelength
λ :
θ  =  1.220 λ / D
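The 1.22 coefficient can be recovered directly from the first zero of the Bessel function J₁, as stated above. A two-line Python sketch (ours, using the third-party SciPy library):

```python
from math import pi
from scipy.special import jn_zeros      # third-party SciPy library

j1_zero = jn_zeros(1, 1)[0]             # first positive zero of J1: 3.8317059702...
print(j1_zero / pi)                     # 1.2196698912... : the Rayleigh factor quoted above
```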
This precise coefficient is arrived at theoretically by using
Rayleigh's criterion
which states that two points of light (e.g., distant stars)
can't be distinguished if their angular separation is less than the diameter
of their Airy disks
(the diameter of the first dark circle in the interference pattern
described theoretically by
George Airy in 1835).
The precise value of the factor to use is ultimately a matter
of convention about what constitutes optical distinguishability.
The theoretical criterion on which the above formula is based was
originally proposed by Rayleigh
for sources of equal magnitudes.
It has proved more appealing than all other considerations,
including the empirical
Dawes' limit,
which ignores the relevance of wavelength.
Dawes' limit would correspond to a coefficient of about 1.1
at a wavelength of 507 nm
(most relevant to the scotopic astronomical observations used by Dawes).
Note that the digital deconvolution of images allows finer resolutions
than what the above classical formula implies.
This is often called Mertens' constant in honor of the number theorist
Franz
Mertens (1840-1927).
It is to the sequence of primes what
Euler's constant is to the sequence of integers.
It's sometimes also called Kronecker's constant
or the Reciprocal Prime Constant.
Proposals have been made to name this constant
after Charles de la Vallée-Poussin (1866-1962) and/or
Jacques Hadamard (1865-1963),
the two mathematicians who first proved (independently)
the Prime Number Theorem, in 1896.
(2006-06-15) Artin's Constant :
C = 0.373955813619202288054728+
The product of all the factors [ 1 − 1/(q² − q) ]
for prime values of q.
For any prime p besides 2 and 5, the decimal
expansion of 1/p has a period at most equal
to p-1 (since only this many different nonzero "remainders" can possibly
show up in the long division process).
Primes yielding this maximal period are called
long primes [to base ten] by recreational mathematicians and others.
The number 10 is a primitive root modulo such a prime p,
which is to say that the first p-1 powers of 10 are
distinct modulo p (the cycle then
repeats, by Fermat's little theorem).
Putting a = 10, this is equivalent to the condition:
a^((p−1)/d) ≠ 1  (modulo p),  for any prime factor d of (p−1).
For a given prime p, there are φ(p−1)
satisfactory values of a (modulo p),
where φ
is Euler's totient function.
Conversely, for a given integer a,
we may investigate the set of
long primes to base a...
It seems that the proportion C(a) of such primes
(among all prime numbers) is equal to the above numerical
constant C, for many values of a
(including negative ones) and that it's always a
rational multiple of C.
The precise conjecture tabulated below originated with
Emil
Artin (1898-1962) who communicated it to
Helmut Hasse
in September 1927.
Neither -1 nor a
quadratic residue can be a
primitive root modulo p > 3.
Hence, the table's first row is as stated.
Artin's conjecture for primitive roots (1927), first refined by Dick Lehmer.
(For a given "base" a, use the earliest applicable case, in the order listed.)

Base a  :  Proportion C(a) of primes p for which a is a primitive root
a = −1, or a = b²  :  C(a) = 0
a = b^k  :  C(a) = v(k) C(b), where v is multiplicative:  v(q^n) = q(q−2) / (q²−q−1) if q is prime
sf(a) mod 4 = 1 (see notation below*)  :  C(a) = [ 1 − ∏ (over primes q dividing sf(a)) 1/(1 + q − q²) ] C
Otherwise  :  C(a) = C =
0.3739558136192022880547280543464164151116...
This last case applies to all integers, positive
(A085397)
or negative (A120629)
that are not perfect powers and whose
squarefree part isn't congruent to 1 modulo 4, namely:
2, 3, 6, 7, 10, 11, 12, 14, 15, 18, 19, 22, 23, 24, 26, 28,
30, 31, 34, 35, 38, 39, 40 ...
-2, -4, -5, -6, -9, -10, -13, -14, -16, -17, -18, -20, -21, -22, -24, -25,
-26, -29, -30, -33
...
(*) In the above, sf (a) is the
squarefree part of a,
namely the integer of least magnitude which makes the product
a·sf(a) a square.
The squarefree part of a negative integer is the opposite of the
squarefree part of its absolute value.
The conjecture can be deduced from its special case about
prime values of a,
which states the density is C unless a
is 1 modulo 4, in which case it's equal to:
[ (a² − a) / (a² − a − 1) ] C
In 1984, Rajiv Gupta and M. Ram Murty showed Artin's conjecture to be true
for infinitely many values of a.
In 1986, David Rodney ("Roger")
Heath-Brown proved
nonconstructively
that there are at most 2 primes for which it fails...
Yet, we don't know about any
single value of a for which the result is certain!
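Numerically, both the constant C and the conjectured density are easy to explore. The following Python sketch (ours, using the third-party SymPy library only to enumerate primes) evaluates the defining product and estimates the proportion of primes below 200000 for which 10 is a primitive root; the empirical density should land close to C:

```python
from sympy import primerange, primefactors   # third-party SymPy library

# Artin's constant: the product of (1 - 1/(q^2 - q)) over all primes q
C = 1.0
for q in primerange(2, 10**6):
    C *= 1 - 1 / (q * q - q)
print(C)                                     # 0.3739558... (the neglected tail is tiny)

# Empirical density of primes for which 10 is a primitive root ("long primes" to base ten):
def is_long(p):
    return all(pow(10, (p - 1) // d, p) != 1 for d in primefactors(p - 1))

primes = list(primerange(7, 200000))         # skip 2, 3 and 5
print(sum(is_long(p) for p in primes) / len(primes))   # should come out close to C
```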
(2003-07-30) μ =
1.451369234883381050283968485892027449493+
Ramanujan-Soldner constant, zero of the logarithmic integral:
li(μ) = 0
μ is the
only positive root of the logarithmic integral function "li"
(which shouldn't be confused with the older capitalized offset logarithmic integral
"Li", still used by number theorists when x is large:
Li(x) = li(x) − li(2) ).
The integrals defining li (namely, the integral of dt/ln(t) from 0 to x, or
equivalently from the Soldner constant μ to x) must be understood as
Cauchy principal values
whenever the singularity at t = 1 is in the interval of integration...
This last caveat fully applies to Li when x isn't known to be large.
The ad-hoc definition of Li
was made by Euler (1707-1783)
well before Cauchy (1789-1857)
gave a proper definition for the principal value of an integral.
Nowadays, there would be no reason to use the Eulerian logarithmic integral (capitalized Li)
except for compatibility with the tradition that some number theorists have kept to this day.
Even in the realm of number theory, I advocate the use of the ordinary
logarithmic integral (lowercase li) possibly with the second definition
given above (where the Soldner constant 1.451... is the lower bound of integration).
That second definition avoids bickering about principal values when the argument is greater
than one (the domain used by number theorists) although students
may wonder at first about the origin of the "magical" constant.
Wonderment is a good thing.
The function li is also called integral logarithm
(French: logarithme intégral).
(2017-11-25) Landau-Ramanujan constant (Landau, 1908)
K = 0.7642236535892206629906987312500923281167905413934...
Defined by Landau and expressed as an integral by Ramanujan.
Asymptotically, the density of integers
below x expressible as the sum of two squares is inversely proportional to
the square root of the natural logarithm of x.
The coefficient of proportionality is, by definition, the
Landau-Ramanujan constant.
Ramanujan expressed as an integral the constant so defined by Landau.
(2004-02-19) W(1) =
0.567143290409783872999968662210355550-
For no good reason, this is sometimes called the Omega constant.
It's the solution of the equation x = e⁻ˣ
or, equivalently, x = ln(1/x).
In other words, it's the value at point 1
of Lambert's W function.
The value of that constant could be obtained by iterating
the function e⁻ˣ, but the convergence is very slow.
It's much better to iterate the function:
f(x) = (1+x) / (1 + eˣ)
This has the same fixed-point but features a zero
derivative there,
so that the convergence is quadratic
(the number of correct digits is roughly doubled
with each iteration).
This fast approach is an example of
Newton's method.
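Concretely, a few iterations of that function already give full double precision. A minimal Python sketch (ours):

```python
from math import exp

x = 1.0
for _ in range(6):
    x = (1 + x) / (1 + exp(x))    # Newton's step for x = exp(-x); digits roughly double each time
print(x)                          # 0.5671432904097838...
print(x - exp(-x))                # residual, essentially zero
```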
(2003-07-30) The two Feigenbaum constants
rule the onset of chaos:
δ = 4.669201609102990671853203820466201617258185577475769-
α = -2.502907875095892822283902873218215786381271376727150-
What's known as the [first] Feigenbaum constant
is the "bifurcation velocity" (d)
which governs the geometric onset of chaos via period-doubling
in iterative sequences (with respect to some parameter
which is used linearly in each iteration, to damp a given function
having a quadratic maximum).
This universal constant was unearthed in October 1975 by
Mitchell J. Feigenbaum (1944-2019).
The related "reduction parameter" (a) is the
secondFeigenbaum constant...
(2021-07-30) Bloch's Constant (upper bound is conjectured accurate).
B = 0.47186165345268178487446879361131614907701262173944324+
0.4330127... = √3 / 4  <  B  ≤  Γ(1/3) Γ(11/12) / [ Γ(1/4) (1+√3)^½ ]  =  0.47186165345...
It's conjectured that the above upper bound, using the
Gamma function, is actually the true value of Bloch's constant,
but this hasn't been proved yet.
When André Bloch (1893-1948) originally published
his theorem, he merely stated that the universal constant B he introduced was
no less than 1/72.
Bloch's theorem (1925)
Consider the space S of all schlicht functions
(holomorphic injections on the open disk D of radius 1 centered on 0).
The largest disk contained in f (D) has a radius which is no less than a certain
universal positive constant B.
Bloch's constant is defined as the largest value of B for which the theorem holds.
Originally, Bloch only proved that B ≥ 1/72.
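The conjectured value is easy to evaluate with standard library functions. A short Python sketch (ours):

```python
from math import gamma, sqrt

B_upper = gamma(1/3) * gamma(11/12) / (gamma(1/4) * sqrt(1 + sqrt(3)))
print(B_upper)         # 0.4718616534... : the conjectured value of Bloch's constant
print(sqrt(3) / 4)     # 0.4330127018... : the lower bound quoted above
```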
The neat examples in this section seem
unrelated to more fundamental constants...
They're also probably useless
outside of the specific context in which they've popped up.
(2016-01-19) Gelfond's Constant: e^π =
23.1406926327792690...
Raising this transcendental number to the power of i gives
e^(iπ) = −1.
Because i is algebraic but not rational, the Gelfond-Schneider theorem
(applied to (−1)^(−i) = e^π) implies that Gelfond's constant is transcendental.
(2004-05-22) Brun's Constant:
B₂ = 1.90216058321(26)
Sum of the reciprocals of [pairs of] twin primes:
(1/3+1/5) + (1/5+1/7) + (1/11+1/13) + (1/17+1/19) + (1/29+1/31) + ...
This constant is named after the Norwegian mathematician who proved the
sum to be convergent, in 1919:
Viggo Brun (1885-1978).
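A direct partial sum illustrates both the definition and the painfully slow convergence. Here is a naive Python sketch (ours); the sum over twin primes below one million still falls well short of 1.902...

```python
from math import isqrt

def primes_up_to(n):
    """Plain sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

N = 10**6
ps = set(primes_up_to(N))
partial = sum(1 / p + 1 / (p + 2) for p in ps if p + 2 in ps)
print(partial)    # partial sum only: still well below 1.902..., since convergence is extremely slow
```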
The scientific notation used above and throughout
Numericana
indicates a numerical uncertainty by giving an
estimate of the standard deviation (σ).
This estimate is shown between parentheses
to the right of the least significant digit (expressed in units of that digit).
The magnitude of the error is thus stated to be less than this
with a probability of 68.27% or so.
Thomas R. Nicely,
professor of mathematics at
Lynchburg College, started his computation of Brun's constant in 1993. He made headlines in
the process, by uncovering a
flaw in the Pentium
microprocessor's arithmetic, which ultimately
forced a costly ($475M) worldwide recall by Intel.
Usually, mathematicians have to shoot somebody
to get this much publicity.
Dr. Thomas R. Nicely (quoted in The Cincinnati Enquirer)
Nicely kept updating his estimate of Brun's constant for a
few years until 2010 or so, at which point he was basing his computation
on the exact number of twin primes found below 1.6 × 10¹⁵.
Because he felt a general audience could not be expected to be familiar with
the aforementioned standard way scientists report uncertainties,
Nicely chose to report the so-called
99% confidence level, which is three times as big.
(More precisely, ±3σ
is a 99.73% confidence level.)
The following expressions thus denote the same value,
with the same uncertainty:
The sum of the reciprocals of the Fibonacci numbers
was proved irrational by Marc Prévost,
in the wake of Roger Apéry's celebrated proof
of the irrationality of ζ(3), which has been known as
Apéry's constant ever since.
The attribution to Prévost was reported by François Apéry
(son of Roger Apéry) in 1996: See
The Mathematical Intelligencer, vol. 18 #2, pp. 54-61:
Roger Apéry, 1916-1994: A Radical Mathematician available
online
(look for "Prevost", halfway down the page).
The question of the irrationality of the sum of the reciprocals of the Fibonacci numbers
was formally raised by Paul Erdős and may still be erroneously
listed
as open, despite the proof of
Marc Prévost
(Université
du Littoral Côte d'Opale).
(2003-08-05) 0.73733830336929...
Grossman's Constant. [Not known much beyond the above accuracy.]
A 1986 conjecture of Jerrold W. Grossman
(which was proved in 1987 by Janssen & Tjaden)
states that the following recurrence defines a convergent sequence for only one
value of x, which is now called
Grossman's Constant:
a₀ = 1 ;   a₁ = x ;   aₙ₊₂ = aₙ / (1 + aₙ₊₁)
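One way to locate Grossman's constant numerically is bisection, based on the empirical observation that, below the critical x, the even-indexed terms dominate the odd-indexed ones after many steps, while the opposite happens above it. A rough Python sketch (ours) under that assumption:

```python
def even_terms_dominate(x, steps=4000):
    """Iterate a(n+2) = a(n)/(1 + a(n+1)) with a0 = 1, a1 = x, for an even number of steps,
    and report whether the even-indexed subsequence ends up above the odd-indexed one."""
    a, b = 1.0, x
    for _ in range(steps):
        a, b = b, a / (1 + b)
    return a > b      # after an even number of steps, a is an even-indexed term

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if even_terms_dominate(mid):   # x too small: move the lower bound up
        lo = mid
    else:                          # x too large: move the upper bound down
        hi = mid
print((lo + hi) / 2)               # about 0.73733... (accuracy limited by the finite iteration count)
```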
Similarly, there's another constant, first investigated by Michael Somos in 2000, above which
value of x the following quadratic recurrence diverges (below it,
there's convergence to a limit that's less than 1):
0.39952466709679947-
(where the terminal "7-" stands for something probably close to "655").
a₀ = 0 ;   a₁ = x ;   aₙ₊₂ = aₙ₊₁ (1 + aₙ₊₁ − aₙ)
Early releases from Michael Somos contained
a typo in the digits underlined above ("666" instead of "66") which Somos corrected
when we pointed this out to him (2001-11-24).
However, the typo still remained for several years (until 2004-04-13) in a MathSoft online article whose original author (Steven Finch)
was no longer working at MathSoft at the time when a first round of
notifications was sent out.
(2003-08-06)
262537412640768743.9999999999992500725971982-
Ramanujan's number:
exp(π√163) is almost an integer.
The attribution of this irrational constant to Ramanujan was made
by Simon Plouffe,
as a monument to a famous 1975 April fools column by
Martin Gardner in Scientific American (Gardner
wrote that this constant had been proved to be an integer,
as "conjectured by Ramanujan" in 1914 [sic!] ).
Actually, this particular property of 163 was first noticed in 1859
by Charles Hermite (1822-1901).
It doesn't appear in Ramanujan's relevant 1914 paper.
There are reasons
why the expression exp(π√n)
should be close to an integer for specific integral values of n.
In particular, when n is a large
Heegner number
(43, 67 and 163 are the largest Heegner numbers).
The value n = 58, which Ramanujan did investigate in 1914, is also
most interesting. Below are the first values of n for which
exp(π√n)
is less than 0.001 away from an integer:
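With a little extra precision, the phenomenon is easy to observe. A short Python sketch (ours, relying on the third-party mpmath library) prints exp(π√n) and its distance to the nearest integer for a few values of n, including the Heegner numbers 43, 67 and 163:

```python
from mpmath import mp, exp, pi, sqrt, floor    # third-party mpmath library

mp.dps = 40                                    # enough digits to see the tiny gaps

for n in (19, 43, 58, 67, 163):
    x = exp(pi * sqrt(n))
    f = x - floor(x)
    print(n, x, min(f, 1 - f))                 # distance from exp(pi*sqrt(n)) to the nearest integer
```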
(2003-08-09)
1.1319882487943... Viswanath's constant
was computed to 8 decimals in 1999.
In 1960, Hillel Furstenberg and Harry Kesten showed that, for a certain class
of random sequences, geometric growth was almost always obtained,
although they did not offer any efficient way
to compute the geometric ratio involved in each case.
The work of Furstenberg and Kesten was used in the research that earned the
1977 Nobel Prize in Physics
for Philip Anderson, Neville Mott, and John van Vleck.
This had a variety of practical applications in many domains,
including lasers, industrial glasses,
and even copper spirals for birth control...
At UC Berkeley in 1999,
Divakar
Viswanath
investigated the particular random sequences in which each term is either
the sum or the difference of the two previous ones
(a fair coin is flipped to decide whether to add or subtract).
As stated by Furstenberg and Kesten, the absolute values of the numbers in almost all
such sequences tend to have a geometric growth whose ratio is a constant.
Viswanath was able to compute this particular constant to 8 decimals.
Currently, more than 14 significant digits are known
(see A078416).
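A crude Monte-Carlo estimate of that growth ratio takes only a few lines of Python (our sketch; a single run of a million coin flips gives two or three correct digits at best):

```python
import random
from math import log, exp

random.seed(1)

def growth_estimate(steps=10**6):
    """Random Fibonacci: x(n+1) = x(n) +/- x(n-1), the sign chosen by a fair coin.
    Returns an estimate of the almost-sure growth ratio |x(n)|^(1/n)."""
    a, b = 1.0, 1.0
    log_scale = 0.0                       # accumulated logarithm, to avoid overflow
    for _ in range(steps):
        a, b = b, (b + a) if random.random() < 0.5 else (b - a)
        m = abs(b)
        if m > 1e100:                     # renormalize occasionally (the recurrence is linear)
            log_scale += log(m)
            a, b = a / m, b / m
    return exp((log_scale + log(abs(b))) / steps)

print(growth_estimate())                  # near 1.13198..., Viswanath's constant
```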
(2012-07-01) Copeland-Erdős Number:
0.23571113171923293137...
Concatenating the digits of the primes forms a normal number.
Borel
defined a normal number (to base ten)
as a real number whose decimal expansion is completely random, in the sense that
all sequences of digits of a prescribed length are equally likely to occur
at a random position in the decimal expansion.
It is well-known that almost all real numbers are normal in that
sense (which is to say that the set of the other real numbers is contained in
a set of zero measure).
Pi is conjectured to be normal but this is not known for sure.
It is actually surprisingly difficult to define explicitly a number that can be proven
to be normal. So far, all such numbers have been defined in terms of
a peculiar decimal expansion. The simplest of those is
Champernowne's Constant whose
decimal expansion is obtained by concatenating the digits of all the integers in sequence.
This number was proved to be decimally normal in 1933, by
David G. Champernowne
(1912-2000)
as an undergraduate.
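Producing the digits of the Copeland-Erdős number is straightforward. A tiny Python sketch (ours, using the third-party SymPy library just to enumerate primes):

```python
from sympy import primerange      # third-party SymPy library, used only to enumerate primes

digits = "".join(str(p) for p in primerange(2, 100))
print("0." + digits[:40])         # 0.2357111317192329313741434753596167717379
```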
The 6+1 Basic Dimensionful Physical Constants
( Proleptic SI )
The Newtonian constant
of gravitation is the odd one out, but
each of the other 6 constants below either has
an exact value defining one of the 7 basic physical units in terms of the
SI second (the unit of time)
or could play such a role in the near future...
(The term "proleptic" in the title is a reminder that this may be wishful thinking.)
Some other set of independent constants could have been used to define the 7 basic units
(for example, a conventional value of the electron's charge could replace
the conventional permeability of the vacuum)
but the following one was chosen after careful considerations.
For the most part, it has already been enacted officially as part of the SI system
("de jure" values are pending for
Planck's constant,
Avogadro's number and
Boltzmann's constant).
The number of physical dimensions is somewhat arbitrary.
We argue that temperature ought to be an independent dimension,
whereas the introduction of the mole is more of a practical
convenience than an absolute necessity.
A borderline case concerns radiation measurements:
We have included the so-called luminous units (candela, lumen, etc.)
through the de jure mechanical equivalent of light, but have
left out ionizing radiation which is handled by other
proper SI units (sievert, gray, etc.).
Yet, both cases have a similarly debatable biological basis:
Either the response of a "standard" human retina (under photopic conditions)
or damage to some "average" living tissue.
On the other hand, the very important and very fundamental
Gravitational Constant (G)
does not make this list...
With 7 dimensions and an arbitrary definition of one unit (the second)
there's only room for 6 basic constants, and G was crowded out.
Other systems can be designed where G has first-class status, but there's a
price to pay:
In the Astronomical System of Units,
a precise value of G is obtained at the expense of an imprecise kilogram !
To design a system of units where both G and the kilogram have precise
values would require a major breakthrough
(e.g., a fundamental expression for the mass of the electron).
(2003-07-26)
c = 299792458 m/s Einstein's Constant
The speed of light in a vacuum. [Exact, by definition of the meter (m)]
In April 2000,
Kenneth Brecher
(of Boston University)
produced experimental evidence, at an unprecedented level of accuracy,
which supports the main tenet of Einstein's
Special Theory of Relativity,
namely that the speed of light (c)
does not depend on the speed of the source.
Brecher was able to claim a fabulous accuracy of less than one part in 10²⁰,
improving the state-of-the-art by 10 orders of magnitude!
Brecher's conclusions were based on the study of the sharpness of
gamma ray bursts (GRB) received from very distant sources:
In such explosive events, gamma rays are emitted from points of very different
[vectorial] velocities. Even minute differences in the speeds of these
photons would translate into significantly different times of arrival,
after traveling over immense cosmological distances.
As no such spread is observed, a careful analysis of the data translates
into the fabulous experimental accuracy quoted above in support of Einstein's
theoretical hypothesis.
Because a test that aims at confirming SR must necessarily be evaluated in the context
of theories incompatible with SR, there will always be room for
fringe scientists to remain unconvinced by Brecher's arguments
(e.g., Robert S. Fritzius, 2002).
When he announced his results at the April 2000 APS meeting in Long Beach (CA),
Brecher declared that the constant c appears "even more fundamental than light itself"
and he urged his colleagues to give it a proper name and
start calling it Einstein's constant.
The proposal was well received and has only been gaining momentum ever since,
to the point that the "new" name seems now fairly well accepted.
Since 1983, the constant c has been used to define the meter in terms of
the second, by enacting as exact the above value of 299792458 m/s.
Where does the symbol "c" come from?
Historically, "c" was used for a constant which later came to be identified as the speed of
electromagnetic propagation multiplied by the square root of 2
(this would be c√2, in modern terms).
This constant appeared in
Weber's force law and was thus known as "Weber's constant" for a while.
On at least one occasion, in 1873, James Clerk Maxwell
(who normally used "V" to denote the speed of light)
adjusted the meaning of "c" to let it denote the speed of
electromagnetic waves instead.
In 1894, Paul Drude (1863-1906) made this explicit and was instrumental
in popularizing "c" as the preferred notation for the
speed of electromagnetic propagation.
However, Drude still kept using the symbol "V" for the speed of light in an
optical context, because the identification of light with
electromagnetic waves was not yet common knowledge:
Electromagnetic waves had first been observed in 1888,
by Heinrich Hertz (1857-1894).
Einstein himself used "V"
for the speed of light and/or electromagnetic waves as late as 1907.
c may also be called the celerity of light:
[Phase] celerity and [group] speed are normally
two different things,
but they coincide for light in a vacuum.
(2003-07-26)
μ₀ = 4π × 10⁻⁷ N/A²  =  1.256637061435917295... µH/m
Magnetic permeability of the vacuum. [Definition of the ampere (A)]
The relation ε₀ μ₀ c² = 1 and the
exact value of c yield an exact SI value, with a finite decimal
expansion, for Coulomb's constant
(in Coulomb's law):
1 / (4πε₀)  =  8.9875517873681764 × 10⁹  ≈  9 × 10⁹  N·m²/C²
Consequently, the electric constant (dielectric permittivity of the vacuum)
has a known infinite decimal expansion, derived from the above:
ε₀ = 8.85418781762038985053656303171... × 10⁻¹² F/m
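Both exact values follow from c and μ₀ alone (in the pre-2019 SI described here). A minimal Python sketch (ours):

```python
from math import pi

c = 299792458.0                     # m/s, exact by definition of the meter
mu0 = 4e-7 * pi                     # H/m, exact in the (pre-2019) SI described here

epsilon0 = 1 / (mu0 * c * c)        # permittivity of the vacuum, in F/m
coulomb_k = 1e-7 * c * c            # 1/(4*pi*epsilon0), an exact finite decimal in this system

print(epsilon0)                     # 8.854187817...e-12 F/m
print(coulomb_k)                    # about 8.9875517874e9 N·m²/C² (compare with the exact decimal above)
```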
A photon of frequency ν has an energy
hν, where
h is Planck's constant.
Using the pulsatance ω = 2πν,
this is ħω,
where ħ is
Dirac's constant.
The constant ħ = h/2π is actually known
under several names:
Dirac's constant.
The reduced Planck constant.
The rationalized Planck constant.
The quantum of angular momentum.
The quantum of spin
(although some spins are half-multiples of this).
The constant ħ
is pronounced either "h-bar" or (more rarely) "h-cross".
It is equal to unity in the natural system
of units of theoreticians
(h is then 2π).
The spins of all particles are multiples of
ħ/2 = h/4π
(an even multiple for bosons,
an odd multiple for fermions).
There's a widespread
belief that the letter h initially meant
Hilfsgrösse ("auxiliary parameter" or,
literally, "helpful quantity" in German) because that's the neutral way
Max Planck (1858-1947) introduced it, in 1900.
Units :
As noted at the outset, the actual numerical value of Planck's constant
depends on the units used.
This, in turn, depends on whether we choose to express the rate of change of
a periodic phenomenon directly as the change with time of its phase
expressed in angular units (pulsatance) or as the number of cycles per
unit of time (frequency). The latter can be seen as a special
case of the former when the angular unit of choice is a complete revolution
(i.e., a "cycle" or "turn" of 2p radians).
A key symptom that angular units ought to be involved in the measurement
of spin is that the sign of a spin depends on the conventional orientation
of space (it's an axial quantity).
Likewise, angular momentum and the dynamic quantity which induces a
change in it (torque) are
axial properties normally obtained as the cross-product of two radial vectors.
One good way to stress this fact is to express torque in Joules per radian
(J/rad) when obtained as the cross-product of a distance in meters (m)
and a force in newtons (N).
1 N·m  =  1 J/rad  =  2π J/cycle  =  2π W/Hz  =  (π/30) W/rpm
Note that torque and spectral power have the same physical dimension.
Evolution from measured to defined values :
Current technology of the watt balance
(which compares an electromagnetic force with a weight)
is almost able to measure Planck's constant with the same
precision as the best comparisons with the International prototype of the kilogram,
the only SI unit still defined in terms of an arbitrary artifact.
It is thus likely that Planck's constant could be given a de jure
value in the near future, which would amount to a new definition of the SI unit of mass.
Resolution 7
of the 21st CGPM (October 1999) recommends
"that national laboratories continue their efforts to refine experiments that link
the unit of mass to fundamental or atomic constants with a view to a future redefinition
of the kilogram".
Although precise determinations of Avogadro's constant were mentioned
in the discussion leading up to that resolution, the watt balance approach was
considered more promising. It's also more satisfying to define the kilogram in terms
of the fundamental Planck constant,
rather than make it equivalent to a certain number of atoms in a silicon crystal.
(Incidentally, the mass of N identical atoms in a crystal is slightly less than N times
the mass of an isolated atom, because of the negative energy of interaction involved.)
In 1999, Peter J. Mohr and Barry N. Taylor have
proposed
to define the kilogram in terms of an equivalent
frequency ν = 1.35639274 × 10⁵⁰ Hz, which would make
h equal to c²/ν,
or 6.626068927033756019661385... × 10⁻³⁴ J/Hz.
Instead, it would probably be better to assign h or [rather]
h/2π a rounded decimal value de jure.
This would make the future definition of the kilogram somewhat less straightforward,
but would facilitate actual usage when the utmost precision is called for.
To best fit the "kilogram frequency" proposed by Mohr and Taylor,
the de jure value of ħ
would have been:
1.054571623 × 10⁻³⁴ J·s/rad
However, a mistake which was corrected with the 2010 CODATA set makes that
value substantially incompatible with our best experimental knowledge.
Currently (2011) the simplest candidate for a de jure definition is:
ħ = 1.0545717 × 10⁻³⁴ J·s/rad
Note:
" ħ " is how your browser displays UNICODE's "h-bar"
(ħ).
In 2018, an exact value of h will define the kilogram :
The instrument which will perform the defining measurement is the
Watt Balance invented in 1975 by
Bryan Kibble
(1938-2016).
In 2016,
the metrology community decided to rename the instrument a Kibble balance, in his honor
(in a unanimous decision by the CCU = Consultative Committee for Units).
Boltzmann's constant is currently a measured quantity.
However, it would be sensible to assign it a de jure
value that would serve as an improved definition of the unit of thermodynamic temperature,
the kelvin (K) which is currently defined in terms of the temperature of the triple point of
water (i.e., 273.16 K = 0.01°C,
both expressions being exact by definition ).
History :
What's now known as Boltzmann's relation was first formulated
by Boltzmann
in 1877. It gives the entropy S
of a system known to be in one of
W equiprobable states.
Following Abraham Pais,
Eric W. Weisstein reports that
Max Planck
first used the constant k in 1900.
S = k ln (W)
Epitaph
of Ludwig Boltzmann (1844-1906)
The constant k
became known as Boltzmann's constant around 1911
(Boltzmann had died in 1906) under the influence of Planck.
Before that time, Lorentz
and others had named the constant after Planck !
(2003-08-10) Avogadro Number = Avogadro's Constant
Number of things per mole of stuff :
6.02214129(27) × 10²³ /mol
In January 2011, the IAC argued for 6.02214082(18) × 10²³ /mol.
The constant is named after the Italian physicist
Amedeo Avogadro (1776-1856)
who formulated what is now known as Avogadro's Law, namely:
At the same temperature and [low] pressure,
equal volumes of different gases contain the same number of molecules.
The current definition of the mole states that there are as many
countable things in a mole as there are atoms in 12 grams of
carbon-12
(the most common isotope of carbon).
Keeping this definition and giving a de jure value to the Avogadro number
would effectively constitute a definition of the unit of mass.
Rather, the above definition could be dropped, so that a de jure value
given to Avogadro's number would constitute a proper definition of the mole
which would then be only approximately equal to 12 g
of carbon-12 (or
27.97697027(23) g of silicon-28).
In spite of the sheer beauty of those
isotopically-enriched
single-crystal polished silicon spheres manufactured for the
International Avogadro Coordination (IAC),
it would certainly be much better for many generations of physicists yet to come to
let a de jure value of Planck's constant
define the future kilogram...
(The watt-balance approach is more rational but less politically appealing,
or so it seems.)
(2003-07-26) 683 lm/W (lumen per watt) at 540 THz
The "mechanical equivalent of light". [Definition of the candela (cd)]
The frequency of 540 THz (5.4 × 10¹⁴ Hz)
corresponds to yellowish-green light.
This translates into a wavelength of about 555.1712185 nm in a
vacuum,
or about 555.013 nm in the air, which is usually quoted as 555 nm.
This frequency,
sometimes dubbed "the most visible light",
was chosen as a basis for luminous units
because it corresponds to a maximal combined sensitivity for the
cones of the human retina (the receptors which allow normal
color vision under bright-light photopic conditions).
The situation is quite different
under low-light scotopic conditions, where human vision is
essentially black-and-white
(due to rods not cones )
with a peak response around a wavelength of 507 nm.
(2007-10-25)
The ultimate dimensionful constant...
Newton's constant of gravitation:  G ≈ 6.674 × 10⁻¹¹ m³/(kg·s²)
Assuming the above evolutions
[ 1, 2, 3 ]
come to pass,
the SI scheme would define every unit in terms of
de jure values of fundamental constants, using only one
arbitrary definition for the unit of time
(the second).
There would be no need for that remaining arbitrary definition if the
Newtonian constant of gravitation
(the remaining fundamental constant) was given a
de jure value.
There's no hope of ever measuring the constant of gravitation
directly with enough precision to allow a metrological
definition of the unit of time (the SI second) based on such a measurement.
However, if our mathematical understanding of the physical world progresses
well beyond its current state, we may eventually be able to find a theoretical
expression for the mass of the electron in terms of G.
This would equate the determination of G to a measurement of the
mass of the electron. Possibly, that
could be done with the required metrological precision...
Fundamental Physical Constants
Here are a few physical constants of significant metrological importance,
with the most precisely known ones listed first.
For the utmost in precision, this is roughly
the order in which they should be either measured or computed.
One exception is the magnetic moment of the electron expressed in
Bohr magnetons: 1.00115965218076(27).
That number is a difficult-to-compute function of the
fine structure constant (α)
which is actually known with a far lesser relative precision.
However, that "low" precision pertains to a small corrective term
away from unity and the overall precision is much better.
The list starts with numbers that are known exactly
(no uncertainty whatsoever) simply because of the way SI
units are currently defined.
Such exact numbers include the speed of light (c) in meters per second
(cf. SI definition of the meter) or
the vacuum permeability (μ₀)
in henries per meter (or, equivalently,
newtons per squared ampère, see SI definition of the ampere).
In this table, an equation between square brackets denotes a definition of an experimental
quantity in terms of fundamental constants known with a lesser precision.
On the other hand, unbracketed equations normally yield not only the
value of the quantity but the uncertainty on it (from the uncertainties on
products or ratios of the constants involved).
Recall that the worst-case uncertainty on a product of
independent factors is very nearly the sum of the uncertainties
on those factors. So is the uncertainty on a product of positive factors
that are increasing functions of each other (e.g.,
the relative uncertainties on a square and a cube are respectively two and three times
the relative uncertainty on the number itself).
The reader may want to use such considerations to establish that the
uncertainties on the Bohr radius, the Compton wavelength and the
"classical radius of the electron" are respectively proportional to 1, 2 and 3.
(HINT: The uncertainty on the fine-structure constant is much larger than
the uncertainty on Rydberg's constant.)
Another good exercise is to use the tabulated formula to compute Stefan's constant and the
uncertainty on it.
Except as noted, all values are derived from CODATA 2010.
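As a worked version of the suggested exercise (our addition), here is a Python sketch computing Stefan's constant from the standard formula σ = 2π⁵k⁴/(15 h³c²), using the CODATA 2010 values of h and k and the worst-case sum rule for the relative uncertainty:

```python
from math import pi

# CODATA 2010 inputs (c is exact; h and k carry the quoted standard uncertainties)
c = 299792458.0                               # m/s
h, dh = 6.62606957e-34, 0.00000029e-34        # J·s
k, dk = 1.3806488e-23, 0.0000013e-23          # J/K

sigma = 2 * pi**5 * k**4 / (15 * h**3 * c**2)
rel = 4 * dk / k + 3 * dh / h                 # worst-case relative uncertainty (sum rule above)
print(sigma, sigma * rel)                     # about 5.67e-8 W/(m²·K⁴), give or take a few parts per million
```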
Δx / x
Physical Constants (sorted by relative uncertainty)
Mass of an Alpha Particle, in daltons :  mα = 4.001506179125(62) u
⁴He Atom:  mα + 2 mₑ − (24.58739 + 54.41531) eV/c²  =  4.002603254131(63) u
Mohr et al. gave 4.002603254153(63), at odds with CODATA 2006 in the next-to-last digit (5 instead of 3).
3.8 × 10⁻¹¹
Mass of a Deuteron, in daltons :  md = 2.013553212712(77) u
Electron Charge / Mass :  −q / mₑ = −1.758820088(38) × 10¹¹ C/kg
Mass Defect per eV, in daltons :  1 eV/c² = 1.073544150(24) × 10⁻⁹ u
Rydberg Voltage :  hc R∞ / q  =  ½ (mₑ/q) c² α²  =  13.60569253(30) V
Tritium Ionization :  (hc R∞ /q) / (1 + mₑ/mt)  =  13.60321783(30) V
Deuterium Ionization :  (hc R∞ /q) / (1 + mₑ/md)  =  13.60198675(30) V
Protium Ionization :  (hc R∞ /q) / (1 + mₑ/mp)  =  13.59828667(30) V
Ionization of ⁴He :  2 Φ₀ × (5945204223(42) MHz)  =  24.58738798(54) V
The correction factor for He-3 would be about 0.999955147, yielding
approximately 24.586285 V for He-3.
Second IP of ⁴He⁺ :  4 (hc R∞ /q) / (1 + mₑ/mα)  =  54.4153101(12) V
Newtonian Constant of Gravitation :
G = 6.67384(80) × 10⁻¹¹ m³/(kg·s²)
The CODATA value was downgraded from
6.67428(67) in 2006 to
6.67384(80) in 2010 but
combining the 2 most precise pre-2006 measurements would yield a value of 6.67425(14).
CODATA went from 6.6720(41) in 1973 to an overly optimistic
6.67259(85) in 1986, then
6.673(10) in 1998 and
6.6742(10) in 2002.
Radius of a Proton (RMS charge) :  Rp = 0.84184(67) × 10⁻¹⁵ m
In July 2010,
Randolf Pohl & al.
found that the 2006 CODATA value of 0.8768(69) fm
was 4% too large! This was too late to revise the 2010 CODATA value of
0.8775(51) fm
which "stands" officially, for now.
Carl Sagan once needed an "obvious" universal length
as a basic unit in a graphic message
intended for [admittedly very unlikely] extra-terrestrial decoders.
That famous picture
was attached to the two space probes
(Pioneer 10 and 11, launched in 1972 and 1973)
which would become the first man-made objects ever
to leave the Solar System.
Sagan chose one of the most prevalent lengths in the Cosmos, namely
the wavelength of 21 cm corresponding to the
hyperfine spin-flip transition of neutral hydrogen
(isolated hydrogen atoms do pervade the Universe).
Hydrogen Line :
1420.4057517667(9) MHz
21.106114054179(13) cm
Back in 1970, the value of the hyperfine "spin-flip" transition
frequency of the ground state of atomic hydrogen (protium) had already
been measured with superb precision by Hellwig et al. :
1420.405751768(2) MHz.
This was based on a direct comparison with the hyperfine frequency of cesium-133,
carried out at NBS (now NIST).
In 1971, Essen et al pushed the frontiers of precision
to a level that has not been equaled since then. Their results stood for
nearly 40 years as the most precise measurement ever performed
(the value of the magnetic moment of the electron expressed in Bohr magnetons is now
known with slightly better precision).
1420.4057517667(9) MHz
Three years earlier (in 1967) a new definition of the SI second
had been adopted based on cesium-133, for technological convenience.
Now, the world is almost ripe for a new definition of the unit of time
based on hydrogen, the simplest element.
Such a new definition might have much better prospects of being ultimately tied to
the theoretical constants of Physics in the future.
A similar hyperfine "spin-flip" transition is observed for the 3He+ ion, which is another system consisting of a single electron orbiting a fermion. Like the proton, the helion has a spin of 1/2 in its ground state (unlike the proton, it also exists in a rare excited state of spin 3/2). The corresponding frequency was measured to be:
8665.649905(50) MHz
E.N. Fortson, F.G. Major and H.G. Dehmelt, Phys. Rev. Lett., vol. 16, pp. 221-225 (1966).
8665.649867(10) MHz
Hans A. Schuessler, E.N. Fortson and H.G. Dehmelt, Phys. Rev., vol. 187, pp. 5-38 (1969).
A very common microscopic yardstick is the equilibrium bond length in a hydrogen molecule (i.e., the average distance between the two protons in an ordinary molecule of hydrogen). It is not yet tied to the above fundamental constants and it's only known at modest experimental precision (roughly 74.1 pm, or about 0.741 Å).
"The atomic hydrogen line at 21 cm has been measured to a precision of 0.001 Hz"
by L. Essen, R. W. Donaldson, M. J. Bangham, and E. G. Hope,. Nature (London) 229, 110 (1971).
Hydrogen-like
Atoms by James F. Harrison
(Chemistry 883, Fall 2008, Michigan State University)
Primary Conversion Factors
Below are the statutory quantities which allow exact conversions between various
physical units in different systems:
149597870700 m to the au:
Astronomical unit of length. (2012)
Enacted by the International Astronomical Union
on August 31, 2012.
This is the end of a long road
which began in 1672 as Cassini proposed a unit equal to the mean distance
between the Earth and the Sun. This was recast as the radius of the
circular trajectory of a tiny mass that would orbit an isolated solar mass in one "year"
(first an actual sidereal year, then a fixed approximation thereof,
known as the Gaussian year).
This also gives an exact metric equivalence for the parsec (pc), a unit defined as 648000 au / π.
(The obscure siriometer, introduced in 1911 by Carl Charlier (1862-1934) for interstellar distances, is 1 Mau = 1.495978707 × 10¹⁷ m, or about 4.848 pc.)
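As an illustration of these definitions (not an independent source), the parsec and the siriometer can be recovered numerically from the exact value of the au:

    from math import pi
    au = 149597870700.0                  # m, exact (IAU, 2012)
    parsec = 648000 * au / pi            # ~3.0856775814913673e16 m
    siriometer = 1e6 * au                # 1 Mau = 1.495978707e17 m
    print(parsec, siriometer / parsec)   # the siriometer is ~4.848 pc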
25.4 mm to the inch:
International inch. (1959)
Enacted by an international treaty, effective January 1, 1959.
This gives the following exact metric equivalences for other units of length:
1 ft = 0.3048 m, 1 yd = 0.9144 m,
1 mi = 1609.344 m
39.37 "US survey" inches to the meter:
"US Survey" inch. (1866, 1893)
This equivalence is now obsolete, except in some records of the
US Coast and Geodetic Survey.
The International units defined in 1959 are exactly 2 ppm smaller than their
"US Survey" counterparts (the ratio is 999998/1000000).
1 lb = 0.45359237 kg:
International pound. (1959)
Enacted by an international treaty, effective January 1, 1959.
This gives the following exact metric equivalences for other
customary units of mass:
1 oz = 28.349523125 g,
1 ozt = 31.1034768 g,
1 gn = 64.79891 mg,
since there are 7000 gn to the lb, 16 oz to the lb, and 480 gn to the
troy ounce
(ozt).
231 cubic inches to the Winchester gallon:
U.S. Gallon. (1707, 1836)
This is now tied to the 1959 International inch, which makes the [Winchester]
US gallon equal to exactly 3.785411784 L.
4.54609 L to the Imperial gallon:
U.K. Gallon. (1985)
This is the latest and final metric equivalence for a unit
proposed in 1819 (and effectively introduced
in 1824) as the volume of 10 lb of water at 62°F.
9.80665 m/s2:
Standard acceleration of gravity. (1901)
Multiplying this by a unit of mass gives a unit of force equal to the weight of that mass under standard conditions, approximately equivalent to those that would prevail at sea level, at 45° of latitude on Earth.
The value was enacted by the third CGPM in 1901.
1 kgf = 9.80665 N and
1 lbf = 4.4482216152605 N.
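As a sanity check, every exact equivalence quoted in the preceding entries follows from just three statutory numbers (25.4 mm, 0.45359237 kg, 9.80665 m/s²). A minimal sketch in Python:

    inch = 0.0254          # m     (international inch, 1959)
    lb   = 0.45359237      # kg    (international pound, 1959)
    g0   = 9.80665         # m/s^2 (standard gravity, 1901)

    print(1760 * 3 * 12 * inch)      # 1 mile = 1609.344 m
    print(1000 * lb / 16)            # 1 oz   = 28.349523125 g
    print(1e6 * lb / 7000)           # 1 gn   = 64.79891 mg
    print(480 * 1000 * lb / 7000)    # 1 ozt  = 31.1034768 g
    print(231 * 2.54**3 / 1000)      # 1 US gallon = 3.785411784 L
    print(lb * g0)                   # 1 lbf  = 4.4482216152605 N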
101325 Pa = 1 atm:
Normal atmospheric pressure. (1954)
As enacted by the 10th CGPM in 1954,
the atmosphere unit (atm) is exactly 760 Torr.
It's only approximately 760 mmHg, because of the following specification
for the mmHg and other units of pressure based on
the conventional density of mercury.
13595.1 g/L (or kg/m³) :
Conventional density of mercury.
This makes 760 mmHg equal a pressure of (0.76)(13595.1)(9.80665) or
exactly 101325.0144354 Pa, which was rounded down in 1954
to give the official value of the atm stated above.
The torr (whose symbol is capitalized: Torr) was then defined
as 1/760 of the rounded value, which makes the mmHg very slightly larger
than the torr, although both are used interchangeably in practice.
The mmHg is based on this conventional density (which is close to the actual
density of mercury at 0°C) regardless of whatever the actual density of
mercury may be under the prevailing temperature at the time measurements are taken.
Beware of what apparently authoritative sources may say on this subject...
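The tiny difference between the mmHg and the torr follows directly from the numbers just quoted; a quick check in Python:

    rho_Hg = 13595.1               # kg/m^3, conventional density of mercury
    g0     = 9.80665               # m/s^2,  standard gravity
    mmHg   = 0.001 * rho_Hg * g0   # one millimeter of mercury, in pascals
    torr   = 101325 / 760          # one torr, in pascals (exact)
    print(760 * mmHg)              # 101325.0144354 Pa (before the 1954 rounding)
    print(mmHg / torr)             # ~1.00000014 : the mmHg is very slightly larger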
999.972 g/L (or kg/m³) :
Conventional density of "water".
This is the conventional conversion factor between so-called
relative density and absolute density.
This is also the factor to use for units of pressure expressed as heights of
a water column (just like the above conventional density of mercury is
used for similar purposes to obtain temperature-independent pressure units).
This density is clearly very close to that of natural water at its densest point.
However, it's
best considered to be a conventional conversion factor.
The above number can be traced to the 1904 work of the Swiss-born French metrologist
Charles E. Guillaume (1861-1938;
Nobel 1920).
Guillaume had joined the BIPM in 1883 and would be its director from 1915 to 1936.
From 1901 (3rd CGPM) to 1964 (12th CGPM), the liter was (unfortunately) not defined as a cubic decimeter, but instead as the volume of 1 kg of water in its densest state under 1 atm of pressure (which indicates a temperature of about 3.984°C). Guillaume measured that volume to be 1000.028 cc, which is equivalent to the above conversion factor (to 9-digit accuracy).
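In other words, the conventional density is just the reciprocal of Guillaume's measured volume; a one-line check:

    print(1000 / 1.000028)    # 999.9720008 g/L, quoted above as 999.972 g/L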
The above conventional density remains universally adopted in spite of the advent of
"Standard Mean Ocean Water" (SMOW) whose density can be
slightly higher: SMOW around 3.98°C is about 999.975 g/L.
The original batch of SMOW came from seawater collected
by Harmon Craig on the equator at 180 degrees of longitude.
After distillation, it was enriched with heavy water
to make the isotopic composition match what would be expected of undistilled seawater
(distillation changes the isotopic composition,
because lighter molecules are more volatile).
In 1961, Craig tied SMOW to the NBS-1 sample
of meteoric water originally collected from the Potomac River
by the National Bureau of Standards
(now NIST).
For example, the ratio of Oxygen-18 to Oxygen-16 in SMOW was
0.8% higher than the corresponding ratio in NBS-1.
This "actual" SMOW is all but exhausted, but water closely matching its
isotopic
composition has been made commercially available, since 1968,
by the Vienna-based IAEA
(International Atomic Energy Agency) under the name of VSMOW or "Vienna SMOW".
4.184 J to the calorie (cal):
Thermochemical calorie. (1935)
This is currently understood as the value of a calorie, unless otherwise
specified (the 1956 "IST" calorie described below is slightly different).
Watch out!
The kilocalorie (1 kcal = 1000 cal) was
dubbed "Calorie" or "Cal" [capital "C"]
in dietetics before 1969 (it still is, at times).
2326 J/kg = 1 Btu/lb:
IST heat capacity of water, per °F. (1956)
This defines the IT or IST ("International [Steam] Tables") flavor of the Btu
("British Thermal Unit") in SI units, once the lb/kg ratio is known.
That value was adopted in July 1956 by the
5th International Conference on the Properties of Steam,
which took place in London, England.
The subsequent definition of the pound as 0.45359237 kg
(effective since January 1, 1959)
makes the official Btu equal to exactly 1055.05585262 J.
The rarely used centigrade heat unit (chu)
is defined as 1.8 Btu (exactly 1899.100534716 J).
The additional relation 1 cal/g = 1 chu/lb
has been used to introduce a dubious
"IST calorie" of exactly 4.1868 J
competing with the above thermochemical calorie
of 4.184 J, used by the scientific community since 1935.
Beware of the bogus conversion factor of 4.1868 J/cal
which has subsequently infected many computers and most
handheld calculators with conversion capabilities...
The Btu was apparently introduced by
Michael Faraday
(before 1820?) as the quantity of heat required to raise one pound (lb) of
water from 63°F to 64°F.
This deprecated definition is roughly compatible with the modern one
(and it remains mentally helpful) but it's metrologically inferior.
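The exact figures quoted in this entry all follow from the statutory 2326 J/kg and the 1959 pound; a brief check in Python:

    lb  = 0.45359237           # kg (international pound, 1959)
    btu = 2326 * lb            # IT Btu = 1055.05585262 J
    chu = 1.8 * btu            # centigrade heat unit = 1899.100534716 J
    print(btu, chu)
    print(chu / (1000 * lb))   # 4.1868 J : the "IST calorie" (from 1 cal/g = 1 chu/lb)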
Dimensionless Physical Constants
Embedded into physical reality are a few nontrivial constants whose values
do not depend on our chosen system of measurement units.
Examples include the ratios of the masses of all elementary particles to the mass of the
electron. Arguably, one of the ultimate goals of theoretical physics is
to explain those values.
Other such unexplained constants have a mystical flair to them.
(2018-06-02)
"Galileo's constant" : Case closed!
A constant Galileo once had to measure is now known perfectly.
Galileo detected the simultaneity of two events by ear.
When two bangs were less than about 11 ms apart he heard a single sound
and considered the two events simultaneous.
That's probably why he chose that particular duration as his
unit of time
which he called a tempo
(plural tempi). The precise definition of the unit
was in terms of a particular water-clock which he was using to measure longer durations.
Using a simple pendulum of length R, he would produce a bang one quarter-period after the release by placing a metal gong just underneath the pivot point, where the bob strikes it at the lowest point of its swing.
On the other hand, he could also release a ball in free fall from a height H over another gong. Releasing the two things simultaneously, he could tell whether the two durations were equal (within the aforementioned precision) and adjust either length until they were.
Galileo observed that the ratio R/H was always the same and he measured
the value of that constant as precisely as he could.
Nowadays, we know the ideal value of that constant:
R/H = 8 / π² = 0.8105694691387021715510357...
This much can be derived in any freshman physics class
using the elementary principles established by
Newton after Galileo's death.
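For the record, here is that short derivation, assuming small oscillations of the pendulum. The quarter-period of a simple pendulum of length R is
    T/4 = (π/2) √(R/g)
whereas a ball dropped from a height H reaches the gong after a time
    t = √(2H/g)
Equating those two durations and squaring both sides gives (π²/4)(R/g) = 2H/g, hence R/H = 8/π².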
Thus, Galileo's results can now be used backwards to estimate how good his experimental methods were. (Indeed, they were as good as can be expected when simultaneity is appreciated by ear.)
The dream of some theoretical physicists is now to advance our theories
to the point that the various dimensionless physical constants which are
now mysterious to us can be explained as easily as what I've
called Galileo's constant here (for shock value).
Combining Planck's constant (h) with the two electromagnetic constants and/or the speed of light (recall that ε_0 μ_0 c² = 1), there's essentially only one way to obtain a quantity whose dimension is that of the square of an electric charge.
The ratio of the square of the charge of an electron to that quantity is
a pure dimensionless number known as
Sommerfeld's constant or the fine-structure constant :
α = μ_0 c e² / 2h = e² / 2hcε_0 = 1 / 137.035999...
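A minimal numerical check of that formula (in Python), using the exact values of e, h and c in the 2019 SI and the CODATA 2018 value of ε_0 (the latter is an assumption, since ε_0 is now a measured quantity):

    e    = 1.602176634e-19     # C,   exact
    h    = 6.62607015e-34      # J.s, exact
    c    = 299792458.0         # m/s, exact
    eps0 = 8.8541878128e-12    # F/m, CODATA 2018 (assumed)
    alpha = e**2 / (2 * h * c * eps0)
    print(1 / alpha)           # ~137.036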
The value of this constant has captured the imagination of many generations
of physicists, professionals and amateurs alike.
Many wild guesses have been made,
often based on little more than dubious numerology.
In 1948, Edward Teller
(1908-2003) suggested that the electromagnetic interaction might be weakening
in cosmological time
and he ventured the guess that the fine-structure constant could
be inversely proportional to the logarithm of the age of the Universe.
This proposal was demolished by
Dennis Wilkinson
(1918-2013) using ordinary mineralogy, which
shows that the rate of alpha-decay for U-238 could not have varied by much more than
10% in a billion years (that rate is extremely sensitive
to the exact value of the fine-structure constant).
Teller's proposal was further refuted by precise measurements from the fossil reactors at Oklo (Gabon), which show that, two billion years ago, the fine-structure constant had essentially the same value as it does today.
In 1919, Hermann Weyl (1885-1955) remarked that the radius of the Universe and the radius of an electron would be exactly in the above ratio if the mass of the Universe were to gravitational energy what the mass of an electron is to electromagnetic energy (using, for example, the electrostatic argument leading to the classical radius of the electron).
In 1937, Dirac singled out the interactions between an electron and a proton instead, which led him to ponder a quantity equal to the above divided by the proton-to-electron mass ratio.
In 1966, E. Pascual Jordan (1902-1980)
used Dirac's "variable gravity" cosmology to argue that the Earth had doubled in size since the
continents were formed, thus advocating a very misguided alternative to
plate tectonics (or continental drift).