The famous problems discussed below are either still unsolved
or remained open for quite a while,
in spite of considerable scrutiny.
Some entail substantial cash rewards.
Open Problems (or recently resolved
long-standing ones)
(2002-12-07) RH = Riemann's Hypothesis (August 1859)
The real part of all nontrivial zeroes of Riemann's
ζ (zeta) function is ½.
I have temporarily put aside the search for a rigorous proof, after some futile attempts,
because it's not necessary for my immediate goal.
Bernhard Riemann, 1859.
As he solved the Basel problem in 1735, Leonhard Euler (1707-1783) introduced the following series, now universally called Zeta, focusing on integral values of s > 1 (for which the series converges):
ζ(s)  =  1/1^s + 1/2^s + 1/3^s + 1/4^s + ...   (sum over all positive integers n of 1/n^s)
Using the
fundamental theorem of arithmetic
(which states that every positive integer has a
unique factorization into primes)
Euler observed (in 1737) that the term of rank n in the above series is obtained once and only once in the expansion of the product of the geometric series
1 / (1 - p^-s)  =  1 + p^-s + p^-2s + p^-3s + ...
for any set of primes containing, at least, every prime divisor p of n. So, the sum of the whole series is equal to the product of all such factors over all primes p. This yields the celebrated Euler product formula :
ζ(s)  =  Π 1 / (1 - p^-s)   (product over all primes p)
Consider now the alternating series
η(s)  =  1 - 1/2^s + 1/3^s - 1/4^s + ...
At least when ζ(s) converges, that's equal to ζ(s) - 2 ζ(s) / 2^s.  So:
ζ(s)  =  η(s) / (1 - 2^(1-s))
Except for s = 1, this directly provides an analytic expression for ζ(s) in terms of an alternating series which converges when Re(s) > 0. Nice...
Dirichlet used the above to show that ζ has a simple pole of residue 1 at s = 1. (HINT: The numerator and the denominator are respectively asymptotic to Log 2 and (s-1) Log 2 as s goes to 1.)
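As a quick numerical sanity check of that relation, here is a minimal Python sketch (naive partial sums of the alternating series; the cutoff of one million terms is an arbitrary choice):

    # Numerical check of  zeta(s) = eta(s) / (1 - 2^(1-s))  for Re(s) > 0, s != 1.
    def eta(s, terms=10**6):
        return sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))

    def zeta_via_eta(s, terms=10**6):
        return eta(s, terms) / (1 - 2 ** (1 - s))

    print(zeta_via_eta(2))                    # about 1.644934 = pi^2/6  (Basel problem)
    print(zeta_via_eta(0.5 + 14.134725j))     # small in magnitude: near the first nontrivial zero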
In 1859, Bernhard Riemann (1826-1866) showed that the Zeta function can actually be extended to the entire complex plane, except at the pole s = 1, by analytic continuation (a concept invented by Weierstrass in 1842), establishing in the process a relation between values at (1-s) and s (which had been conjectured by Euler in 1749, in an equivalent form):
π^(-s/2) Γ(s/2) ζ(s)   =   π^(-(1-s)/2) Γ((1-s)/2) ζ(1-s)
Using the known properties of the Gamma function
(whose reciprocal 1/Γ
is an entire function
with zeroes at all the nonpositive integers) this relation confirms the existence of
a simple pole for the Zeta function at
s = 1 and reveals
trivial zeroes at negative even integers: -2, -4, -6...
Using the regularity of the Zeta function for Re(s) > 1
(due to the aforementioned convergence of the defining series in that domain)
this same relation shows that nontrivial zeroes can only exist
in the so-called critical strip (0 ≤ Re(s) ≤ 1).
They could thus a priori be of two different types:
Pairs of zeroes { s, s* } on the critical line
(i.e., Re(s) = ½).
Quadruplets of zeroes { s, s*, 1-s, 1-s* } off that critical line.
The famous Riemann Hypothesis (RH) is
the conjecture, formulated by
Bernhard Riemann in 1859,
that there are no zeroes of the latter type:
"RH" The Riemann Hypothesis
(1859)
All nontrivial zeroes of the Zeta function are on the critical line :
Re(s) = ½
Zeta shares its nontrivial zeros with the above convergent alternating series η (for Re(s) > 0).
Listed below are the imaginary parts of the 29 smallest zeroes
of the Zeta function
located in the upper half-plane
(conjugate zeroes exist in the lower half-plane
whose imaginary parts are simply the opposites of these).
The gigantic ZetaGrid distributed project
of Sebastian Wedeniwski
managed to compute more than 10 trillion
zeros over the course of its lifetime (2001-2005) but they were scooped
by Xavier
Gourdon and Patrick Demichel who achieved that same goal earlier with modest means by using
superior software based on an
algorithm
devised in 1988 by
Andrew M. Odlyzko (1949-) and
Arnold Schönhage (1934-)...
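For modest verifications (nothing like the record computations above), the mpmath library can locate nontrivial zeros directly; a small sketch, assuming mpmath is installed:

    # Locate the first few nontrivial zeros of zeta and check that they sit on the critical line.
    from mpmath import mp, zetazero, zeta

    mp.dps = 25                              # working precision, in decimal digits
    for n in range(1, 6):
        rho = zetazero(n)                    # n-th zero in the upper half-plane
        print(n, rho, abs(zeta(rho)))        # real part 0.5, |zeta(rho)| essentially 0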
Many statements have been shown to hold if RH is assumed to be true, and
a number of them are known to imply RH, so they are actually equivalent to it.
Several nice statements have been made which seem
true and are simpler and stronger than
RH (each implies RH but the converse need not be true).
This includes a conjecture made in the doctoral dissertation of Sebastian Wedeniwski (the aforementioned mastermind of ZetaGrid, 2001-2005).
The Mertens conjecture
(Stieltjes, 1885. Mertens, 1897)
was yet another famous statement "stronger than RH" which seemed promising until it was disproved in 1985
by Andrew Odlyzko (b.1949) and
Herman te Riele (b.1947).
It had been formulated in 1885 by Thomas Stieltjes (1856-1894)
after he thought he could prove the following weaker statement
(actually equivalent to RH) about the asymptotics of the
Mertens function M :
M(n)  =  O( n^(½+ε) )   for any ε > 0
The Mertens conjecture is the stronger (false) statement:   |M(n)|  <  √n
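Recall that M(n) is the sum μ(1) + μ(2) + ... + μ(n), where μ is the Möbius function. The disproved bound is easy to probe for small n; a self-contained sketch (sieve-based, illustration only):

    # Mertens function M(n) via a simple Moebius sieve; check |M(n)| < sqrt(n) for small n.
    def mobius_sieve(limit):
        mu = [1] * (limit + 1)
        is_prime = [True] * (limit + 1)
        for p in range(2, limit + 1):
            if is_prime[p]:                          # p is prime
                for m in range(p, limit + 1, p):
                    is_prime[m] = (m == p)
                    mu[m] = -mu[m]                   # one more distinct prime factor
                for m in range(p * p, limit + 1, p * p):
                    mu[m] = 0                        # not squarefree
        return mu

    N = 10000
    mu, M = mobius_sieve(N), 0
    for n in range(1, N + 1):
        M += mu[n]
        assert n == 1 or abs(M) < n ** 0.5           # known to fail, but only for astronomically large n
    print("M(%d) = %d" % (N, M))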
Where are the zeros of zeta of s ?
G.F.B. Riemann has made a good guess:
" They're all on the critical line," said he.
" And their density's one over two pi Log t. "
This statement of Riemann's has been like a trigger
And many good men, with vim and with vigor,
Have attempted to find, with mathematical rigor,
what happens to zeta as mod t gets bigger.
The names of Landau
and Bohr
and of Cramér, Hardy
and Littlewood,
and Titchmarsh are there.
In spite of their efforts and skill and finesse,
in locating the zeros there's been no success.
In 1914, G.H. Hardy did find,
An infinite number that lay on the line,
His theorem however won't rule out the case,
That there might be a zero at some other place.
Let P be the function pi minus Li,
The order of P is not known for x high,
If square root of x times log x we could show,
Then Riemann's conjecture would surely be so.
Related to this is another enigma,
Concerning the Lindelöf function mu(sigma)
Which measures the growth in the critical strip,
On the number of zeros it gives us a grip.
But nobody knows how this function behaves.
Convexity tells us it can have no waves,
Lindelöf said that the shape of its graph,
Is constant when sigma is more than one-half.
Oh, where are the zeros of zeta of s?
We must know exactly, we cannot just guess,
In order to strengthen the prime number theorem,
The integral's contour must not get too near 'em.
André Weil has bettered old Riemann's fine guess,
by using a fancier zeta of s.
He proves that the zeros are where they should be,
provided the characteristic is p.
There's a moral to draw from this sad tale of woe.
That every young genius among you should know:
If you tackle a problem and seem to get stuck,
just take it mod p and you'll have better luck.
What fraction of zeros on the line will be found,
when mod t is kept below some given bound?
Does the fraction, whatever, stay bounded below,
as the bound on mod t is permitted to grow?
The efforts of Selberg did finally banish,
All fears that the fraction might possibly vanish.
It stays bounded below, which is just as it should,
But the bound he determined was not very good.
Norm Levinson managed to show, better yet.
At two-to-one odds it would be a good bet,
If over a zero you happen to trip,
it would be on the line and not just in the strip.
Levinson tried in a classical way,
Weil brought modular means into play.
Atiyah then left and Paul Cohen quit,
So now there's no proof at all that will fit.
But now we must study this matter anew,
Serre points out manifold things it makes true.
A medal might be the reward in this quest,
For Riemann's conjecture is surely the best.
Nobody knows for sure whether the sequence of twin primes (3 & 5, 5 & 7, 11 & 13, 17 & 19, 29 & 31 ...) is infinite or not, although everybody's guessing that it is.
That's one of the two oldest
unsolved problems in mathematics (the other one pertains to
odd perfect numbers).
In 1966, Chen Jingrun (1933-1996)
proved that there are infinitely many primes p
such that p+2 is either prime or semiprime (in which case p
is called a Chen prime). A semiprime is the product of two primes (not necessarily distinct).
The Twin Primes Conjecture says that the
following is true for K = 2:
There are infinitely many pairs of primes whose difference is K.
It's widely believed that the above statement holds for
any even integer K.
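A small empirical sketch (brute force, with an arbitrary bound of 100000) counts how often each gap K occurs between consecutive primes below that bound; twin primes correspond to K = 2:

    # Count the gaps between consecutive primes below a bound (empirical illustration only).
    from collections import Counter

    def primes_below(limit):                         # plain sieve of Eratosthenes
        sieve = bytearray([1]) * limit
        sieve[:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p:limit:p] = bytearray(len(range(p * p, limit, p)))
        return [n for n in range(limit) if sieve[n]]

    ps = primes_below(100000)
    gaps = Counter(q - p for p, q in zip(ps, ps[1:]))
    for k in sorted(gaps)[:8]:
        print("gap %2d occurs %5d times" % (k, gaps[k]))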
Polignac's conjecture (1849)
is the belief that there are infinitely many pairs of
consecutive primes differing by K, for any even K.
The weaker statement that the above holds for at least one nonzero
value of K is equivalent to saying that the difference
between consecutive primes doesn't tend to infinity.
This was only proved recently:
In April 2013, Yitang Zhang
established an upper-bound of 70 million for the least K verifying the above.
Zhang was quoted as saying that the methods in
his 55-page paper
could be perfected to pull this upper-bound downward.
Quickly, Terry Tao initiated a
polymath
project which reduced the upper-bound to 4680,
using Zhang's own methods.
In November 2013,
James Maynard
(b. 1987, 2022 Fields Medal)
found a streamlined approach giving directly an upper-bound of 600.
He joined Tao's group and they finally managed to reduce the bound to 246
(as of 2014-04-14).
Assuming a generalized version of the
Elliott-Halberstam
conjecture (1968) the above upper-bound would be reduced down to 6
[Polymath, August 2014].
However, new methods seem needed for the ultimate reduction to 2.
"Bounded Gaps between Primes" by Yitang Zhang (Annals of Mathematics, 179, pp.1121-1174)
(2016-01-29) Goldbach Conjectures (1742)
Every even number greater than 2 is the sum of two primes.
That conjecture was first formulated by the talented recreational mathematician
Christian Goldbach
(1690-1764) who wrote to Euler
about it in 1742.
An equivalent statement is obtained with odd primes by excluding the number
4 = 2+2 (which isn't the sum of two odd primes).
The weaker statement that odd numbers are sums of three primes can be construed
as a corollary, from the remark that an odd number above 5 is 3 plus an even number above 2.
That weaker statement is less formidable; it was shown to hold for sufficiently
large odd numbers in 1923 by
Hardy &
Littlewoodassuming
the Riemann Hypothesis.
Even Goldbach Conjecture : (Goldbach's conjecture)
All even numbers above 2 are sums of two primes.
All even numbers above 4 are sums of two odd primes.
Odd Goldbach Conjecture : (Goldbach's weak conjecture)
All odd numbers above 5 are sums of three primes.
All odd numbers above 7 are sums of three odd primes.
In 2012 and 2013, Harald Helfgott (b. 1977)
published two papers which provide a complete proof of the weak conjecture.
The strong conjecture remains an open question which has only been
checked by computer
for even numbers up to 4 × 10^18 or so.
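A brute-force verification for small even numbers (a sketch only; the published record verifications use far more sophisticated sieving) might look like this:

    # Check the even Goldbach conjecture for every even number up to a small bound.
    def prime_table(limit):
        sieve = [False, False] + [True] * (limit - 1)
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                for m in range(p * p, limit + 1, p):
                    sieve[m] = False
        return sieve                                  # sieve[n] is True iff n is prime

    LIMIT = 10000
    is_prime = prime_table(LIMIT)
    for n in range(4, LIMIT + 1, 2):
        assert any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1)), n
    print("No Goldbach counterexample below", LIMIT)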
On 1752-11-18, Goldbach also formulated the lesser-known
conjecture that any odd number is twice a square plus a prime.
Two counterexamples (5777 and 5993) were discovered in 1856 by
Moritz A. Stern (of Stern-Brocot tree fame).
Asymptotically, there should be as many primes between n² and (n+1)² as between 1 and n (roughly n / ln n, by the prime-number theorem). So, we're very confident that Legendre's conjecture (that there's always at least one prime between n² and (n+1)²) won't fail in the long run. That's a good heuristic argument but it doesn't constitute a proof.
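That heuristic is easy to illustrate numerically; a small sketch (assuming the sympy library) compares the actual count of primes between n² and (n+1)² with the n / ln n estimate:

    # Compare the number of primes between n^2 and (n+1)^2 with the heuristic n / ln(n).
    from math import log
    from sympy import primerange                      # assumes sympy is available

    for n in (10, 30, 100, 300):
        count = len(list(primerange(n * n, (n + 1) ** 2)))
        print("n = %3d : %3d primes in (n^2, (n+1)^2),  n/ln(n) = %.1f" % (n, count, n / log(n)))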
It seems likely that there are infinitely many primes of the form n² + 1 but this is not known for sure. This old question was popular enough in 1912 for Landau to include it in his list of four unsolved problems related to primes.
(2017-11-25) Schinzel's Hypothesis H
(Schinzel & Sierpinski, 1958)
A set of irreducible integer-valued polynomials without a fixed prime divisor
are simultaneously prime infinitely often.
Integer-valued polynomials need not have integer coefficients.
½ n2 + ½ n
is integer-valued (A000217) but it's reducible.
A fixed divisor of several integer-valued polynomials is
defined to be an integer which divides their product at every point.
A set of polynomials without a prime
fixed divisor is said to verify Bunyakovsky's property.
Bunyakovsky's property is clearly necessary for nonconstant
polynomials to be simultaneously prime infinitely often.
Otherwise, there would be a fixed prime p
dividing the product of the polynomials at every point...
In this case, at every point where all polynomials are prime, at least one of
them must be equal to p (if a prime divides a product of primes,
it's equal to one of them).
This implies that at least one of the polynomials is equal to p
infinitely many times, which can only happen if it's
constant, which is ruled out.
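Fixed divisors are easy to detect numerically: the fixed divisor of the product is the gcd of its values at sufficiently many consecutive points (any D+1 consecutive points suffice, where D is the total degree). A minimal sketch, with polynomials given by their integer coefficient lists:

    # Detect fixed prime divisors of a set of integer polynomials (Bunyakovsky's property).
    from math import gcd

    def fixed_divisor(polys, points=25):              # 25 sample points is ample for small degrees
        g = 0
        for x in range(points):
            product = 1
            for coeffs in polys:                       # coefficients listed from degree 0 upward
                product *= sum(c * x ** k for k, c in enumerate(coeffs))
            g = gcd(g, product)
        return g

    print(fixed_divisor([[0, 1], [2, 1]]))   # n and n+2 (twin-prime pattern): fixed divisor 1
    print(fixed_divisor([[0, 1], [1, 1]]))   # n and n+1: fixed divisor 2, so Hypothesis H doesn't apply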
(2002-11-17) P = NP (?)
What's an NP-complete problem?
A computational problem is said to be in the class P
of polynomial time problems whenever there's
an algorithm which can find a valid solution in
a number of elementary steps which is always less than a certain
polynomial function
of the size of the input data (one measure of this size could be the number of digits used in a reasonably thrifty encoding of the input data).
The class NP (nondeterministic polynomial time problems)
consists of those problems which could be so solved nondeterministically,
which is a fancy way to say that an unlimited number of lucky guesses are allowed
in the process which arrives at a solution.
Such a nondeterministic process must still be such that only valid solutions
are produced...
To put it in simpler words, a problem is in NP if and only if a solution of
it can be checked in polynomial time
(an explicit nondeterministic algorithm would then be to guess a correct solution
and check it).
In 1972, Richard Karp (1935-)
discovered that there are problems in NP which he dubbed "NP-complete"
because they are at least as tough to solve as any other problem in NP, in the
following sense:
Any NP problem can be reduced in polynomial time by a deterministic algorithm
to the solution of an NP-complete problem whose data size is no more than
a polynomial function of the original input data.
Therefore, if any NP-complete problem could be solved deterministically
in polynomial time (i.e., if it was a P problem) then all NP problems
would be in P and we would thus have P = NP.
Karp's original NP-complete problem (dubbed SAT)
was the satisfiability of boolean expressions:
Is there a way to satisfy a given
boolean expression (i.e., make it "true") by assigning true/false values
to the variables in it?
The SAT problem is clearly in NP (just guess
a correct set of values and compute the boolean
expression to make sure it's true).
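For instance, a candidate assignment for a formula in conjunctive normal form can be checked in time linear in the size of the formula; a minimal sketch (clauses as lists of signed variable numbers, a common DIMACS-style convention):

    # Polynomial-time verification of a SAT certificate (the "check" half of NP membership).
    def satisfies(cnf, assignment):
        # assignment maps each variable number to True/False
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in cnf)

    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
    cnf = [[1, -2], [2, 3], [-1, -3]]
    print(satisfies(cnf, {1: True, 2: True, 3: False}))    # True : a valid certificate
    print(satisfies(cnf, {1: True, 2: False, 3: True}))    # False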
Conversely,
Stephen Cook (1939-)
proved from scratch in 1971 that any problem in NP can be reduced in polynomial
time to a commensurable boolean satisfiability problem, thus establishing
SAT to be NP-complete. This result is now known as the
Cook-Levin theorem
because it was also obtained independently by
Leonid Levin (1948-).
If a known NP-complete problem like SAT can be reduced polynomially to some NP problem Q,
the problem Q is then established to be NP-complete.
This way, from Karp's original NP-complete problem,
the list of known NP-complete problems has grown to include
literally hundreds of "classical" examples.
The tantalizing thing is that many such NP-complete problems are very practical
problems which, at first, look like they could be solved in polynomial time.
Yet, nobody has ever "cracked" one of these in polynomial time
or proved that such a thing could not be done...
Therefore we still don't know whether P=NP or not.
(2002-12-21) Collatz Sequences and the Syracuse Conjecture
Collatz sequences are sequences of integers where an even N is followed by N/2
and an odd N by 3N+1.
Do they all land on 4, 2, 1, 4...?
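A one-screen sketch of the iteration (stopping when the trajectory reaches 1, which is precisely what the conjecture asserts always happens):

    # Collatz trajectory: halve an even number, map an odd n to 3n+1, stop at 1.
    def collatz(n):
        path = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            path.append(n)
        return path

    print(collatz(27))                                     # climbs up to 9232 before reaching 1
    print(max(len(collatz(n)) for n in range(1, 10000)))   # longest trajectory started below 10000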
The problem is most commonly named after the German mathematician
Lothar
Collatz (1910-1990) who formulated the conjecture in 1937
and shared it privately with
Stanislaw Ulam (1909-1984)
and Shizuo Kakutani (1911-2004)
at the ICM in 1950.
This was first described in print in 1971, in a transcript of a
talk given in 1970 by Harold Coxeter (1907-2003).
In 1975, Helmut Hasse (1898-1979) coined the name
Syracuse problem after
Syracuse University.
It's also called the hailstone problem (because of its dynamics)
and has been popularized under several other names
like Kakutani's problem (Yale University, early 1960's) and
Ulam's problem.
The sequence A087003
may be defined as the Dirichlet inverse of the character modulo 2.
It was first defined by Labos E.
as the sum of all the Möbius values found at the points of the Collatz trajectory
until a "4" is found (2003-10-02).
However, Marc Lebrun (2004-02-19) has shown that either definition simply
means that A087003 is equal to the Möbius function
at odd points and vanishes at even points...
All told, the Collatz trajectories turn out to be irrelevant !
Breakthrough in 2019
Almost all orbits of the Collatz map attain almost bounded values.
"This is about as close as anyone can get to the Collatz conjecture without actually solving it."   Terry Tao (September 2019)
(2006-08-29) The Poincaré Conjecture (1904)
This was proved in 2002 by Grigori Yakovlevich Perelman,
in the generalized form proposed by
Thurston (geometrization conjecture).
(2010-10-15) Fermat's Last Theorem (Fermat 1637, Wiles 1995)
x^n + y^n = z^n   has no solutions in positive integers for n > 2.
Hanc marginis exiguitas non caperet.  This margin is too small to contain [my proof].
Pierre de Fermat (1601-1665)
Die Gleichung a^n = b^n + c^n für n > 2 in ganzen Zahlen [ist] nicht auflösbar.
[The equation a^n = b^n + c^n is not solvable in integers for n > 2.]   A. Ernest Wendt (1894)
Solutions for n = 2 are called Pythagorean
triples. They are fairly easy to enumerate systematically, starting with
x=3, y=4, z=5. Many special cases known in ancient times were recorded on
Chaldean clay tablets.
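They can all be generated from Euclid's classical parametrization (m²-k², 2mk, m²+k²) with m > k > 0 coprime and of opposite parities; a short sketch listing the primitive triples with small hypotenuse:

    # Primitive Pythagorean triples from Euclid's formula:  x = m^2-k^2,  y = 2mk,  z = m^2+k^2.
    from math import gcd

    def primitive_triples(max_z):
        out, m = [], 2
        while m * m + 1 <= max_z:
            for k in range(1, m):
                if (m - k) % 2 == 1 and gcd(m, k) == 1:
                    x, y, z = m * m - k * k, 2 * m * k, m * m + k * k
                    if z <= max_z:
                        out.append(tuple(sorted((x, y, z))))
            m += 1
        return sorted(out, key=lambda t: t[2])

    print(primitive_triples(50))    # (3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), ...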
In the Middle Ages, Leonardo Fibonacci proved that there are no solutions for n = 4 (Liber Quadratorum, 1225).
(2013-01-09) The Oesterlé-Masser ABC Conjecture
Mochizuki has proposed a 500-page proof whose philosophy is obscure.
The radical (rad) of a positive integer n is the
integer whose prime factorization consists of the same primes as that of n, each with multiplicity 1.
The function rad is a multiplicative function.
The ABC conjecture says that, for coprime positive integers a and b, the inequality
rad ( a b (a+b) )  >  (a+b)^(1-ε)
has infinitely many exceptions when ε = 0 but only finitely many when ε > 0.
The conjecture was formulated in 1985 by the French Bourbakist Joseph Oesterlé (b. 1954) and the British mathematician David Masser
(b. 1948).
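The radical is easy to compute by trial division, and so are the "exceptional" pairs of the ε = 0 case (coprime a and b with rad(ab(a+b)) ≤ a+b); a small sketch, with an arbitrary search bound:

    # Radical of n (product of its distinct prime factors) and a scan for "exceptional" pairs.
    from math import gcd

    def rad(n):
        r, p = 1, 2
        while p * p <= n:
            if n % p == 0:
                r *= p
                while n % p == 0:
                    n //= p
            p += 1
        return r * (n if n > 1 else 1)

    exceptions = [(a, b) for b in range(2, 100) for a in range(1, b)
                  if gcd(a, b) == 1 and rad(a * b * (a + b)) <= a + b]
    print(exceptions)               # starts with (1, 8) :  rad(1*8*9) = 6 <= 9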
(2016-05-30) Union-closed sets conjecture (Péter Frankl, 1979)
In a nontrivial finite union-closed family of finite sets,
is there always an element that belongs to at least half of the sets?
The "nontrivial" qualifier indicates that we're considering only families containing at least one
nonempty set. A family of sets is said to be union-closed when it contains the union of any two of its members (hence the union of any subfamily).
The conjecture clearly holds for families containing at least one singleton.
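For small families, the question can be settled by brute force; a minimal sketch checking one (arbitrarily chosen) union-closed family:

    # Brute-force check of Frankl's condition on one small union-closed family of sets.
    from itertools import combinations

    family = [frozenset(s) for s in ([1], [2, 3], [1, 2, 3], [2, 3, 4], [1, 2, 3, 4])]
    assert all(a | b in family for a, b in combinations(family, 2))   # family is union-closed

    elements = set().union(*family)
    best = max(elements, key=lambda x: sum(x in s for s in family))
    count = sum(best in s for s in family)
    print(best, "belongs to", count, "of", len(family), "sets")       # at least half of them, here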
(2016-05-30) The Egyptian conjecture (Straus & Erdös, 1948)
For n≥2, is 4/n always the sum of three unit fractions?
Besides 2/3 and 3/4, ancient Egyptians only allowed fractions with numerator 1
(unit fractions).
They represented other fractions as sums of those (without repetitions).
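A brute-force routine (with the usual bounds x ≤ 3n/4 and y ≤ 2/(4/n - 1/x); a sketch only, and unlike the strict Egyptian custom, repeated denominators are allowed) finds such a decomposition quickly for small n:

    # Find x <= y <= z with 4/n = 1/x + 1/y + 1/z (Erdos-Straus), by bounded brute force.
    from fractions import Fraction

    def erdos_straus(n):
        target = Fraction(4, n)
        for x in range(n // 4 + 1, 3 * n // 4 + 2):         # 1/x must be at least a third of 4/n
            r1 = target - Fraction(1, x)
            if r1 <= 0:
                continue
            for y in range(x, int(2 / r1) + 2):              # 1/y must be at least half the remainder
                r2 = r1 - Fraction(1, y)
                if r2 > 0 and r2.numerator == 1:             # remainder is already a unit fraction
                    return x, y, r2.denominator
        return None

    for n in range(2, 20):
        print(n, erdos_straus(n))        # e.g.  5 -> (2, 4, 20)  since  4/5 = 1/2 + 1/4 + 1/20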
(2017-11-13) Gilbreath's Conjecture (Proth 1878, Gilbreath 1958).
It's true of "sufficiently random" sequences. Is it true of primes ?
François Proth (in 1878) and Norman L. Gilbreath (in 1958) independently considered the transformation which derives a new sequence from a given one by taking the absolute values of the differences between consecutive terms.
When we start with the sequence of the prime numbers and apply that
process iteratively, we obtain the following intriguing table:
2  3  5  7  11  13  17  19  23  29  31  37  41  43  47  53  59  61  67  71  73  79  83
1  2  2  4  2  4  2  4  6  2  6  4  2  4  6  6  2  6  4  2  6  4  6
1  0  2  2  2  2  2  2  4  4  2  2  2  2  0  4  4  2  2  4  2  2
1  2  0  0  0  0  0  2  0  2  0  0  0  2  4  0  2  0  2  2  0  0
1  2  0  0  0  0  2  2  2  2  0  0  2  2  4  2  2  2  0  2  0
1  2  0  0  0  2  0  0  0  2  0  2  0  2  2  0  0  2  2  2  2
1  2  0  0  2  2  0  0  2  2  2  2  2  0  2  0  2  0  0  0
1  2  0  2  0  2  0  2  0  0  0  0  2  2  2  2  2  0  0  0
1  2  2  2  2  2  2  2  0  0  0  2  0  0  0  0  2  0  0
1  0  0  0  0  0  0  2  0  0  2  2  0  0  0  2  2  0  0
1  0  0  0  0  0  2  2  0  2  0  2  0  0  2  0  2  0
1  0  0  0  0  2  0  2  2  2  2  2  0  2  2  2  2  0
1  0  0  0  2  2  2  0  0  0  0  2  2  0  0  0  2
So far, all the successive sequences so tabulated start with a leftmost 1.
The Proth-Gilbreath conjecture is the unproved statement that it's always so.
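A short sketch rebuilds the table above from scratch and checks the leading 1 for every row derived from the primes below 10000 (the conjecture has been machine-checked far beyond that):

    # Proth-Gilbreath check: iterate absolute differences of the primes, watch the first column.
    def primes_upto(limit):
        sieve = [False, False] + [True] * (limit - 1)
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                for m in range(p * p, limit + 1, p):
                    sieve[m] = False
        return [n for n, flag in enumerate(sieve) if flag]

    row = primes_upto(10000)                                  # 1229 primes
    for depth in range(len(row) - 1):
        row = [abs(b - a) for a, b in zip(row, row[1:])]
        assert row[0] == 1, "leading term differs from 1 at depth %d" % (depth + 1)
    print("Leading term is 1 for every difference row derived from the primes below 10000")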
In the 1970s,
Bill Thurston (1946-2012) and
John Mather (1942-2017)
proved a highly nontrivial result:
In the group Diff^r(M) of the C^r-diffeomorphisms of a compact manifold M, the connected component of the identity is a simple subgroup, in most cases...
The result need not hold when r is equal to dim(M)+1. In particular, the case of Diff^2(S^1) remains completely open:
Do the diffeomorphisms of class C^2 over the circle form a simple group?
The related group denoted Diff^{1+bv}_+(S^1) is not simple, as pointed out by the late John Mather. This one is a well-studied group consisting of the orientation-preserving diffeomorphisms f of the circle which are C^1 (i.e., the first derivative f ' is a continuous function) with the added condition that Log f ' is of bounded variation (French: à variation bornée).
That group is fundamental in dynamical systems,
as it meets the premises of
Denjoy's theorem, which was
established in 1932 by Arnaud Denjoy (1884-1974),
the thesis advisor of the Bourbakist Gustave Choquet (1915-2006).