(2006-12-26) Identity (I), Permutation Matrices, Exchange Matrix (J)
The various permutations of the columns of the identity (I).
Those square matrices whose elements vanish except for
one unit element per row and per column
are known as permutation matrices.
Each corresponds to a permutation of the columns of the identity matrix (I)
and they form collectively
a multiplicative group isomorphic to the
symmetric group on a set of n elements.
One particular permutation matrix is the so-called exchange matrix (J)
which is the "mirror image" of the identity matrix itself. For example:
I6 =
    [ 1  0  0  0  0  0 ]
    [ 0  1  0  0  0  0 ]
    [ 0  0  1  0  0  0 ]
    [ 0  0  0  1  0  0 ]
    [ 0  0  0  0  1  0 ]
    [ 0  0  0  0  0  1 ]

J6 =
    [ 0  0  0  0  0  1 ]
    [ 0  0  0  0  1  0 ]
    [ 0  0  0  1  0  0 ]
    [ 0  0  1  0  0  0 ]
    [ 0  1  0  0  0  0 ]
    [ 1  0  0  0  0  0 ]
In spite of this superficial finite-dimensional resemblance
with the identity matrix, there's no nice generalization
of the exchange matrix to infinitely many dimensions.
(2023-05-31) Gil Strang's Matrix
The favorite square matrix of professor W. Gilbert Strang (MIT).
S6 =
    [  2  -1   0   0   0   0 ]
    [ -1   2  -1   0   0   0 ]
    [  0  -1   2  -1   0   0 ]
    [  0   0  -1   2  -1   0 ]
    [  0   0   0  -1   2  -1 ]
    [  0   0   0   0  -1   2 ]
Of particular interest is Strang's matrix S11 because it has 121 elements...
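This second-difference pattern is easy to generate for any size. Here's a minimal numpy sketch (the helper name strang is just a convenient label, not a standard function):

    import numpy as np

    def strang(n):
        # Second-difference matrix shown above: 2 on the main diagonal, -1 just above and below it.
        return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

    print(np.linalg.det(strang(6)))   # 7.0  (the determinant of the n by n matrix is n+1)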
(2007-02-13) Defining matrix operations
from operations on elements
Dirac's notation is convenient to express the elements of a matrix A.
The scalar at row i and column j of matrix A is often denoted a_ij.
Similar ad hoc notations are adequate when
juggling with just a few distinct matrices, in spite of the fact that
the name of the doubly-subscripted symbol is not directly related to the name
of the matrix itself (A)...
That flaw is avoided with the nice self-contained notation
(due to P.A.M. Dirac)
shown on the right-hand side of the following equation:
a_ij   =   < i | A | j >
This is sometimes called the element of A between i and j.
Abstract meanings are given to separate parts of that notation:
| j > is called a ket
(or column vector)
in the canonical orthonormal basis
and < i | is the dual of
such a thing (it's called a bra or row vector).
Originally, the words bra and ket
were just the two halves of the word bracket,
a mnemonic invented by Dirac as a reminder that the row-vector comes
before the column-vector in the bracket notation normally used to
denote the inner product, which forms a scalar (with the opposite order, you'd form a square matrix instead).
These interpretations are important and do repay study, but they're not a prerequisite
to use the notation as a whole.
For example, the following equation defines the
sum A+B of two matrices A and B by giving
every element of it, in terms of a simple addition of scalars :
< i | A+B | j >   =   < i | A | j >  +  < i | B | j >
More interestingly, here's how to define
the product A B of two matrices:
< i | A B | j >   =   Σ_k  < i | A | k > < k | B | j >
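That rule can be transcribed literally into code. The following naive sketch is for illustration only (it's not how large matrices are multiplied efficiently):

    def mat_mul(A, B):
        # < i | A B | j >  =  sum over k of  < i | A | k > < k | B | j >
        rows, inner, cols = len(A), len(B), len(B[0])
        return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
                for i in range(rows)]

    print(mat_mul([[1, 2], [3, 4]], [[0, 1], [1, 0]]))   # [[2, 1], [4, 3]]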
That general rule for matrix multiplication was made explicit in 1812
(using more traditional index notations) by the French mathematician
Jacques Binet
(1786-1856; X1804) who is also remembered for two utterly unrelated things,
both known as Binet's formulas :
1) Explicit expression for the n-th
term of a sequence obeying a linear recurrence relation (e.g., the Fibonacci sequence).
2) In celestial mechanics,
Binet's formulas
allow an easy proof that the orbit of two gravitating bodies must be a
conic section.
If we denote by z* the conjugate a-ib of the complex number z = a+ib (z = z* when z is a real number), then we may define as follows the conjugate transpose A* of the complex matrix A:

∀ i ,  ∀ j ,      < i | A* | j >   =   < j | A | i >*
This operation is far more important than transposition without conjugation
(which is virtually useless in the complex realm).
With respect to A, the matrix A* is variously called
conjugate transpose,
Hermitian conjugate,
adjoint,
Hermitian adjoint or dual.
(It's called transpose for real matrices.)
Here's how to prove for matrices the relation which
gives the adjoint of a product:
(A B)* = B* A*
(this holds for
whatever is called adjoint or dual among other
multiplicative objects for which the term is defined).
< i | (A B)* | j >
   =   < j | A B | i >*
   =   Σ_k  ( < j | A | k > < k | B | i > )*
   =   Σ_k  < j | A | k >*  < k | B | i >*
   =   Σ_k  < k | A* | j >  < i | B* | k >
   =   Σ_k  < i | B* | k >  < k | A* | j >
   =   < i | B* A* | j >
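Here's a quick numerical check of that identity (a hedged numpy sketch; the two random complex matrices are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    adj = lambda M: M.conj().T                        # conjugate transpose (Hermitian adjoint)
    print(np.allclose(adj(A @ B), adj(B) @ adj(A)))   # True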
(2019-01-29) Triangular Matrices
The concepts we introduced in the previous
section are natural in the sense that they
do not depend on the structure of the set
used to index matrix coefficients. In particular,
any order between distinct indices
was utterly irrelevant in those definitions.
Incidentally, this is true because we only consider matrices of finite dimension over a ring of coefficients where addition is required to be commutative. If either condition fails, the summation symbol Σ (subscripted by the dummy name of the summation index) is no longer well-defined. Even with a commutative addition, an infinite sum could depend on the ordering of indices.
The identity matrix I has a natural definition
in this sense while other permutation matrices
do not (including the mirror image of I featured above).
Transposition is one obvious anti-isomorphism
between upper-triangular matrices and lower-triangular matrices.
A straight isomorphism can be defined using any involution σ of the set of indices. In dimension n+1, if we use the integers from 0 to n as our set of indices, σ(i) = n-i will do. The induced involution over the set of matrices is defined by:

< i | σ(A) | j >   =   < σ(i) | A | σ(j) >

So defined, σ is a ring isomorphism between upper-triangular matrices and lower-triangular matrices.
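In matrix terms, that involution is just conjugation by the exchange matrix J. Here's a small numpy check of both claims (0-based indices, so σ(i) = n-1-i):

    import numpy as np

    n = 4
    J = np.fliplr(np.eye(n))                 # exchange matrix (mirror image of I)
    sigma = lambda A: J @ A @ J              # < i | sigma(A) | j >  =  < sigma(i) | A | sigma(j) >

    U1 = np.triu(np.arange(1.0, 17.0).reshape(n, n))          # two upper-triangular examples
    U2 = np.triu(np.ones((n, n)))
    print(np.allclose(sigma(U1), np.tril(sigma(U1))))          # sigma(U1) is lower-triangular
    print(np.allclose(sigma(U1 @ U2), sigma(U1) @ sigma(U2)))  # sigma preserves products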
(2010-08-10) Determinant of a Matrix
(Seki, 1683)
It's defined as a completely antisymmetrical multilinear form.
In a vector space of dimension n,
a completely antisymmetrical form is a scalar-valued function
of n vectors, which is linear with respect to every argument and
has the property of being unchanged by any
even permutation of the arguments
and negated by any odd one.
Even and odd permutations have
signature +1 and -1, respectively.
Permutations are bijective functions of a finite set
onto itself.
The signature of a function σ which is not bijective is zero  ( sgn(σ) = 0 ).
In dimension 3 for example,
such a function f would be trilinear and verify:
f ( v1 , v2 , v3 )   =   f ( v2 , v3 , v1 )   =   -  f ( v2 , v1 , v3 )
A key observation is that all such functions are proportional to each other.
Leibniz's formula entails a sum of n!
terms, each a product of n factors:
det ( A )   =   Σ_{σ ∈ Sn}   sgn(σ)   Π_{i=1..n}   < i | A | σ(i) >
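Here's a direct transcription of Leibniz's formula (a minimal sketch; the n! growth makes it impractical beyond small sizes):

    import math
    from itertools import permutations

    def sgn(p):
        # Signature of a permutation given as a tuple: +1 if even, -1 if odd.
        s = 1
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                if p[i] > p[j]:
                    s = -s
        return s

    def det(A):
        # Leibniz formula: sum, over all permutations p, of sgn(p) times the product of the A[i][p[i]].
        n = len(A)
        return sum(sgn(p) * math.prod(A[i][p[i]] for i in range(n))
                   for p in permutations(range(n)))

    print(det([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))   # 4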
(2012-09-18) Minors & Cofactors
A minor is a determinant obtained by deleting equally many rows and columns.
The number of deleted rows and columns can be zero
(the whole determinant is a minor of itself).
An n-th minor is obtained by deleting n rows and n columns.
Of particular interest are the first minors of a
determinant, which are obtained by deleting one row (the i-th one)
and one column (the j-th one).
(2010-08-12) Adjugate (classic adjoint):
Transpose of the cofactors.
The adjugate of A verifies adj(A) A
= A adj(A) = det(A) I
The adjugate adj(A) of a matrix A
may no longer be called its "adjoint", which is now
a synonym of transpose conjugate
(denoted A* or A†).
A matrix A is invertible iff it has nonzero determinant, in which case:

A^(-1)   =   adj(A) / det(A)
As the determinant of the matrix A is an homogeneous polynomial of its elements, it can be written uniquely in the following way, where the polynomials A_ij and B_ij do not depend on the element a_ij:

det(A)   =   a_ij A_ij  +  B_ij

By definition, A_ij is the cofactor of a_ij.
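Here's a quick check of those relations (a hedged sympy sketch; adjugate, det and inv are built-in methods, and the example matrix is arbitrary):

    import sympy as sp

    A = sp.Matrix([[1, 2, 3], [0, 1, 4], [5, 6, 0]])    # an invertible example (det = 1)
    print(A.adjugate() * A == A.det() * sp.eye(3))      # True:  adj(A) A = det(A) I
    print(A.inv() == A.adjugate() / A.det())            # True:  the inverse is adj(A)/det(A)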
(2010-08-12) Eigenvectors and eigenvalues
The image of an eigenvector is proportional to it.
Finding Ellipses by
Ulrich Daepp, Pamela Gorkin, Andrew Shaffer, Karl Voss (MAA Press, 2018-10-30)
(2010-08-12) Characteristic Polynomial P of a square matrix
It's independent of the choice of basis.
Trace :
The trace of a matrix is the sum of the elements in its diagonal:
tr ( A )   =   Σ_{i=1..n}   < i | A | i >
It's also the coefficient of (-x)^(n-1) in the characteristic polynomial of the operator.
As such, the trace is an intrinsic value associated to the operator itself,
irrespective of the choice of a coordinate basis.
So, we may talk freely about
the trace of an operator (or the trace of a tensor).
(2017-12-03) Minimal Polynomial Q of a Square Matrix
A polynomial Q easy to obtain from the characteristic polynomial P.
Defining Q as the least monic polynomial having the same roots as P,
we obtain it as P/R where R is the greatest (monic) common divisor
of P and its derivative P'.
It's not even necessary to know the roots of P to perform that calculation.
When the characteristic polynomial P is irreducible, it's equal to the minimal polynomial Q.
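Here's that calculation in sympy (a minimal sketch on an arbitrary diagonalizable example; sp.quo is the polynomial quotient, playing the role of P/R):

    import sympy as sp

    x = sp.symbols('x')
    M = sp.Matrix([[3, 0, 0], [0, 3, 0], [0, 0, 2]])    # example with a repeated eigenvalue
    P = M.charpoly(x).as_expr()                         # characteristic polynomial: (x-3)**2 * (x-2)
    R = sp.gcd(P, sp.diff(P, x))                        # greatest monic common divisor of P and P'
    Q = sp.quo(P, R)                                    # Q = P/R, the least monic polynomial with P's roots
    print(sp.factor(Q))                                 # (x - 2)*(x - 3), up to the order of factors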
(2010-08-12) Cayley-Hamilton Theorem (1855, 1858, 1878)
Any matrix is a root of its own characteristic polynomial.
That general theorem was first stated in 1858 by
Arthur Cayley who only checked it for
3-dimensional matrices.
A special case (3-dimensional rotations)
had been stated by Hamilton three years earlier.
The first general proof was published in 1878 by
Georg Frobenius (1849-1917).
As the coefficients of the characteristic polynomial of a matrix A are polynomial functions of its elements, the assertion states that a particular set of n² polynomial functions (of degree n) in n² variables vanishes.
To prove that the coefficients of all the relevant polynomials vanish, it suffices to
establish that the associated polynomial functions vanish on a sufficiently large set
of points. In particular, a dense
set of points will do
(the zeroes of any continuous function form a closed set).
For an n-dimensional vector space over complex numbers,
the set of matrices with distinct eigenvalues is dense
among all square matrices.
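Here's a quick numerical illustration (a hedged numpy sketch on an arbitrary 3 by 3 example; np.poly returns the coefficients of det(xI - A), highest degree first):

    import numpy as np

    A = np.array([[2., 1., 0.],
                  [0., 3., 1.],
                  [1., 0., 1.]])
    c = np.poly(A)                                   # coefficients of the characteristic polynomial
    P_of_A = sum(ck * np.linalg.matrix_power(A, k)
                 for ck, k in zip(c, range(len(c) - 1, -1, -1)))
    print(np.allclose(P_of_A, 0))                    # True: A is a root of its own characteristic polynomial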
(2017-12-03) Jordan-Chevalley-Dunford decomposition: M = D+N
Chevalley's algorithm finds the unique decomposition
of a matrix into the sum of its diagonalizable
and nilpotent parts, which commute.
History :
Although Camille Jordan published an early
form of it in 1870, that result took a full century to make its way into
a graduate textbook (Dunford & Schwartz, 1971).
It's now common knowledge among graduate students and it has entered the
standard undergraduate curriculum for linear algebra.
What is still lost on most students and some faculty is the
remarkable fact that the decomposition can be effectively performed
without any knowledge of the eigenvalues of the matrix to be decomposed.
The ground field need not even be
algebraically closed
(like the complex numbers).
A perfect field is enough
(e,g., the rationals or anyfinite field).
That crucial observation is due to Claude Chevalley (1951)
which fully justifies the name of Jordan-Chevalley decomposition,
which is now commonly used. Here's the chronology of the four key publications:
1870: Camille Jordan (1838-1922; X1855)
Traité des substitutions et des équations algébriques (Paris, 1870).
1951: Claude Chevalley (1909-1984)
Théorie des groupes de Lie, II: Groupes algébriques
(Hermann, Paris, 1951).
1971: Nelson Dunford & Jacob T. Schwartz
(1930-2009)
Linear Operators, III. pp. 1937-1941 & 2096 (Wiley, 1971).
Since the Dunford & Schwartz textbook (1971) was so instrumental in its
popularization, the concept was known for a while as
Dunford's decomposition, a term which is now being deprecated.
(Dunford's earliest research on the topic appeared 3 years after Chevalley's final results.)
As a clear example of reverse-chauvinism, the French have held on
to the attribution to Dunford longer than anyone else...
Proof :
In a context where the matrix could be put in
Jordan's normal form, the existence and uniqueness
of the Jordan-Chevalley decomposition are fairly trivial.
However, as advertised, we aim for a stronger result and won't assume
that the characteristic polynomial or the minimal polynomial can be fully factored.
We merely assume that the ground field is a perfect field. This is all the following algorithm needs to uniquely determine the diagonalizable part D of the matrix M. The nilpotent part N is then obtained as M-D.
The ground field need not be algebraically closed
(as Chevalley's algorithm doesn't require that).
The last step will be to remark that N and D commute because they're
both obtained as polynomial functions of M.
Chevalley's Algorithm :
Let M be the matrix to decompose.
Let Q be its minimal polynomial.
The diagonalizable part D of M is a zero of Q.
In elementary numerical analysis, the
Newton-Raphson method
can be used to find a numerical zero of a polynomial Q (or any
other differentiable function) as a limit of a sequence of approximations,
defined by recurrence:
x_{n+1}   =   x_n  -  Q ( x_n ) / Q' ( x_n )
If the starting point is close enough to the root, the sequence so defined
converges quadratically to that root
(i.e., the number of correct digits roughly doubles with each iteration).
Chevalley's idea was to apply the same strategy and the same iteration
formula to matrices instead of numbers.
The nice surprise is that the convergence is even much
better, as the limit is reached
in finitely many steps (no more than the
binary logarithm of the dimension of the matrix).
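Here's a sketch of that iteration in exact rational arithmetic with sympy (the function names and the example matrix are mine; Q is the polynomial defined in the previous section, and Q'(D) stays invertible throughout, as Chevalley's argument guarantees over a perfect field):

    import sympy as sp

    def poly_at(coeffs, M):
        # Evaluate a polynomial (coefficients listed highest degree first) at a square matrix (Horner's rule).
        R = sp.zeros(M.rows, M.rows)
        for c in coeffs:
            R = R * M + c * sp.eye(M.rows)
        return R

    def jordan_chevalley(M):
        # Chevalley's Newton-like iteration, sketched over the rationals.
        x = sp.symbols('x')
        P = M.charpoly(x).as_expr()
        Q = sp.quo(P, sp.gcd(P, sp.diff(P, x)))      # square-free part of P: the polynomial D must satisfy
        cQ  = sp.Poly(Q, x).all_coeffs()
        cQp = sp.Poly(sp.diff(Q, x), x).all_coeffs()
        D = sp.Matrix(M)
        while poly_at(cQ, D) != sp.zeros(M.rows, M.rows):
            D = D - poly_at(cQp, D).inv() * poly_at(cQ, D)   # Newton step:  D  <-  D - Q(D)/Q'(D)
        return D, M - D                                      # diagonalizable part, nilpotent part

    M = sp.Matrix([[3, 1, 0], [0, 3, 0], [0, 0, 2]])
    D, N = jordan_chevalley(M)
    print(D, N, D*N == N*D, sep='\n')                        # N is nilpotent and commutes with D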
Let's show that:
(2017-01-04) Perron Vector of a Matrix with Positive Elements
The spectral radius ρ
of such a matrix is a simple positive eigenvalue.
The spectral radius of a square matrix is the largest modulus of
its eigenvalues. A simple eigenvalue is a simple root
of the characteristic polynomial. It's associated with an eigenspace of dimension 1.
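For a quick numerical illustration of the theorem (a hedged numpy sketch on an arbitrary 2 by 2 positive matrix):

    import numpy as np

    A = np.array([[1., 2.], [3., 4.]])     # all elements positive
    w, V = np.linalg.eig(A)
    i = np.argmax(abs(w))                  # eigenvalue of largest modulus (the spectral radius)
    print(w)                               # about -0.372 and 5.372: the largest one is simple and positive
    print(V[:, i] / V[0, i])               # the Perron eigenvector, rescaled so that its entries are positive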
(2006-01-18) Vandermonde Matrices
A Vandermonde matrix is also sometimes called an alternant matrix.
The Vandermonde matrix associated with a sequence (xn ) is the square matrix
of the successive powers of the elements in that sequence.
The element on row i and column j is
(xj)^i.
[Some authors consider the transpose of this.]
M_{n+1} =
    [ 1      1      1      1      ...    1    ]
    [ x0     x1     x2     x3     ...    xn   ]
    [ x0^2   x1^2   x2^2   x3^2   ...    xn^2 ]
    [ x0^3   x1^3   x2^3   x3^3   ...    xn^3 ]
    [ ...    ...    ...    ...    ...    ...  ]
    [ x0^n   x1^n   x2^n   x3^n   ...    xn^n ]
The determinant of the above Vandermonde matrix has a nice expression:
det ( M_{n+1} )   =   Π_{i=0..n-1}   Π_{j=i+1..n}   ( xj - xi )
Proof :
Both sides of the equation
are polynomials of degree n(n+1)/2
in n+1 variables, which vanish when two of the variables are equal.
Since such polynomials are necessarily proportional, they must be
equal if some pair of matching coefficients are equal.
That's easily proven for the last pair, by induction on n.
(HINT: The coefficient of the last variable raised
to the n-th power is the Vandermonde determinant for the n previous variables.)
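Here's a quick numerical check of that formula (a hedged numpy sketch; the sample points are arbitrary):

    import numpy as np
    from itertools import combinations

    x = np.array([2., 3., 5., 7.])                 # arbitrary points x0 ... xn
    M = np.vander(x, increasing=True).T            # element on row i and column j is x[j] ** i
    lhs = np.linalg.det(M)
    rhs = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
    print(lhs, rhs)                                # both are 240.0 (up to rounding)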
Earlier authors
had stumbled upon determinants when solving several
simultaneous linear equations in as many unknowns.
Those include Cardano (in the 2 by 2 case only), Leibniz, Cramer (1750)
and Bézout (1764).
However, Vandermonde published the first investigation (1771)
of determinants in their own right, as functions of the coefficients of a matrix.
The importance of the idea was immediately recognized by
Laplace (1772) and Lagrange (1773).
Curiously, the determinant concept was made clear before the concept of a matrix itself.
The modern name "determinant" was coined by Gauss in 1801.
When it does not vanish, this quantity "determines" that a system of equations has
a unique solution
(in which case it's called a "Cramer system", in high-school parlance).
Strangely enough, Vandermonde himself
never mentioned the above type of matrices, now named after him.
For lack of a better explanation, it has been speculated (by
Lebesgue
and others)
that whoever attributed this enduring "credit" mistook one of Vandermonde's
superscripts for an exponent !
What's perhaps the greatest contribution of Vandermonde is often overlooked:
Vandermonde came up with the idea of studying functions which are
invariant under any permutation of a given polynomial's roots.
He did so in the first of only four mathematical papers he ever published:
Mémoire sur la résolution des équations (1770).
Arguably, this publication marks the very beginning of modern algebra.
This idea of Vandermonde's would ultimately lead to the celebrated Abel-Ruffini
theorem (which states the impossibility of a general solution by radicals for algebraic
equations of degree 5 or more). The subsequent work of
Lagrange, Ruffini,
Abel and
Galois begat
Group Theory and
Field Theory.
Xinglongdada
(2012-05-04, e-mail)
A Totally Positive Matrix
If  0 < a1 < a2 < ... < an  and  0 ≤ λ1 < λ2 < ... < λn,  prove that the determinant of the following matrix An is positive.
An =
    [ a1^λ1   a2^λ1   a3^λ1   ...   an^λ1 ]
    [ a1^λ2   a2^λ2   a3^λ2   ...   an^λ2 ]
    [ ...     ...     ...     ...   ...   ]
    [ a1^λn   a2^λn   a3^λn   ...   an^λn ]
Such a matrix is known as a generalized Vandermonde matrix.
It's a classical example of a totally positive matrix
(i.e., a real matrix whose minors are all positive)
which is to say that we always obtain a square matrix of positive
determinant by deleting from it an equal number
(possibly zero) of rows and columns.
(Note that, if a matrix is a generalized Vandermonde matrix, so are all its submatrices.)
The fact that all generalized Vandermonde determinants are positive happens to be
a consequence of the fact that they are all nonzero...
Indeed, there's
a continuous path (depending on a parameter u that goes continuously
from 0 to 1) which goes continuously from the above determinant to an
ordinary Vandermonde determinant (which is clearly positive by virtue
of the above formula which expresses it as a product
of positive factors). The determinant at point u along that path is obtained
by replacing the exponent
λi with the positive quantity:

λ'i   =   (1-u) λi  +  u (i-1)
For any u in [0,1], the above inequalities between exponents are preserved.
A continuous function of u over [0,1] cannot go from a negative to a positive
value without vanishing at some point.
So, to show that
all generalized Vandermonde determinants are positive, we only need to establish that
none of them is zero. Let's prove that:
Proof :
We proceed by induction on the size n
of the matrix (n = 1 being trivial).
Let's assume (by induction hypothesis) that there are no
vanishing generalized Vandermonde determinants of size less than n.
Then, if we had a vanishing generalized Vandermonde determinant of size n,
there would be a linear combination of different powers of x vanishing
for at least n positive values of x (namely,
a1 ...an).
That is to say that we'd have coefficients ci
(not all vanishing) for which:
0   =   c1 x^λ1  +  c2 x^λ2  +  ...  +  cn x^λn

for at least n positive values of x.  So,

0   =   c1  +  c2 x^(λ2-λ1)  +  ...  +  cn x^(λn-λ1)

for those same values.
By Rolle's theorem, the
derivative of the right-hand side would
vanish at least once in each of the n-1 intervals separating the n known distinct roots.
So would that derivative multiplied into the positive quantity x. Therefore:
0   =   c2 (λ2-λ1) x^(λ2-λ1)  +  ...  +  cn (λn-λ1) x^(λn-λ1)

for at least n-1 positive values of x.  Since those n-1 coefficients can't all vanish (or else the previous relation would force c1 = 0 as well), this contradicts the induction hypothesis, which completes the proof.
(2012-10-07) Cholesky reduction of a Hermitian matrix:
There's a lower-triangular matrix L such that L L* = A
By definition, a lower-triangular matrix is a matrix with
only zero elements above the diagonal.
This decomposition was introduced by the French engineer
André-Louis Cholesky
in the special case where the matrix A is a positive-definite
symmetrical real matrix. In that case,
the matrix L has real coefficients and nonzero diagonal elements.
The matrix L isn't unique, as
-L,
(L*)t and -(L*)t are satisfactory too.
A related construction which applies to any
definite Hermitian matrix
(not necessarily positive) involves a diagonal real matrix D
and a lower-triangular matrix L
whose diagonal elements are all equal to 1 (unity).
L D L* = A
Conversely, any matrix of this form is clearly Hermitian (A* = A).
In the special case where A is real, L
and D are real too.
The requirement of definiteness (no zero eigenvalue) can be dropped.
It's only useful to make the actual construction of L more
straightforward.
Applications :
Because the decomposition can be performed quite efficiently,
it can be used to solve a symmetric system of linear equations in real numbers
much faster than with general-purpose Gaussian elimination
(such symmetrical systems arise in the linear least squares method, in particular):
L L* x = b
The triangular systems
Ly = b and
L* x = y
are sequentially solved.
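Here's how that looks with scipy (a hedged sketch on an arbitrary positive-definite system):

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    A = np.array([[4., 2., 2.],
                  [2., 5., 3.],
                  [2., 3., 6.]])                  # symmetrical positive-definite
    b = np.array([1., 2., 3.])

    L = cholesky(A, lower=True)                   # A = L L*
    y = solve_triangular(L, b, lower=True)        # forward substitution:  L y = b
    x = solve_triangular(L.T, y, lower=False)     # back substitution:     L* x = y
    print(np.allclose(A @ x, b))                  # True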
Cholesky's method was published posthumously in 1924.
It received little or no attention until World War II,
when Jack Todd
taught it in his analysis course at King's College, London.
In 1948, Alan Turing proved the
numerical stability of the procedure (after empirical results published the same year by
L. Fox, H.D. Huskey & J.H. Wilkinson).
(2006-01-18) Hankel Matrices & Hankel Transform
A Hankel matrix is a matrix whose skew-diagonals are constant.
(Such a matrix is also known as persymmetric or orthosymmetric.)
In other words, the value of the element on the ith row and jth
column of a persymmetric matrix
depends only on the sum of the indices (i+j).
Mn =
    [ x0       x1       x2       x3       ...   x_{n-1}  ]
    [ x1       x2       x3       x4       ...   xn       ]
    [ x2       x3       x4       x5       ...   x_{n+1}  ]
    [ x3       x4       x5       x6       ...   x_{n+2}  ]
    [ ...      ...      ...      ...      ...   ...      ]
    [ x_{n-1}  xn       x_{n+1}  x_{n+2}  ...   x_{2n-2} ]
With those notations, the Hankel transform of the sequence (xn) is defined to be the sequence  yn = det ( M_{n+1} ),  which starts with  y0 = x0.
Unfortunately, the above is not the only
thing called Hankel transform...
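Here's a small scipy sketch of that first transform (the helper name is mine; the test sequence is the Catalan numbers, whose Hankel transform is famously the all-ones sequence):

    import numpy as np
    from scipy.linalg import hankel

    def hankel_transform(x):
        # y_n = determinant of the (n+1) by (n+1) Hankel matrix built from x_0 ... x_2n
        return [int(round(np.linalg.det(hankel(x[:n+1], x[n:2*n+1]))))
                for n in range((len(x) + 1) // 2)]

    catalan = [1, 1, 2, 5, 14, 42, 132, 429, 1430]
    print(hankel_transform(catalan))              # [1, 1, 1, 1, 1]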
(2006-01-18) Catberg Matrix
The Hankel matrix of the reciprocals of Catalan numbers...
Its element on the ith row and jth column is
(i+j+1) / C(2i+2j , i+j).
(2007-02-16) Sign Matrices, Phase Matrices
The elements of a phase matrix are complex numbers of modulus 1.
Thus, the elements of a phase matrix are complex numbers of the form:

e^(iθ)   =   cos θ  +  i sin θ        [ where θ is real ]
If such a matrix is real, it's called a sign matrix
(its elements are +1 or -1).
(2007-01-24) Hadamard Matrices
( real or complex coefficients )
An Hadamard matrix consists of orthogonal
rows of unit elements.
Introduction (real coefficients) :
In the realm of real numbers,
Hadamard matrices are square matrices whose elements are
either -1 or +1 and whose rows are pairwise orthogonal.
They're named after the Frenchman Jacques Hadamard (1865-1963), the very influential long-lived analyst who gave
functional analysis its name (1910) and will always be
remembered for his 1896 proof of the Prime Number Theorem,
simultaneous with the independent proof from Charles de la
Vallée-Poussin
(1866-1962).
In 1893, Hadamard did write an important paper about those matrices,
but they were actually first investigated in 1867,
under the quaint name of anallagmatic pavements,
by James Joseph Sylvester (1814-1897).
Such matrices are often represented graphically as square mosaics of
square tiles that are either light or dark...
Here are special Hadamard matrices,
known as Walsh matrices, whose orders are
powers of 2 (1, 2, 4, 8, 16).
In the so-called normalized cases,
one row and one column have a uniform color which is traditionally light
(so that there are ½ n(n+1) light squares
and ½ n(n-1) dark ones).
The exact color-code can't be fixed (as the opposite of
an Hadamard matrix
is also Hadamard) but we usually assume that the dominant (light) color stands
for the value +1. When its first column and its first row have a uniform value
of +1, the matrix is said to be dephased
(a jargon term discussed below,
originally applied to complex Hadamard matrices c. 1970).
The above serves to illustrate a remark of Sylvester (1867)
that an Hadamard matrix of order 2n
may be obtained from a known Hadamard matrix H of order n,
simply by juxtaposing four copies of H and negating one of those:
    [  H    H ]
    [  H   -H ]
More generally, an Hadamard matrix of order mn
may be constructed from an Hadamard matrix K of order m and
an Hadamard matrix H of order n by substituting in K all occurrences of
+1 by H and all occurrences of -1 by -H
(the above is the special case m = 2).
In other words, the
Kronecker product
of two Hadamard matrices is also an Hadamard matrix...
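Here's a numpy sketch of that Kronecker construction (the helper name walsh is mine):

    import numpy as np

    H2 = np.array([[1,  1],
                   [1, -1]])                      # Walsh matrix of order 2

    def walsh(k):
        # Hadamard (Walsh) matrix of order 2**k, by repeated Kronecker products.
        H = np.array([[1]])
        for _ in range(k):
            H = np.kron(H2, H)
        return H

    H8 = walsh(3)
    print(np.allclose(H8 @ H8.T, 8 * np.eye(8)))  # True: the rows are pairwise orthogonal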
Abstract Algebraic Definition (real or complex realm) :
Hadamard matrices of order n may be defined as square matrices of order n
whose elements have unit length ( modulus )
and which verify the relation:
M M* = n In
In this, M* is the transpose of a real matrix M or, more generally,
the transpose conjugate
of a complex one. Matrices with complex
elements of unit moduli which satisfy the above relation are called
complex Hadamard matrices.
They were introduced in 1970 by Richard J. Turyn
(1930-?)
as interesting generalizations of the more traditional (real) Hadamard matrices,
which can be considered to be merely of a restricted subtype.
A (complex) Hadamard matrix is trivially obtained from another by using one or
several of the following types of transformations:
Multiply a row or a column by an element of unit modulus (like -1).
Replace a row or a column by its (element-wise) conjugate.
Permute rows or permute columns.
Matrices so obtained from each other are said to be
Hadamard-equivalent.
A complex Hadamard matrix where all first-row and first-column elements
are equal to 1 is said to be dephased.
Any complex Hadamard matrix is equivalent to a dephased one
(HINT: Use the first type of transformations.)
It's not difficult to prove that all complex Hadamard matrices of order 2 are Hadamard-equivalent to the aforementioned Walsh matrix of order 2:

    [ 1    1 ]
    [ 1   -1 ]
Similarly, there's essentially only one complex Hadamard matrix of order 3. It involves a primitive cube root of unity, denoted ω:

    [ 1    1    1  ]
    [ 1    ω    ω² ]        where   ω  =  ½ ( -1 + i √3 )
    [ 1    ω²   ω  ]
That's an example of a so-called Butson-Hadamard matrix,
namely a complex Hadamard matrix of order n whose elements are
p-th roots of unity. Those were first discussed in 1961 by
Alton T. Butson
(1926-1997)
before Turyn's general approach to complex Hadamard matrices (1970).
For any order n, one example of a Butson-Hadamard matrix
is given by the Vandermonde matrix
of all the n-th roots of unity
(which is Hadamard-equivalent
to their circulant matrix).
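Here's a quick numpy check of that example (a hedged sketch; the order n = 5 is arbitrary):

    import numpy as np

    n = 5
    w = np.exp(2j * np.pi / n)                         # a primitive n-th root of unity
    F = np.array([[w**(i*j) for j in range(n)] for i in range(n)])   # Vandermonde matrix of the n-th roots of unity
    print(np.allclose(F @ F.conj().T, n * np.eye(n)))  # True:  F F* = n I, so F is a complex Hadamard matrix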
Another example is provided, for any
composite
order, by extending to complex Hadamard matrices what we've
already remarked about real ones, namely that the Kronecker product of
two (complex) Hadamard matrices is a (complex) Hadamard matrix.
In particular, if H is a complex Hadamard matrix of order m,
a complex Hadamard matrix of order 3m is obtained from the following pattern:
    [ H      H      H    ]
    [ H      ω H    ω² H ]        where   ω  =  ½ ( -1 + i √3 )
    [ H      ω² H   ω H  ]
This can be applied repeatedly to obtain the following ternary equivalent of
the aforementioned
(binary) Walsh matrices, using a three-color code:
For any odd prime p,
a complex Hadamard matrix whose elements are p-th
roots of unity must have an order which is a multiple of p.
Conversely, it is conjectured that such matrices exist for any order which is a multiple
of p.
The existence of ordinary (real) Hadamard matrices
(as discussed next) can be construed to
be the case p=2,
for which a slightly different rule holds...
Existence of (Real) Hadamard Matrices of Order n :
Back to ordinary Hadamard matrices (whose elements are either -1 or +1).
The empty matrix (of order 0) is vacuously
an Hadamard matrix.
The order of any Hadamard matrix
must be 1, 2, or a multiple of 4 (Hadamard, 1893).
Conversely,
the Hadamard-Paley conjecture (better known as the Hadamard Conjecture )
states that there are Hadamard matrices of all such orders...
In 1933, Raymond Paley used finite fields
to construct Hadamard matrices of all orders of the form q+1
(resp. 2q+2) where q is the power of
a prime congruent to 3 (resp. 1) modulo 4.
Paley's Theorem states that Hadamard matrices
can be constructed (using
Paley's construction
and the aforementioned construction of Sylvester)
for all positive orders divisible by 4 except
those in the following sequence (i.e.,
multiples of 4 not equal to a power of 2 multiplied
by q+1, for some power q of an odd prime):
For many of those remaining orders,
Hadamard matrices have been discovered by other methods.
Here's a brief summary...
In 1962, Baumert, Golomb,
and Hall found an Hadamard matrix of order 92, using
the general approach
introduced in 1944, by John Williamson
(who so constructed an Hadamard matrix of order 172).
A Williamson matrix of order 4m is an Hadamard matrix obtained as follows from 4 matrices of
order m (with unit elements) A, B, C and D,
provided 4 matching relations are satisfied:
For example, we may build an Hadamard matrix of order 12 with 4 matrices of order 3,
by letting one of them be the matrix U whose 9 elements are equal to +1
(the square of U is 3U) while the 3 others are equal to U-2I.
With A = U, the resulting Hadamard matrix of order 12
(which isn't normalized in the above sense)
is shown graphically below, next to the Hadamard matrices of order 24 and 48
obtained from it with the aforementioned Sylvester construct:
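Here's a numpy check of that order-12 construction, plugging A = U and B = C = D = U - 2I into the classical Williamson array:

    import numpy as np

    U = np.ones((3, 3))                        # the order-3 matrix of ones (U @ U equals 3 U)
    A = U
    B = C = D = U - 2*np.eye(3)

    H12 = np.block([[ A,  B,  C,  D],
                    [-B,  A, -D,  C],
                    [-C,  D,  A, -B],
                    [-D, -C,  B,  A]])         # Williamson array

    print(np.allclose(H12 @ H12.T, 12 * np.eye(12)))   # True: an Hadamard matrix of order 12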
It turns out that all (real) Hadamard matrices of order 12 are
equivalent.
In particular, the above order-12 Hadamard matrix is Hadamard-equivalent to
the following, which is normalized (dephased) and symmetrical:
In 1985, K. Sawade found an Hadamard matrix of order 268. In June 2004, Hadi Kharaghani and Behruz Tayfeh-Rezaie constructed one of order 428 = 4 × 107. If we denote by M~ the mirror image of any matrix M with respect to its antidiagonal, the latter matrix is based on 4 matrices A, B, C, D of order m = 107, put together in the following pattern, which gives an Hadamard matrix of order 4m if and only if all 10 conditions listed are met. (Note that (M N)~ = N~ M~.)
    [  A    B    C    D ]
    [ -B    A   -D    C ]
    [ -C    D    A   -B ]
    [ -D   -C    B    A ]

AA* + BB* + CC* + DD*  =  4 m Im
B B*  =  BB*
C C*  =  CC*
D D*  =  DD*
A B* - B A*  =  D C* - C D*
A C* - C A*  =  B D* - D B*
A D* - D A*  =  B C* - C B*
A B* - B A*  =  D C* - CD*
A C* - C A*  =  B D* - DB*
A D* - D A*  =  C B* - BC*
Since June 2004, the order
668 has been the smallest multiple of 4
for which no Hadamard matrix is yet known
(as of July 2014).
Usage note :
Both spellings "a Hadamard matrix" and "an Hadamard matrix" can be construed
as correct:
The former one applies to an anglicized pronunciation of "Hadamard" ("h" and final "d" both pronounced) whereas the latter must be used with the original French pronunciation
(initial "h" and final "d" both silent).
(2006-12-31) Sylvester matrix of two polynomials
The resultant of two polynomials is the determinant of that matrix.
Consider two polynomials A and B of respective degrees m and n.
A   =   Σ_{i=0..m}  ai x^i                 B   =   Σ_{j=0..n}  bj x^j
Their Sylvester matrix is a square matrix (of dimension m+n) whose
first n rows feature the m+1 coefficients of A and whose last m rows feature the
n+1 coefficients of B. It is defined as follows:
M_{m+n} =
    [ a_m   a_{m-1}   a_{m-2}   a_{m-3}   ...   0    ]
    [ 0     a_m       a_{m-1}   a_{m-2}   ...   0    ]
    [ 0     0         a_m       a_{m-1}   ...   0    ]
    [ ...   ...       ...       ...       ...   ...  ]
    [ 0     0         0         ...       a_0   0    ]
    [ 0     0         0         ...       a_1   a_0  ]
    [ b_n   b_{n-1}   b_{n-2}   b_{n-3}   ...   0    ]
    [ ...   ...       ...       ...       ...   ...  ]
    [ 0     0         0         ...       b_1   b_0  ]
The determinant of that matrix equals the so-called resultant of A and B, which may be expressed as follows in terms of all the complex roots (αi, βj) of those polynomials and their leading coefficients a_m and b_n:

det ( M_{m+n} )   =   a_m^n  b_n^m   Π_{i,j}  ( αi - βj )
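Here's a numerical check of that relation (a hedged numpy sketch; the two example polynomials are arbitrary, with coefficients listed highest degree first):

    import numpy as np
    from itertools import product

    a = [2., -3., 1.]              # A = 2x^2 - 3x + 1
    b = [1., 0., 0., -4.]          # B = x^3 - 4
    m, n = len(a) - 1, len(b) - 1

    # Sylvester matrix: n shifted rows of A's coefficients, then m shifted rows of B's.
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i+m+1] = a
    for j in range(m):
        S[n + j, j:j+n+1] = b

    lhs = np.linalg.det(S)
    alphas, betas = np.roots(a), np.roots(b)
    rhs = a[0]**n * b[0]**m * np.prod([u - v for u, v in product(alphas, betas)])
    print(lhs, rhs.real)           # both are 93 (up to rounding)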
(2006-12-31) Discriminant of a polynomial
(Sylvester, 1851)
The resultant of a polynomial and its derivative.
I fondly remember learning this orally from Claude Delagarde (1974).
The word "discriminant" was coined by James Joseph Sylvester
(1814-1897) who found the correct expression for cubic polynomials (in 1851)
and generalized it to any polynomial (including the well-known quadratic case).
(2018-06-30) Pfaffian (Cayley, 1852)
The determinant of an antisymmetric matrix is the square of its Pfaffian.
The Pfaffian is zero for odd orders.
For an antisymmetrical matrix of even order 2n, the Pfaffian
is an homogeneous polynomial of degree n in the coefficients of the matrix.
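Here's a naive sketch of the Pfaffian by recursive expansion along the first row, checking that the determinant is indeed its square (the example is a random antisymmetrical matrix of order 4):

    import numpy as np

    def pfaffian(A):
        # Recursive expansion along the first row (fine for small antisymmetric matrices).
        n = A.shape[0]
        if n == 0:
            return 1.0
        if n % 2 == 1:
            return 0.0
        total = 0.0
        for j in range(1, n):
            minor = np.delete(np.delete(A, [0, j], axis=0), [0, j], axis=1)
            total += (-1)**(j + 1) * A[0, j] * pfaffian(minor)
        return total

    rng = np.random.default_rng(0)
    M = rng.integers(-3, 4, size=(4, 4)).astype(float)
    A = M - M.T                                            # an antisymmetrical matrix of order 4
    print(np.isclose(np.linalg.det(A), pfaffian(A)**2))    # True:  det(A) = pf(A)^2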