A bit of a history view
The history of numbers, or shall we say the history of the discovery of number systems, is quite fascinating. It goes back many millennia; the usual story starts with the discovery that the rational numbers are not enough to cover all of Euclidean geometry, a discovery that was elevated to cult status.
Fast forward to the 16th century, where a young, gifted physician and spare-time mathematician, astrologer, and general all-round artist named Girolamo Cardano strove to reason about nature with mathematics, hence framing nature into numbers. He couldn’t make any sense of the discovery that solving a cubic equation may require the square root of a negative number, and it is that little detail which would shape history. Well, why bother? It is claimed that the art of solving cubic equations had already been established by Scipione del Ferro, but he kept it secret. Long story short, another person named Niccolo Tartaglia, in a contest with Scipione’s pupil Antonio Maria Fior, did not want to be outdone and learned the solution of the cubic problem himself, but of course in the course of events he divulged some secrets to Cardano ...
Anyway, you get the idea: the discovery of new numbers can be quite involved, and it shows human nature with its traits of betrayal, depths of despair and tragedy, and the emotional elevation of success.
It is not a coincidence that throughout the next centuries, the brilliant mathematicians of the time were hands-on folks in geometry. Mostly, kings were interested in accurate maps of their existing and newly gained kingdoms, and as a great geometer one could enjoy the monetary benefits of such an endowment. Obviously, geometers couldn’t rely on our modern satellite-based GPS system, hence the arts and crafts of practical astronomy came into the game; the so-called longitude problem was eventually solved through the engineering genius of a master clockmaker named John Harrison. His solution was demonstrated on the H.M.S. Resolution by none less than James Cook personally.
The stories behind these discoveries are fascinating; they should be made into a series like Connections with James Burke. Teaching mathematics nowadays takes away all the emotional roller-coaster stories and the connections of how we got here.
Today, we have centuries of evolution of algebraic concepts and notations, and their respective geometric inspirations, behind us. Notoriously, physics is littered with algebraic notations: the Gibbs-Heaviside vector notation starting in classical mechanics, matrix algebras, tensor calculus, differential forms (do I have to mention the big black bible?), Dirac matrices, you name it.
Good news: many, many algebras you encounter in physics can be placed into one big unifying pool: Clifford Algebras!
Clifford Algebras practical (tensor up)
Complex, sure! Nobody will doubt the value of the complex number system in science and engineering. Complex numbers are as easy to play with as real numbers; multiplication can be done in any order! Science and engineering disciplines have also fully embraced matrix algebras for a long time, and Clifford Algebras are no different, see the addendum below. Today, there are even plenty of applications in digital signal processing (wavelets in images), robotics, and computer science (lots of libraries) that make use of quaternions or their cousins. Everything that goes beyond the complex numbers gets a “warp-speed”-like name: hypercomplex!
My bet is that once we embrace the ideas behind all that, hence doing mathematics, the basic stuff of Geometric Algebra is so simple that it will be taught in high schools (in 60 years?). Surely, Mr. Wolfram will highly approve of that. Of course it takes many more practitioners and evangelists to steer away from an established course.
I’m not an evangelist, at most a practitioner amongst many others. Even if I only repeat what has already been established, I’d like to convey a sort of simplicity and practicality of going beyond the complex number system, with an almost non-mathematical approach. None of the presented material is entirely new!
Let’s start, but not from the very bottom of mathematical set theory; I already assume knowledge about what algebras over vector spaces are. A simple follow-up question is: how can we construct new algebras from existing ones? There are many pathways to go. The Cayley-Dickson way gives us some interesting algebras which we can identify as the complex numbers, quaternions, and octonions, plus an infinite number of so far useless algebras with no desirable properties in higher dimensions.
Well, it turns out that if we at least require associativity, then starting with basic associative algebras and utilizing the tensor product ⊗ and the direct sum ⊕ is the way to go!
I’m not aware of a general classification of all possible associative algebras, but if you start playing with a vector space, tensor up, and toss in the property that there is a well-behaved inner product between the elements of the underlying vector space (the most common case in physics and engineering), then you’ll have the answer at hand: they are all Clifford Algebras (CA)!
They can be completely classified, and for that we only need some very basic identities, or better, isomorphisms. The simplest (almost self-evident) isomorphisms for any algebras A, B, D and the n × n matrix algebra of real numbers R(n) are:
(ISO 0)
A⊗B = B⊗A
A⊗(B ⊕ D) = A⊗B ⊕ A⊗D
R(n)⊗R(m) = R(nm)
and a bit of syntactic sugar in the form of the n × n matrix algebra with entries from the algebra A:
A(n) = A⊗R(n)
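On concrete matrices, the isomorphism R(n)⊗R(m) = R(nm) is realized by the Kronecker product. A throwaway sketch (my own helper, not part of Clifford.hs), with matrices as plain row-major lists:

```haskell
-- Kronecker product: entry ((i,k),(j,l)) of the result is a(i,j) * b(k,l),
-- so an n x n times an m x m matrix gives an (n*m) x (n*m) matrix
kron :: Num a => [[a]] -> [[a]] -> [[a]]
kron a b = [ concat [ map (ae *) brow | ae <- arow ] | arow <- a, brow <- b ]
```

For two 2×2 matrices the result is a 4×4 matrix, just as R(2)⊗R(2) = R(4) demands.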
Now, there are also some not-so-self-evident isomorphisms for particular algebras involving the complex numbers C and the quaternions H:
(ISO 1)
C⊗C = C ⊕ C
C⊗H = C(2)
H⊗H = R(4)
While one could tensor up any algebra with a matrix algebra of any dimension, the result wouldn’t really be a Clifford Algebra. There’s a little bit more to it, though not much: the connection to an underlying vector space. Without going into mathematical gibberish, there should be as many algebra elements as there are ways to combine n different vectors without regard to order. That number is exactly 2^n, the size of the power set. Technically this is called a Z2 grading, but that doesn’t explain it any better!
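As a quick sanity check on the 2^n count, a throwaway one-liner (Data.List.subsequences enumerates exactly the power set of a list):

```haskell
import Data.List (subsequences)

-- the index sets of 3 basis vectors: {}, {1}, {2}, {3}, {1,2}, ... -- 2^3 of them
blades3 :: [[Int]]
blades3 = subsequences [1, 2, 3]
```

Its length is 8 = 2^3, matching the count of multivector components in 3 dimensions.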
Very interestingly, it is really easy to build up a type system in Haskell that allows us to represent the complex and quaternion numbers with the tensor product ⊗ or the direct sum (split) ⊕. The proximity of Haskell’s Generalized Algebraic Data Types (GADTs) and the tensor product was already noted in a series of articles, “Geometric Algebra for Free!”, and the isomorphisms in “More Low Cost Geometric Algebra”. Thanks!
I compiled the Clifford Algebra code into a single Haskell module, Clifford.hs. Please use the following command to start the Haskell interpreter: “ghci -XTemplateHaskell” (requires GHC 7.0.1 or higher). Writing a tensor C⊗C, C⊗H or H⊗H in Haskell is then easy:
> C (C 2 1) (C 3 4)
> C (Q 1 0 0 0) (Q 3 4 0 1)
> Q (Q 1 0 0 0) (Q 0 1 0 0) (Q 0 0 1 0) (Q 0 0 0 1)
or a direct sum C ⊕ C
> (C 6 2) `S` (C (-2) 4)
Even the isomorphisms (ISO 1) are easy to implement:
> cc2sc $ C (C 2 1) (C 3 4)
S (C 6 (-2)) (C (-2) 4)
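For illustration, here is a standalone sketch of how such a C⊗C = C ⊕ C map could look (my own minimal types and names, not the actual Clifford.hs definitions): write an element as a + j b with a, b complex and j² = −1, and project onto the idempotents (1 ± i j)/2, which yields the pair (a − i b, a + i b).

```haskell
data C a = C a a deriving (Show, Eq)  -- C re im:  re + i im
data S a = S a a deriving (Show, Eq)  -- direct sum (left, right)

-- C⊗C -> C ⊕ C via the idempotents (1 ± i j)/2
cc2sc :: Num a => C (C a) -> S (C a)
cc2sc (C (C ar ai) (C br bi)) =
  S (C (ar + bi) (ai - br))  -- a - i b
    (C (ar - bi) (ai + br))  -- a + i b
```

In this convention, cc2sc (C (C 2 1) (C 3 4)) gives S (C 6 (-2)) (C (-2) 4).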
As already said, the above set of isomorphisms is sufficient to build up all Clifford Algebras Cl(p,q) over the real numbers. The resulting structure is entirely due to the isomorphisms (ISO 0) and (ISO 1). Here is an abbreviated version (R = real, C = complex, H = quaternion):
(T0)

| dim p+q \ p−q | 4 | 3 | 2 | 1 | 0 | −1 | −2 | −3 | −4 |
|---------------|---|---|---|---|---|----|----|----|----|
| 0 |   |      |      |           | R    |      |   |     |   |
| 1 |   |      |      | R⊕R       |      | C    |   |     |   |
| 2 |   |      | R(2) |           | R(2) |      | H |     |   |
| 3 |   | C(2) |      | R(2)⊕R(2) |      | C(2) |   | H⊕H |   |
| 4 | . | .    | .    | .         | .    | .    | . | .   | . |
Also, there’s a limit to how much variety you can get with Clifford Algebras Cl(p,q) (Bott periodicity)!
That’s all! Completely classified!
I haven’t found a better way, but with Template Haskell it is possible to generate this entire Clifford Algebra structure with a few lines of code:
-- a nice application of Template Haskell: declare types at will
cliffpq :: Int -> Int -> Name -> Q Type
cliffpq 0 0 k = return $ AppT (ConT ''I) (ConT k)
cliffpq 0 1 k = [t| Complex $(cliffpq 0 0 k) |]
cliffpq 1 0 k = [t| Split $(cliffpq 0 0 k) |]
cliffpq p q k
  | p > 0  && q > 0  = [t| Matrix $(cliffpq (p-1) (q-1) k) |]
  | p >= 2 && q >= 0 = [t| Matrix $(cliffpq q (p-2) k) |]
  | p == 0 && q >= 2 = [t| Quaternion $(cliffpq (q-2) p k) |]
This allows us to write the following element in Cl(0,2):
> one 2.0 :: $(cliffpq 0 2 ''Float)
Q 2.0 0.0 0.0 0.0
which is indeed a quaternion, sweet!
And we can perform all the nice computations with addition and multiplication in the same algebra. It would be nice to introduce better type names for specific Clifford Algebras, i.e. a kind of Geometric Algebra:
type GA = $(cliffpq 3 0 ''Float)
Unfortunately, this is a point where Haskell starts to ‘suck’ a bit as a language. I’ve learned that type synonyms are evaluated before Template Haskell splices in the generated source code. The compiler doesn’t know about the spliced-in type definition and assumes the worst of the GA type, namely a self-reference, which results in a cyclic type definition... bang!
I also read that there is room for improvement in GHC 7.2.x, but we are not there yet. Anyway, we can work around the type problem by using a ‘newtype’ definition:
(GA0)
newtype GA = GA {cl :: $(cliffpq 3 0 ''Int)} deriving (Show, Eq, Num)
> GA (one 2)
GA {cl = M (C 2 0) (C 0 0) (C 0 0) (C 2 0)}
This seems to work just fine!
So where is the connection to my old vector algebra?
Rightfully so! At this point it’s very hard to see how we can connect a matrix of complex numbers (GA0) to anything we know from the usual Gibbs-Heaviside vector notation. Indeed, a numerical “tensored up” representation like (GA0) is the most memory-compact one you can get! Somehow, it encodes all the geometry.
Time to put geometry back into the game! As already said, the Clifford Algebra Cl(p,q) has an underlying vector space with p+q basis elements; let’s give them names:
e1, …, ep (squaring to +1) and ē1, …, ēq (squaring to −1).
We can combine (multiply) vectors into multivectors; the total number of distinct multivectors (including the grade-0 scalar e0) is 2^(p+q). The overall order is not really important here, hence we have a simple connection between the number of basis vectors and the total size of the Clifford Algebra: it’s the power set!
As a side note, there is a Haskell idiosyncratic notation to calculate the power set of any list:
> import Control.Monad
> filterM (const [True, False]) [1,2,3,4]
Neat! Now, specifically
one 2 :: $(cliffpq 3 0 ''Int)
will give you 2^3 = 8 numbers. They relate to the power set of the three basis vectors e1, e2, e3:
(SET 0)
e0 = {}, e1, e2, e3, e1e2, e2e3, e3e1, e1e2e3
A little lingo here: the power set of 3 basis vectors can be presented in 4 groups; the empty set is the scalar, then come the vectors, then the bivectors, and finally one trivector (that’s as high as you can go in 3 dimensions). There’s only a bit more to it: each distinct pair of vectors like e2e3 changes sign when the order is reversed, and repeated vectors of type ekek contract to a scalar +1 or −1:
epep = +1 , ēqēq = −1 , emen = −enem (m != n)
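These two rules, swap with a sign flip and contract repeated indices, are already enough to multiply any two basis blades. A minimal throwaway sketch (my own code, not the actual GAlgebra.hs machinery), with blades as ascending index lists and `sig` giving the square (+1 or −1) of each basis vector:

```haskell
-- multiply two basis blades; returns (sign, resulting index set)
bladeMul :: (Int -> Float) -> [Int] -> [Int] -> (Float, [Int])
bladeMul sig xs ys = go 1 (xs ++ ys)
  where
    go s zs = case step zs of
      Nothing        -> (s, zs)
      Just (s', zs') -> go (s * s') zs'
    -- one rewrite step: contract a repeated index (e_k e_k = sig k)
    -- or swap an out-of-order pair (e_m e_n = - e_n e_m)
    step (a : b : rest)
      | a == b    = Just (sig a, rest)
      | a > b     = Just (-1, b : a : rest)
      | otherwise = fmap (fmap (a :)) (step (b : rest))
    step _ = Nothing
```

For example, with all squares +1, the bivector e1e2 times itself contracts to the scalar −1, the “imaginary” behavior of bivectors.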
In brief, the general answer for mapping from a “super compact” representation (GA0) to something that resembles (SET 0) is: a generalized inverse Fourier transform (GFT) for Clifford algebras.
However, there is still a simpler question unanswered. How do we identify the anticommuting basis vectors em in a general tensor representation such as (T0), or in (GA0) specifically? For the complex, split and quaternion numbers (C, R⊕R, H) this turns out to be easy:
C: e1 = C 0 1
R⊕R: e1 = S 1 (-1)
H: e1 = Q 0 1 0 0, e2 = Q 0 0 1 0 or Q 0 0 0 1
Now, the ambiguity starts with this tensor (2×2 matrix):
R(2): e1 = M 1 0 0 (-1), e2 = M 0 1 1 0
or: e1 = M 0 1 1 0, e2 = M 1 0 0 (-1)
I think at this stage I should mention that quantum physics is littered with the kind of matrices shown above; let’s name them: Pauli matrices, Dirac matrices (I’ll dare to add the complex numbers into the mix as well). Many standard textbooks present them as physically meaningful entities. One should be quite doubtful by now, because the isomorphisms between Clifford Algebra representations strip any physical meaning from the above-mentioned matrices! Here, matrices are only a crutch for the underlying geometry!
In summary, there’s some tedious bookkeeping involved to map between a nice set of multivectors and the computationally efficient tensor representation. How lucky that another Haskell module (GAlgebra.hs) does exactly this kind of bookkeeping for us!
Let’s do geometry!
We need both files, Clifford.hs and GAlgebra.hs; then let’s power up (needs GHC 7.0.1):
ghci -XTemplateHaskell
> :l GAlgebra.hs
Scalar values
> let x0 = 3 :: GA
> x0*x0
9.0
Vector with negative signature == Imaginary numbers
> let i1 = 1 `e` [1] :: GA
> i1*i1
-1.0
> (3 + i1)*(3 + i1)
8.0 +6.0e[1]
Vectors, reflections, rotations in 2d space
> let i2 = 1 `e` [1,2] :: GA
> i2*i2
-1.0 -- wow, this is like the imaginary number i1!!!
(Indeed we don’t need to introduce imaginary numbers at all!)
> (3 + i2)*(3 + i2)
8.0 +6.0e[1,2]
> let v = 1 `e` [1] + 1 `e` [2] :: GA
> let n1 = 1 `e` [2] :: GA
> - n1 * v * n1
-1.0e[1]+1.0e[2] -- - n1 * v * n1 reflects v along the axis n1!!
> let n2 = 1 `e` [1] :: GA
> n2 * n1 * v * n1 * n2
-1.0e[1]-1.0e[2] -- 2 reflections make one rotation, here by 180 degrees
(the imaginary number is a rotor)
> i2 == n2 * n1
True
(we could write the rotations in this form with multivector reversion:)
> (rev i2) * v * i2
Vectors in 3d space (Gibbs-Heaviside)
> let a = 2 `e` [1] + 5 `e` [2] :: GA
> let b = 3 `e` [2] + 1 `e` [3] :: GA
> a .* b  -- usual inner vector product
15.0
> a ^* b  -- new exterior product (antisymmetric)
5.0e[2,3]+6.0e[1,2]+2.0e[1,3]
> let i3 = 1 `e` [1,2,3] :: GA  -- full 3d oriented unit volume
> i3 == (1 `e` [1])*(1 `e` [2])*(1 `e` [3])
True
> i3 * i3
-1.0 -- another imaginary number like i1, i2!!
> (- i3) * (a ^* b)
5.0e[1]-2.0e[2]+6.0e[3] -- ahh, we recovered the 3d cross product!!
Rotations in 3d
> b
3.0e[2]+1.0e[3]
> n2 * n1 * b * n1 * n2  -- 180 degree rotation in the n1 n2 plane
-3.0e[2]+1.0e[3]
Mixed grade vector algebra, geometric product
> let m = 1 + 2*a + a*b + i3
> m
16.0 +5.0e[2,3]+4.0e[1]+1.0e[1,2,3]+6.0e[1,2]+2.0e[1,3]+10.0e[2]
> maxgrd m  -- maximum grade of m
3
> grade 2 m  -- select a specific grade of m
5.0e[2,3]+6.0e[1,2]+2.0e[1,3]
As an overview, the MultiVectorSpace type class in “GAlgebra.hs” implements a number of important operations on multivectors. These operations should be directly applicable in many Geometric Algebra tutorials.
class MultiVectorSpace mv where
  zero :: mv
  (!*) :: Float -> mv -> mv    -- multiply scalar with a multivector (mv)
  (^*) :: mv -> mv -> mv       -- mv exterior (outer) product
  (.*) :: mv -> mv -> mv       -- mv left contraction
  (*.) :: mv -> mv -> mv       -- mv right contraction
  (.*.) :: mv -> mv -> mv      -- mv inner product
  (*#) :: mv -> mv -> Float    -- mv scalar product (grade 0 == scalar)
  x :: mv -> mv -> mv          -- commutator product
  maxgrd :: mv -> Int          -- maximum grade of multivector
  mingrd :: mv -> Int          -- minimum grade of multivector (>= 0)
  minmaxgrd :: mv -> (Int, Int)
  grades :: mv -> [Int]        -- list of grades in multivector
  grade :: Int -> mv -> mv     -- select grade of multivector
  scalar :: mv -> Float        -- extract scalar from multivector
  fwdGFT :: [(Int, Float)] -> mv  -- forward fast Fourier transform
  rev :: mv -> mv              -- multivector reversion
  invol :: mv -> mv            -- multivector involution
  e :: Float -> [Int] -> mv    -- construct from index set notation
Of course, I haven’t done anything particularly useful here; after all, it’s just basic algebra, and we could have used the old notations just fine! However, there are already some interesting insights:
- we recovered the complex numbers and the usual 3d Gibbs-Heaviside algebra
- there are many “imaginary numbers” i1, i2, i3, … (they are not so unique)
- we see that rotations are part of the Clifford algebra; they are not matrix operators, as usually introduced in the Gibbs-Heaviside notation.
Needless to say, if we go to even higher dimensions, like 4 in physics, the geometric insights and equation simplifications are quite dramatic. Especially the third of our insights, that rotations are not special, eliminates the need to introduce unitary quantum rotation operators at all (we still have differential operators though)! The quantum state and the Hamiltonian are all part of the same algebra.
The same picture emerges in relativity: Lorentz transformations and “4-vectors” all unified.
Electrodynamics, the sources, the fields and potentials all unified with the same algebra. That's pretty awesome!
In order to proceed and do something very useful and interesting with this algebra, we would need to introduce some further differential and regular calculus. We would need to define the exponential function (this leads to the unification of exp, sin, cos, sinh, cosh) and similarly differential operators.
Ideally, I would like Automatic Differentiation (AD) for functions defined over the Clifford Algebra.
Equipped with this kind of geometric differential calculus, we should be able to handle Maxwell’s equations and the Dirac equation equally well!
Well, next time!
More Links
Addendum
Mapping to real matrices
Another thing to notice is that all elements of the Clifford Algebras Cl(p,q) can also be represented by a single larger matrix R(m) over just the real numbers: no complex numbers, no quaternions! That representation is not an isomorphism; it is just an embedding (an injective homomorphism). My guess is that most people are very comfortable working with real matrices (again, it shows that everything here is associative).
In order to achieve that, we just have to map split (direct sum ⊕), complex and quaternion numbers to matrices with minimal dimension.
In Haskell, the mappings look like this (a, b, c, d are real numbers):

Split to Matrix: a ⊕ b
s2m (S a b) = M (a+b) 0 0 (a-b)

Complex to Matrix: a + i b
c2m (C a b) = M a (-b) b a

Quaternion to Matrix: a + i b + j c + k d
q2mm (Q a b c d) =
  M (M a b (-b) a)
    (M c d (-d) c)
    (M (-c) d (-d) (-c))
    (M a (-b) b a)
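As a quick consistency check that such mappings really are algebra homomorphisms, here is a standalone sketch with its own minimal 2×2 matrix type (names assumed, not from Clifford.hs): the complex-to-matrix map should turn complex multiplication into matrix multiplication.

```haskell
data M2 = M2 Float Float Float Float deriving (Show, Eq)  -- row-major 2x2

-- plain 2x2 matrix multiplication
mmul :: M2 -> M2 -> M2
mmul (M2 a b c d) (M2 e f g h) =
  M2 (a*e + b*g) (a*f + b*h) (c*e + d*g) (c*f + d*h)

-- a + i b as a 2x2 real matrix
c2m' :: (Float, Float) -> M2
c2m' (a, b) = M2 a (-b) b a

-- complex multiplication on pairs
cmul :: (Float, Float) -> (Float, Float) -> (Float, Float)
cmul (a, b) (c, d) = (a*c - b*d, a*d + b*c)
```

For any z and w, c2m' (cmul z w) should equal mmul (c2m' z) (c2m' w).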
Notice, there’s a lot of duplication and zeros. Hence, don’t expect the R(m) representation of Cl(p,q) to be a neat, memory-efficient, compact one. There’s even a formula for the minimal embedding dimension, m = 2^M(p,q). I bet the formula reflects the increased dimensions through the above-mentioned mappings.
Practically, let’s look at C⊗C = C ⊕ C. Hey, you probably can’t find it in the table (T0), nor in the entire classification! That’s right: C ⊕ C has four real dimensions, but the only Clifford Algebras with four real dimensions are R(2) and H.
Well, it turns out that the bicomplex numbers, or split-complex numbers, or tessarines (so many names for one thing), hide as a commutative subalgebra of a bigger Clifford Algebra, e.g. Cl(3,1).
They should be in the table (T0) where there is nada, nil, at the place (p−q = ±1, p+q = 2):
C ⊕ C = C⊗C , C⊗H = C(2)
In Haskell we can do some experimentation,
> s2m . fmap c2m $ S (C 1 2) (C 3 4)
M (M 4 (-6) 6 4) (M 0 0 0 0) (M 0 0 0 0) (M (-2) 2 (-2) (-2))
Out of 4 numbers we got 16 numbers arranged in a 4 by 4 matrix!
Another example is the Clifford Algebra Cl(2,2), which is already isomorphic to the 4×4 matrix algebra R(4) over the real numbers. No need to increase the dimension here!
As we know, Cl(2,2) has 16 multivectors; let’s call them the standard basis. Interestingly, the GFT which maps the 16 entries in the standard basis to the 16 entries in R(4) exhibits a lot of structure (as usual in fast Fourier transform methods) when depicted graphically (red = +1, blue = −1):
> one 3 :: $(cliffpq 2 2 ''Int)
M (M 3 0 0 3) (M 0 0 0 0) (M 0 0 0 0) (M 3 0 0 3)