Remember the multiplication law for complex numbers?
(a, b) * (a’, b’) -> (a*a’ - b*b’, a*b’ + b*a’)
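For instance, (1, 2) * (3, 4) -> (1*3 - 2*4, 1*4 + 2*3) = (-5, 10), matching (1 + 2i)(3 + 4i) = -5 + 10i.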
Now, what is the inverse of (a, b)? The answer most likely given in many math books is:
(a, b)^-1 = (a * (a^2 + b^2)^-1, -b * (a^2 + b^2)^-1)
Let's prove it, with c = (a^2 + b^2)^-1:
(a, b) * (a*c, -b*c) -> ((a*a + b*b)*c, (b*a - a*b)*c) = (1, 0)!
Almost right! It only works when b*a - a*b = 0, which may not be the case when a and b are matrices, quaternions or some other non-commutative entities. Luckily, one can derive a very general inversion formula without assuming that a and b are commutative:
(a, b)^-1 = ( (a + b * a^-1 * b)^-1, -a^-1 * b * (a + b * a^-1 * b)^-1 )   when a^-1 exists
(a, b)^-1 = ( b^-1 * a * (b + a * b^-1 * a)^-1, -(b + a * b^-1 * a)^-1 )   when b^-1 exists
Of course, it simplifies back to the commutative case. These are the joys of working with Clifford Algebras or Geometric Algebras!
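The general formula is easy to check numerically. Here is a quick sanity check with random 2x2 matrices standing in for a and b (numpy and this snippet are my own additions, not part of the post's code):

import numpy as np

# check the non-commutative inversion formula with random 2x2 matrices
rng = np.random.default_rng(0)
a, b = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
ia = np.linalg.inv(a)

x = np.linalg.inv(a + b @ ia @ b)   # first component of (a, b)^-1
y = -ia @ b @ x                     # second component

# multiply (a, b) * (x, y) with the complex-style product
print(np.allclose(a @ x - b @ y, np.eye(2)),          # -> True
      np.allclose(a @ y + b @ x, np.zeros((2, 2))))   # -> True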
It turns out there is even more depth to the existence of a multiplicative inverse in a mathematical ring. The concept of the minimal polynomial plays a key role in exploring when inverses exist. Let's make it concrete! Say we have a complex number composed of two real numbers:
z = (a, b), with a, b real numbers
So, what additive linear combination of powers of z does it take to produce 0? In the most general case it is:
p2(z) = z^2 - (2a, 0) * z + (a^2 + b^2, 0) = 0
This is the minimal polynomial of degree two! The complex inverse z^-1 then exists if we can find a solution to z^-1 * p2(z) = 0, or
z - (2a, 0) = -z^-1 * (a^2 + b^2, 0)
which always exists when the 0-th order component of p2(z) is not zero! Solving for z^-1 gives z^-1 = ((2a, 0) - z) * (a^2 + b^2, 0)^-1 = (a, -b) * (a^2 + b^2, 0)^-1, exactly the familiar formula from the beginning. Interestingly, this result also holds for quaternions and matrices, which in the general case have non-zero minimal polynomials of degree two and higher. The minimal polynomial of a real number is completely trivial: it is just of degree one! One can play this game for quite a while and ask: what is the minimal polynomial of an algebraic number, i.e. a number built from rational numbers by repeated application of square roots, cube roots and higher roots, and additive combinations? They all have minimal polynomials, and with Mathematica one can explore this quite easily. Even better, minimal polynomials can be calculated very efficiently via repeated multiplication of the algebraic entity; here's a very efficient one for matrices.
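The efficient matrix routine the post alludes to isn't reproduced here; as a stand-in, here is a small numpy sketch of the same idea (numpy, the helper name minimal_polynomial and the tolerance are my additions, not the post's code): multiply up the powers I, A, A^2, ..., stack them as flat vectors, and stop at the first linear dependence.

import numpy as np

def minimal_polynomial(A, tol=1e-9):
    """Monic minimal-polynomial coefficients of a square matrix A
    (highest degree first), found by stacking the powers I, A, A^2, ...
    as flat vectors and detecting the first linear dependence."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    for k in range(1, n + 1):
        powers.append(np.linalg.matrix_power(A, k).ravel())
        M = np.column_stack(powers)
        if np.linalg.matrix_rank(M, tol) < M.shape[1]:   # dependence found
            # solve A^k = c_0*I + c_1*A + ... + c_{k-1}*A^(k-1)
            c = np.linalg.lstsq(M[:, :-1], powers[-1], rcond=None)[0]
            return np.concatenate(([1.0], -c[::-1]))
    raise RuntimeError("no linear dependence found")     # unreachable for k <= n

For the Jordan block A = np.array([[2., 1.], [0., 2.]]) this returns [1., -4., 4.], i.e. the minimal polynomial z^2 - 4z + 4.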
One last consideration before we delve into programming with non-commutative numbers in some computer language: what if the complex number z is composed of two non-commutative elements a, b? Here too, there is a minimal polynomial:
z^2 - (a, 0) * z - z * (a, 0) + (a^2 + b^2, 0) = 0
Here, one has to be careful to keep both orderings, (a, 0) * z and z * (a, 0), since a no longer commutes with z!
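The same kind of numerical spot check works here; again a small sketch of my own (not from the post), with a and b random 2x2 matrices and (a, b) pairs multiplied by the complex rule:

import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
zero = np.zeros((2, 2))

def cmul(p, q):  # (a, b) * (a', b') -> (a*a' - b*b', a*b' + b*a')
    return (p[0] @ q[0] - p[1] @ q[1], p[0] @ q[1] + p[1] @ q[0])

z = (a, b)
s = (a, zero)                      # the "scalar" (a, 0)
c = (a @ a + b @ b, zero)          # (a^2 + b^2, 0)

zz, sz, zs = cmul(z, z), cmul(s, z), cmul(z, s)
residual = [zz[i] - sz[i] - zs[i] + c[i] for i in range(2)]
print(all(np.allclose(r, 0) for r in residual))   # -> True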
Previously, I experimented with Geometric Algebra in Haskell. Well, Python is also a nice language with many applications in science and engineering. Compared to Haskell, Python is multi-paradigm and certainly not type safe! With Haskell's static/inferred type system it turns out to be quite easy to use recursive abstract data types as the "backbone" for tensor algebras such as Clifford Algebras. How can we translate the code over to Python?
At first glance, it is not quite obvious how one should translate recursive data structures from a statically typed language to the rather loosely typed Python. I thought it would be a nice exercise to see what we can learn. Here, I share my observations:
In Haskell, I used tensored-up complex, quaternion and matrix algebras as the representational backbone to implement Clifford Algebras. Elements of these algebras can be treated as numbers.
Now, the usual way in object-oriented languages (OOLs) to represent structured data is the class concept. A pythonic feature is to use classes as drop-in replacements for numbers in numeric operations; we only need to implement a minimum of functionality. Very useful is pythonic operator overloading, which is done by implementing special methods in classes, e.g. for "+, -, *" the equivalent methods "__add__, __sub__, __mul__" need to be provided. Of course, there can be much more. As you may guess, implementing all useful special methods mostly turns into an exercise in writing tedious boilerplate code with not much substance, but plenty of opportunity to make mistakes!
So we can start coding along the lines ...
class Number:
    """basic commutative number"""
    __slots__ = 'I'
    def __repr__(self):
        return repr(self.I)
    # ...
    def __mul__(self, other):
        return Number(self.I * other.I)
    def __invert__(self):          # multiplicative inverse
        return Number(1/self.I)
    def __call__(self, *gft):      # consume one value from the argument list ...
        setattr(self, 'I', float(gft[0] if gft else 0))
        return gft[1:]             # ... and hand the rest back to the caller
    # ...
The primary way in OOLs to add functionality to classes is by means of inheritance. Here in my examples, I use lightweight, non-extensible classes (indicated by the __slots__ attribute). It turns out that Python metaclasses provide an easy way to automatically construct the needed class functionality. With the abstraction specified in the metaclass "AdditiveGroup", we can often avoid the typical repetitive boilerplate code and implement only the specialities:
class BasicAlgebra:
    __slots__ = []
    __metaclass__ = AdditiveGroup   # Python 2 style metaclass hook

class Split(BasicAlgebra):
    """direct sum"""
    __slots__ = ['a', 'b']
    def __repr__(self):
        return 'S (' + repr(self.a) + ', ' + repr(self.b) + ')'
    def __mul__(self, other):       # component-wise product of the direct sum
        return Split(
            self.a*other.a,
            self.b*other.b) \
            if type(self) == type(other) else None
    def __invert__(self):           # inversion is straightforward
        return Split(~self.a, ~self.b)
    def __call__(self, *gft):
        # fill a and b recursively from the argument list, then store
        # a+b in slot a and a-b in slot b; return the unused rest of gft
        return (lambda self, v2, a: (self.b(*v2),
                setattr(self, 'a', a + self.b),
                setattr(self, 'b', a - self.b))[0])(self, self.a(*gft), self.a)
Of course, pulling off the metaclass wizardry for just a single class is a bit of overkill, but it comes in handy when the infused functionality is shared amongst several class definitions. Here's the metaclass factory:
def add(attr):
    return lambda x, y: \
        type(x)(*[getattr(x, name) + getattr(y, name) for name in attr]) \
        if type(x) == type(y) else None

def sub(attr):
    return lambda x, y: \
        type(x)(*[getattr(x, name) - getattr(y, name) for name in attr]) \
        if type(x) == type(y) else None

def equal(attr):
    return lambda x, y: reduce(
        lambda a, b: a and b,
        [getattr(x, name) == getattr(y, name) for name in attr]) \
        if type(x) == type(y) else False

def neg(attr):
    return lambda x: type(x)(*[-getattr(x, name) for name in attr])

class AdditiveGroup(type):
    def __new__(self, name, bases, cdict):
        if cdict['__slots__']:
            cdict.setdefault('__add__', add(cdict['__slots__']))
            cdict.setdefault('__sub__', sub(cdict['__slots__']))
            cdict.setdefault('__pos__', lambda x: x)  # identity
            cdict.setdefault('__neg__', neg(cdict['__slots__']))
            cdict.setdefault('__eq__', equal(cdict['__slots__']))
        return type.__new__(self, name, bases, cdict)
    def __call__(cls, *args, **kwargs):
        instance = type.__call__(cls)
        # we could equally use kwargs.items()
        for name, value in zip(cls.__dict__['__slots__'], list(args) or [0, 0, 0, 0]):
            try:
                getattr(instance, name)
            except AttributeError:
                setattr(instance, name, value)
        return instance
All the above code does is inject initialization and additive numeric operation methods the very first time the Python interpreter reads the applicable class definitions (very faintly, Python metaclasses remind me of the C preprocessor).
OK, so far with the code above we can do simple calculations, e.g.
>>> Split(3,4) + Split(2,1)
S (5, 5)
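The other injected methods behave accordingly; a quick check of my own (the expected results follow directly from the class definitions above, they are not from the post):

>>> Split(3,4) - Split(2,1)    # -> S (1, 3)
>>> -Split(3,4)                # -> S (-3, -4)
>>> Split(3,4) == Split(3,4)   # -> True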
Let’s code up another useful class, complex numbers over a ring
class Complex(BasicAlgebra):
    """complex number over a ring"""
    __slots__ = ['a', 'b']
    def __repr__(self):
        return 'C (' + repr(self.a) + ', ' + repr(self.b) + ')'
    def __call__(self, *gft):
        # fill a, then b, from the argument list; return the unused rest
        return (lambda self, v: self.b(*self.a(*v)))(self, gft)
    def __mul__(self, other):
        return Complex(
            self.a*other.a - self.b*other.b,
            self.a*other.b + self.b*other.a) \
            if type(self) == type(other) else None
    def __invert__(self):  # inversion with non-commutative numbers
        # inv(x, y, x^-1 * y) is the inverse of (x, y) whenever x^-1 exists
        inv = (lambda x, y, ixy: (lambda ca: (ca, -ixy*ca))(~(x + y*ixy)))
        try:
            return Complex(*inv(self.a, self.b, ~self.a*self.b))
        except ZeroDivisionError:
            # a is not invertible: invert i*z = (-b, a) instead and rotate
            # back via z^-1 = (i*z)^-1 * i, i.e. (u, v) -> (-v, u)
            return Complex(*(lambda (u, v): (-v, u))(inv(-self.b, self.a, ~(-self.b)*self.a)))
Python is not known for type safety (something I enjoy about Haskell). Here, I manually implemented type guards for binary operations of the kind
if type(self) == type(other) else None
In order to implement these consistently, metaclasses turned out to be really useful. Without type guards, nonsense operations like this could happen:
>>> Complex(3,4) + Split(2,1)
(With the guard in place, the mixed addition simply returns None.) So far so good! Now implement the quaternion and matrix structures and we are on our way to building Clifford Algebras. You don't have to do it yourself; here's the entire code, Clifford.py!
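To give a flavour of what the quaternion part involves, here is a sketch of how a Quaternion class could look in the same style as Split and Complex above; the layout, the left-first ordering of the component products and the use of the standard Hamilton product are my guesses, so the real thing in Clifford.py may differ:

class Quaternion(BasicAlgebra):
    """quaternion over a ring (sketch)"""
    __slots__ = ['a', 'b', 'c', 'd']   # components along 1, i, j, k
    def __repr__(self):
        return 'Q (' + ', '.join(repr(getattr(self, n)) for n in self.__slots__) + ')'
    def __mul__(self, other):          # Hamilton product
        return Quaternion(
            self.a*other.a - self.b*other.b - self.c*other.c - self.d*other.d,
            self.a*other.b + self.b*other.a + self.c*other.d - self.d*other.c,
            self.a*other.c - self.b*other.d + self.c*other.a + self.d*other.b,
            self.a*other.d + self.b*other.c - self.c*other.b + self.d*other.a) \
            if type(self) == type(other) else None

The matrix case works the same way, presumably with its four slots read as a 2x2 matrix and the usual row-by-column product.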
Core to building up Clifford Algebras is a recursive formula; here's how one could do it:
# recursively tensor up basic algebras
def genClpq(p, q):
    if p == 0 and q == 0: return Number(0.0)
    if p == 0 and q == 1: return Complex(Number(0.0), Number(0.0))
    if p == 1 and q == 0: return Split(Number(0.0), Number(0.0))
    if p > 0 and q > 0:                  # 2x2 matrices over Cl(p-1, q-1)
        return Matrix(
            genClpq(p-1, q-1),
            genClpq(p-1, q-1),
            genClpq(p-1, q-1),
            genClpq(p-1, q-1)
        )
    if p >= 2 and q >= 0:                # 2x2 matrices over Cl(q, p-2)
        return Matrix(
            genClpq(q, p-2),
            genClpq(q, p-2),
            genClpq(q, p-2),
            genClpq(q, p-2)
        )
    if p == 0 and q >= 2:                # quaternions over Cl(q-2, 0)
        return Quaternion(
            genClpq(q-2, p),
            genClpq(q-2, p),
            genClpq(q-2, p),
            genClpq(q-2, p)
        )
Let’s do some experimentation and see what happens:
>>> from Clifford import *
>>> c1 = genClpq(0,2)
>>> c1
Q (0.0, 0.0, 0.0, 0.0)
and we see the quaternions again! And another one, a kind of Geometric Algebra:
>>> g1 = genClpq(4,1)
>>> g1
M (M (C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0)), M (C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0)), M (C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0)), M (C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0)))
a matrix of matrices of complex numbers (32 real numbers to be precise), but they are all zero! This is not very useful!
I thought about how to initialize or modify recursive structures in general and ran into this blog post about functional recursive types. The implementation I chose here can be generally understood in terms of folding data over a recursive type. Let's look at a specific example:
>>> cc0 = Complex(Complex(Number(0),Number(0)),Complex(Number(0),Number(0)))
>>> cc0
C (C (0, 0), C (0, 0))
Now, there's a higher cost associated with constructing new recursive data types (memory needs to be allocated...), however modifying an existing one is relatively cheap! Therefore, all basic algebraic types provide a call method, such as this one for complex types:
def __call__(self, *gft):
    return (lambda self, v: self.b(*self.a(*v)))(self, gft)
This call method is recursive, hence it allows a simple syntax to modify arbitrarily recursive types from the numeric data in a given argument list:
>>> cc0(1.,2.,3.,4.)
C (C (1.0, 2.0), C (3.0, 4.0))
As mentioned here, there is an isomorphism that translates complex of complex numbers into a direct sum (split) of complex numbers:
>>> cc2sc(cc0)
S (C (5.0, 1.0), C (-3.0, 5.0))
Let's see how we deal with the inverse of such numbers:
>>> ~cc0
C (C (0.05203619909502262, -0.054298642533936646), C (-0.092760180995475103, 0.14027149321266968))
and the verification:
>>> ~cc0 * cc0
C (C (0.99999999999999989, -4.8572257327350599e-17), C (0.0, 2.7755575615628914e-17))
Bummer that we have to deal with numeric precision issues! Finally, let's do the full loop:
>>> ~cc2sc(cc0) * cc2sc(cc0)
S (C (0.99999999999999989, -2.7755575615628914e-17), C (1.0, -5.5511151231257827e-17))
So, the function cc2sc nicely maps over the product and the inverse, as it should for an isomorphism.
Well, you could say that the experiments with inversion only involved commutative numbers; split of complex and complex of complex numbers are commutative. But
>>> g1 = genClpq(4,1)
from above isn't! Let's set it somehow to a non-zero value:
>>> g1(2,4,8,16,-2,-4)
M (M (C (-6.0, -12.0), C (-2.0, -4.0), C (-2.0, -4.0), C (10.0, 20.0)), M (C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0)), M (C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0), C (0.0, 0.0)), M (C (-6.0, -12.0), C (-2.0, -4.0), C (-2.0, -4.0), C (10.0, 20.0)))
This looks quite non-commutative to me! Now the test:
>>> one=genClpq(4,1)
>>> one(1.0)
>>> ~g1 * g1 == one
True
That's it for today! In summary, thanks to Python metaclasses and some smart pythonic operator overloading, the experiment to translate recursive data types and their functions from Haskell to Python didn't turn out ugly. Quite the contrary: the syntax is succinct and we avoided most of the usual code bloat.
Some Links
Clifford Numbers and their inverses
Geometric Algebra in Python