This page was generated from docs/tutorials/linear-transformations.ipynb.

# Linear transformations¶

When working in regular vector spaces, a common tool is a linear transformation, typically in the form of a matrix.

While geometric algebra already provides rotors as a means of describing transformations (see the CGA tutorial section), there are types of linear transformation for which rotors are not suitable.

This tutorial leans heavily on the explanation of linear transformations in GA4CS, chapter 4. It explores the clifford.transformations submodule.

## Vector transformations in linear algebra¶

As a brief reminder, we can represent transforms in \(\mathbb{R}^3\) using the matrices in \(\mathbb{R}^{3 \times 3}\):

```
[1]:
```

```
import numpy as np

rot_and_scale_x = np.array([
    [1, 0, 0],
    [0, 1, -1],
    [0, 1, 1],
])
```

We can read this as a table, where each column corresponds to a component of the input vector, and each row a component of the output:

```
[2]:
```

```
def show_table(data, cols, rows):
    # trick to get a nice-looking table in a notebook
    import pandas as pd
    return pd.DataFrame(data, columns=cols, index=rows)
```

```
[3]:
```

```
show_table(
    rot_and_scale_x,
    [r"$\mathit{in}_%s$" % c for c in "xyz"],
    [r"$\mathit{out}_%s$" % c for c in "xyz"],
)
```

```
[3]:
```

|  | $\mathit{in}_x$ | $\mathit{in}_y$ | $\mathit{in}_z$ |
|---|---|---|---|
| $\mathit{out}_x$ | 1 | 0 | 0 |
| $\mathit{out}_y$ | 0 | 1 | -1 |
| $\mathit{out}_z$ | 0 | 1 | 1 |

We can apply it to some vectors using the `@` matrix multiply operator:

```
[4]:
```

```
v1 = np.array([1, 0, 0])
v2 = np.array([0, 1, 0])
v3 = np.array([0, 0, 1])

(
    rot_and_scale_x @ v1,
    rot_and_scale_x @ v2,
    rot_and_scale_x @ v3,
)
```

```
[4]:
```

```
(array([1, 0, 0]), array([0, 1, 1]), array([ 0, -1, 1]))
```

We say this transformation is linear because it preserves sums and scalar multiples, \(f(\alpha a + \beta b) = \alpha f(a) + \beta f(b)\):

```
[5]:
```

```
assert np.array_equal(
    rot_and_scale_x @ (2*v1 + 3*v2),
    2 * (rot_and_scale_x @ v1) + 3 * (rot_and_scale_x @ v2),
)
```
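Linearity also implies homogeneity, \(f(\lambda a) = \lambda f(a)\), which the check above exercises implicitly through the scalars 2 and 3. A minimal standalone numpy sketch of just that property (re-defining the same matrix so the snippet is self-contained):

```python
import numpy as np

# the same rotate-and-scale matrix as above
rot_and_scale_x = np.array([
    [1, 0, 0],
    [0, 1, -1],
    [0, 1, 1],
])
v = np.array([1, 2, 3])

# scaling the input scales the output by the same factor
assert np.array_equal(rot_and_scale_x @ (5 * v), 5 * (rot_and_scale_x @ v))
```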

## Multivector transformations in geometric algebra¶

How would we go about applying `rot_and_scale_x` in a geometric algebra? Clearly we can apply it to vectors in the same way as before, which we can do by unpacking coefficients and repacking them:

```
[6]:
```

```
from clifford.g3 import *
v = 2*e1 + 3*e2
v_trans = layout.MultiVector()
v_trans[1,], v_trans[2,], v_trans[3,] = rot_and_scale_x @ [v[1,], v[2,], v[3,]]
v_trans
```

```
[6]:
```

```
(2.0^e1) + (3.0^e2) + (3.0^e3)
```

However, in geometric algebra we don't only care about the vectors; we want to transform the higher-order blades too. This can be done via an outermorphism, which extends \(f(a)\) to \(f(a \wedge b) = f(a) \wedge f(b)\). This is where the `clifford.transformations` submodule comes in handy:

```
[7]:
```

```
from clifford import transformations
rot_and_scale_x_ga = transformations.OutermorphismMatrix(rot_and_scale_x, layout)
```

To apply these transformations, we use the `()` operator, rather than `@`:

```
[8]:
```

```
rot_and_scale_x_ga(e12)
```

```
[8]:
```

```
(1^e12) + (1^e13)
```

```
[9]:
```

```
# check it's an outermorphism
rot_and_scale_x_ga(e1) ^ rot_and_scale_x_ga(e2)
```

```
[9]:
```

```
(1^e12) + (1^e13)
```

It shouldn't come as a surprise that applying the transformation to the pseudoscalar tells us the determinant of our original matrix: the determinant describes how a transformation scales volumes, and `layout.I` is a representation of the unit volume element!

```
[10]:
```

```
np.linalg.det(rot_and_scale_x), rot_and_scale_x_ga(layout.I)
```

```
[10]:
```

```
(2.0, (2^e123))
```

### Matrix representation¶

Under the hood, clifford implements this using a matrix too; it's just now a matrix operating over all of the basis blades, not just over the vectors. We can see this by looking at the *private* `_matrix` attribute:

```
[11]:
```

```
show_table(
    rot_and_scale_x_ga._matrix,
    [r"$\mathit{in}_{%s}$" % c for c in layout.names],
    [r"$\mathit{out}_{%s}$" % c for c in layout.names],
)
```

```
[11]:
```

|  | $\mathit{in}_{}$ | $\mathit{in}_{e1}$ | $\mathit{in}_{e2}$ | $\mathit{in}_{e3}$ | $\mathit{in}_{e12}$ | $\mathit{in}_{e13}$ | $\mathit{in}_{e23}$ | $\mathit{in}_{e123}$ |
|---|---|---|---|---|---|---|---|---|
| $\mathit{out}_{}$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $\mathit{out}_{e1}$ | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $\mathit{out}_{e2}$ | 0 | 0 | 1 | -1 | 0 | 0 | 0 | 0 |
| $\mathit{out}_{e3}$ | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
| $\mathit{out}_{e12}$ | 0 | 0 | 0 | 0 | 1 | -1 | 0 | 0 |
| $\mathit{out}_{e13}$ | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
| $\mathit{out}_{e23}$ | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 |
| $\mathit{out}_{e123}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
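The bivector block of this matrix follows directly from the outermorphism property: since \(f(e_k \wedge e_l) = f(e_k) \wedge f(e_l)\), each entry is a \(2 \times 2\) minor of the original matrix. A plain-numpy sketch of that block (the blade ordering \(e_{12}, e_{13}, e_{23}\) here is chosen to match the table above, not read from clifford's internals):

```python
import numpy as np
from itertools import combinations

M = np.array([
    [1, 0, 0],
    [0, 1, -1],
    [0, 1, 1],
])

# bivector blades e12, e13, e23 as index pairs, matching the table ordering
pairs = list(combinations(range(3), 2))

# entry for output blade (i,j) and input blade (k,l) is the 2x2 minor
# M[i,k]*M[j,l] - M[i,l]*M[j,k], because f(ek ^ el) = f(ek) ^ f(el)
bivector_block = np.array([
    [M[i, k]*M[j, l] - M[i, l]*M[j, k] for (k, l) in pairs]
    for (i, j) in pairs
])
# reproduces the bivector rows of the table: [[1, -1, 0], [1, 1, 0], [0, 0, 2]]
assert bivector_block.tolist() == [[1, -1, 0], [1, 1, 0], [0, 0, 2]]
```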