Orthosymmetrical matrices
The code for this implementation can be found in its git repository at git://git.lemen.xyz/orthosymmetrical.git or as a tarball. A PDF transcript is also available here. The outputs of the program take a while to compute. For this reason they are available in the data directory.
Let $K$ be a field and let $\mathrm{OS}_n(K)$ denote the set of orthogonal symmetric $n \times n$ matrices over $K$, i.e. all matrices $A$ with $A^\top = A$ and $A^\top A = I_n$. Does this set have an interesting structure with regard to the regular matrix operations?
The following simple lemma will be used throughout the discussion.
Let $K$ be a field and $x \in K$. Then $x^2 = 1$ implies $x = \pm 1$.
Proof. Factoring the polynomial $x^2 - 1 = (x - 1)(x + 1)$ gives $(x - 1)(x + 1) = 0$. As $K$ is a field it has no zero divisors, and so either $x = 1$ or $x = -1$.
Notation
- $I_n$ denotes the identity matrix of size $n$
- $C_n$ denotes the cyclic group of order $n$, i.e. $\mathbb{Z}/n\mathbb{Z}$
Regarding addition
The zero matrix is not an element of $\mathrm{OS}_n(K)$ since all elements of it are invertible (all elements have order at most $2$, as $A^2 = A A^\top = I_n$). Let $A$ be in $\mathrm{OS}_n(K)$. Then by definition $-A$ is also in $\mathrm{OS}_n(K)$, and $A + (-A) = 0 \notin \mathrm{OS}_n(K)$. So $\mathrm{OS}_n(K)$ is not closed under regular matrix addition.
Regarding multiplication
The identity matrix $I_n$ is fortunately included in $\mathrm{OS}_n(K)$.
The case $n \geq 3$
Consider the matrix product:
$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
Both factors are symmetric permutation matrices and so lie in $\mathrm{OS}_3(K)$. The resulting matrix is not symmetric, and by that $\mathrm{OS}_n(K)$ is not closed under multiplication for $n \geq 3$ (for $n > 3$ pad both factors with an identity block).
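As a quick sanity check, the counterexample can be verified numerically. The following sketch (plain Python, not taken from the repository) multiplies the two permutation matrices over $\mathbb{F}_2$ and tests the product for symmetry:

```python
# Verify the 3x3 counterexample: both factors are symmetric permutation
# matrices, but their product is not symmetric.
P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]  # transposition (1 2)
Q = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]  # transposition (2 3)

prod = [[sum(P[i][k] * Q[k][j] for k in range(3)) % 2 for j in range(3)]
        for i in range(3)]
print(prod)                                    # [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
print(prod == [list(r) for r in zip(*prod)])   # False: not symmetric
```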
The case $n = 1$
All elements trivially fulfill the symmetry condition $x^\top = x$. This reduces the set to $\{x \in K : x^2 = 1\}$, or in other words all square roots of unity. Using the above lemma you can conclude $\mathrm{OS}_1(K) = \{1, -1\}$. This implies $\mathrm{OS}_1(K) \cong C_2$ (for $\operatorname{char} K = 2$ the two elements coincide and the group is trivial).
The case $n = 2$
First case: $\operatorname{char} K \neq 2$
Since the elements $1$ and $-1$ are not equal, the matrix product
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
is not symmetric and therefore not contained in $\mathrm{OS}_2(K)$. Further, $\mathrm{OS}_2(K)$ can be embedded into larger $\mathrm{OS}_n(K)$ via:
$$A \mapsto \begin{pmatrix} A & 0 \\ 0 & I_{n-2} \end{pmatrix}$$
This mapping preserves multiplication, as $\begin{pmatrix} A & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} B & 0 \\ 0 & I \end{pmatrix} = \begin{pmatrix} AB & 0 \\ 0 & I \end{pmatrix}$, and restates the case $n \geq 3$.
The interesting case: $\operatorname{char} K = 2$
All elements have order at most $2$, so for $A = \begin{pmatrix} a & b \\ b & d \end{pmatrix}$ in $\mathrm{OS}_2(K)$ you get:
$$A^2 = \begin{pmatrix} a^2 + b^2 & b(a + d) \\ b(a + d) & b^2 + d^2 \end{pmatrix} = I_2$$
If $b = 0$ then $a^2 = d^2 = 1$, which by the lemma (and $1 = -1$) implies $a = d = 1$. Otherwise $a = d$, as $b(a + d) = 0$ implies $a + d = 0$ since $K$ has no zero divisors. Focusing on the first entry you get $a^2 + b^2 = (a + b)^2 = 1$ (characteristic $2$), which using the lemma implies $a + b = 1$, or in other words $b = 1 + a$. Putting these all together you get:
$$\mathrm{OS}_2(K) = \left\{ \begin{pmatrix} a & 1 + a \\ 1 + a & a \end{pmatrix} \;\middle|\; a \in K \right\}$$
Furthermore the map $a \mapsto \begin{pmatrix} a & 1 + a \\ 1 + a & a \end{pmatrix}$ is a bijection between $K$ and $\mathrm{OS}_2(K)$. What's left to show is that $\mathrm{OS}_2(K)$ is closed under multiplication:
$$\begin{pmatrix} a & 1 + a \\ 1 + a & a \end{pmatrix} \begin{pmatrix} c & 1 + c \\ 1 + c & c \end{pmatrix} = \begin{pmatrix} 1 + a + c & a + c \\ a + c & 1 + a + c \end{pmatrix}$$
So $\mathrm{OS}_2(K)$ has a group structure with regard to the regular matrix multiplication. This calculation also shows that the group is abelian, as the addition is commutative in $K$.
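The closure computation can be double-checked symbolically. The following sketch uses sympy (my choice for illustration, not necessarily what the repository does): it verifies that the product entries differ from $1 + a + c$ and $a + c$ only by terms with even coefficients, i.e. that the identity holds in characteristic $2$:

```python
# Symbolic check of the closure computation in characteristic 2.
from sympy import symbols, expand, Poly

a, c = symbols("a c")

# first row of M(a) * M(c), where M(a) = [[a, 1+a], [1+a, a]];
# the second row follows by symmetry
top_left = expand(a * c + (1 + a) * (1 + c))
top_right = expand(a * (1 + c) + (1 + a) * c)

# in characteristic 2 these must equal 1 + a + c and a + c, so the
# differences need all coefficients divisible by 2
for diff in (top_left - (1 + a + c), top_right - (a + c)):
    assert all(coeff % 2 == 0 for coeff in Poly(diff, a, c).coeffs())
print("closure identity holds in characteristic 2")
```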
$\mathrm{OS}_2(K)$ is an abelian group.
Proof. All elements have order at most $2$ by definition, so every element is its own inverse. Let $A$ and $B$ be in $\mathrm{OS}_2(K)$. Then $AB = (AB)^{-1} = B^{-1} A^{-1} = BA$. This means that all elements commute with each other.
This gives a simple classification for the finite fields:
Let $K$ be a finite field with $|K| = 2^k$, then $\mathrm{OS}_2(K) \cong C_2^k$.
Proof. Using the classification of finite abelian groups you get that $\mathrm{OS}_2(K) \cong C_2^m$ for some $m$, since every element has order at most $2$. Using the above bijection you get $|\mathrm{OS}_2(K)| = |K| = 2^k$ and you can conclude $m = k$.
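To make the classification concrete, here is a small sketch for $k = 2$. It models $\mathbb{F}_4 = \mathbb{F}_2[x]/(x^2 + x + 1)$ with elements encoded as the integers $0$ to $3$ (an encoding chosen purely for illustration) and checks that all four matrices $M(a)$ square to the identity and multiply according to the parameter law $M(a) M(c) = M(1 + a + c)$:

```python
# F_4 as F_2[x]/(x^2 + x + 1); an element is an int 0..3, bit i = coeff of x^i.
def add(a, b):
    return a ^ b                       # characteristic 2: addition is XOR

def mul(a, b):
    p = 0
    for i in range(2):                 # carry-less multiplication
        if (b >> i) & 1:
            p ^= a << i
    if p & 0b100:                      # reduce with x^2 = x + 1
        p ^= 0b111
    return p

def mat_mul(A, B):
    return [[add(mul(A[i][0], B[0][j]), mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def M(a):                              # the bijection K -> OS_2(K)
    return [[a, add(1, a)], [add(1, a), a]]

identity = [[1, 0], [0, 1]]
for a in range(4):
    assert mat_mul(M(a), M(a)) == identity   # every element has order <= 2
    for c in range(4):
        assert mat_mul(M(a), M(c)) == M(add(1, add(a, c)))
print("OS_2(F_4): group of order 4, all elements of order <= 2")
```

A group of order $4$ in which every element is its own inverse is necessarily $C_2 \times C_2$, matching the statement with $k = 2$.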
Numerical exploration
The above shows that $\mathrm{OS}_n(K)$ carries no relevant structure for $n \geq 3$. Noting that the field $\mathbb{F}_2$ has exactly $2$ elements, which can be easily mapped to binary bits, the following mapping into the integers is immediate:
$$\mathrm{OS}_n(\mathbb{F}_2) \to \mathbb{N}, \quad A = (a_{ij}) \mapsto \sum_{i,j = 1}^{n} a_{ij} \, 2^{(i-1)n + (j-1)}$$
This mapping sends a matrix to the integer represented by its binary string (in least significant bit order) and also gives rise to an ordering on $\mathrm{OS}_n(\mathbb{F}_2)$ by retracting the ordering on $\mathbb{N}$ back onto the matrices. In other words: you interpret the matrix as a single binary-represented integer. The ordering is used in the following discussion when talking about the smallest and largest matrices. With this mapping you can easily enumerate all matrices in $\mathrm{OS}_n(\mathbb{F}_2)$ by using its partial inverse map.
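A minimal sketch of the mapping and its partial inverse (function names are illustrative; the sketch assumes row-major order with the entry $a_{11}$ as the least significant bit):

```python
def matrix_to_int(m):
    """Interpret a 0/1 matrix (list of rows) as one LSB-first binary integer."""
    bits = [entry for row in m for entry in row]
    return sum(bit << i for i, bit in enumerate(bits))

def int_to_matrix(x, n):
    """Partial inverse: unpack the lowest n*n bits of x into an n x n matrix."""
    return [[(x >> (i * n + j)) & 1 for j in range(n)] for i in range(n)]

print(matrix_to_int([[1, 0], [0, 1]]))  # 9: the identity I_2 maps to 1 + 8
```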
Example. Computing the sizes of the $\mathrm{OS}_n(\mathbb{F}_2)$ for some small $n$ by brute-force checking results in this table:
| $n$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\lvert \mathrm{OS}_n(\mathbb{F}_2) \rvert$ | 1 | 2 | 4 | 20 | 76 | 752 | 5104 | 104960 | ? |
This sequence is currently not found in the OEIS, which suggests that the above is not of greater significance.
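The brute force itself can be sketched in a few lines (a straightforward version for illustration; the repository's implementation is more optimized):

```python
def int_to_matrix(x, n):
    return [[(x >> (i * n + j)) & 1 for j in range(n)] for i in range(n)]

def is_orthosymmetric(m, n):
    symmetric = all(m[i][j] == m[j][i] for i in range(n) for j in range(n))
    # A * A^T = A^2 = I over F_2
    square_is_identity = all(
        sum(m[i][k] * m[k][j] for k in range(n)) % 2 == (i == j)
        for i in range(n) for j in range(n)
    )
    return symmetric and square_is_identity

def count_os(n):
    return sum(is_orthosymmetric(int_to_matrix(x, n), n)
               for x in range(1 << (n * n)))

print([count_os(n) for n in range(1, 5)])  # [1, 2, 4, 20]
```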
Implementation details
Some speedups can be achieved when considering the inherent structure of $\mathrm{OS}_n(\mathbb{F}_2)$. As we are only considering symmetric matrices, it is enough to set half the matrix and reflect it along its diagonal. This reduces the search space from $2^{n^2}$ to $2^{n(n+1)/2}$ candidates and cuts the exponent approximately in half. It is also useful to note that this reduction preserves the ordering, as an entry on or below the diagonal is always more significant than its mirrored counterpart. In the case $n = 3$:
$$\begin{pmatrix} a_{11} & \cdot & \cdot \\ a_{21} & a_{22} & \cdot \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
Here the entries marked $\cdot$ are obtained by reflecting along the diagonal.
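A sketch of this reduction, choosing the entries on and below the diagonal as the free bits (the repository may lay this out differently):

```python
def symmetric_from_bits(x, n):
    """Unpack n*(n+1)//2 LSB-first bits into a symmetric n x n matrix."""
    m = [[0] * n for _ in range(n)]
    bit = 0
    for i in range(n):
        for j in range(i + 1):
            m[i][j] = m[j][i] = (x >> bit) & 1
            bit += 1
    return m

def count_os_symmetric(n):
    """Count |OS_n(F_2)| while only enumerating symmetric candidates."""
    count = 0
    for x in range(1 << (n * (n + 1) // 2)):
        m = symmetric_from_bits(x, n)
        # symmetry holds by construction; only A^2 = I remains to check
        if all(sum(m[i][k] * m[k][j] for k in range(n)) % 2 == (i == j)
               for i in range(n) for j in range(n)):
            count += 1
    return count

print(count_os_symmetric(4))  # 20, now with 2^10 instead of 2^16 candidates
```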
Some marginal time improvements can be gained by optimizing the start and end indices.
The smallest index is attained by the exchange matrix, which has $1$s on its anti-diagonal and $0$s elsewhere, shown here for $n = 3$:
$$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$
Proof.
You can calculate that the proposed starting matrix is indeed in $\mathrm{OS}_n(\mathbb{F}_2)$: it is symmetric and squares to $I_n$.
As each element in $\mathrm{OS}_n(\mathbb{F}_2)$ is invertible, the minimal element can not have a zero row or column.
This implies that each row must have at least one $1$.
Indeed, there is no smaller invertible matrix with that property:
None of the entries right of the anti-diagonal can be $1$, as they would make the index larger than the proposed starting index.
This leaves a matrix in triangular form with respect to the anti-diagonal.
As the matrix is invertible it needs to have full rank.
If any of the entries on the anti-diagonal were $0$, the rank would not be full.
Then if any of the entries left of the anti-diagonal were $1$, the matrix would again be bigger than the proposed one.
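A short sketch that builds the exchange matrix, confirms that it squares to the identity, and computes its index under the mapping from above:

```python
def exchange_matrix(n):
    """1s on the anti-diagonal, 0s elsewhere."""
    return [[int(i + j == n - 1) for j in range(n)] for i in range(n)]

def matrix_to_int(m):
    bits = [entry for row in m for entry in row]
    return sum(bit << i for i, bit in enumerate(bits))

E = exchange_matrix(3)
square = [[sum(E[i][k] * E[k][j] for k in range(3)) % 2 for j in range(3)]
          for i in range(3)]
print(square == [[int(i == j) for j in range(3)] for i in range(3)])  # True
print(matrix_to_int(E))  # 84: the start index for n = 3
```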
The end index depends on the parity of the dimension $n$: for even $n$ it is attained by the all-ones matrix with $0$s on its anti-diagonal, while for odd $n$ the largest element ends with the row $(0, 0, 1, \dots, 1)$. The first matrix below is the even case (here $n = 4$), the second the odd case (here $n = 3$):
$$\begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Proof.
You can check that both types of matrices are contained in their respective $\mathrm{OS}_n(\mathbb{F}_2)$.
Consider the even case first.
Every row and column needs to have an odd number of $1$s, or the square will not be equal to $I_n$: over $\mathbb{F}_2$ the diagonal entries of $A^2$ are exactly the row sums, as $(A^2)_{ii} = \sum_j a_{ij}^2 = \sum_j a_{ij}$.
It follows that every row needs to have at least one $0$, since the row length $n$ is even.
This implies that the last row is equal to the proposed row, as shifting the $0$ to any other place would decrease the associated index.
The same reasoning can be applied to the next row, with the caveat that the $0$ can't be further left, as this would duplicate the row beneath it.
The argument for the odd case is similar, with the difference that every row needs to have an even number of $0$s.
No row can consist of only $1$s, as this would again make the square not equal to $I_n$: the corresponding row of $A^2$ would contain the column sums, which are all odd.
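Both extremes can be cross-checked by brute force for small $n$ (a sketch reusing int_to_matrix and is_orthosymmetric from the counting snippet above):

```python
def extreme_indices(n):
    members = [x for x in range(1 << (n * n))
               if is_orthosymmetric(int_to_matrix(x, n), n)]
    return min(members), max(members)

for n in (2, 3, 4):
    print(n, extreme_indices(n))
# 2 (6, 9)         exchange matrix and identity
# 3 (84, 273)      exchange matrix and identity
# 4 (4680, 60855)  exchange matrix and the all-ones matrix with 0 anti-diagonal
```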