Orthosymmetrical matrices

The code for this implementation can be found in its git repository at git://git.lemen.xyz/orthosymmetrical.git or as a tarball. A PDF transcript is also available here. The outputs of the program take a while to compute. For this reason they are available in the data directory.

Let $k$ be a field and let $\mathrm{OS}_n(k) = \{\, A \in k^{n \times n} \mid A = A^T \text{ and } AA^T = I_n \,\}$ denote the set of orthogonal symmetric matrices over $k$. Does this set have any interesting structure with respect to the usual matrix operations?

The following simple lemma will be used throughout the discussion.

Lemma
Let $k$ be a field and $\lambda \in k$. Then $\lambda^2 = 1$ implies $\lambda = \pm 1$.
Proof. Factoring the polynomial gives $0 = \lambda^2 - 1 = (\lambda - 1)(\lambda + 1)$. As $k$ is a field it has no zero divisors, so either $\lambda = 1$ or $\lambda = -1$.


Regarding addition

The zero matrix is not an element of $\mathrm{OS}_n(k)$, since every element of $\mathrm{OS}_n(k)$ is invertible (each element squares to $I_n$, as $AA^T = A^2 = I_n$). Let $A$ be in $\mathrm{OS}_n(k)$. Then $(-A)^T = -A^T = -A$ and $(-A)(-A)^T = (-A)(-A) = A^2 = I_n$, so $-A \in \mathrm{OS}_n(k)$. But $A + (-A) = 0 \notin \mathrm{OS}_n(k)$, so $\mathrm{OS}_n(k)$ is not closed under the usual matrix addition.

Regarding multiplication

The identity matrix is fortunately included in OSn(k).

The case n ≥ 3

Consider the matrix product:
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} \notin \mathrm{OS}_3(k)$$
Both factors lie in $\mathrm{OS}_3(k)$, but the resulting matrix is not symmetric, so $\mathrm{OS}_n(k)$ is not closed under multiplication for $n \geq 3$ (for $n > 3$ pad both factors with an identity block).
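As a quick sanity check, this product can be reproduced with a few lines of Python; the snippet below is only an illustrative sketch (the helper names are hypothetical, not taken from the repository's code).

```python
# Sketch: reproduce the 3x3 counterexample over F_2 and check that the
# product of two symmetric orthogonal matrices need not be symmetric.

def mat_mul(A, B, p=2):
    """Multiply two square matrices with entries reduced modulo p."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

A = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
B = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
C = mat_mul(A, B)

print(C)                                   # [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
print(is_symmetric(A), is_symmetric(B))    # True True  (both factors are symmetric)
print(is_symmetric(C))                     # False      (the product is not)
```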

The case n=1

All elements trivially fulfill $A = A^T$. This reduces the set to $\mathrm{OS}_1(k) = \{\, A \in k \mid A^2 = 1 \,\}$, in other words all square roots of unity. Using the above lemma you can conclude $A = \pm 1$. This implies
$$\mathrm{OS}_1(k) \cong \begin{cases} C_2 & \text{if } \operatorname{char} k \neq 2, \\ C_1 & \text{if } \operatorname{char} k = 2. \end{cases}$$

The case n=2

First case: char k ≠ 2

Since $\operatorname{char} k \neq 2$ the elements $1$ and $-1$ are not equal, and the matrix product
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
is not contained in $\mathrm{OS}_2(k)$, as it is not symmetric. Furthermore, $\mathrm{OS}_2(k)$ can be embedded into the larger $\mathrm{OS}_n(k)$ via
$$A \mapsto \begin{pmatrix} A & 0 \\ 0 & I_{n-2} \end{pmatrix}.$$
This mapping preserves multiplication and recovers the case $n \geq 3$.
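The same kind of check works for the block embedding; the sketch below (with hypothetical helper names, computed over the integers, i.e. in characteristic $0 \neq 2$) pads the 2×2 counterexample to 4×4 and confirms that the product of the embedded factors is still not symmetric.

```python
# Sketch: embed 2x2 matrices as block diagonals diag(A, I_{n-2}) and check
# that the 2x2 counterexample survives the embedding into dimension 4.

def embed(A, n):
    """Return the n x n block matrix diag(A, I_{n-2}) for a 2x2 matrix A."""
    M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    M[0][0], M[0][1], M[1][0], M[1][1] = A[0][0], A[0][1], A[1][0], A[1][1]
    return M

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

A = embed([[1, 0], [0, -1]], 4)      # symmetric, squares to I_4
B = embed([[0, 1], [1, 0]], 4)       # symmetric, squares to I_4
print(is_symmetric(mat_mul(A, B)))   # False: OS_4 is not closed either
```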

The interesting case: char k = 2

Every element squares to the identity, so you get:
$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a & b \\ b & c \end{pmatrix}^2 = \begin{pmatrix} a^2 + b^2 & ab + bc \\ ba + cb & b^2 + c^2 \end{pmatrix} = \begin{pmatrix} a^2 + b^2 & b(a+c) \\ b(a+c) & b^2 + c^2 \end{pmatrix}$$
If $b = 0$ then $a^2 = 1 = c^2$, which implies $a = 1 = c$. Otherwise $a = c$, as $0 = b(a+c)$ implies $a + c = 0$ since $k$ has no zero divisors. Focusing on the first entry you get $1 = a^2 + b^2 = (a+b)^2$, which using the lemma implies $a + b = 1$, in other words $b = a + 1$ (this also covers the case $b = 0$, $a = c = 1$). Putting it all together you get:
$$\mathrm{OS}_2(k) = \left\{ \begin{pmatrix} \lambda & \lambda+1 \\ \lambda+1 & \lambda \end{pmatrix} \;\middle|\; \lambda \in k \right\}$$
(Conversely, every matrix of this form is symmetric and squares to $I_2$, so the containment is an equality.) Furthermore the map
$$k \to \mathrm{OS}_2(k) : \lambda \mapsto \begin{pmatrix} \lambda & \lambda+1 \\ \lambda+1 & \lambda \end{pmatrix}$$
is a bijection between $k$ and $\mathrm{OS}_2(k)$. What is left to show is that $\mathrm{OS}_2(k)$ is closed under multiplication:
$$\begin{pmatrix} \lambda & \lambda+1 \\ \lambda+1 & \lambda \end{pmatrix} \begin{pmatrix} \rho+1 & \rho \\ \rho & \rho+1 \end{pmatrix} = \begin{pmatrix} \lambda+\rho & \lambda+\rho+1 \\ \lambda+\rho+1 & \lambda+\rho \end{pmatrix}$$
So $\mathrm{OS}_2(k)$ has a group structure with respect to the usual matrix multiplication. This calculation also shows that the group is abelian, as addition is commutative in $k$.
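Over $k = \mathbb{F}_2$ both the parameterization and the closure can be verified exhaustively; the following is a small sketch (the function names are illustrative, not the repository's).

```python
# Sketch: brute-force OS_2(F_2) and compare it with the parameterization
# lambda -> [[l, l+1], [l+1, l]], then check closure under multiplication.
from itertools import product

def mat_mul(A, B, p=2):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def in_os2(A, p=2):
    """A = A^T and A * A^T = I_2 over F_p (for symmetric A, A * A^T = A * A)."""
    return A[0][1] == A[1][0] and mat_mul(A, A, p) == [[1, 0], [0, 1]]

def M(l, p=2):
    """The matrix [[l, l+1], [l+1, l]] over F_p."""
    return [[l % p, (l + 1) % p], [(l + 1) % p, l % p]]

os2 = [[[a, b], [c, d]] for a, b, c, d in product(range(2), repeat=4)
       if in_os2([[a, b], [c, d]])]

print(sorted(os2) == sorted(M(l) for l in range(2)))   # True: exactly the M(l)
print(all(mat_mul(M(l), M(r)) in os2                   # True: closed under *
          for l in range(2) for r in range(2)))
```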

Lemma
For $\operatorname{char} k = 2$, $\mathrm{OS}_2(k)$ is an abelian group.
Proof. Let $A$ and $B$ be in $\mathrm{OS}_2(k)$. Every element of $\mathrm{OS}_2(k)$ squares to $I_2$ by definition, and by the closure shown above the product $BA$ lies in $\mathrm{OS}_2(k)$ as well, so $(BA)^2 = I_2$. Then
$$AB = AB\,I_2 = AB\,(BA)^2 = AB\,BA\,BA = BA.$$
This means that all elements commute with each other.

This gives a simple classification for the finite fields:

Theorem
Let $k$ be a finite field with $|k| = 2^m$. Then $\mathrm{OS}_2(k) \cong (C_2)^m$.
Proof. Every non-identity element of $\mathrm{OS}_2(k)$ has order $2$, so by the classification of finite abelian groups $\mathrm{OS}_2(k) \cong (C_2)^M$ for some $M$. Using the above bijection you get $2^m = |k| = |\mathrm{OS}_2(k)| = |(C_2)^M| = 2^M$, and you can conclude $m = M$.
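The theorem can be spot-checked for $m = 2$. The sketch below rolls its own $\mathbb{F}_4$ arithmetic (elements as two-bit integers, multiplication reduced modulo $x^2 + x + 1$; all names are illustrative) and finds exactly $4 = 2^2$ elements, which are again the matrices of the form $[[\lambda, \lambda+1], [\lambda+1, \lambda]]$.

```python
# Sketch: spot-check OS_2(F_4). F_4 elements are the integers 0..3
# (bit i = coefficient of x^i); multiplication is reduced mod x^2 + x + 1.
from itertools import product

def gf4_mul(a, b):
    r = 0
    for i in range(2):            # carry-less multiplication
        if (b >> i) & 1:
            r ^= a << i
    if r & 0b100:                 # reduce x^2 -> x + 1
        r ^= 0b111
    return r

def mat_mul(A, B):
    """2x2 matrix product over F_4 (addition in F_4 is XOR)."""
    return [[gf4_mul(A[i][0], B[0][j]) ^ gf4_mul(A[i][1], B[1][j])
             for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]
os2 = [[[a, b], [b, c]]                       # symmetric by construction
       for a, b, c in product(range(4), repeat=3)
       if mat_mul([[a, b], [b, c]], [[a, b], [b, c]]) == I2]

print(len(os2))                               # 4 = 2^2 = |F_4|
print(sorted(os2) ==                          # exactly the matrices M(lambda)
      sorted([[l, l ^ 1], [l ^ 1, l]] for l in range(4)))
```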

Numerical exploration

The above shows that $\mathrm{OS}_n(k)$ carries no relevant structure for $n \geq 3$. Noting that the field $k = \mathbb{F}_2 = \mathbb{Z}/2\mathbb{Z}$ has exactly two elements, which map naturally to binary bits, the following mapping into the integers is immediate:
$$M_{n \times n}(\mathbb{F}_2) \to \mathbb{N} : \begin{pmatrix} b_1 & b_2 & \cdots & b_n \\ b_{n+1} & b_{n+2} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n^2-n+1} & b_{n^2-n+2} & \cdots & b_{n^2} \end{pmatrix} \mapsto \sum_{i=1}^{n^2} b_i \, 2^{i-1}$$
This mapping sends a matrix, read row by row, to the integer represented by its entries as a binary string (in least-significant-bit-first order) and also gives rise to an ordering on $M_{n \times n}(\mathbb{F}_2)$ by pulling the ordering on $\mathbb{N}$ back onto the matrices. In other words: you interpret the matrix as a single integer in binary representation. This ordering is used in the following discussion when talking about the smallest and largest matrices. With this mapping you can easily enumerate all matrices in $M_{n \times n}(\mathbb{F}_2)$ by using its partial inverse.

Example n=3
$$1 = 100000000_2 \mapsto \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad\qquad 311 = 111011001_2 \mapsto \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$
(Both bit strings are written least significant bit first, matching the row-by-row reading of the matrices.)
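A minimal sketch of the mapping and its partial inverse (the function names are illustrative and not taken from the repository), reproducing the example above:

```python
# Sketch: encode an n x n matrix over F_2 as an integer (read row by row,
# least significant bit first) and decode an integer back into a matrix.

def encode(M):
    n = len(M)
    return sum(M[i][j] << (i * n + j) for i in range(n) for j in range(n))

def decode(x, n):
    return [[(x >> (i * n + j)) & 1 for j in range(n)] for i in range(n)]

print(encode([[1, 0, 0], [0, 0, 0], [0, 0, 0]]))             # 1
print(encode([[1, 1, 1], [0, 1, 1], [0, 0, 1]]))             # 311
print(decode(311, 3) == [[1, 1, 1], [0, 1, 1], [0, 0, 1]])   # True
```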

Computing the sizes of $\mathrm{OS}_n(\mathbb{F}_2)$ for some small $n$ by brute-force checking results in the following table:

n             1    2    3     4     5      6       7        8    9
|OS_n(F_2)|   1    2    4    20    76    752    5104   104960    ?

This sequence is currently not found in the OEIS, which suggests that the above is of no greater significance.
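A naive brute-force counter along these lines might look as follows (a sketch, not the repository's implementation; it walks through all $2^{n^2}$ matrices and is therefore only feasible for very small $n$):

```python
# Sketch: count |OS_n(F_2)| by enumerating all 2^(n*n) matrices.

def decode(x, n):
    return [[(x >> (i * n + j)) & 1 for j in range(n)] for i in range(n)]

def in_osn(A):
    n = len(A)
    symmetric = all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
    orthogonal = all(sum(A[i][k] * A[j][k] for k in range(n)) % 2
                     == (1 if i == j else 0)
                     for i in range(n) for j in range(n))
    return symmetric and orthogonal

for n in range(1, 5):
    count = sum(in_osn(decode(x, n)) for x in range(1 << (n * n)))
    print(n, count)    # 1 1 / 2 2 / 3 4 / 4 20, matching the table above
```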

Implementation details

Some speedups can be achieved by exploiting the inherent structure of $\mathrm{OS}_n(k)$. As we are only considering symmetric matrices, it is enough to set one half of the matrix and reflect it along the diagonal. This reduces the number of free entries from $n^2$ to $\frac{n(n+1)}{2}$, cutting it approximately in half. It is also useful to note that this reduction preserves the ordering. In the case $n = 4$:
$$(b_1, \ldots, b_{10}) \mapsto \begin{pmatrix} b_1 & b_2 & b_3 & b_4 \\ b_2 & b_5 & b_6 & b_7 \\ b_3 & b_6 & b_8 & b_9 \\ b_4 & b_7 & b_9 & b_{10} \end{pmatrix}$$
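A sketch of this reduction (again with illustrative names): enumerate only the $\frac{n(n+1)}{2}$ upper-triangle bits, mirror them across the diagonal using the same bit layout as the $n = 4$ picture above, and test $A^2 = I_n$.

```python
# Sketch: enumerate symmetric matrices directly from n(n+1)/2 bits assigned
# row-wise to the upper triangle and mirrored across the diagonal.

def decode_symmetric(x, n):
    A = [[0] * n for _ in range(n)]
    bit = 0
    for i in range(n):
        for j in range(i, n):
            A[i][j] = A[j][i] = (x >> bit) & 1
            bit += 1
    return A

def is_involution(A):
    n = len(A)
    return all(sum(A[i][k] * A[k][j] for k in range(n)) % 2
               == (1 if i == j else 0)
               for i in range(n) for j in range(n))

for n in range(1, 6):
    bits = n * (n + 1) // 2
    count = sum(is_involution(decode_symmetric(x, n)) for x in range(1 << bits))
    print(n, count)    # 1 1 / 2 2 / 3 4 / 4 20 / 5 76, matching the table
```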

Some marginal time improvements can be gained by optimizing the start and end indices. The smallest element is the matrix with $1$s on its anti-diagonal and $0$s everywhere else, shown here for $n = 6$:
$$\begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$
Proof. You can calculate that the proposed starting matrix is indeed in $\mathrm{OS}_n(k)$.
As each element of $\mathrm{OS}_n(k)$ is invertible, the minimal element cannot have a zero row or column, so each row must contain at least one $1$. Indeed, there is no smaller invertible matrix at all:
None of the entries to the right of the anti-diagonal can be $1$: comparing from the most significant entry downwards, the first such $1$ would already make the matrix larger than the proposed one. This leaves a matrix in (anti-)triangular form. As the matrix is invertible it needs to have full rank, and if any entry on the anti-diagonal were $0$ the rank would not be full. Finally, if any entry to the left of the anti-diagonal were $1$, the matrix would again be larger than the proposed one.

The end index depends on the parity of the dimension. The first matrix below is the end index for even dimension (shown for $n = 6$), the second for odd dimension (shown for $n = 7$).

$$\begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}$$

Proof. You can check that both types of matrices are contained in their respective $\mathrm{OS}_n(k)$.
Consider the even case first. Every row and column needs to contain an odd number of $0$s, or the square will not equal $I_n$ (the $i$-th diagonal entry of $A^2$ is the number of $1$s in row $i$ modulo $2$, so each row must contain an odd number of $1$s; for even $n$ this means an odd number of $0$s). It follows that every row needs at least one $0$. This implies that the last row equals the proposed one, as shifting its single $0$ to any other place (or adding more $0$s) would decrease the associated index. The same reasoning applies to the rows above it, with the caveat that the $0$ cannot be placed further to the left, as this would duplicate a row beneath it and the matrix would no longer be invertible.
The argument for the odd case is similar, with the difference that every row needs to contain an even number of $0$s. No row can consist of only $1$s, as this would again make the square differ from $I_n$.
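As a closing sanity check, the proposed start and end matrices can be generated for a given $n$ and verified to lie in $\mathrm{OS}_n(\mathbb{F}_2)$. The sketch below builds them from the descriptions above (whether the repository constructs them the same way is an assumption; all names are illustrative).

```python
# Sketch: build the claimed smallest and largest elements of OS_n(F_2)
# and check that they are symmetric involutions.

def start_matrix(n):
    """1s on the anti-diagonal, 0s everywhere else."""
    return [[1 if j == n - 1 - i else 0 for j in range(n)] for i in range(n)]

def end_matrix(n):
    if n % 2 == 0:
        # even n: all ones, with 0s on the anti-diagonal
        return [[0 if j == n - 1 - i else 1 for j in range(n)] for i in range(n)]
    # odd n: a single 1 in the top-left corner, then the even-case block
    A = [[0] * n for _ in range(n)]
    A[0][0] = 1
    block = end_matrix(n - 1)
    for i in range(n - 1):
        for j in range(n - 1):
            A[i + 1][j + 1] = block[i][j]
    return A

def in_osn(A):
    n = len(A)
    symmetric = all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
    involution = all(sum(A[i][k] * A[k][j] for k in range(n)) % 2
                     == (1 if i == j else 0)
                     for i in range(n) for j in range(n))
    return symmetric and involution

print(all(in_osn(start_matrix(n)) and in_osn(end_matrix(n))
          for n in range(2, 9)))    # True
```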