Last fall, Ilkka and I were asked to contribute to a secret volume dedicated to Iiro Honkala’s 60th birthday. (It is not out yet, but I assume he won’t see this post, and this is sort of spoiled anyway by Fundamenta Informaticae’s policy of taking submissions through arXiv.)
We decided to try out the methods we’ve developed for Game of Life on some problems in coding theory, and had at least some success: we managed to lower the best known density of an identifying code on the hexagonal grid from the long-standing record 3/7 (established here by Cohen, Honkala, Lobstein and Zémor) to 53/126. Our paper can be found here. Sorry about all the ergodic theory; density is just annoying to discuss without it.
The way we found codes was to simply write a finite-state automaton for periodic configurations of an SFT, and then apply Karp’s algorithm; I won’t explain this here, just look at the paper. While the codes in our paper were found by ad hoc programs, we also began writing a general toolbox, under the working name Diddy, which can be found in this Git repository.
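For readers curious about the second step, here is a rough Python sketch of Karp’s minimum mean cycle algorithm (this is my illustration, not the code from our paper or from Diddy). Roughly speaking, in our setting the vertices would be the states of the automaton, the edge weights would count codewords, and the minimum cycle mean then corresponds to the minimum density among the periodic configurations the automaton recognizes.

def min_mean_cycle(n, edges):
    # n: number of vertices (labeled 0..n-1); edges: list of (u, v, weight).
    # Returns the minimum average edge weight over all cycles, or None if the
    # graph has no cycle.
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # where the walk may start anywhere (so d[0][v] = 0 for every v).
    d = [[0.0] * n] + [[INF] * n for _ in range(n)]
    for k in range(1, n + 1):
        for (u, v, w) in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp's formula: the minimum cycle mean equals
        # min over v of max over k of (d[n][v] - d[k][v]) / (n - k).
        val = max((d[n][v] - d[k][v]) / (n - k) for k in range(n) if d[k][v] < INF)
        if best is None or val < best:
            best = val
    return best

# Toy usage: a two-vertex graph with a self-loop of mean 1 and a two-cycle of mean 2.
print(min_mean_cycle(2, [(0, 0, 1), (0, 1, 1), (1, 0, 3)]))  # 1.0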
What I want to write about here is the other half of the tool, which is not (at the time of writing) available anywhere, but already works well enough that I can give a short demo. Specifically, what we are making is a programming language for making SFTs (subshifts of finite type), which here means shift-invariant (Cantor-)topologically closed subsets of $A^{\mathbb{Z}^d}$ for some finite alphabet $A$ and some dimension $d$.
The idea is that you specify
- a dimension $d$,
- a topology (a graph where $\mathbb{Z}^d$ acts freely and almost transitively),
- an alphabet (of a subshift), and
- a first-order formula (with bounded quantifiers) with variables ranging over positions.
From this data, a subshift of finite type is computed. The idea is to evolve this into a general toolbox for symbolic dynamics, or at least for the parts of it we work on. In particular, we are currently working on integrating density calculation into this framework.
For now, the main thing that has been implemented is comparison of SFTs, i.e. you can define two SFTs and ask whether they are equal as sets. The equality of SFTs is literally an impossible (undecidable) problem. Specifically, it is $\Sigma^0_1$-complete, i.e. in a sense exactly as hard as the halting problem. Nevertheless, the program seems to work well on simple examples that arise in practice.
Here’s a first example:
%alphabet 0 1
%SFT fullshift Ao 0=0
%SFT fullshift2 Ao o=o
%SFT not_fullshift Ao o=0
%compare_SFT_pairs
Let’s go through what this says. Each line starting with % is a command. The first command sets the alphabet to $\{0, 1\}$ (it’s the default alphabet, so this is mainly for the reader’s benefit). The next three lines define subshifts of finite type, by default in two dimensions, on the standard grid. The %SFT command takes two arguments, the name of the SFT and a formula defining it, so we are defining three SFTs here. The last command compares all 6 ordered pairs of SFTs for inclusions.
Before we look at the output, let’s try to make sense of the formulas, and explain what these SFTs are supposed to be. First we have the formula Ao 0=0. The symbol A stands for $\forall$ (just like in Walnut). 0=0 simply stands for “true”, as there is no keyword for this yet (we compare the element 0 of the alphabet to itself). We should read this formula as $\forall o : 0 = 0$, and the SFT it defines is $\{x \in \{0,1\}^{\mathbb{Z}^2} \mid \forall o \in \mathbb{Z}^2 : 0 = 0\}$. This, of course, is a full shift, as the name suggests it is intended to be.
The next formula can be interpreted similarly, but now instead of comparing 0 and 0 for equality, we are comparing o to itself. This is interpreted as comparing the values in position o, thus the SFT defined is $\{x \in \{0,1\}^{\mathbb{Z}^2} \mid \forall o \in \mathbb{Z}^2 : x_o = x_o\}$, which is again the full shift.
Finally, the third formula compares the value in position o to the symbol 0, and defines $\{x \in \{0,1\}^{\mathbb{Z}^2} \mid \forall o \in \mathbb{Z}^2 : x_o = 0\}$. This, of course, is the one-point subshift $\{0^{\mathbb{Z}^2}\}$.
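Just to spell out the three conditions once more, here they are as plain Python predicates evaluated on a finite patch (this is only my illustration of what the formulas say, not how the tool evaluates them):

# A 3x3 patch standing in for a configuration, with a single 1 in the middle.
patch = {(i, j): 0 for i in range(3) for j in range(3)}
patch[(1, 1)] = 1

fullshift     = all(0 == 0 for p in patch)                 # Ao 0=0: always true
fullshift2    = all(patch[p] == patch[p] for p in patch)   # Ao o=o: also always true
not_fullshift = all(patch[p] == 0 for p in patch)          # Ao o=0: forces all zeros

print(fullshift, fullshift2, not_fullshift)  # True True False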
The output of %compare_SFT_pairs, then, should indicate that the two full shifts are equal, and that the one-point subshift is strictly smaller than them. This is indeed what happens:
Testing whether fullshift contains fullshift2.
fullshift CONTAINS fullshift2 (radius 1, time 0.0)
Testing whether fullshift contains not_fullshift.
fullshift CONTAINS not_fullshift (radius 1, time 0.002385377883911133)
Testing whether fullshift2 contains fullshift.
fullshift2 CONTAINS fullshift (radius 1, time 0.0)
Testing whether fullshift2 contains not_fullshift.
fullshift2 CONTAINS not_fullshift (radius 1, time 0.0020220279693603516)
Testing whether not_fullshift contains fullshift.
not_fullshift DOES NOT CONTAIN fullshift (radius 1, time 0.0009927749633789062)
Testing whether not_fullshift contains fullshift2.
not_fullshift DOES NOT CONTAIN fullshift2 (radius 1, time 0.0020012855529785156)
The time is in seconds, and the radius refers to the size of the blocks we need to look at in the proof. As mentioned, the question of whether $X \subseteq Y$ holds is literally impossible to solve in general. The way this is checked is basically Wang’s classical partial algorithm for the tiling problem, i.e. for the impossible direction we rely on periodic points.
Specifically, to prove the inclusion $X \subseteq Y$, we check that for some $n$, there is no way to tile an $n \times n$ box under the constraints of $X$ such that a forbidden pattern of $Y$ appears. This we check with a SAT solver. For non-inclusion (which is the generally impossible direction), we show that $X$ has a periodic point that contains a forbidden pattern for $Y$. This works at least if $X$ has dense periodic points, and sometimes works even if it doesn’t. Again, we check this using a SAT solver.
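To give a feel for the inclusion direction, here is a one-dimensional toy version in Python, with brute force in place of the SAT solver (the setup and names here are mine, not Diddy’s): the two SFTs are given by lists of forbidden words over {0,1}, and a successful check at some length n certifies the inclusion, while a failed check is inconclusive.

from itertools import product

def avoids(word, forbidden):
    # True if no forbidden word occurs in word.
    return not any(f in word for f in forbidden)

def certifies_inclusion(forbidden_X, forbidden_Y, n):
    # True if every length-n word avoiding forbidden_X also avoids forbidden_Y.
    # If this holds for some n at least the length of the words in forbidden_Y,
    # then every configuration of X avoids forbidden_Y, i.e. X is contained in Y.
    for bits in product("01", repeat=n):
        word = "".join(bits)
        if avoids(word, forbidden_X) and not avoids(word, forbidden_Y):
            return False  # a locally X-valid word exhibiting a Y-violation
    return True

# Example: X forbids "11" (the golden mean shift), Y forbids "111".
# Already at n = 3 the check succeeds, so X is contained in Y.
print(certifies_inclusion(["11"], ["111"], 3))  # True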
Now let’s look at a more advanced example.
%topology hex
%SFT idcode Ao let c u v := v = 1 & u ~ v in
(Ed[o1] c o d) & (Ap[o2] p !@ o -> Eq[o1p1] (c o q & ! c p q) | (c p q & !c o q))
%SFT idcode2
(0,0,0):0 (0,0,1):0 (1,0,0):0 (0,-1,0):0;
(0,0,1):0 (1,1,1):0 (2,0,0):0 (1,-1,0):0;
(0,0,1):0 (1,1,0):0 (1,0,1):0 (2,1,0):0;
(0,0,0):0 (0,0,1):0 (0,-1,0):0 (1,1,0):0 (1,1,1):0 (2,1,0):0;
(0,0,0):0 (0,0,1):0 (1,0,0):0 (0,-1,1):0 (1,-1,0):0 (0,-2,0):0;
(0,0,0):0 (0,0,1):0 (0,-1,0):0 (1,0,1):0 (2,0,0):0 (1,-1,0):0;
(0,0,1):0 (1,0,0):0 (1,0,1):0 (1,1,1):0;
(0,0,0):0 (0,-1,0):0 (1,0,1):0 (1,1,1):0;
(0,0,1):0 (1,0,0):0 (1,1,1):0 (0,-1,1):0 (1,-1,0):0 (1,-1,1):0;
(0,0,1):0 (1,0,0):0 (1,0,1):0 (2,1,0):0 (2,1,1):0 (2,2,1):0;
(0,0,1):0 (1,0,0):0 (1,1,1):0 (2,0,0):0 (2,0,1):0 (2,1,1):0;
%compare_SFT_pairs
Here, the topology is the hexagonal one (also known as the honeycomb grid), which looks like this:

[figure: the hexagonal grid]
The code defines the SFT of identifying codes on this grid in two ways. Let’s recall what an identifying code is: an identifying code on a graph $G = (V, E)$ is a set $C \subseteq V$ such that, if we write $N[v]$ for $\{v\} \cup \{u \in V \mid u \sim v\}$ (this is the closed neighborhood of $v$), then
- for each $v \in V$, $N[v] \cap C \neq \emptyset$, and
- for each $u, v \in V$, $N[u] \cap C \neq N[v] \cap C$.
The standard way of thinking about this, and the rationale for the term “identifying code”, is that $C$ is a set of error-detecting nodes, which can detect “defects” in the grid by reporting an error if their closed neighborhood contains a node with a defect, but the report does not tell which node has the defect.
Assuming at most one defect occurs in the grid, an identifying code is precisely what allows you to tell, just by looking at the reports of all the error-detecting nodes, whether there is a defect in the grid (due to the first item above) and also to identify which node has the defect (by the second item). This explains the word identifying. The word “code” is used simply because all sets studied in coding theory are called codes; accordingly, the elements of $C$ are called codewords.
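To make the definition concrete, here is a small Python check of the identifying code condition on a finite graph (again just an illustration of mine, not something the tool does): it computes the signature $N[v] \cap C$ of every vertex and verifies that the signatures are nonempty and that distinct vertices get distinct signatures.

def closed_neighborhood(graph, v):
    # graph: dict mapping each vertex to an iterable of its neighbors.
    return {v} | set(graph[v])

def is_identifying_code(graph, code):
    signatures = {}
    for v in graph:
        sig = frozenset(closed_neighborhood(graph, v) & code)
        if not sig:               # first condition: N[v] ∩ C is nonempty
            return False
        if sig in signatures:     # second condition: distinct vertices get distinct signatures
            return False
        signatures[sig] = v
    return True

# On the 4-cycle 0-1-2-3-0, the set {0, 1, 2} is an identifying code, but {0, 2} is not:
# vertices 1 and 3 would both get the signature {0, 2}.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_identifying_code(C4, {0, 1, 2}))  # True
print(is_identifying_code(C4, {0, 2}))     # False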
Of course, a subshift, being a subset of $\{0,1\}^V$, is not literally a set of codes, but we simply identify a code $C \subseteq V$ with its characteristic function, so codes correspond to configurations over $\{0,1\}$ and the symbol $1$ denotes a codeword.
Now let’s look at the formula Ao let c u v := v = 1 & u ~ v in (Ed[o1] c o d) & (Ap[o2] p !@ o -> Eq[o1p1] (c o q & ! c p q) | (c p q & !c o q)) defining the first SFT idcode. We claim that this simply states the definition of an identifying code.
First, as in our first example, Ao quantifies over all positions. The subformula let c u v := v = 1 & u ~ v in defines a predicate that can be used in the formula after in. The predicate c u v being defined has two parameters u and v (by default, all variables denote positions on the grid). The predicate says v = 1 & u ~ v, i.e. the position v in the grid holds the symbol 1, and ~ denotes that u and v are adjacent or equal positions. In other words, this predicate says that v is a codeword neighbor of u, or in math notation $v \in N[u] \cap C$.
After this definition, we have (Ed[o1] c o d) & (Ap[o2] p !@ o -> Eq[o1p1] (c o q & ! c p q) | (c p q & !c o q)). The symbols &, |, ! refer to the Boolean operations AND, OR and NOT, so this is the conjunction of two formulas Ed[o1] c o d and (Ap[o2] p !@ o -> Eq[o1p1] (c o q & ! c p q) | (c p q & !c o q)). The first formula Ed[o1] c o d corresponds to the first item of the definition of an identifying code, and says o must have at least one codeword neighbor d. The symbol E means $\exists$.
The main thing to explain in the first formula Ed[o1] c o d is [o1]. This is a technical limitation: all quantifiers except the first one must be bounded in the defining formula. In the definition of an identifying code, we simply say that a set (namely $N[v] \cap C$) is nonempty, so when translating to a first-order formula, a natural way would be to say that there is some element of the graph which is in this intersection (and which could a priori be anywhere), but we must explicitly indicate that we only look at the immediate neighborhood of o. There is much to say about bounded quantifiers, and I may write about this in the future, but not here. In any case, this is not a real limitation here since, of course, a codeword neighbor is in particular a neighbor.
Next, we have Ap[o2] p !@ o -> Eq[o1p1] (c o q & ! c p q) | (c p q & !c o q), which is supposed to correspond to the second item. To read this, one must know that our convention is that quantifiers see everything after them unless prevented by parentheses, i.e. (c o q & ! c p q) | (c p q & !c o q) is seen by Eq. The operator a !@ b means “a and b do not denote the same position”.
We can now read the formula in plain English: “for all positions p at most two steps away from o, if p and o are not the same position, there must exist some position q (next to p or o) which is a codeword neighbor of exactly one of them”. Apart from the quantifier bounds, this is quite literally the definition of an identifying code, except that here we remembered to be careful and not require distinct codeword neighborhoods when the nodes are literally the same; in the official definition above I accidentally left this out; a human reader will presumably read this as a typo (do I guess correctly?), while our program will happily let the identifying codes form an empty SFT, and I certainly made this mistake many times during testing as well.
As for the quantifier bounds, we simply gave it a moment of thought to see how big they must be. If in doubt, one can make them larger and check that the SFT does not change.
Ok, so that’s all about the definition of idcode. Next let’s look at idcode2. This is simply a long list of forbidden patterns, which we worked out on the blackboard. I won’t try to explain them in this post, because they deal directly with the internal representation of the hexagonal grid; maybe I’ll write about that later. In any case, we can now check that the forbidden patterns are indeed correct, by inspecting the output of our code:
Testing whether idcode contains idcode2.
idcode CONTAINS idcode2 (radius 2, time 0.9418075084686279)
Testing whether idcode2 contains idcode.
idcode2 CONTAINS idcode (radius 1, time 0.1818685531616211)
Pretty cool, right?