Trillian log sequencing: demystified?
One way to view Trillian is as a database with an append-only Merkle tree. That Merkle tree is managed by a separate component called a log signer. It runs in a dedicated process that merges pending leaves which have yet to be incorporated into the Merkle tree. This part of the log signer is structured as a sequencing job that runs periodically. I spent a few hours learning more about the details, and figured that the knowledge is better shared.
Problem definition and overview
Trillian’s log signer comes with a whole bunch of configuration options that are spread across several different files. Some of these options are more difficult to grasp than others, such as `num_sequencers`, `sequencer_interval`, and `batch_size`. I don’t mean difficult as in understanding that there may be several sequencers that run periodically, but rather what that actually means in terms of concurrency and how much can be sequenced per time unit.
The short answer is as follows:
- Regardless of how many sequencers you configure, there will be no concurrent sequencing of a particular Merkle tree. The number of sequencers is only relevant if there are multiple Merkle trees.
- The sequencer interval tells you how often the log signer wakes up to do a sequencing job. It sequences no more than the configured batch size before going back to sleep. If the interval has already elapsed there will be no sleep.
In other words, to avoid building up a large backlog you need to consider how often the sequencer runs and how much work it is willing to do every time. The longer answer is detailed below, and it includes a little bit of additional context regarding what you may (not) do with these three parameters. For example, the sequencer job is most reliable if it runs often over small batches.
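For concreteness, these three options are command-line flags on the trillian_log_signer binary. A minimal (and incomplete) invocation could look something like the sketch below; the values are just examples, and a real deployment also needs storage and endpoint flags:

```
trillian_log_signer \
  --num_sequencers=10 \
  --sequencer_interval=100ms \
  --batch_size=1000
```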
Log signer
The most central part of the log signer is probably its forever loop. For reference, it is implemented by Trillian’s operation manager; see the `OperationLoop` function. In a nutshell, the log signer wakes up, performs some jobs, and goes back to sleep. Pretty much a busy working day!
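The sketch below is not Trillian’s actual code, just a minimal Go illustration of that shape of loop, assuming a `pass` callback that stands in for whatever jobs are due: run one pass, then sleep for whatever remains of the interval, and skip the sleep entirely if the pass already took longer than the interval.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runOperationLoop is a hypothetical stand-in for the shape of Trillian's
// OperationLoop: execute one pass of work, then sleep for the remainder of
// the interval. If a pass overran the interval, start the next one right away.
func runOperationLoop(ctx context.Context, interval time.Duration, pass func(context.Context)) {
	for ctx.Err() == nil {
		start := time.Now()
		pass(ctx)
		if remaining := interval - time.Since(start); remaining > 0 {
			select {
			case <-time.After(remaining):
			case <-ctx.Done():
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	runOperationLoop(ctx, 100*time.Millisecond, func(context.Context) {
		fmt.Println("pretend to schedule sequencing jobs")
	})
}
```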
Coordination
Before proceeding you need to know that a log signer may manage multiple different Merkle trees. It might sound odd at first, but hopefully less so if you think about the existence of transparency applications that use temporal sharding: the split of one transparency log into multiple smaller ones so that the leaves are grouped by, say, yearly relevance like 2020 and 2021. This allows parts of the log to be retired as soon as they are no longer needed. You can also run multiple different log signers to avoid single points of failure. At most one log signer is responsible for a particular Merkle tree at a time though. This requires coordination, which is one part of the forever loop.
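To make the coordination part a bit more concrete, here is a hypothetical Go sketch (none of the identifiers are Trillian’s) of the filtering that has to happen before any sequencing: keep only the trees that this log signer is currently master for, where a single-signer setup trivially answers yes for every tree.

```go
package main

import (
	"context"
	"fmt"
)

// election is a hypothetical stand-in for a per-tree mastership election,
// e.g. one backed by etcd when several log signers are running.
type election interface {
	IsMaster(ctx context.Context, treeID int64) bool
}

// singleSigner is the trivial case: one log signer, nothing to coordinate.
type singleSigner struct{}

func (singleSigner) IsMaster(context.Context, int64) bool { return true }

// treesToSequence keeps only the trees this log signer is currently master
// for; only those get a sequencing job scheduled in the current pass.
func treesToSequence(ctx context.Context, all []int64, e election) []int64 {
	var mine []int64
	for _, id := range all {
		if e.IsMaster(ctx, id) {
			mine = append(mine, id)
		}
	}
	return mine
}

func main() {
	trees := []int64{2020, 2021} // e.g. one tree per yearly shard
	fmt.Println(treesToSequence(context.Background(), trees, singleSigner{}))
}
```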
Sequencing
The other part is log sequencing. After waking up once per interval, as defined by the configured `sequencer_interval`, the log signer takes a pass over all Merkle trees that it manages. A sequencing job is scheduled for each Merkle tree that the log signer is currently the master for. Mastership is determined by an election protocol that coordinates multiple log signers; with a single log signer, mastership is simply assumed because there is nothing to coordinate. Next, these jobs are distributed to a number of concurrent sequencers that work on different Merkle trees. This means that there is no concurrency whatsoever while a particular Merkle tree is being sequenced. Moreover, what is selected for sequencing is deterministic, based on Trillian’s internal timestamp order and the number of leaves that the sequencer is willing to process per job.
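Again as a hypothetical Go sketch rather than Trillian’s actual code: a fixed number of sequencer workers pull tree IDs from a shared channel, so a given tree is handled by exactly one worker per pass while different trees proceed in parallel.

```go
package main

import (
	"fmt"
	"sync"
)

// sequenceTree stands in for one sequencing job: merge up to batchSize
// pending leaves of the given tree into its Merkle tree.
func sequenceTree(treeID int64, batchSize int) {
	fmt.Printf("tree %d: sequencing up to %d leaves\n", treeID, batchSize)
}

// runSequencers distributes one job per tree over numSequencers workers.
// No tree is handled by two workers in the same pass, so there is no
// concurrency within a tree, only across trees.
func runSequencers(trees []int64, numSequencers, batchSize int) {
	jobs := make(chan int64)
	var wg sync.WaitGroup
	for i := 0; i < numSequencers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for treeID := range jobs {
				sequenceTree(treeID, batchSize)
			}
		}()
	}
	for _, id := range trees {
		jobs <- id
	}
	close(jobs)
	wg.Wait()
}

func main() {
	// Two trees but four sequencers: at most two workers have anything to do.
	runSequencers([]int64{2020, 2021}, 4, 1000)
}
```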
In other words, there is no point in setting `num_sequencers` higher than the number of Merkle trees that you manage. You may set it lower, in which case at least one sequencer will move on to a different sequencing job (and thus a different Merkle tree) once it is done. Now it is also evident that the configured `sequencer_interval` and `batch_size` determine an upper bound on the number of writes per time unit. It is only an upper bound because your hardware might not be capable of handling, say, 10k writes per second.
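As a back-of-the-envelope example with made-up numbers: a batch size of 1000 combined with an interval of 100ms gives a ceiling of 10,000 sequenced leaves per second for one tree.

```go
package main

import (
	"fmt"
	"time"
)

// Upper bound on sequenced leaves per second for a single tree, given the
// two parameters discussed above (example values, not recommendations).
func main() {
	batchSize := 1000
	interval := 100 * time.Millisecond
	fmt.Printf("at most %.0f leaves/s per tree\n", float64(batchSize)/interval.Seconds())
}
```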
Concluding remarks
When I noticed the `sequencer_interval` parameter I wondered if it could be used to configure a tree head frequency, such that the Trillian front-end personality would only see a fixed number of updates per time interval. For example, you might want that because the second version of Certificate Transparency requires control over it, and some gossip-audit models assume it. If it is not supported by the Trillian back-end, the front-end personality has to take on the role. While that is trivial in the case of a single personality instance (e.g., pick a tree head every hour), it might require some additional coordination if there are multiple concurrent instances running. So, it would be convenient if the already coordinated log signer could enforce a tree head frequency.
In theory it is of course possible to set the sequencer interval and batch size to accommodate both a frequency and an upper bound for writes per second. In practice it is apparently recommended to use short intervals and batch sizes of up to 1000 leaves. This recommendation involves quite a bit of nuance, and relates to things like which back-end is used for the underlying database. If tree head frequencies are generally useful it might land as a back-end feature at some point. For now, it is better enforced by the front-end personality.
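If the personality does take on that role, the single-instance case can be as simple as the following hypothetical Go sketch: refresh the served tree head on a fixed schedule and ignore everything the back-end produces in between. The `fetchLatestRoot` helper is assumed, e.g. a thin wrapper around the log server’s `GetLatestSignedLogRoot` RPC, and is stubbed out here.

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"
)

// root is a placeholder for whatever signed tree head type the personality serves.
type root struct {
	TreeSize  uint64
	Timestamp time.Time
}

// fetchLatestRoot is assumed to wrap the back-end's latest signed log root
// lookup; stubbed out with fake data in this sketch.
func fetchLatestRoot(ctx context.Context) (root, error) {
	return root{TreeSize: 42, Timestamp: time.Now()}, nil
}

// publishedRoot holds the tree head that the personality currently serves.
// It is refreshed once per period, which yields a fixed tree head frequency
// regardless of how often the log signer produces new roots.
type publishedRoot struct {
	mu  sync.RWMutex
	cur root
}

func (p *publishedRoot) Get() root {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.cur
}

func (p *publishedRoot) refreshEvery(ctx context.Context, period time.Duration) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		if r, err := fetchLatestRoot(ctx); err != nil {
			log.Printf("failed to fetch root: %v", err)
		} else {
			p.mu.Lock()
			p.cur = r
			p.mu.Unlock()
		}
		select {
		case <-ticker.C:
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	var p publishedRoot
	go p.refreshEvery(ctx, time.Hour) // e.g., pick a tree head every hour
	<-ctx.Done()
	log.Printf("serving tree head of size %d", p.Get().TreeSize)
}
```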
Acknowledgments
This story is sponsored by my System Transparency employment at Mullvad VPN. Martin Hutchinson and Pavel Kalinnikov provided valuable insights as part of a conversation in the Trillian Slack. Mistakes, if any, are my own.