RFC 8439, "ChaCha20 and Poly1305 for IETF Protocols", June 2018
Source of RFC: IRTF
Errata ID: 6025
Publication Format(s): TEXT
Reported By: James Muir
Date Reported: 2020-03-21
Section 3 says (re implementation advice for Poly1305):

A constant-time but not optimal approach would be to naively implement the arithmetic operations for 288-bit integers, because even a naive implementation will not exceed 2^288 in the multiplication of (acc+block) and r.
It should say:
It is possible to create a constant-time, but not optimal, implementation by implementing arithmetic operations for 256-bit integers, because even a naive implementation will not exceed 2^256 in the multiplication of (acc+block) and r (note that we have r < 2^124 because r is "clamped").
Notes:

There are two issues: 1) 288 bits is too big, and 2) a naive implementation of 288-bit integer arithmetic is not necessarily constant time.
re #1: 288 seems to be tied to the machine int size and assumes 32-bit integers (288 bits is nine 32-bit integers). It is probably better to give a bound independent of the machine word size.
You can compute Poly1305 using 255 bit arithmetic.
Padded blocks of the message are in the range 2^8, 2^8 + 1, ..., 2^129 - 1.
Assuming that the partial reduction step always reduces the accumulator to 130 bits, we have acc < 2^130, so acc+block < 2^131.
r is a 16-byte value, but some of its bits are "clamped", so we have r < 2^124.
Thus (acc+block)*r < 2^131 * 2^124 = 2^255; so we can get by with 255-bit big-integer arithmetic (256 bits is probably more convenient to work with in practice).
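The bounds above can be checked directly with arbitrary-precision integers. The following is an illustrative sketch (not part of the erratum or the RFC); the clamping mask is the one given in Section 2.5 of RFC 8439, and the worst-case values follow the derivation above.

```python
# Sketch: verify the bit-size bounds claimed in the note, using Python's
# arbitrary-precision integers.

def clamp(r: int) -> int:
    # Clamping mask from RFC 8439, Section 2.5 (r interpreted as a
    # little-endian 16-byte integer): clears the top 4 bits of bytes
    # 3, 7, 11, 15 and the bottom 2 bits of bytes 4, 8, 12.
    return r & 0x0ffffffc0ffffffc0ffffffc0fffffff

r = clamp(2**128 - 1)   # largest possible clamped r
assert r < 2**124

block = 2**129 - 1      # largest padded message block
acc = 2**130 - 1        # worst case after partial reduction: acc < 2^130
assert acc + block < 2**131
assert (acc + block) * r < 2**255
```

Since the top nibble of the mask is zero, any clamped r fits in 124 bits, which is what drops the required width from 288 to 255 bits.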
re #2: big-integer arithmetic can be implemented in constant time, but perhaps not in an obvious or naive way. Keeping things constant time seems to depend on the characteristics of the underlying processor.
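One common structure that makes constant-time behavior achievable is a fixed-limb representation, as used in many Poly1305 implementations: the 130-bit accumulator is split into five 26-bit limbs so that the schoolbook product always runs a fixed number of iterations with no data-dependent branches or limb counts. The sketch below (illustrative only; Python itself gives no timing guarantees) shows the fixed 5x5 multiply; the limb split and modulus follow RFC 8439, while the final recombine-and-reduce shortcut stands in for the carry/reduction chain a real implementation would use.

```python
# Sketch: fixed-limb multiplication mod 2^130 - 5, the structural idiom
# behind constant-time Poly1305 code. Every input takes exactly five
# 26-bit limbs, and the multiply always computes exactly 25 partial
# products, independent of the operand values.

LIMB = 26
MASK = (1 << LIMB) - 1
P = 2**130 - 5  # the Poly1305 prime

def to_limbs(x: int) -> list:
    # Split a value < 2^130 into five 26-bit limbs, least significant first.
    return [(x >> (LIMB * i)) & MASK for i in range(5)]

def mul_mod_p(a: int, b: int) -> int:
    al, bl = to_limbs(a), to_limbs(b)
    prod = [0] * 9
    # Fixed 5x5 schoolbook multiply: the loop bounds never depend on
    # the data, which is what a constant-time implementation needs.
    for i in range(5):
        for j in range(5):
            prod[i + j] += al[i] * bl[j]
    # A real implementation would now do branch-free carry propagation
    # and reduction mod 2^130 - 5; this sketch just recombines the
    # limbs with Python bigints and reduces directly.
    return sum(prod[k] << (LIMB * k) for k in range(9)) % P
```

Note that each partial product of two 26-bit limbs fits in 52 bits, so even after summing five of them the intermediate values stay comfortably within a 64-bit machine word, which is why the 26-bit split is popular on 64-bit processors.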