While seeking attacks on my variation on CBCC in the last few weeks, I have learned a great deal about authentication of CBC packets. I record here much of what I have learned because it is easy to forget and the source material is hard to find and read. Others may find it useful.
First, an important misconception that some may hold, as I did. CBC encryption of a packet smears plaintext differences thruout the current and subsequent cipher blocks. By contrast, a one bit error in the ciphertext shows up after CBC decryption only as a garbled current block and a flip of that same bit in the next block. This is indeed what it means to be self synchronizing. An immediate consequence is that cutting and pasting ciphertext from different packets encrypted with the same key will result largely in the corresponding plaintext upon decryption; there will be one garbled block at the splice in the plaintext. The inventors of CBCC presumed that this garble would cause a failure in the simple redundancy check.
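To make the error propagation and the cut-and-paste effect concrete, here is a small sketch in Python. It assumes the pycryptodome package for AES; the key, IV and payload values are made up for illustration.

    from Crypto.Cipher import AES   # assumption: pycryptodome supplies AES
    import os

    key = os.urandom(16)
    iv  = os.urandom(16)
    pt  = os.urandom(64)            # four 16-byte plaintext blocks
    ct  = AES.new(key, AES.MODE_CBC, iv).encrypt(pt)

    # One bit error in ciphertext block 1 garbles plaintext block 1 and
    # flips exactly that bit in plaintext block 2; other blocks survive.
    bad = bytearray(ct)
    bad[16] ^= 0x01
    out = AES.new(key, AES.MODE_CBC, iv).decrypt(bytes(bad))
    for i in range(4):
        diff = bytes(a ^ b for a, b in zip(out[16*i:16*(i+1)], pt[16*i:16*(i+1)]))
        print(i, diff.hex())

    # Cut and paste: splice the tail of a second packet, encrypted under the
    # same key, onto the head of the first.  Only the block at the splice
    # decrypts to garbage; everything else is valid plaintext.
    pt2 = os.urandom(64)
    ct2 = AES.new(key, AES.MODE_CBC, iv).encrypt(pt2)
    out2 = AES.new(key, AES.MODE_CBC, iv).decrypt(ct[:32] + ct2[32:])
    assert out2[:32] == pt[:32] and out2[48:] == pt2[48:]   # only out2[32:48] is garbled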
First I collect a few schemes. They have in common some sort of simple cheap redundancy added to the plaintext before the encryption and checked after the decryption.
In each of the following it is required that the payload, P1, P2, ..., Pn, satisfy a redundancy check to be specified later.
Schemes 1 and 2 are identical but their expressions fit into different patterns. Scheme 3 characterizes the Kerberos V5 message packet, where the random number is called the “confounder”.
The Kerberos redundancy check used a 32 bit CRC. See:
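As a rough illustration of that layout, here is a sketch only; the field sizes and the use of zlib.crc32 are stand-ins (the real Kerberos des-cbc-crc encryption type has its own CRC-32 variant and padding rules), and the function names are mine.

    import os, struct, zlib

    def kerberos_style_plaintext(payload: bytes) -> bytes:
        confounder = os.urandom(8)                    # the random "confounder"
        zeroed = confounder + b"\x00" * 4 + payload   # CRC field held at zero
        crc = struct.pack("<I", zlib.crc32(zeroed))   # 32 bit redundancy check
        return confounder + crc + payload             # this whole thing is CBC encrypted

    def crc_check(plaintext: bytes) -> bool:
        confounder, crc, payload = plaintext[:8], plaintext[8:12], plaintext[12:]
        return crc == struct.pack("<I", zlib.crc32(confounder + b"\x00" * 4 + payload))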
S. Stubblebine and V. Gligor, “On Message Integrity in Cryptographic Protocols”, IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, May 1992, pp. 85-104. (Electronic version not available.)

That attack manages, with about 5500 chosen plaintext packets, to produce a bogus ciphertext which decrypts to plaintext where most of the payload is controlled by the attacker. The attack requires chosen plaintext of about 2^(n/2) blocks, where n is the number of redundant bits.
Given that ciphertext errors do not propagate to the end of decrypted plaintext, the redundancy check must cover all of the plaintext, as does a simple checksum.
I advocate either a simple arithmetic add or boolean xor of 128 bits, probably in agreement with the width of the block cipher. The redundancy requirement is that the sum of the plaintext be 0. I think that the only properties of this redundancy that we require stem from being an abelian group. Indeed any simply computed abelian group with 2^128 elements will do. If “•” is the group operator then R0 = group identity and, for i from 1 to n, Ri = Ri−1 • Pi. The redundancy requirement is that Rn = R0.
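A minimal sketch of the receiver’s check with xor as the group operation (the names are mine; an arithmetic add mod 2^128 would serve equally well):

    BLOCK = 16   # 128 bit blocks, matching the block cipher width

    def redundancy_ok(plaintext: bytes) -> bool:
        # Fold the decrypted blocks together: R0 = identity, Ri = Ri-1 xor Pi.
        # The packet passes the redundancy check only if Rn comes back to R0.
        assert len(plaintext) % BLOCK == 0
        r = bytes(BLOCK)
        for i in range(0, len(plaintext), BLOCK):
            r = bytes(a ^ b for a, b in zip(r, plaintext[i:i + BLOCK]))
        return r == bytes(BLOCK)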
I have chosen to describe the redundancy check without saying which part of the plaintext serves as “the check”. The code that performs this check after the decryption does not depend on this. There are even situations where the payload provider can ensure redundancy without extra space.
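For concreteness, one way a sender can arrange this, reusing BLOCK from the sketch above (again only an illustration; any block position could serve as the check, and the verifier never needs to know which one it was):

    def add_check_block(payload: bytes) -> bytes:
        # Append one block chosen so that the xor of every block,
        # including the new one, is zero; redundancy_ok then passes.
        assert len(payload) % BLOCK == 0
        check = bytes(BLOCK)
        for i in range(0, len(payload), BLOCK):
            check = bytes(a ^ b for a, b in zip(check, payload[i:i + BLOCK]))
        return payload + check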
The purposes that I have in mind require strong control over packet serial numbers. I assume a crypto environment where it is as hard to defeat the serial number counter on either end as it is to steal the key. I think that either of the equivalent schemes 1 or 2 above, combined with the abelian redundancy, provides this protection. I can imagine other environments where this may not be the case. Some additional attacks, assuming oracles without counters, may be possible where one can reset the counter without changing or discovering the key. If the decryptor or encryptor, with its state, can be duplicated, there may be a problem.