My web site host has just now shut down its sftp and SSH servers due to a denial of service attack. I presume that the problem is that the server must perform a multi-precision modular exponentiation to discover that a new client is bogus. This costs many CPU cycles.

I now realize (subsequent to writing the stuff after the first rule below) that I need to collect the reasons why people want session keys. On page 494 of Handbook of Applied Cryptography they say:

  1. to limit ciphertext (under a fixed key) for cryptanalytic attack;
  2. to limit exposure, with respect to both time period and quantity of data, in the event of (session) key compromise;
  3. to avoid long term storage of a large number of distinct secret keys (in the case where one terminal communicates with a large number of others), by creating keys only when actually required;
  4. to create independence across communications sessions or applications.
Forward secrecy has recently been added to this list, though I think some old crypto systems provided it. Authentication of one or both sides may be a natural part of this. This assumes that one terminal has prior knowledge of the other. Man in the Middle must be addressed in this case.

See: Photuris, Guidelines for Cryptographic Key Management, Guidelines for Mandating Automated Key Management.


Subsequent to writing the section below titled “A version presented as a key agreement protocol” I talked to Alan Karp and Tyler Close. Alan suggests the venerable technique of one party choosing a random key and sending it to the other party encrypted under a little-used shared symmetric key. They nudged me into the following perspective. Sessions have a duration; in the case of SSH it is the duration of a TCP circuit. Shared crypto keys for encryption and message authentication have a duration as well. It is conventional to make these two durations coincide. While this affords some simplicity, it may be a bad decision in some contexts. Neither duration need be nested within the other. If either duration is short compared with the average time between sessions, then events that change the security requirements are more likely to occur between sessions than during them, and this makes it natural to reestablish a shared key simultaneously with establishing a TCP circuit. Nonetheless, there is design leverage here.
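
Here is a minimal sketch of that venerable technique, assuming Python and the third-party cryptography package; AES-GCM as the wrapping cipher and all the names below are my own choices, not anything Alan specified.

    # One party picks a random session key and sends it encrypted under a
    # little-used long-term shared symmetric key. Assumes the third-party
    # 'cryptography' package; names and parameters are illustrative.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_session_key(long_term_key: bytes):
        """Sender: choose a fresh session key and encrypt it under the long-term key."""
        session_key = os.urandom(32)
        nonce = os.urandom(12)       # AES-GCM nonce; must never repeat under one key
        wrapped = AESGCM(long_term_key).encrypt(nonce, session_key, None)
        return session_key, nonce, wrapped

    def unwrap_session_key(long_term_key: bytes, nonce: bytes, wrapped: bytes) -> bytes:
        """Receiver: recover the session key; GCM also authenticates the sender."""
        return AESGCM(long_term_key).decrypt(nonce, wrapped, None)

    long_term_key = os.urandom(32)   # in practice, distributed out of band
    sk, nonce, wrapped = wrap_session_key(long_term_key)
    assert unwrap_session_key(long_term_key, nonce, wrapped) == sk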

At one extreme the shared keys are never changed. This has been seen as a bad idea since before computers were involved. At the other extreme symmetric keys are changed every few minutes, even within a session. Issues of forward secrecy are not orthogonal here. It would seem we must throw forward secrecy, perhaps plausible deniability, and DoS attacks into the key management tradeoff.

Long-term secrets seem required. One vague argument is that

Other ideas: This provides some rationale for session keys. Another reason I have heard is that the session key is in constant use and is thus vulnerable to outright theft thru violation of whatever physical and logical measures are in place to protect it. This can be anything from timing attacks that depend on cache behavior, to breaking down a door while an electromechanical system is in use.

A version presented as a key agreement protocol

The task is to choose a shared session secret to serve in message authentication and/or symmetric encryption. This scheme is much cheaper than public key solutions, which invite denial of service attacks by forcing the server to do expensive computations before rejecting a new session. We assume here that the server knows each client. I propose length parameters on the small end.

For each registered client, the server holds a c-bit secret C which is also known to the client; c is about 64. Upon contacting the server, the client sends its alleged identity and the server responds with a 32-bit number N not used before with that client. N need not be concealed. The client computes a crypto hash of N and C and returns a k-bit portion K of the hash; k is about 32. The server computes the same hash and compares it with the returned value. The nascent session is ended with a diagnostic to the client if the hash portions do not match. If the transmitted hash portion matches, the rest of the hash serves as the shared session secret.
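
Here is a minimal sketch of this handshake in Python, using only the standard library. The choice of SHA-1 over the concatenation N || C, and every name below, are my own illustration, not a fixed part of the proposal.

    # Sketch of the proposed handshake: c ~ 64 bits for the shared secret C,
    # a 32-bit nonce N, k ~ 32 bits transmitted as K; the hash of N and C is
    # taken as SHA-1 over N || C (the encoding is an illustrative choice).
    import hashlib
    import os
    import secrets

    class Server:
        def __init__(self):
            self.clients = {}        # identity -> 64-bit secret C
            self.used_nonces = {}    # identity -> nonces already issued

        def register(self, identity: str) -> bytes:
            c = secrets.token_bytes(8)            # c is about 64 bits
            self.clients[identity] = c
            self.used_nonces[identity] = set()
            return c                              # delivered to the client out of band

        def challenge(self, identity: str) -> bytes:
            while True:
                n = os.urandom(4)                 # 32-bit N; never reused per client
                if n not in self.used_nonces[identity]:
                    self.used_nonces[identity].add(n)
                    return n                      # N need not be concealed

        def verify(self, identity: str, n: bytes, k_portion: bytes):
            digest = hashlib.sha1(n + self.clients[identity]).digest()
            if digest[:4] != k_portion:           # the k bits sent in the clear
                return None                       # end the nascent session with a diagnostic
            return digest[4:]                     # rest of the hash: shared session secret

    def client_response(c: bytes, n: bytes) -> bytes:
        return hashlib.sha1(n + c).digest()[:4]   # the k-bit portion K

    server = Server()
    secret_c = server.register("alice")
    n = server.challenge("alice")
    session_secret = server.verify("alice", n, client_response(secret_c, n))
    assert session_secret is not None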

If a client’s C is compromised, it is as if the client had been turned, and the denial-of-service dilemma is solved by deleting the server’s record of that client’s C upon any excessive resource demand by that client.

(The client might speculate on the value of N and usually avoid a round trip.)

Man in the Middle

The man in the middle does not know C. He knows N and the transmitted k bits of the hash. In a passive attack he must guess the remaining bits. This places a novel burden on the crypto hash: seeing part of the output must reveal nothing about the rest. If the crypto hash is ‘like a random function’ this is assured. The MITM thus seems to gain no advantage.

In the active attack he may change the value of N as seen by the client. I cannot see what this buys the attacker.

The attacker can also change the hash portion returned by the client but that will merely terminate the session at the server end. Changing both seems pointless.

Brute Force Attacks

The client identification and N are in the clear and known to the attacker; k bits of the hash are known as well. With this information, about 2^c offline hash calculations will produce a good guess of that client’s C. A single session’s k bits narrow the candidates to about 2^(c−k); if c is more than k (as proposed), the clear information from two sessions is enough to pin down C in this offline calculation.
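
A toy illustration of the offline search, assuming Python; C is 16 bits here so that the loop actually finishes, whereas the proposed c = 64 would cost about 2^64 hashes.

    # Offline brute force against C from observed (N, K) pairs.
    # Toy parameter: c = 16 bits; the real cost scales as 2^c.
    import hashlib

    def k_portion(c_bytes: bytes, n: bytes) -> bytes:
        return hashlib.sha1(n + c_bytes).digest()[:4]   # the k = 32 clear bits

    def brute_force(observations, c_bits=16):
        """Return every candidate C consistent with all observed (N, K) pairs."""
        return [guess for guess in range(2 ** c_bits)
                if all(k_portion(guess.to_bytes(c_bits // 8, 'big'), n) == k
                       for n, k in observations)]

    # With the proposed c = 64 and k = 32, one session leaves about 2^(c-k)
    # candidates and a second session pins C down; here k > c, so even one
    # observation almost certainly identifies C.
    secret_c = (0xBEEF).to_bytes(2, 'big')
    nonces = [b'\x00\x00\x00\x01', b'\x00\x00\x00\x02']
    observations = [(n, k_portion(secret_c, n)) for n in nonces]
    print([hex(c) for c in brute_force(observations)])  # expect ['0xbeef']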

The portion of the hash used for the symmetric encryption and message authentication must be long enough for conventional purposes.

The k bits of the hash transmitted in the clear for authentication must be enough to make a bluffing (guessing) attack pointless; with k = 32 a bluffer succeeds with probability 2^-32 per attempt. An online attack that merely guesses values for K is not much different from protocol-agnostic attacks, which this proposal does not address. Only a client table lookup and a hash computation are imposed on the target of the attack.

Performance

On my 1.8 GHz Intel processor, openssl does the necessary SHA1 in 1.6 μsec. This produces 160 bits, which is ample for k plus the session secret. If more bits are needed for the MAC and symmetric crypto, they need be computed only for authentic clients. A 1024-bit RSA operation takes 400 μsec; Diffie-Hellman should be about the same.
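
A rough way to reproduce the hash timing with Python's standard library; absolute figures will of course vary by machine and implementation.

    # Time SHA-1 over a 12-byte input (a 4-byte N plus an 8-byte C).
    import hashlib
    import timeit

    msg = b'\x00' * 12
    n_iter = 1_000_000
    seconds = timeit.timeit(lambda: hashlib.sha1(msg).digest(), number=n_iter)
    print(f"{seconds / n_iter * 1e6:.2f} usec per SHA-1")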

I do not address forward secrecy or plausible deniability, as such requirements are not uniform across applications.


Older ideas:

Here is a cheap protocol to put ahead of the public key or key-agreement computation. It is for situations where the legitimate client authenticates itself.

A server has a number of semi-secrets, perhaps as many as it has legitimate clients. Each semi-secret is a 32-bit number S and a small guessable integer index I. A client must know one of the semi-secrets of the server. The first packet sent by the client to the server includes the I of that semi-secret. The first packet from the server includes a 32-bit number N that it has not used before for this semi-secret. The client computes a secure hash of S and N and returns it to the server. The server computes then, or may have computed ahead of time, the same hash. If they agree, the server knows that it is talking to someone who indeed knows the semi-secret.

When the hash does not match, the server sends a message to the client indicating so, and probably logs the event. It does not proceed with the key agreement protocol.

If the hash matches but the subsequent authentication fails, a black mark is made against the semi-secret. After a few black marks the server cancels the semi-secret.
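
A sketch of the server-side bookkeeping, again in illustrative Python; the limit of three black marks stands in for “a few”.

    # Semi-secret screen placed ahead of the expensive key agreement.
    import hashlib
    import os

    class SemiSecretScreen:
        MAX_BLACK_MARKS = 3              # "a few"; the exact limit is my choice

        def __init__(self):
            self.semi_secrets = {}       # index I -> [32-bit S, black-mark count]
            self.next_index = 0

        def issue(self):
            i, self.next_index = self.next_index, self.next_index + 1
            s = os.urandom(4)            # the 32-bit semi-secret S
            self.semi_secrets[i] = [s, 0]
            return i, s                  # distributed with the authentication material

        def screen(self, i: int, n: bytes, response: bytes) -> bool:
            """Gate the key agreement behind one table lookup and one hash."""
            entry = self.semi_secrets.get(i)
            return entry is not None and hashlib.sha1(entry[0] + n).digest() == response

        def black_mark(self, i: int) -> None:
            """The hash matched but the real authentication then failed."""
            entry = self.semi_secrets.get(i)
            if entry is not None:
                entry[1] += 1
                if entry[1] >= self.MAX_BLACK_MARKS:
                    del self.semi_secrets[i]     # cancel the semi-secret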

Semi-secrets are distributed thru the same channels thru which the authentication material is distributed.

I must now determine why the hash does not serve as a symmetric crypto key. This idea is too simple not to have been thought of before. Perhaps it serves for authenticated clients. Diffie-Hellman works for non-authenticated clients.