Here is a pitfall that seems hard to avoid. A client C establishes a TLS session. C knows and sends a password, and thus convinces a new context at the web site that he is a certain known entity that deserves to interact with state C' at the site.
Meanwhile another user D has established another TLS session with the same site. D guesses the values of the cookies sent to C's browser and hijacks C's session.
I presume that this can be countered by making all cookies large and noisy enough to be unguessable. But this is a step that makes cookies more like secret TLS keys. I see two nearly identical functions emerging.
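A minimal sketch of such an unguessable cookie, assuming Python's standard `secrets` module; the 128-bit size and the function name are illustrative choices, not prescriptions:

```python
# Draw the cookie from the OS CSPRNG, exactly as one would draw
# key material -- the two nearly identical functions noted above.
import secrets

def new_session_cookie(nbits: int = 128) -> str:
    # token_urlsafe reads from the OS entropy source; guessing the
    # result is as hard as guessing a secret key of the same size.
    return secrets.token_urlsafe(nbits // 8)
```

At 128 bits, a guessing attacker such as D faces odds no better than a brute-force attack on the TLS key itself.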
Perhaps there are special considerations that should limit the duration of secret TLS keys, but when it is wise to kill the key I imagine it is wise to kill the cookie as well; the thugs are pounding on the door.
Another reason to limit the cookie to an opaque value is that C should not in general have read or write access to C', his state at the site. The site must assume that the user may have modified his browser.
Imagine a packet arriving at the TCP logic of the server. First the TLS logic identifies which secret key decrypts the packet. Then it is decrypted and put in the general pool. The URI directs it to a page or process; the process extracts the unguessable cookie and perhaps dispatches among several current contexts. This seems inside out to me.
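The inside-out dispatch can be caricatured in a few lines. Every name here is hypothetical and decryption is elided; the point is only the order of operations:

```python
# Toy sketch of the dispatch order described above.
tls_sessions = {7: "C's key", 8: "D's key"}   # key id -> TLS session state
contexts = {"opaque-cookie-C": "state C'"}    # cookie -> application context

def handle_packet(key_id, payload):
    session = tls_sessions[key_id]  # TLS logic knows whose key this is...
    uri, cookie = payload           # ...but the plaintext joins the general
                                    # pool and that identity is forgotten.
    context = contexts[cookie]      # The process re-identifies the caller
    return uri, context             # from a cookie inside the payload.
```

The variable `session` is computed and then never consulted again; the identity TLS established is simply discarded.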
I can imagine a cipher clerk handing an ambassador a message saying that it came in on some back channel. The ambassador asks which but the clerk has forgotten, saying only that it is one of the good guys!
There have been a number of insidious failures of code to choose unguessable numbers. Let's not put this additional burden on each application. Having two pieces of code to produce the secret key and the secret cookie gives the attacker twice as many targets.
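One way to read this: keep a single audited routine that mints every unguessable secret, so the key and the cookie share one code path instead of two. A sketch, with hypothetical names:

```python
import secrets

def fresh_secret(nbytes: int = 16) -> bytes:
    # The one place in the system that touches the CSPRNG;
    # both TLS key material and session cookies come from here.
    return secrets.token_bytes(nbytes)

tls_key = fresh_secret()
session_cookie = fresh_secret()
```

An attacker who wants a guessable secret must now find a flaw in this one routine, not in either of two.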
I rebel at redundant function.
See, however, two kinds of guessing, which blunts some of these arguments.
Don’t transmit cookies