In the office this morning we had a discussion about password hashing, in particular what we could do to improve the security of web servers around the university. (You might remember the NullCrew attacks on Cambridge University a few months ago.)
One of the points of the standard password storage advice (which boils down to "use scrypt, or bcrypt, or PBKDF2, in decreasing order of preference") is that a decent iteration count should make a password hash take tens of milliseconds on a fast computer. This means your server cannot service more than a few dozen login attempts per second, so your login form becomes quite a tempting denial-of-service target.
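To make that concrete, here is a minimal sketch of calibrating PBKDF2 so that one hash takes tens of milliseconds; the target and starting count are illustrative choices, not recommendations:

    import hashlib, os, time

    def calibrate_pbkdf2(target_ms=50, iterations=100_000):
        # Double the iteration count until one hash takes at least target_ms.
        salt = os.urandom(16)
        while True:
            t0 = time.perf_counter()
            hashlib.pbkdf2_hmac('sha256', b'correct horse', salt, iterations)
            if (time.perf_counter() - t0) * 1000 >= target_ms:
                return iterations
            iterations *= 2

At 50 ms per hash a single core manages about 20 verifications per second, which is where the denial-of-service worry comes from.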
One way to mitigate this is to make the client perform a proof-of-work test that is more expensive than the password hash, which brings you a little closer to fairness in the arms race (there is a sketch of the idea after the tweet below). On Twitter, Dan Sheppard tersely suggested some refinements:
Improve client-side puzzles: bypassable with uid-tied unexpiring cookie. If supplied & not recently & not blacklisted, relax puzzles. Under DOS require this cookie to go into 1st queue, otherwise into 2nd with DOS. 5s at first login on new machine w/ spinner is ok.
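To make the proof-of-work idea concrete, here is a minimal hashcash-style sketch; the difficulty and challenge format are my own illustrative choices. The server issues a random challenge, the client searches for a nonce whose hash has enough leading zero bits, and the server checks the answer with a single hash:

    import hashlib, os
    from itertools import count

    DIFFICULTY = 20  # leading zero bits: the client does ~2**20 hashes on average

    def leading_zero_bits(digest: bytes) -> int:
        return len(digest) * 8 - int.from_bytes(digest, 'big').bit_length()

    def solve(challenge: bytes) -> int:
        # Client: brute-force a nonce until the hash clears the difficulty bar.
        for nonce in count():
            digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
            if leading_zero_bits(digest) >= DIFFICULTY:
                return nonce

    def check(challenge: bytes, nonce: int) -> bool:
        # Server: a single hash to verify the client's million-odd.
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        return leading_zero_bits(digest) >= DIFFICULTY

    challenge = os.urandom(16)  # sent to the client with the login form
    assert check(challenge, solve(challenge))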
But for real DoS resistance you need the client to be doing a lot more work than the server (something like 3 to 10 binary orders of magnitude, i.e. a factor of about 8 to 1000), not roughly the same amount. So I wondered: can you move some of the work of verifying a password from the server to the client, and require that an attacker with a copy of the password file has to do both the client's and the server's work for each trial decrypt?
I thought you could use two hash functions, "hard" and "easy". ("Easy" should still be a slow-ish password hash.) The server sends some parameters (salt and work factor) to the client, which executes the "hard" function to crypt the password, and then sends the password and crypt to the server. The server uses these as the input to the "easy" function, and checks that the result matches the stored crypt. The server stores only the first salt and the final output of the process, along with the work factors.
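Here is a minimal sketch of that flow, assuming scrypt as the "hard" function and a lighter PBKDF2 as the "easy" one; the work factors, the function names, and the decision to drop the client's crypt into the "easy" function's salt slot are all my illustrative choices:

    import hashlib, hmac, os

    HARD_N, HARD_R, HARD_P = 2**15, 8, 1  # expensive client-side work factor
    EASY_ITERS = 1_000                    # cheap-ish server-side work factor

    def hard_crypt(password: bytes, salt: bytes) -> bytes:
        # Client side: the expensive KDF.
        return hashlib.scrypt(password, salt=salt, n=HARD_N, r=HARD_R,
                              p=HARD_P, maxmem=64 * 1024 * 1024)

    def easy_crypt(password: bytes, crypt: bytes) -> bytes:
        # Server side: much cheaper; the client's crypt plays the role of
        # the salt, so unlike a normal salt it has to stay secret.
        return hashlib.pbkdf2_hmac('sha256', password, crypt, EASY_ITERS)

    def enroll(password: bytes):
        # The server keeps only the salt, the work factors, and the final output.
        salt = os.urandom(16)
        return salt, easy_crypt(password, hard_crypt(password, salt))

    def verify(password: bytes, crypt_from_client: bytes, stored: bytes) -> bool:
        # Per login the server does only the easy work; an attacker with the
        # password file must do hard + easy for every trial decrypt.
        return hmac.compare_digest(easy_crypt(password, crypt_from_client), stored)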
What might be the problems with this idea? (I will not be surprised or upset if it turns out to be completely laughable to cryptographers!)
The output of the "hard" KDF is password-equivalent, so it needs to be well protected. I'm assuming it is treated like an ephemeral key. Don't be tempted to put it in a cookie, for example.
This protocol is not a proof of work like the one described in the Stack Exchange question I linked to above: in my idea the challenge from the server to the client is always the same (which is why the result of the work is password-equivalent). But this is OK for a threat model that is satisfied by basic password authentication.
The "easy" function needs to be resistant to attacks that aim to recover the salt. As I understand it, this is not a normal requirement for password hashes (the salt is usually considered public) so an unusual construction might be needed.
Any other problems? Is this a completely foolish idea?
Of course this is a mostly useless idea, since there are much better authentication protocols. In particular, SRP (the Secure Remote Password protocol, which does password authentication and session key establishment) does all the work of hashing the password and salt on the client: observe in the protocol summary where Hash(salt, password) happens.
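Roughly, per RFC 2945, the client-side setup is:

    x = Hash(salt, password)    computed on the client
    v = g^x mod N               the verifier, stored by the server

so the server keeps only the salt and the verifier v, and never needs to see x or the password.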
What other password authentication protocols should be mentioned in this context?