Client-Side Password Hashing

A lot of the advice on password hashing says that client-side password hashing is unnecessary, provided you are using HTTPS or another secure transport.

Is it Superfluous?

The typical argument is that hashing on the client side means the hash effectively becomes the password, so an attacker sniffing the hashed password could simply replay it to log in to your service. This is true, and the obvious answer is to use HTTPS.

Another argument is that if you are not using HTTPS, an attacker could simply replace the Javascript code with one of their own choosing, bypassing the hashing or recording keystrokes. This is true as well, and means there is no way around HTTPS.

However, this ignores why passwords are hashed in the first place, and what attacks password hashing defends against: passwords are hashed so that, in case of a security breach on your side, the “plain text” user passwords are not leaked.

Typically password hashing takes place on the server-side, to avoid storing plain text passwords in a database, so that if/when the database is leaked, user passwords are not at risk.
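As a sketch of that server-side half, here is a minimal version assuming Python and the standard library's PBKDF2 (the function names and the round count are illustrative, not a recommendation for production):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt ensures identical passwords hash differently.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # store both; the raw password is discarded

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```

If the database leaks, an attacker gets salts and digests, not passwords; they must brute-force each account separately at 100,000 hash iterations per guess.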

HTTPS only covers Transport

However, even when using HTTPS, sending raw passwords opens up two server-side vulnerabilities:

  • Server code still manipulates raw passwords at some point; any unlucky bug could thus end up revealing those passwords
  • Server logs may capture raw passwords, and could end up forming a “shadow” database of user credentials

Performing some hashing on the client side, even lightweight hashing, greatly reduces those vulnerabilities: the server never sees the raw user passwords at any point.
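A minimal sketch of such a lightweight client-side pre-hash (shown in Python for readability; real client code would run in the browser, e.g. via WebCrypto, and the salt scheme, domain, and function name here are illustrative assumptions):

```python
import hashlib

def client_prehash(password: str, username: str, domain: str = "example.com") -> str:
    # Deterministic salt: the same user on the same site always produces
    # the same pre-hash, so the server can treat it as "the password".
    # Including the domain makes the pre-hash useless on other sites.
    salt = f"{domain}:{username}".encode()
    # A modest round count keeps this fast on low-end client hardware;
    # the server still applies its own heavier hashing on top.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000).hex()
```

The server receives only this hex digest and stores its own salted hash of it, so neither server bugs nor server logs ever touch the raw password.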

It also closes another major, and often overlooked vulnerability.

Corporate Local / Transparent Proxies

This is probably the major reason why you should do some client-side hashing.

In many (most?) countries, corporations are required to log traffic (and other electronic data) in case it is needed for legal investigations. The reasons for this abound, ranging from Enron-style fraud to trafficking, or generally bad actors (ab)using a corporate network.

To achieve this, a common solution is to install a custom Root CA, a.k.a. a “private trust anchor”. This custom Root CA is typically used to reduce certificate signing costs and overhead for internal HTTPS services; a less well-known fact is that it also enables local transparent proxies. Custom Root CAs are often silently deployed through a GPO on Windows, and through similar means on other OSes.

If you are using a standard Windows installation and a regular browser, HTTPS connections go through the regular certificate chain, rooted in a public certificate authority. You also benefit from extra security layers like public key pinning.

But when a custom Root CA is installed, all of that goes out the window: the custom Root CA allows the corporate proxies to issue “valid” certificates for any website (even google.com and the rest), and public key pinning is disabled:

How does key pinning interact with local proxies and filters?

Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. 

A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.

All the major browsers behave similarly… because this is required to allow transparent proxies, and transparent proxies are the means through which the legal logging requirements are fulfilled.

So, besides introducing a major MITM opportunity, this also means that there are legally mandated corporate logs somewhere of everything that went through HTTPS… including plain text passwords, if you did not hash them on the client side.

These logs will have varying degrees of security when in the corporate domain… and next to none if they are ever requested by the legal system for an investigation.

Conclusions

  • Always perform some (at least lightweight) hashing on the client side, in addition to heavier hashing on the server side.
  • Do not trust HTTPS security too much when in a corporate network.
  • Beware of installing a custom Root CA if you Bring Your Own Device.

2 thoughts on “Client-Side Password Hashing”

  1. @Arnaud hashing server-side most definitely has to be done, because if a server-side leak occurs, all users will be at risk, so strengthening is certainly required. However, server-side hashing does nothing against leaked logs, transparent-proxy misconfigurations, or plain old bugs in the authentication code.

    Also, too high a number of rounds of PBKDF2 (or any expensive hash) on the server side opens the server up to Layer 7 DDoS attacks: a relatively low number of fake password submissions can generate a very high server load, resulting in high server costs if the back end uses cloud scaling, and preventing regular users from logging in. Deferring as many rounds as possible to the client side, and using client-side puzzles, can help mitigate that.

    Personally I am a big fan of client-side puzzles: send the client a random string A, and ask for a string B for which SHA256(A+B) starts with x zeroes. It is cheap to check server-side, but can be made arbitrarily costly in CPU for an attacker. Even a 100-millisecond puzzle will drastically increase the cost of a DDoS attack, while being negligible for a normal user logging in.
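    The puzzle the commenter describes can be sketched as follows (a Python sketch with illustrative names; “starts with x zeroes” is interpreted here as leading zero bits of the digest):

```python
import hashlib
import itertools

def leading_zero_bits(digest: bytes) -> int:
    # Count the leading zero bits of a 256-bit digest.
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length()

def solve_puzzle(challenge: str, difficulty_bits: int) -> str:
    # Client side: brute-force a suffix B until SHA256(A + B) has enough
    # leading zero bits. Expected work is ~2**difficulty_bits hashes.
    for counter in itertools.count():
        answer = str(counter)
        digest = hashlib.sha256((challenge + answer).encode()).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return answer

def check_puzzle(challenge: str, answer: str, difficulty_bits: int) -> bool:
    # Server side: a single hash, regardless of the chosen difficulty.
    digest = hashlib.sha256((challenge + answer).encode()).digest()
    return leading_zero_bits(digest) >= difficulty_bits
```

    The asymmetry is the point: the server pays one SHA256 per check, while the client's expected cost doubles with each extra difficulty bit, which is what makes flooding the login endpoint expensive.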
