This is my version of HTTP TLS best practices, along with some hope for the future. I thought I would share it here.
If you disagree with anything here, and some will, please feel free to share so that others can see other points of view.
For the record, this is the result of my current best practices:
https://www.ssllabs.com/ssltest/analyze.html?d=pipfrosch.com&latest
Note that an A+ rating can be deceptive; I’ve seen many servers with an A or A+ rating that do not follow what I consider to be “best practices”.
-=-
A) Certificate Type:
Use an ECC certificate. They work just fine with Let’s Encrypt. There is RSA in the signing chain, but that is often also the case with ECC certs from commercial certificate providers.
There are legitimate reasons to use RSA certs. Tumblr seems to fail at scraping OpenGraph data when ECC certs are used (I’m guessing their backend uses a very old version of libcurl that had issues). So if your business depends upon sharing links on Tumblr (one I am involved in does), use RSA; otherwise, ECC certs are very well supported in clients and should be used.
For RSA, I use 3072-bit. 2048-bit is currently safe and may be for a very long time, but 3072-bit (and 4096-bit) keys work just fine with Let’s Encrypt and with Tumblr. There are no known brute-force attacks on 2048-bit, but I still prefer stronger than the minimum recommendation. The reality is I encourage going to ECC anyway, so when I do use RSA I prefer to make it better than the minimum. The only clients I know of that support 2048-bit but not 3072-bit are old clients that also did not support SNI. Most clients allegedly support up to 8192-bit as well, but 8192-bit RSA is really slow; if you need something stronger than a 4096-bit cert, just use ECC.
Do NOT run dual certificates. A few years ago, when ECC was relatively new, I was one of the “cool kids” who ran both. It caused issues. Many of the clients that did not support ECC also did not support dual certificates and would either use the first one they were served or not work at all. So the RSA cert had to be served first, which caused many clients that did support ECC to use the RSA cert instead of the ECC one. Running two certs is not worth it: use ECC or RSA, not both, or you won’t get the desired effect much of the time.
For ECC keys I use:
${OPENSSL} ecparam -name secp384r1 -genkey -out "${PVT}"
For RSA keys I use:
${OPENSSL} genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out "${PVT}"
${OPENSSL} is the full path to your openssl implementation, and ${PVT} is the full path to the private key (I personally put mine in /etc/pki/tls/eff_private/ which is a directory owned by root:root with 0700 permissions).
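As a sanity check on a freshly generated key, a sketch like the following confirms the curve and key size (the paths here are throwaway placeholders, not my real layout):

```shell
# Locate openssl and use a throwaway path; substitute your own locations.
OPENSSL="$(command -v openssl)"
PVT="/tmp/demo-ecc-key.pem"

# Generate a P-384 key as in the ecparam command above.
"${OPENSSL}" ecparam -name secp384r1 -genkey -out "${PVT}"

# Confirm what was generated: should report a 384-bit key on curve secp384r1.
"${OPENSSL}" ec -in "${PVT}" -noout -text | grep -E '384 bit|secp384r1'
```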
If you automate as I generally recommend, you probably have a configuration file specific to how you automate. I can’t speak to how to choose private key type if you automate.
B) Automate
Unless you use or intend to use DANE, automate certificate renewal. If you use or plan to use DANE, do not automate. DANE requires coordination with your DNS zone to rotate keys properly. With DANE you will probably want to change your private key only once a year, so automation can work for the periodic renewals Let’s Encrypt uses, as long as your TLSA record is based on the public key and not the certificate. However, with DANE you should still generate a new private key at least once a year, and you have to get the new fingerprint into DNS about a day before the new private key is used, which does not lend itself very well to automation.
Note that AFAIK no browsers currently plan to support DANE, so using DANE at this point has no real-world benefit for HTTPS. It is my hope that will change, but for the present, unless you are a fan of DNSSEC trying to promote it, DANE is only useful on SMTP servers and is of no benefit on HTTPS servers. So just use one of the Let’s Encrypt clients that automates certificate rotation in your web server configuration files. Those clients tend to also generate new private keys, which is a good thing.
For those of us who do not automate and re-use private keys: I like to generate a new private key every January and get new certificates in every odd month (six times a year).
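That cadence could also be driven by cron if you want the schedule enforced for you; this is only a sketch, and both script names are hypothetical placeholders for whatever does your key generation and certificate fetching:

```
# m  h  dom mon          dow  command
# New private key every January 1st (hypothetical script):
0    3  1   1            *    /usr/local/sbin/new-tls-key.sh
# New certificates on the 1st of every odd month (hypothetical script):
30   3  1   1,3,5,7,9,11 *    /usr/local/sbin/fetch-new-certs.sh
```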
C) TLS version
Virtually every client supports TLS 1.2. When using RSA certs, I do support TLS 1.0 but I am planning on changing that in 2019. With ECC certs, I only support TLS 1.2.
Obviously, support TLS 1.3 if your TLS stack supports it. Mine (LibreSSL) currently does not. Its devs took a conservative approach, which I appreciate: wait until the final draft is standardized before adding code to support it. So LibreSSL will get TLS 1.3, but since TLS 1.2 is still needed and still safe when properly configured, it was not urgent for them to start implementing before the standard was finalized.
Personally I think it was irresponsible for production browsers and servers to implement a draft protocol; draft protocols sometimes have flaws that are then not properly removed when the final version is ratified. But I digress…
Anyway - starting in 2019 I recommend only supporting TLS 1.2 or newer, and that is indeed what I already was doing starting in 2018 except for a few servers.
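In Apache mod_ssl terms, limiting a server to TLS 1.2 and newer is a one-line sketch (add +TLSv1.3 only once your TLS stack actually supports it):

```apache
# Disable every protocol version, then re-enable only TLS 1.2.
SSLProtocol -all +TLSv1.2
```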
D) Cipher Suite Selection
This is something that annoys me about many servers: they are configured to support way, way too many ciphers.
Your server should only support the minimum of the best ciphers needed to support the clients you wish to support.
Your server should ONLY support ciphers with Forward Secrecy.
Your server should NOT support weaker ciphers with Forward Secrecy just because it can.
With ECC certs where I only support TLS 1.2 this is what I use:
SSLCipherSuite "EECDH+CHACHA20 EECDH+AES256 -SHA"
With my build of LibreSSL and mod_ssl built against it, that results in the following three ciphers:
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca9)
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c)
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 (0xc024)
That excludes only older browsers on platforms that are no longer supported. I need the third cipher, with CBC, for Safari, many versions of which don’t support AES with GCM. Effing Apple. I list ChaCha20 first because most mobile platforms do not have AES-NI support, so ChaCha20 is faster for them.
With RSA certs where I still support TLS 1.0 this is what I use:
SSLCipherSuite "EECDH+CHACHA20 EECDH+AESGCM EECDH+AES+SHA384 EECDH+AES+SHA256 EECDH+AES EDH+AES256"
That results in more ciphers than I am comfortable with; the list will be trimmed in 2019 when I stop supporting TLS 1.0 and TLS 1.1 clients with RSA certs. However, even though the list is longer than I would like, it is shorter than most servers’ — and every cipher on it provides Forward Secrecy.
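To see exactly which suites a cipher string expands to on your own build, the openssl CLI accepts the same space-separated syntax Apache uses; the exact output will vary with your OpenSSL or LibreSSL version:

```shell
# Expand the ECC-cert cipher string from above; each output line is one
# enabled suite, with its key exchange, authentication, and cipher listed.
openssl ciphers -v 'EECDH+CHACHA20 EECDH+AES256 -SHA'
```

Running the same check on the longer RSA string makes it easy to see what gets trimmed when a keyword is dropped.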
E) HSTS (Strict Transport Security)
This is a header your server can send to clients telling them not to connect to your server unless it is using TLS.
Honestly I believe a header is the wrong place to do it, DNS would be better. But DNS would only work if DNSSEC was implemented and enforced.
I used to like HSTS preloading but no longer do because of browser behavior. With HSTS preloading, browsers will refuse to connect if they do not trust the CA, even when the connection is otherwise secure. This caused some problems for me in the past where I used self-signed certs for some domains I did not intend for public consumption. It has also caused problems when a browser does not trust the CA that signed the cert: I can’t choose to make a temporary exception like I can when the server isn’t on the preload list. So I no longer recommend HSTS preloading; the behavior of browsers screwed up what could have been a good thing.
Note that the HSTS header lets you cover just the one domain, but preloading always covers all sub-domains as well.
Send the HSTS header (with or without including sub-domains) but don’t bother getting your domain into preload lists. That functionality is good in principle, but the incorrect behavior of browsers demonstrates exactly why it belongs in DNS and not in a static list that is hard to get removed from.
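For reference, sending the header from Apache (mod_headers) is a one-line sketch; the one-year max-age is a common choice, and includeSubDomains is optional as discussed above:

```apache
# Tell clients to use TLS for this host for the next year.
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```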
A simple TXT record with the contents of the header would be a better way to do the same thing as browser preloading. Similar to how SPF is done.
On the off chance a browser vendor is reading this and a DNS record is ever supported: to avoid DoS attacks, it should only be cached when DNSSEC validates it. Similarly, the HSTS header from the server should only be cached when sent over TLS (which hopefully is already how it works).
Browser vendors, start supporting DNSSEC. It provides KISS solutions to a lot of problems.
F) Certificate Security
The private/public key pair on your server should never be used for actual encryption; encryption should be done with an ephemeral shared secret negotiated between client and server (forward secrecy). The key pair is used for authentication, to give the client confidence it is talking to the actual server and not a MITM.
The PKI system however does not always work as intended. There are two issues that happen more often than they should:
A) Fraudulently issued certificate
B) Stolen private key used with valid certificate.
Some form of 2FA is needed to verify the certificate and reduce instances of fraud.
Fa) DNS CAA
This is a DNS record that specifies which certificate authorities are allowed to issue a certificate for a given domain. It is intended to help mitigate A above. I do not use it; it provides very little security. Clients do not check it (and they shouldn’t), so it is only effective when a certificate authority checks whether the record exists and, if it does, follows what it says. If the DNS zone is not protected by DNSSEC, it is too easy for an adversary to return fraudulent results. Even if certificate authorities did enforce DNSSEC, should a major browser vendor decide to revoke trust in the CA I use, then to get a new certificate issued by a different CA I would have to update the DNS record, wait for the change to propagate, and only then request a signed cert. That delay is business lost through no fault of my own.
I can’t say it is wrong to use DNS CAA, just that I don’t believe it is the right approach to solving the fraudulently issued certificate issue, especially since it provides no mechanism that browsers can use as a second authentication factor.
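For completeness, here is what such a record looks like in a BIND-style zone file (the domain and contact address are placeholders; letsencrypt.org is just an example issuer):

```
; Only the named CA may issue certificates for this domain.
example.com.  IN  CAA  0 issue "letsencrypt.org"
; Where a CA should report policy violations it encounters.
example.com.  IN  CAA  0 iodef "mailto:hostmaster@example.com"
```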
Fb) OCSP
Due to the life of a certificate, there has to be a way for CAs to revoke a certificate. This is needed for when a CA realizes it was tricked into issuing one, or for when there is reason to believe the private key associated with the certificate may have been compromised.
Initially this was addressed with Certificate Revocation Lists, but those did not scale well. OCSP was invented to allow the client to check that a certificate is still valid, but that also had problems: a popular website results in a high number of OCSP requests, which caused response issues, and High Availability under that load is extremely expensive.
OCSP Stapling is a much more elegant solution and not only should be used, it should be required at least until browsers support DANE.
With OCSP Stapling, once a day or so the server makes the OCSP request itself and caches the response, sending it to clients along with the certificate. Since the response is signed by the CA, it cannot easily be forged. The client then knows that the certificate had not been revoked, at least not before the timestamp in the signed OCSP response. It provides 2FA for the client with respect to the validity of the certificate.
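In Apache mod_ssl, enabling stapling takes a few directives; a sketch (the cache line must sit outside any VirtualHost, and the cache size shown is an arbitrary example):

```apache
# Shared-memory cache for stapled OCSP responses (global scope).
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"
SSLUseStapling on
# Give up quickly if the CA's OCSP responder is slow, and don't pass
# responder errors through to clients.
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
```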
OCSP stapling also solves a privacy issue. When browsers have to query the OCSP server themselves, it reveals to the OCSP server what websites the user visits. I see that argument used a lot in favor of enabling OCSP stapling, but oddly, that argument is often made on websites that use Google Analytics, Google Adsense, embedded YouTube videos, and other third party trackers. Very hypocritical of those web sites to advocate OCSP stapling for privacy reasons while at the same time supporting blatant tracking from companies specifically known to track users.
It is my opinion that browsers should issue an insecure connection warning whenever a cert is not accompanied by a valid OCSP response, but that is not what browsers do.
What you as a webmaster can do, in addition to enabling OCSP Stapling, is add “OCSP Must-Staple” to your CSR when requesting a signed certificate. For at least some browsers, that tells them to behave the way they already should. Not all browsers will, but Firefox does.
That won’t protect your users from fraudulently issued certificates, but it will protect your users in the event that your properly issued certificate needs to be revoked because the private key was possibly compromised.
Enable OCSP stapling on your server, and use the “OCSP Must-Staple” feature of x509 certificates. That applies to web servers; it is not useful for SMTP servers IMHO.
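One way to get the extension into a CSR, assuming OpenSSL 1.1.1 or newer for the -addext flag (the key path and CN are throwaway placeholders; older stacks need the extension declared in an openssl config file instead):

```shell
# Throwaway key just for the demo CSR.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out /tmp/demo.key

# RFC 7633 "TLS Feature" extension: status_request is OCSP Must-Staple.
openssl req -new -key /tmp/demo.key -subj "/CN=example.com" \
    -addext "tlsfeature=status_request" -out /tmp/demo.csr

# Verify the extension is present in the CSR before sending it to the CA.
openssl req -in /tmp/demo.csr -noout -text | grep -A1 "TLS Feature"
```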
Fc) DANE
The best 2FA for your certificate is DANE but unfortunately browsers do not support it.
With DANE, a fingerprint of your public key is in a type of DNS record called a TLSA record. This allows the client to verify that the fingerprint of the server matches the fingerprint in DNS and reject it if it does not.
This would completely thwart fraudulently issued certificates: their public key fingerprint would not match what is in DNS unless the hacker controls your DNS server (in which case it is game over anyway).
This also completely thwarts the issue of stolen private keys. When you have reason to believe your private key may have been compromised, revoking the certificate is only part of the recovery process. The other part is generating a new private key. Update the TLSA record to reflect the new public key fingerprint and remove the old public key fingerprint, and clients can reject certificates based on the old key whether or not they have an OCSP response telling them to. This btw results in faster invalidation of the old cert as OCSP responses are usually considered valid for more than 24 hours but DNS records rarely are valid for that long, I use an hour on my own TLSA records.
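Computing the “3 1 1” digest (SHA-256 over the DER-encoded public key) for a TLSA record can be sketched like this; a throwaway self-signed cert stands in for a real one, and the hostname is a placeholder:

```shell
# Throwaway self-signed cert just so the pipeline below has input;
# point at your real certificate instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example.com" \
    -keyout /tmp/tlsa-demo.key -out /tmp/tlsa-demo.crt

# SHA-256 of the DER-encoded SubjectPublicKeyInfo is the "3 1 1" digest.
DIGEST=$(openssl x509 -in /tmp/tlsa-demo.crt -pubkey -noout \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -hex | awk '{print $NF}')

# The record to publish for HTTPS on port 443:
echo "_443._tcp.example.com. IN TLSA 3 1 1 ${DIGEST}"
```

Because the digest covers the public key rather than the whole certificate, renewals that re-use the key need no TLSA change; only key rotation does.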
For this solution to work, you need to use DNSSEC on your zone and browsers need to enforce it.
This solution is already being used in SMTP. It is time for it to come to the web.