Best Practices for a TLS Server

This is my version of TLS best practices for HTTP servers, along with some hope for the future. I thought I would share it here.

If you disagree with anything here, and some will, please feel free to share so that others can see other points of view.

For the record, this is the result of my current best practices:

https://www.ssllabs.com/ssltest/analyze.html?d=pipfrosch.com&latest

Note that an A+ rating can be deceptive; I've seen many A or A+ rated servers that do not follow what I consider to be "best practices".

-=-

A) Certificate Type:

Use an ECC certificate. They work just fine with Let's Encrypt. There is RSA in the signing chain, but that is often also the case with ECC certs from commercial certificate providers.

There are legitimate reasons to use RSA certs. Tumblr seems to fail at scraping OpenGraph data when ECC certs are used (I'm guessing their backend uses a very old version of libcurl that had issues). So if your business depends upon sharing links on Tumblr (one I am involved in does) then use RSA; otherwise, ECC certs are very well supported in clients and should be used.

For RSA I use 3072-bit. It is currently safe to use 2048-bit and may be for a very long time, but 3072-bit (and 4096-bit) keys work just fine with Let's Encrypt and with Tumblr. There are no known brute-force attacks on 2048-bit, but I still prefer stronger than the minimum recommended. The reality is I encourage going to ECC anyway, so when using RSA I prefer to make it better than minimum RSA. The only clients I know of that support 2048-bit but not 3072-bit are old clients that also did not support SNI. And most of them allegedly support up to 8192-bit as well (but 8192-bit RSA is really slow; if you need better than a 4096-bit cert, just use ECC).

Do NOT run dual certificates. A few years ago, when ECC was relatively new, I was one of the "cool kids" that ran both. It caused issues. Many of the clients that did not support ECC also did not handle dual certificates and would just use the first one they were served, or not work at all. So the RSA cert had to be served first, which caused many clients that did support ECC to use the RSA instead of the ECC. Using two certs is not worth it; just use ECC or RSA. Don't try to use both, you won't get the desired effect much of the time.

For ECC keys I use:

${OPENSSL} ecparam -name secp384r1 -genkey -out "${PVT}"

For RSA keys I use:

${OPENSSL} genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out "${PVT}"

${OPENSSL} is the full path to your openssl implementation, ${PVT} is the full path to the private key (I personally put them in /etc/pki/tls/eff_private/ which is a directory owned by root:root with 0700 permissions).
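If you need to create a directory like that, something along these lines works (a sketch; the path is just my own habit):

install -d -m 0700 -o root -g root /etc/pki/tls/eff_private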

If you automate as I generally recommend, you probably have a configuration file specific to how you automate. I can't speak to how to choose the private key type if you automate.

B) Automate

Unless you use or intend to use DANE, automate certificate renewal. If you use or plan to use DANE, do not blindly automate: DANE requires coordination with your DNS zone to rotate properly. With DANE you probably want to change your private key only once a year, so automation can work for the periodic renewals that Let's Encrypt requires, as long as your TLSA record is based on the public key and not the certificate. However, with DANE you still should generate a new private key at least once a year, and you have to get the new fingerprint into DNS about a day before the new key goes into use, so that part does not lend itself very well to automation.

Note that AFAIK there are no browsers with current plans to support DANE, so using DANE at this point does not have real-world benefits. It is my hope that will change, but for the present, unless you are a fan of DNSSEC trying to promote it, DANE is only useful on SMTP servers and is of no benefit on HTTPS servers. So just use one of the Let's Encrypt clients that automates certificate rotation in your web server configuration files. Those clients tend to also generate new private keys, which is a good thing.
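If you want a concrete starting point for the automation, here is a minimal cron sketch assuming the certbot client (the binary path and the reload hook are assumptions; substitute whatever your distribution and web server actually use):

# /etc/cron.d/certbot-renew - renew certs nearing expiry, then reload Apache
17 3 * * * root /usr/bin/certbot renew --quiet --post-hook "/usr/sbin/apachectl graceful"

certbot renew only replaces certificates that are close to expiring, and the post-hook makes sure the web server actually starts serving the new certificate.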

For those of us who do not automate and re-use private keys - I like to always generate a new private key in January and get new certificates whenever it is an odd month (six times a year).

C) TLS version

Virtually every client supports TLS 1.2. When using RSA certs, I do support TLS 1.0 but I am planning on changing that in 2019. With ECC certs, I only support TLS 1.2.

Obviously support TLS 1.3 if your TLS stack supports it. Mine (LibreSSL) currently does not. Their devs took a conservative approach which I appreciate: wait until the standard is finalized before adding code to support it. So LibreSSL will have TLS 1.3, but since TLS 1.2 is still needed and still safe when properly configured, it was not urgent for them to start implementing before it was finalized.

Personally I think it was irresponsible for production browsers and servers to implement a draft protocol; draft protocols sometimes have flaws that are then not properly removed when the final version is ratified. But I digress…

Anyway - starting in 2019 I recommend only supporting TLS 1.2 or newer, and that is indeed what I was already doing in 2018, except on a few servers.
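In Apache mod_ssl terms, that recommendation is a single directive (a sketch; add +TLSv1.3 once your stack supports it, which needs Apache 2.4.37 or later built against a TLS 1.3 capable library):

SSLProtocol -all +TLSv1.2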

D) Cipher Suite Selection

This is something that annoys me about many servers. They have the server configured to support way way way way too many ciphers.

Your server should only support the minimum of the best ciphers needed to support the clients you wish to support.

Your server should ONLY support ciphers with Forward Secrecy.

Your server should NOT support weaker ciphers with Forward Secrecy just because it can.

With ECC certs where I only support TLS 1.2 this is what I use:

SSLCipherSuite "EECDH+CHACHA20 EECDH+AES256 -SHA"

With my build of LibreSSL and mod_ssl built against it, that results in the following three ciphers:

TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca9)
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c)
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 (0xc024)

The only browsers that list excludes are older browsers on platforms that are no longer supported. I need the third cipher, with CBC, for Safari; many versions of it don't support AES with GCM. Effing Apple. I put ChaCha20 first because most mobile platforms do not have AES-NI support, so ChaCha20 is faster for them.
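Before deploying a cipher string, you can preview what it expands to against your own TLS library; the output of the sketch below will differ between OpenSSL and LibreSSL builds:

${OPENSSL} ciphers -v 'EECDH+CHACHA20 EECDH+AES256 -SHA'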

With RSA certs where I still support TLS 1.0 this is what I use:

SSLCipherSuite "EECDH+CHACHA20 EECDH+AESGCM EECDH+AES+SHA384 EECDH+AES+SHA256 EECDH+AES EDH+AES256"

That results in more ciphers than I am comfortable with; it will be trimmed in 2019 when I stop supporting TLS 1.0 and TLS 1.1 clients on RSA certs. However, even though the list is longer than I would like, it is shorter than most servers' - and all of the ciphers use Forward Secrecy.

E) HSTS (Strict Transport Security)

This is a header your server can send to clients telling them not to connect to your server unless it is using TLS.
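For Apache with mod_headers loaded, a minimal sketch looks like this (the one-year max-age is a common choice, not a requirement, and includeSubDomains is optional - more on that below):

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"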

Honestly I believe a header is the wrong place to do it, DNS would be better. But DNS would only work if DNSSEC was implemented and enforced.

I used to like HSTS preloading but no longer do because of browser behavior. With HSTS preloading, browsers will refuse to connect, with no way to override, if the browser does not trust the CA. This caused some problems for me in the past where I used self-signed certs for some domains I did not intend for public consumption. It has also caused problems when a browser does not trust the CA that signed the cert; I can't choose to make a temporary exception like I can if the server isn't preloaded. So I no longer recommend HSTS preloading; the behavior of browsers screwed up what could have been a good thing.

Note that for the HSTS header you can specify just the domain, but for preloading it's all sub-domains as well.

Send the HSTS header (with or without including sub-domains) but don't bother getting your domain into preload lists. That functionality is good in concept, but the incorrect behavior by browsers demonstrates exactly why it belongs in DNS and not in a static list that is hard to get removed from.

A simple TXT record with the contents of the header would be a better way to do the same thing as browser preloading. Similar to how SPF is done.

On the off chance a browser vendor is reading this: if a DNS record is ever supported, then to avoid DoS attacks it should only be cached when DNSSEC validates it. Similarly, the HSTS header from the server should only be cached when sent over TLS (which hopefully is already how it works).

Browser vendors, start supporting DNSSEC. It provides KISS solutions to a lot of problems.

F) Certificate Security

The private/public key pair on your server should never be used for the actual encryption; encryption should be done with an ephemeral shared secret negotiated between client and server (forward secrecy). The key pair is used for authentication, to give the client confidence it is talking to the actual server and not a MITM.

The PKI system however does not always work as intended. There are two issues that happen more often than they should:

A) Fraudulently issued certificate
B) Stolen private key used with valid certificate.

Some form of 2FA is needed to verify the certificate and reduce instances of fraud.

Fa) DNS CAA

This is a DNS record that specifies which certificate authorities are allowed to issue certificates for a given domain. It is intended to help mitigate issue A above. I do not use it; it provides very little security. Clients do not check it, and they shouldn't, so it is only effective when a certificate authority checks whether the record exists and, if it does, follows what it says. If the DNS zone is not protected by DNSSEC, it is too easy for an adversary to return fraudulent results. And even if certificate authorities did enforce DNSSEC: should a major browser vendor decide to revoke trust in the CA I use, then to get a new certificate issued by a different CA I would have to update the DNS record, wait for the change to propagate, and only then seek a signed cert. That time is business lost through no fault of my own.
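For reference, the records themselves are simple. A sketch for a zone that only wants Let's Encrypt issuing certs (example.com and the contact address are placeholders):

example.com. 3600 IN CAA 0 issue "letsencrypt.org"
example.com. 3600 IN CAA 0 iodef "mailto:security@example.com"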

I can’t say it is wrong to use DNS CAA, just that I don’t believe it is the right approach to solving the fraudulently issued certificate issue, especially since it provides no mechanism that browsers can use as a second authentication factor.

Fb) OCSP

Due to the lifetime of a certificate, there has to be a way for CAs to revoke one. This is needed for when a CA realizes it was tricked into issuing a certificate, or when there is reason to believe the private key associated with the certificate may have been compromised.

Initially this was addressed with Certificate Revocation Lists, but those did not scale well. OCSP was invented to allow the client to check that a certificate is still valid, but that also had problems. A popular website would generate a high number of OCSP requests, which caused response issues; high availability under load is extremely expensive.

OCSP Stapling is a much more elegant solution and not only should be used, it should be required at least until browsers support DANE.

With OCSP Stapling, once a day or so the server makes the OCSP request itself and caches the response, sending it to clients along with the certificate. Since the response is signed by the CA, it can not easily be forged. The client then knows that the certificate has not been revoked, at least not before the timestamp in the signed OCSP response. It provides 2FA for the client with respect to the validity of the certificate.
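In Apache mod_ssl, enabling it takes two directives (a sketch; the cache path and size are assumptions, and SSLStaplingCache has to live outside any VirtualHost block):

SSLStaplingCache "shmcb:/run/httpd/ssl_stapling(65536)"
SSLUseStapling On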

OCSP stapling also solves a privacy issue. When browsers have to query the OCSP server themselves, it reveals to the OCSP server what websites the user visits. I see that argument used a lot in favor of enabling OCSP stapling, but oddly, the argument is often made on websites that use Google Analytics, Google AdSense, embedded YouTube videos, and other third-party trackers. It is very hypocritical of those websites to advocate OCSP stapling for privacy reasons while at the same time supporting blatant tracking from companies specifically known to track users.

It is my opinion that browsers should issue an insecure connection warning whenever a cert is not accompanied by a valid OCSP response, but that is not what browsers do.

What you as a webmaster can do, in addition to enabling OCSP Stapling, is add "OCSP Must Staple" to your CSR when requesting a signed certificate. With at least some browsers, that tells them to behave the way they already should behave. Not all browsers will, but Firefox does.
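With an OpenSSL-style toolchain, "OCSP Must Staple" is the RFC 7633 TLS feature extension. A sketch of the one line it takes in a request config file (whether LibreSSL accepts the tlsfeature keyword may vary, so treat this as an assumption to verify against your build):

# in the x509v3 extensions section of your openssl request config:
tlsfeature = status_request

If you use certbot, it exposes the same thing as the --must-staple command line flag.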

That won’t protect your users from fraudulently issued certificates, but it will protect your users in the event that your properly issued certificate needs to be revoked because the private key was possibly compromised.

Enable OCSP stapling on your server, and use the "OCSP Must Staple" feature of X.509 certificates. At least on web servers - it is not useful for SMTP servers IMHO.

Fc) DANE

The best 2FA for your certificate is DANE but unfortunately browsers do not support it.

With DANE, a fingerprint of your public key is published in a type of DNS record called a TLSA record. This allows the client to verify that the fingerprint of the key the server presents matches the fingerprint in DNS, and to reject the connection if it does not.
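Generating the fingerprint only takes the openssl tools. A sketch for the common "3 1 1" form (DANE-EE, public key, SHA-256), where ${CRT} is the full path to your PEM certificate:

${OPENSSL} x509 -in "${CRT}" -pubkey -noout | ${OPENSSL} pkey -pubin -outform DER | ${OPENSSL} dgst -sha256

The resulting hex digest then goes into a record like the following (example.com is a placeholder; note the short TTL):

_443._tcp.example.com. 3600 IN TLSA 3 1 1 <hex digest>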

DANE would completely thwart fraudulently issued certificates; their public key fingerprint would not match what is in DNS unless the attacker controls your DNS server (in which case it is game over anyway).

This also completely thwarts the issue of stolen private keys. When you have reason to believe your private key may have been compromised, revoking the certificate is only part of the recovery process; the other part is generating a new private key. Update the TLSA record to reflect the new public key fingerprint and remove the old one, and clients can reject certificates based on the old key whether or not they have an OCSP response telling them to. This, by the way, results in faster invalidation of the old cert, as OCSP responses are usually considered valid for more than 24 hours but DNS records rarely are; I use a one-hour TTL on my own TLSA records.

For this solution to work, you need to use DNSSEC on your zone and browsers need to enforce it.

This solution is already being used in SMTP. It is time for it to come to the web.


There are some very positive recommendations in your post.
Some things you mention, but don't/haven't implemented on your sample site.
You also seem to discard some things (all too easily) when you haven't been able to implement them to their potential perfection.

All in all a good post; and a good start for positive dialog on a very timely topic.

In my review of your sample site…
When implemented correctly, you could benefit from the addition of:
RSA cert 4096 bit (in addition to the existing ECC cert - dual certs)
DNS CAA (RFC 6844)
TLS 1.3 (RFC 8446)
DHE (0x9F) [with 4096 bit DH prime]
ARIA (0xC05D)
HPKP (RFC 7469)
Additional Named Groups (brainpoolP512r1, sect409r1, brainpoolP384r1)

Things you might want to remove from your sample site report:
HTTP server signature = Apache/2.4.35 (LibreLAMP) LibreSSL/2.8.2 PHP/7.1.23
Server hostname = librelamp.com (rDNS for 45.79.96.192 & 2600:3c01:0:0:f03c:91ff:fee4:310c)

Again, ONLY “when implemented correctly”.
And I can’t stress this enough: If you can’t implement something correctly, then just don’t.
That goes for everyone [not just for you or for myself]

TLS 1.3 will be implemented when LibreSSL supports it.
I am not aware of any attacks on TLS 1.2 as configured there that TLS 1.3 would defend against, so changing the TLS stack isn't a concern.

HPKP I have always believed to be the wrong solution to key pinning. It is not flexible enough. If your private keys are compromised (e.g. an admin who turns out to be untrustworthy) then your site is bricked for many users.

Server signature - hiding that is obscurity, not security. It generally isn't hard for someone who wants to know to find out what a server is running.

rDNS - yes, it’s a virtual host, so the rDNS is going to reveal a different host. Again, hiding that is obscurity but not security.

I'm not here to argue.
But your points are very one sided.

So no one else should do so?
So we should all wait for LibreSSL to implement an RFC?
And ignore the fact that others are already doing it.
Hold on! Let's wait even longer!
Maybe late next year Microsoft will support TLS 1.3 in Schannel for Server 2019 R2.

It concerns me when we are no longer using TLSv1 nor TLSv1.1 - which leaves us with only ONE secure protocol, TLSv1.2 [that is putting all eggs in one basket - always a bad result; see KRACK for an epic example of ONE EGG in one basket gone bad]

If an admin "turns out to be untrustworthy"? And you can protect against that? At the DNS level? At the AD level? At the Bank level?
It pays to be paranoid - but it is insane to only allow people whom you trust, and who will never become untrustworthy, to do your trusted things - you will end up doing it all yourself (and that doesn't scale well).
Trust but verify - checks and balances...

So let's make it even easier for them? Why don't you give me your bank routing number and account number … it should be easy enough to get that information from someone who already has it… no?

If you don't understand the power/benefit of obscurity, why do you discount it? And basically preach against it?
Given: Obscurity is NOT security.
But it can add to it.
Much like hiding a keyhole; it is still there, but they have to find it first.
Why would anyone waste time looking for one that is hidden when there are so many that are obvious out there?
I prefer NOT to be the obvious target.

Choose one:
A. not Obscure & not Secure
B. Obscure & not Secure
C. not Obscure & Secure
D. Obscure & Secure

I’m not arguing that no one should run TLS 1.3.

If your TLS stack supports it, run it. TLS 1.3 is a good thing.

I just don't believe it is worth changing from LibreSSL to something else just to have it, especially when I know it is coming to LibreSSL. LibreSSL doesn't have it yet because they have a smaller (but very dedicated) developer team that decided to wait until the standard was finalized before implementing it.

-=-

HPKP only provides a key-pinning solution for HTTPS traffic. It was developed in-house at Google and deployed by Google without any input from the academic world. It does nothing for secure FTP, POP3, IMAPS, etc., and I believe it actually hampered the adoption of a better solution: DANE.

With HPKP you have to have two (or more) private keys. Rotating is more complex, and a mistake bricks your website for the users who use it the most, as those users are statistically more likely to visit while the mistake is live.

Granted, it is the only key-pinning solution browsers currently support, but DANE is a superior solution that browsers should support but don't, because of HPKP.

Also, several browsers implemented it wrong and would allow alternate fingerprints in certain situations, so it didn’t even provide what it claimed.

DANE provides key pinning that is agnostic to port and protocol - a real solution - and it doesn't carry the risk of long-term bricking if you make a mistake. It obviously can brick if you make a mistake, but for no longer than the cache life of the TLSA record, which is quite a bit shorter than HPKP's.

HPKP is Trust On First Use, always a bad model. DANE is Validate On Every Use. I can't recommend a TOFU solution that is port- and protocol-specific, where a mistake bricks a site for up to 30 days for users who visit while the mistake is active, or where having both keys compromised requires emergency fresh keys.

-=-

In my 20+ years of Linux server administration, one thing I've learned: attackers don't care what you are running. They just don't. Nor do they believe the server signature is always accurate; they know it often is not. If they have an attack, they (or rather their scripts) will try it. They aren't going to skip my server because the server string doesn't advertise that it is running the vulnerable software.

Advertising the string, though, is useful for researchers who look at what random sites are running for statistical analysis.

Emm...

Just saying, some of my sites already support TLS 1.3 via manually compiling Nginx / Apache against OpenSSL 1.1.1…

If you use Google's fork, BoringSSL, it has had TLS 1.3 support for a long time. (Not to mention it also brings TLS 1.3 to Cloudflare sites.)

If you hide the server signature, at least it won't be too obvious for bots to get the exact version / information for the server...

Yes... But hiding server signatures would effectively deter some other people from trying to hack your site.

After that, it's the WAF playing its role.

If so, why do major sites / security firms list "hide server signature" as a priority when they write security guides or try to protect other sites?

Guess what would happen if your site showed a server signature with a definitely outdated version… Not only would the hackers you mentioned (the ones that use scripts) come; even the ones who aren't familiar with hacking would try to throw some version-specific scripts at the server…

Ask them. I have never seen them publish research data that demonstrates it has any impact. I see why people would believe it does, and maybe if there were a few hackers in the world who didn't use bots it might - but attackers use scripts that don't care. They try, and if they fail they try the next attack on the list, etc. - they really do not care about the server string, not from what I've seen. It's amazing how many times, after getting a 404 on something like a WordPress login page, the SAME IP address tries a different attack on the same URL that just returned a 404. They really don't care. If you are vulnerable, patch or pay the price; obfuscating the server signature isn't going to help.

It's claimed that it helps, but never with supporting evidence.

I should make it clear that I don't think it is bad practice to obfuscate the server signature, I just don't believe it adds any real-world security. Nor apparently do the Apache developers or distributions, as they ship with the signature enabled.

Okay after some thought and PM conversations I want to clarify a few things.

A) Server Signature (aka Server Banner)

It certainly is not wrong to silence it, but honestly I don't believe it helps. Different server software responds in different ways, from the order of headers to how it responds to bad requests. This makes it fairly trivial to fingerprint servers that don't send a banner or send a dishonest one. Hackers don't believe the banner; trickery is their craft, so they expect things like headers to be dishonest.

It certainly doesn't hurt to hide the server software; I just don't believe it helps.

Hiding the version of PHP may be of some benefit - but only if you do not keep your PHP up to date. If you are running PHP you compiled yourself, always check every new release to see if it has a security fix and rebuild if it does, even if the security issue doesn't impact your particular web applications. Do that, and it really doesn't matter if they know what version of PHP you are running, because the known vulnerabilities are patched.
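For completeness, if you do decide to trim the banners, these are the usual knobs; they hide version details and nothing more:

# Apache configuration
ServerTokens Prod
ServerSignature Off

# php.ini - stops PHP from adding its X-Powered-By header
expose_php = Off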

B) HPKP

I'm not a fan of HPKP, but despite my reasons listed previously, it isn't wrong to use it, as it is the only method of public key pinning that currently exists.

Using it with Let's Encrypt can be tricky because you need to preload the new key before it goes into service, and because HPKP is TOFU you want a fairly long cache life for the pinned keys - generally 30 days. So you have 90-day certs where you need the new fingerprint in the header 30 days before you rotate. That makes automation difficult.

If you do use HPKP - have an emergency private key that is kept offline and include the fingerprint for that emergency key in the header, so that in an emergency you still have a private key you can use to get a cert that browsers will accept.
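A sketch of what the pieces look like. Each pin is the base64 of the SHA-256 digest of the DER-encoded public key; generate one from each private key (the live key and the offline emergency key):

${OPENSSL} pkey -in "${PVT}" -pubout -outform DER | ${OPENSSL} dgst -sha256 -binary | ${OPENSSL} enc -base64

Then send both pins; the placeholder values here must be replaced with your own, and the 30-day max-age matches the cache life discussed above:

Header always set Public-Key-Pins "pin-sha256=\"<live key pin>\"; pin-sha256=\"<emergency key pin>\"; max-age=2592000"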

C) Named ECC Groups

Some of the "standard" ECC curves were created using parameters where the explanation behind the parameter choices was not disclosed. This does not mean they are not safe, but it is a cause for concern.

As browsers start to support the newer named curves that have full disclosure behind all the curve parameter choices, it probably is a good idea to both support those curves and give them server priority.

I currently don't, as browser support for them is still lacking, but it is coming.
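When that time comes, curve preference can be set from the Apache config if your stack allows it (a sketch; SSLOpenSSLConfCmd needs Apache 2.4.8 or later built against a library that accepts the Curves command, and the curve list here is only an illustration):

SSLOpenSSLConfCmd Curves brainpoolP384r1:secp384r1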

