How do you test that your SSL configuration is ready for production?

Hi guys,

For a website's SSL, how do you test whether the configuration is ready for production use?
I've deployed the certificate on my production site and I'm starting to think I did it too early.

I'm still receiving complaints from customers that they can't reach my website due to certificate issues.
The last one was from Windows 7 with the latest Chrome, showing the error message ERR_CERT_AUTHORITY_INVALID.

I tested across as many OS/browser combinations as I could before going to production, but it seems some very specific configurations were still left out.

To list my questions:

  1. Is there still a beta program for Let's Encrypt where I should apply to have information about the certificate issued for my domain included?
  2. I'm using the latest Let's Encrypt X3 (client v0.5) to issue certificates for my websites.
  3. I'm using OCSP stapling, and below is the configuration from my production nginx:

ssl_certificate /etc/letsencrypt/live/;
ssl_certificate_key /etc/letsencrypt/live/;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

ssl_prefer_server_ciphers on;

ssl_session_cache shared:SSL:50m;
ssl_session_timeout 180m;

ssl_dhparam /etc/nginx/cert/dhparam.pem;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/;
resolver valid=86400s;
resolver_timeout 5s;

add_header Strict-Transport-Security max-age=15768000;

Could you please check whether my configuration/certificate is reliable enough to run in production?

Thank you!!!

The SSL Labs server test is a good start:


Your configuration looks good to me and SSL Labs doesn’t report any issues.
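Besides the SSL Labs scan, you can sanity-check a certificate locally with openssl. This is just a sketch: it generates a throwaway self-signed certificate as a stand-in (the paths and CN below are placeholders, not your real Let's Encrypt files), then checks its expiry the same way you would check the real one.

```shell
# Generate a throwaway self-signed cert standing in for the real one
# (placeholder paths/CN -- point these at your live cert instead):
openssl req -x509 -newkey rsa:2048 -keyout /tmp/key.pem -out /tmp/cert.pem \
  -days 90 -nodes -subj "/CN=example.test" 2>/dev/null

# Print the expiry date:
openssl x509 -in /tmp/cert.pem -noout -enddate

# Exit 0 only if the cert is still valid 30 days (2592000 s) from now:
openssl x509 -in /tmp/cert.pem -noout -checkend 2592000 \
  && echo "OK: valid for at least 30 more days"
```

Run against `/etc/letsencrypt/live/<domain>/cert.pem` this makes a handy cron check for renewals.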

There haven't been any reports of users having trust issues on Windows 7 to my knowledge; the DST root CA shipped with Windows 7. I've seen outdated or otherwise buggy antivirus software cause issues like the one you're seeing in the past (some include components that scan HTTPS traffic through a local MitM proxy, and things tend to go wrong with that …), but that's just a guess.

One other thing I noticed is that uses your Let's Encrypt certificate, but seems to be hosted on CloudFlare and uses a CloudFlare certificate. I can't think of a reason why this would cause the issue you're having, but it seemed odd, so I'm mentioning it.

1 Like

Conceptually, part of the game is the client side: your server is only about half of the equation.

To test every possible scenario (every browser version, OS, and so on), you would need a set-up for each scenario, which of course isn't doable.

There was a time when introducing dynamic content on your site created the same uncertainty about whether users could access it.

I guess a hard redirect from HTTP to the HTTPS version of the site is probably a drastic step for a commercial site, even though that's exactly the type of site that would benefit most from serving HTTPS.

If you're concerned about clients not being able to reach your site, mostly (but not exclusively) because they're running 'outdated' software, you might consider keeping an HTTP 'backup' version, perhaps with reduced functionality and/or a simple explanation for the user.

What I would do then is log access to the reduced HTTP version, to track who fails to use the full HTTPS version, and work it out on a case-by-case basis.
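The fallback-with-logging idea could look something like this in nginx. This is a sketch only; the server name, paths, and log file are hypothetical placeholders, not a configuration from this thread.

```nginx
# Sketch only -- server_name, paths, and log file are hypothetical.
# Plain-HTTP fallback with its own access log, so visitors who fail
# on HTTPS can be tracked and handled case by case.
server {
    listen 80;
    server_name example.com;

    # Separate log: anyone landing here likely couldn't use HTTPS.
    access_log /var/log/nginx/http_fallback.log;

    root /var/www/fallback;   # reduced-functionality version of the site
    index index.html;
}
```

Note that this approach conflicts with a Strict-Transport-Security header: once a browser has seen HSTS, it will refuse to fall back to plain HTTP for the duration of max-age.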

This is of course long and costly, but if we still need to handle visits from some obscure early version of Netscape Navigator or the first versions of Internet Explorer on Windows 95, then we're in the non-compatibility race, which isn't fun at all.

There is still a huge number of Windows XP systems in the wild, even though they've been abandoned by the original software vendor.

1 Like

Already tested (before the hard redirect to HTTPS), with good results:

Do you see any issue there?

Agreed, I think the issue here is a broken DST root CA on the client's computer.

I've removed the CloudFlare acceleration on the CNAME record. Though I don't think that could be the reason.

I would recommend looking at the cipher suite order. Currently you're preferring AES-128 in CBC mode and non-forward-secrecy suites with AES-128 above 256-bit AES.

There's debate about whether 256-bit AES-GCM is better than 128-bit CBC; opinions vary. IMHO I'd always deprioritize CBC (i.e., give it a lower preference), because it has had its share of troubles and exploits in the past. In any case you should, IMHO, always prefer forward-secrecy suites above non-FS suites, for obvious reasons.

1 Like

So you think I should remove the following line from the configuration:
add_header Strict-Transport-Security max-age=15768000;

Any ideas how to implement that in a real-life scenario?

Right now the site opens fine on most Windows XP systems with Firefox or Chrome.
I'm concerned about the rare cases where a customer can't reach the HTTPS site due to some certificate-invalid issue.

I'm not experienced enough with cipher suites, so I went with the recommended ciphers from CloudFlare:

I bet they know better than me.
What's your recommendation for the cipher string in a production environment?

1 Like

See this thread, and more specifically the first comment:

It probably doesn't really matter whether you put the non-FS suites before or after the 256-bit AES suites, because I'd think every client supports 128-bit FS suites anyway…

But concluding that CloudFlare would know better, hmm, I dunno… Perhaps the Mozilla recommended cipher suites would be a better choice, as those recommendations aren't specific to, for example, CloudFlare.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.