- you don't need to write everything in bold.
- when, no, IF the automation works, users won't get the expiry warning, because the automation would already have swapped the cert before it expires.
- well, in my opinion it is a headache having to check six times a year whether the automation still works.
- sadly the maker of the key is not always at fault. sometimes there are flaws that can either leak the key (Heartbleed, for example) or produce bad keys (older Debian versions had a broken RNG, so only a few tens of thousands of distinct keys per key length were possible, all of which anyone could have generated)
- 5 or 10 years is something I doubt, at the very least for DV certs. iirc the Baseline Requirements for CAs say a maximum of 3 years or so.
- short lifetimes can be annoying, especially if you want control over the whole certificate process, but there are other free CAs that can help you with longer lifetimes. StartSSL does 1-year certs for free (and now even with up to 5 SAN entries)
The Baseline Requirements were changed, first to restrict all certificates to 60 months (5 years) or less, and then to restrict most certificates to 39 months or less (3 years, plus 3 months' grace to encourage people with paid certificates to renew early by adding the months left on their old certificate onto the new one). I believe a resolution to outlaw 5-year certificates entirely passed recently. So no, hotwap is wrong: no TLS server certificates you can use on the public Internet are now issued for “5 or 10 years”.
There are still a few valid certificates issued prior to the BRs coming into existence, some of which were issued with 10-year lifespans. Of course they’re 1024-bit RSA certificates with SHA-1 (at best) signatures, so they’re pretty weak, which is why it was a bad idea to give them such a long lifespan.
The commercial CAs seem inclined to limit all certificates to 27 months (2 years plus the 3-month grace), the way EV certificates are now. There is not yet a ballot for that, but I could see one passing this year.
We’re hardly suffering. Three months is a very practical compromise balancing security and maintenance. And when automated (as most users do) the process requires virtually no maintenance at all. I basically run my eyes over a weekly cron email to see if anything requires attention. It’s not exactly something I “suffer” through.
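As a rough sketch of the kind of automation being described here (the paths, schedule, MAILTO address, and the certbot client itself are assumptions on my part, not anything prescribed by Let's Encrypt):

```shell
# /etc/cron.d/certbot-renew -- illustrative crontab fragment, not a canonical setup.
# Renewal is attempted twice daily; certbot only actually renews certificates
# that are close to expiry, and cron mails any output to MAILTO, which is the
# weekly "run my eyes over it" email described above.
MAILTO=admin@example.com
0 3,15 * * * root certbot renew --quiet --post-hook "systemctl reload nginx"
```

The `--post-hook` only fires when a renewal actually happened, so the web server isn't reloaded on every run.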
Yes, ideally the sysadmin should be solely responsible. However, in the real world, Let’s Encrypt’s target audience is non-professionals, hobbyists and small businesses. These are users and administrators with very little skill in hardening their systems, much less in identifying system compromises and knowing what actions to take in response.
Just look at the very example you quoted but ignored - heartbleed. This was a major security flaw in a huge number of servers around the world, yet it took a very long time for fixes to be implemented. Months after the issue was identified and patched there were still hundreds of thousands of vulnerable servers.
Even if you wanted to fix heartbleed and patched your system, an amateur or inexperienced sysadmin might not think to regenerate SSL certificates. Short cert lifetimes ensure these problems do not persist for years.
Tension? Really? And why would visitors see an insecure warning? Don’t you maintain your site properly?
Your “arguments” are hyperbole at best. There is absolutely no reason to implement a system that you’re incapable of maintaining. This site and its discussions contain plenty of support when it comes to automating renewal. There are many renewal methods, many clients that support various authentication methods, options for when you don’t have root, options for cert locations, and you get multiple reminder emails before expiry in case you haven’t automated the process.
If this causes you “tension”, perhaps you should try another career path.
I can’t facepalm this comment enough. Yes, Let’s Encrypt’s key goal of securing the web means they literally want every website visitor to see insecure site warnings after 90 days.
Offering certificates for longer than a year is a security issue. SSL Labs will actually downgrade your rating if your certificate has an extremely long expiry. But being the security expert, you knew that already, right?
Even My1 disagrees with you. That’s how embarrassingly wrong your post is.
wait a sec. this doesn’t work unconditionally. unless people change the keys, exchanging the cert won’t help. also, obviously, the key needs to be made on a secure system with a good DRM (again, debian weak keys)
do they have anything written on how long the lifetime has to be before they degrade the rating?
also this should only apply to DV certs, because short EV or OV certs aren’t exactly practical due to the paperwork etc.
it doesn’t actually need to be longer than 1 year, but a year would be enough to give people who want more control not too much annoyance.
and giving one year as an option for the control people wouldn’t lower the security of the people who DON’T choose that option.
well, the tension part is for those who can’t automate the certs (way too exotic a setup, a shared host that doesn’t support it yet, who knows what else can cause this) or don’t want to (control).
nice joke, NOT! I feel about the 90-day thing much like I do about Windows 10: everybody has their own opinions and their own unique situations (I have my reasons against it, others do too, and others again like it), but forcing it in ANY direction is the wrong thing. My favorite word on this subject: OPTION.
For something like the Debian weak keys, where a sophisticated third party can check the public key to determine whether there’s a problem, or for situations where a particular client system is faulty, the Boulder server can detect this and present a suitable error message, prompting subscribers to come here and ask “Why does it say my FooOS is weak?”. With a conventional 3-year certificate, CAs were left with an awkward 3-year period in which they had to choose between revoking these poor-quality certificates unilaterally or just hoping the subscriber would be shaken awake and act to replace them; for Let’s Encrypt that window is only 3 months, which is markedly better.
The BR issuance rules control the period during which CAs can issue based on documents they’ve seen in the past - they don’t require everything to be re-done for each issuance. A CA can (and for bigger customers they definitely do) just sort out the paperwork once every couple of years and then, in between those sessions, issue whatever is requested with the same turnaround as for DV.
This is actually a common mistake made by commercial CAs. The security we’re primarily concerned with isn’t security for individual subscribers, but for the public Internet as a whole. The interests of Relying Parties (people who trust TLS certificates to authenticate things, i.e. all of us) are often forgotten; they’re largely omitted in the present BR audits and given only lip service in the Root Programmes, other than perhaps Mozilla’s. Limited validity periods help the Relying Parties make good decisions. Letting individual subscribers opt out of that on a whim makes little more sense than letting individual drivers opt out of the speed limit on my road.
oops, I was in a DRM topic a bit before posting and my head got heated a bit because of that.
I meant RNG.
well, there are always the “funny people” who try to get the best rating in every benchmark and then complain that nobody can connect.
aside from the lifetime (and the wildcards), LE is the only really good option in the free-cert “business”. CAcert isn’t trusted yet, and both WoSign and StartSSL have annoying restrictions to push users towards their paid certs.
right above the quote you have the answer.
well, in the end it will only affect the parties who rely on the cert for that one site. that’s what I meant.
when someone has a bad cert on example.com, the users of example.org are not affected.
and using Must-Staple or similar also gives you revocation that actually works.
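For what it's worth, certbot does expose this as a flag; a sketch of requesting and then checking such a certificate (the domain and cert path are placeholders, and the exact text of the openssl output varies by OpenSSL version):

```shell
# Request a certificate carrying the OCSP Must-Staple (TLS Feature) extension.
certbot certonly --must-staple -d example.com

# Confirm the extension made it into the issued certificate; look for
# "TLS Feature: status_request" in the output.
openssl x509 -in /etc/letsencrypt/live/example.com/cert.pem -noout -text \
  | grep -A1 "TLS Feature"
```

Note that Must-Staple only helps if the web server is actually configured to staple OCSP responses; otherwise clients that honour the extension will refuse the connection.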
@hotwap, do you have anything to add to this thread that hasn’t already been said here at least 50 times (or, for that matter, even mentioned in the OP)?
Let me say something: the only reason I use LE is that this is the only CA that accommodates my 20 domains and subdomains. If StartSSL allowed up to 20 domains/subdomains, I would happily switch to them. Finally, wildcard certificates from LE are not yet available (and I don’t know whether the project managers even want to allow that…).
Heh! And I always thought you were one of those people aiming for fringe cases!
Then make it work for you. Stop pretending extreme edge cases are normal and just make it work. If not, stop pretending it doesn’t work for me (or anyone else) just because of $fictional_example. Either use it, or move on.
Yes they are. Low hanging fruit are always the first to go, and so far, LE is proving itself high hanging fruit. While a high tide lifts all boats, there are still many people not in a boat at all as the tide rises. I want to help, but I can’t stop everyone from drowning.
90 days seems just perfect to me, and the renewal can be automated with the simplest of cron jobs. For unsupported servers like IIS I am certain someone will write something to automate the renewal and key-generation process. This is all still in its early days, so as with anything, some people may just have to wait a little longer for proper support.
I don’t know whether this was already mentioned, but I found one reason why Google, for example, uses 90-day certificates:
If you poke around Google’s SSL configuration, you’ll see that (!) they use certificates signed with SHA-1. But each certificate expires in 3 months, a short-lived window that reduces the chances that a certificate could be forged, while they migrate to SHA-2 in 2015.
So actually this is another argument for 90-day certs. Of course LE does not offer SHA-1 certificates, but even if SHA-256 were vulnerable to collision attacks, one would need to be able to carry out such an attack in under 90 days to successfully attack a TLS connection.
As SHA-256 is considered secure, this advantage is of course only theoretical, but it’s still a point in favour of 90 days.
I support the choice of 90 days. Alas, no, rugk: although LE’s ability to apply a rolling upgrade to the certificates they issue in just 90 days due to the short expiry is useful for some security problems (e.g. if we had to migrate everybody off x509v3 in a hurry for some reason), it doesn’t help here.
MD-style hashes (which include SHA-1) are often first broken with a chosen-prefix attack: the attacker discovers how to create any certificate of their choosing from an issuer, given that they can persuade the issuer to issue a genuine certificate with a particular set of bytes at the start.
So although genuine Let’s Encrypt end-user certificates expire 90 days from the date of issuance, a successful chosen-prefix attack would not produce a certificate for the same names, also expiring in 90 days, but with the attacker’s chosen public key - instead it would typically produce a subCA certificate, expiring in 25 years, with the attacker’s public key. Ninety days doesn’t prevent the disaster.
Our main defence today against weak hashes is the insistence in the CA Baseline Requirements that every issuer must choose long, random values for the certificate “serial number”, which appears at the very start of the certificate. This makes a chosen-prefix attack hard if the CA obeys the rules: the attacker can ask for anything they want, but they can’t (shouldn’t be able to, if their target obeys the BRs) control the bytes near the start that make up the serial number. So they can’t choose the prefix they need, except by trying over, and over, and over again in the hope of getting a serial number that suits their attack.
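To put a number on "long, random": the BRs require at least 64 bits of CSPRNG output in the serial. A one-line sketch of how an issuer might draw such a value (this is an illustration, not any particular CA's code):

```shell
# Draw 9 random bytes (72 bits, comfortably above the 64-bit BR minimum)
# from OpenSSL's CSPRNG for use as a certificate serial number.
serial=$(openssl rand -hex 9)
echo "$serial"
```

Because the attacker cannot predict these bytes, they cannot pre-compute the colliding prefix they need before the genuine certificate is signed.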
Edited to add: for Let’s Encrypt specifically, but not most commercial CAs, the other defence is that a subCA wouldn’t work from the intermediates used in production. The X1, X2 and now X3, X4 intermediates used by Let’s Encrypt are signed with “pathlen:0”, which means a proper X.509 implementation should reject a subCA certificate seemingly signed by those intermediates. So an attacker can’t actually pull off this trick, at least against mainstream web browsers today, with Let’s Encrypt - but again, not because of the 90-day limit.
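You can see what pathlen:0 looks like by building a throwaway CA certificate with the same constraint (requires OpenSSL 1.1.1+ for `-addext`; all the names here are made up):

```shell
# Create a toy "intermediate" constrained with pathlen:0, meaning it may sign
# end-entity certificates, but any subCA it signs must be rejected by a
# conforming X.509 implementation.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout toy-ca.key -out toy-ca.pem -subj "/CN=Toy Intermediate" \
  -addext "basicConstraints=critical,CA:TRUE,pathlen:0"

# Inspect the constraint; expect "CA:TRUE, pathlen:0" in the output.
openssl x509 -in toy-ca.pem -noout -text | grep -A1 "Basic Constraints"
```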
Something I haven’t seen brought up is PaaS hosts and the sites that use them. Virtually all PaaS hosts won’t let you install software, so you can’t run the LE client on them. At best you can use manual mode to get a certificate and manually add it to the system.
For many of my clients there’s a price break where they are willing to have me set this up for them if they don’t have to buy a certificate, but if they’re paying me to manually update a certificate four times a year, then they’re just going to say ‘screw it’ and not get one. The ones that are willing to pay for a certificate would come out ahead by purchasing one that lasts at least a year and having me install it rather than having me do it four times in a year.
90 days makes sense as a default, but I think it’s heavy-handed to force it. Assume that the people using the software know what they’re doing, know their requirements, and know the best solution for their specific problems, and let them override the default if they’ve put in the effort of reading the docs and figuring out how.
You can use alternate clients such as getssl, which are specifically designed to run remotely from the site and automate renewal of certificates. Also, if you use the dns-01 challenge, there are a significant number of alternate clients you can use to automate the process - certainly all the Bash and Go ones. The dns-01 challenge should be included in the main certbot client soon as well.
DNS-record-based verification is more difficult to automate, especially when the DNS servers may be quite separate from the HTTPS servers. So for most it will remain a manual step. Requiring a manual process every 90 days is too much.
Most DNS servers have a way to automate adding a record. Whether the DNS runs on the same server as the web server (e.g. BIND on many Linux servers) or is a remote service, there is generally an API (all the various DNS providers I’ve used have one - Cloudflare, ClouDNS, FreeDNS, Rage4…), so automation is relatively easy in my opinion.
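To make that concrete, a dns-01 hook usually boils down to one authenticated HTTP call that publishes the challenge as a TXT record. Everything below (the endpoint, token variable, and JSON shape) is an invented placeholder, since every provider's API differs:

```shell
# Build the TXT-record payload for a hypothetical provider API.
# $1 = the challenge token handed to the hook by the ACME client.
make_txt_payload() {
  printf '{"type":"TXT","name":"_acme-challenge","content":"%s","ttl":120}' "$1"
}

payload=$(make_txt_payload "example-challenge-token")
echo "$payload"

# In real use the hook would then POST it, e.g.:
# curl -s -X POST "https://api.example-dns.net/v1/zones/example.com/records" \
#   -H "Authorization: Bearer $DNS_API_TOKEN" \
#   -H "Content-Type: application/json" -d "$payload"
```

After the record propagates, the client tells the ACME server to validate, then the hook deletes the record again.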
A quick check of the top 80% of market share for managed DNS ( http://www.w3cook.com/managed-dns/summary/ ) shows they all have a good API for automating the process. I didn’t check beyond the top 80%, but I suspect most of the remaining 20% also have an API suitable for automation.
A manual process shouldn’t be required. In addition, LE remembers the DNS verification for 300 days (from memory), so even if there were no API you wouldn’t need to manually add the DNS record every 3 months.
Most larger organizations aren’t going to have their authoritative bind servers running on the web hosts. While bind has a remote API, exposing it is a security risk. We have HTTPS servers running in various places – several cloud hosts in several of their facilities, as well as our own facilities, both for public and internal purposes. Giving all those web servers access to update our DNS would be a severe security risk. For instance, we have one sitting out on its own on AWS running WordPress. Now, WordPress just this week had yet another update to close yet another remote-exploitation vulnerability. The last thing we want is to allow anyone who compromises that server to start altering our DNS listings.
We also, for some less-crucial domains, have them at a registrar with a fine web interface – but no remote API. Which is good. Remote APIs are a security risk. This whole Let’s Encrypt effort is about minimizing those.
But LE remembering DNS verification for 300 days is good if true.
You don’t have to have all the individual servers having permission to update your DNS servers, I agree that would be a security risk. You can run all the certificate verification and certificate updates from a single secure server if you wish, then automate pushing the certificate to the servers ( which can be done securely over ssh from a known static IP address )
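A sketch of that push step, assuming key-based ssh from the central renewal host with a known static IP (the hostnames, user, and paths are all invented for illustration):

```shell
# After a successful renewal on the central host (e.g. from a deploy hook),
# copy the new certificate chain and key to a web server, then reload its
# TLS configuration. Only this one host needs DNS/ACME credentials; the web
# servers only need to accept ssh from it.
scp /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    deploy@web1.example.com:/etc/ssl/example.com/
ssh deploy@web1.example.com 'sudo systemctl reload nginx'
```

This keeps the attack surface described above small: compromising the WordPress box gains nothing DNS-related, because that box never holds the API credentials.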
Let’s keep the focus on the issue here: whether there should be the option of having certs good for, say, 1 year rather than just 90 days. Yes, I can set up the process you envision. But in a larger organization with multiple servers in multiple locations, there will be many potential points of breakage when servers change location or IP, or go in or out of existence. Writing even a moderately complex custom set of scripts, and maintaining firewall settings and lists of IPs in multiple places so that this works 100%, would be required effort for hundreds or thousands of sysadmins - compared to just allowing certs valid for 1 year, which has been the common standard forever. Those 1-year certs can still be revoked. So we’re saving a few people the labor of revoking stolen certs for stolen domains, while loading labor onto sysadmins who face no such problems and who find 1-year certs just fine for our uses.
You’re missing the key argument here: Revocation does not work. It’s not about saving the time needed for revocation.
The other argument is that allowing one-year certificates would let everyone just continue their usual workflow, while 90 days should help push the industry towards adopting automation, which is generally a safer approach than a manual, error-prone process.