I made a few certificates using a standard, newly installed certbot, on a fresh Ubuntu 18 server with a standard LAMP stack, via SSH under the root account.
Didn’t do ANYTHING special or out of order, all standard installs!
The resulting certificate(s) are not recognized by any browser: they all report "invalid CA, common name: localhost."
Those are self-signed certificates, also known as "snake-oil" certificates. They're installed by default and obviously won't be trusted at large because they weren't issued by a trusted CA, like Let's Encrypt.
Did you reload your webserver after running certbot?
Clear, but since Apache needs at least 1 certificate to start TLS/SSL to begin with, that "dummy" certificate MUST be present, otherwise we don't even start to row the boat...
I can reboot and (gracefully) restart whatever I want, to no avail... Where the heck does that "localhost" come from? And if Let's Encrypt advertises "fully updating Apache", why doesn't it (obviously) do that?
@griffin: solved. For everyone who's having the same problem:
TLS/SSL connections use SNI to indicate the name of the intended domain when connecting (since multiple domains can be served from one address; someone was asleep when the protocol was written down, because the exact same mistake had already been made in 1992, when the Host: header was forgotten :-)
In order to have a last resort when the hostname can't be matched, a default certificate (which is NOT a "root" certificate, as mr HardcoreGames describes it) must be present; this defaults to a self-signed X.509 file. BUT, that default certificate MUST be properly signed!
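Where that "localhost" comes from is easy to reproduce: a distribution's default self-signed certificate is typically generated at install time with CN=localhost. A minimal sketch (the /tmp filenames are illustrative, not the paths your distribution actually uses):

```shell
# Generate a throwaway self-signed certificate the way a default
# install might, with CN=localhost. Filenames are demo values only.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-snakeoil.key -out /tmp/demo-snakeoil.crt \
  -days 1 -subj "/CN=localhost"

# The subject is exactly what the browser complains about:
openssl x509 -in /tmp/demo-snakeoil.crt -noout -subject

# ...and it is self-signed: issuer == subject.
openssl x509 -in /tmp/demo-snakeoil.crt -noout -issuer
```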
So, to solve it: replace both .key and .crt in /etc/apache2/ssl and reboot/restart Apache.
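Instead of overwriting the files in place, you can also point the vhost directly at the files certbot wrote; /etc/letsencrypt/live/ is certbot's documented layout, and "your-domain.example" below is a placeholder for your actual domain:

```apache
# Sketch of an Apache vhost using the certbot-obtained certificate
# instead of the distribution's self-signed pair.
<VirtualHost *:443>
    ServerName your-domain.example
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/your-domain.example/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/your-domain.example/privkey.pem
</VirtualHost>
```

After editing, `apachectl configtest` followed by a reload picks up the change without a full reboot.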
If you're thinking of Certbot, the trouble is that Certbot doesn't know how or to what extent your prior certificate configuration is intentional. Some people might deliberately use a mix of self-signed and CA-issued certificates, for example for internal or test domains vs. publicly-visible ones.
The self-signed or snake oil certificates are, I believe, typically created by the OS package that installs the web server (e.g. an Ubuntu package for Apache). However, we don't have a uniform standard way for this package to tell Certbot (or another client), or for Certbot (or another client) to tell this package, "this certificate is irrelevant/temporary, so feel free to replace it with a different one".
Although it's true that SNI is a sadly late addition to the TLS standard, almost all clients support it by now. (For the command-line openssl s_client, you have to specify -servername in order to tell it what to send.) So I don't think it's that bad that non-SNI connections may fail for this reason. They're also extremely hard to get right in an automated way, because different administrators may want or expect different behaviors in this case.
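The effect of SNI is easy to observe from the command line. A quick sketch (example.com stands in for any HTTPS host you want to probe):

```shell
# With -servername, s_client sends SNI and the server can pick the
# matching vhost certificate. example.com is just an illustration.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject

# Without -servername (no SNI), many servers fall back to their
# default certificate - which is how a CN=localhost snakeoil cert
# can end up in front of your browser.
```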
Sorry to say - respectfully - that's not true: it's VERY simple to create a list of the standard keys that are included in standard installs and check the certificate data against that predefined list. Preschool level for students, to be honest. I just checked 7 installs of Ubuntu/Apache, and guess what? On 6 out of the 7, the pre-installed standard certs are one and the same! The same probably goes for all other combinations (since those developers are just as lazy :-), so it's about 2 hours of work - max - to prevent a LOT of problems in 1 out of 4 installs...
I just solved the problem the same way for auto-installs on servers; it took me 18 minutes to write that script, so please don't tell me it's a) not possible or b) too much work.
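The kind of check described above can be sketched as follows. This is not the poster's actual script; the list file and certificate path are demo values, and a real tool would need a genuine inventory of distribution-default fingerprints:

```shell
# Compare a certificate's SHA-256 fingerprint against a list of
# known default/snakeoil fingerprints, one per line.
# Both paths below are demo values.
known_defaults=/tmp/known-snakeoil-fingerprints.txt
cert=/tmp/suspect.crt

fp=$(openssl x509 -in "$cert" -noout -fingerprint -sha256 | cut -d= -f2)
if grep -qF "$fp" "$known_defaults"; then
  echo "known default certificate detected - safe to replace"
else
  echo "custom certificate - leave it alone"
fi
```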
(and I say that with the highest respect for every developer!)
That's interesting, but I feel like I was thinking about this at a higher level of abstraction, in the sense that I don't think the OS packagers are making a promise about that. So I think it would be fair to call it undocumented behavior that could easily change in the future.
(This does suggest possible scope for trying to coordinate with the OS packagers about having an official way to detect this.)
I think of this issue as kind of on par with grepping for a particular natural-language string in the output of a command in order to detect an error condition. It's extremely possible, and might be very useful, but the string could also easily change from one version to another, so it might be risky to rely on it. The best case would be for the documentation of the software to explicitly standardize this output to some extent.

I'm thinking about something like the difference between a library call returning ENOENT (which is part of a standardized interface) and a particular program displaying "No such file or directory" or "File not found" or something. The text "No such file or directory" might even be standardized by an operating system—as the output of perror(), say—but it might still change from one locale to another or one OS to another, so that scripts that look for it might have unexpectedly limited portability.
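The same contrast shows up directly in shell scripting. A small sketch, using a path that is assumed not to exist:

```shell
# Fragile: matching the human-readable message, which varies with
# locale (and sometimes with OS or tool version).
if ls /no/such/file 2>&1 | grep -q "No such file"; then
  echo "missing (detected via message text)"
fi

# Robust: the exit status is part of the documented interface and
# does not depend on language settings.
if ! ls /no/such/file 2>/dev/null; then
  echo "missing (detected via exit status)"
fi
```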