More of a general question for comment than a problem, I suppose. We just figured out an error on an internal "intranet" server app that employees were unable to use. It turns out that the Chrome browser blocks insecure content on websites by default, and since this is an internal server with no outside network/Internet access, we have no certificate installed. This is a very common setup for private intranets, which begs the question: what happens in the future when browser makers block all insecure web access?
In our specific case we have three choices:
create a self-signed cert and install it on that server
push a network-wide registry patch to all employee computers that adds that server to the Chrome browser list of allowed access
move the app out to our DMZ and install a "real" cert
Seems like a toss-up to me, and it made me wonder how the rest of the world handles this...
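For option 2, assuming the block you hit is Chrome's insecure-content blocking, the relevant enterprise policy is InsecureContentAllowedForUrls, which can be pushed as registry values via Group Policy. A sketch of the .reg fragment (the host name is a placeholder, and you should verify the policy name and pattern syntax against Chrome's policy list for your version):

```
Windows Registry Editor Version 5.00

; Assumption: Chrome's InsecureContentAllowedForUrls policy; the host is a placeholder.
; Each numbered string value is one URL pattern the policy allows.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\InsecureContentAllowedForUrls]
"1"="http://intranet-app.corp.example"
```

Note this only papers over the warning for the listed origins; the other two options actually get you working TLS.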
While I think the topic @Bruce5051 mentioned may be useful, I believe there is another option for your use case – that topic is from someone who wants an intranet completely isolated from the public internet, even for certificate procurement. That is an edge case, and a relatively extreme design.
If I understand your situation correctly, it is a very common scenario. In this situation, a common solution is to use the DNS-01 challenge to obtain a publicly trusted SSL certificate via ACME from Let's Encrypt, and install that certificate on your internal server. You can then use internal DNS to map the domain onto the correct IP within your intranet.
Your internal server would never have to be on the public internet, but you must obtain the certificate by:
entering DNS TXT records on the public internet; and
requesting the certificate from a computer that can make outbound connections to the public internet.
For security and ease of use, the DNS records required for ACME can be delegated to a secondary system. I suggest acme-dns, which is described here.
This solution keeps your server isolated on the intranet, and affords you a publicly trusted certificate.
Getting and deploying the certificate to your server can be automated. When I use this model, I typically have a dedicated office laptop with a Certbot post hook that invokes a fabfile.org script, which rsyncs the cert and then SSHes into the internal server to restart the relevant processes.
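As a minimal sketch of that automation (using a Certbot deploy hook and plain rsync/ssh rather than fabfile.org; the host, paths, and service name are all placeholders):

```shell
# Sketch: generate a Certbot deploy hook script (all names/paths are placeholders).
# Certbot would run the hook after each successful renewal, e.g.:
#   certbot renew --deploy-hook ./push-to-intranet.sh
cat > push-to-intranet.sh <<'EOF'
#!/bin/sh
set -eu
INTRANET_HOST="admin@intranet-app.corp.example"                 # placeholder host
CERT_DIR="${RENEWED_LINEAGE:-/etc/letsencrypt/live/example}"    # set by Certbot at run time

# Copy the full chain and private key to the internal server...
rsync -a "$CERT_DIR/fullchain.pem" "$CERT_DIR/privkey.pem" "$INTRANET_HOST:/etc/ssl/intranet/"

# ...then reload whatever serves TLS there (service name is an assumption).
ssh "$INTRANET_HOST" 'sudo systemctl reload nginx'
EOF
chmod +x push-to-intranet.sh
sh -n push-to-intranet.sh && echo "hook script OK"
```

The fabfile.org approach the poster describes does the same two steps (copy, then restart) from Python instead of shell.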
This is difficult to grasp.
How does this system get updates?
How can anyone reach this system?
If anyone who can reach the system can also reach the Internet, then the system can "interact" with the Internet via that "proxy".
IMHO, the most appropriate solution depends entirely on the "size and scope of the environment".
For "larger networks", there is likely a domain-wide CA, which can be used to provide a locally trusted cert.
For "smaller networks", you can create a self-signed cert and use/publish that in various ways, OR place the system behind an internal proxy.
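For the small-network case, a self-signed cert is a one-liner with OpenSSL. A sketch, assuming OpenSSL 1.1.1+ and a placeholder host name (modern browsers require a subjectAltName, which is why the -addext flag matters):

```shell
# Sketch: self-signed cert for a small intranet (host name is a placeholder).
# -addext requires OpenSSL 1.1.1 or newer; browsers reject certs without a SAN.
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout intranet.key -out intranet.crt \
  -subj "/CN=intranet-app.corp.example" \
  -addext "subjectAltName=DNS:intranet-app.corp.example"

# Inspect what was generated:
openssl x509 -in intranet.crt -noout -subject -dates
```

You would then distribute intranet.crt to clients (or to the domain trust store) however your environment allows.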
Running a corporate CA is a very common solution. You push the CA out to your internal devices via whatever device-management solution you use, or install it manually during device setup.
There are plenty of solutions: Microsoft's Active Directory Certificate Services, Red Hat's Identity Management CA, HashiCorp's Vault, Smallstep's ACME-compatible CA, and more. If you're in a cloud environment, there are options there too, like AWS's Private CA product.
You can use public certificates to avoid needing an internal CA too. If your servers are managed via a configuration management tool of some sort, you can centrally handle issuing and distributing certificates - probably using the DNS authentication mechanism to request all the certs you need.
Self-signed certificates are always an option too, though they can be annoying for end users, who will have to trust them individually and understand the warnings when they're replaced.
The main reason people don't want to use public certs internally is that all publicly issued certificates are logged in public Certificate Transparency logs, so they may reveal more than you'd like about your internal infrastructure. It's also an additional dependency on an outside party.
From an internet-connected computer, you can use a DNS challenge to get a publicly-trusted wildcard certificate for *.internal.domain and copy it to the internal servers. This avoids needing to configure clients to accept a private CA while not revealing details of your internal network (other than that it exists). Unfortunately, this works best with a flat naming scheme, since a wildcard only matches a single level of subdomain.
Personally I favour the use of a public CA and DNS validation (using your public DNS to confirm you control the domain) to get a trusted cert. Your actual service does not need to be public. The name used in the cert will indeed be publicly logged to CT logs, but that's largely irrelevant.
Your acme client will need to (ideally automatically) populate a special _acme-challenge TXT record corresponding to the host name you want. For example, if your intranet site had the full name hello.intranet.example.com and your DNS had a zone called intranet.example.com, then you would update the _acme-challenge.hello TXT record in that zone (or just _acme-challenge.hello.intranet in the example.com zone).
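Using those example names, the record your ACME client creates in the intranet.example.com zone would look something like this (the token value shown is a placeholder; the real one is issued by the CA per order):

```
; zone: intranet.example.com
_acme-challenge.hello   300   IN   TXT   "gfj9Xq...Rg85nM"   ; placeholder token from the CA
```

The CA looks this record up over public DNS, so it must be in your public zone even though hello.intranet.example.com itself only resolves internally.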
Once validation is complete and your certificate acquired, you can deploy it anywhere you like; it's just a file (or files) you can copy or import into things. This means you can acquire certificates using any machine - again, it can be internal; it just needs to be able to make outgoing HTTPS connections over the internet to the ACME certificate authority, e.g. Let's Encrypt. It does not need to be public in any way.
If you are a typical Windows based IT organisation I would (predictably!) suggest the client I develop, which does all this and more: https://certifytheweb.com