Options for private websites

I run external websites from my own systems and use Let's Encrypt via certbot quite happily. I also have a number of sites/services which are not accessible from the internet, and in the past I have run them on http/80, which was fine for what I needed.

Recently I have been finding that changes to common web browsers make it extremely difficult to use http:// sites; the browser appears to auto-redirect to https:// and then (of course) fails to connect properly.

I am therefore somewhat resigned to setting up the http: sites with certificates, either via a proxy or directly. If there is a way to tell browsers (FF, Chrome, Opera, Safari) never to redirect (to believe me when I say “http://”) that would be good, but I haven’t found anything. [I can ‘curl’ the sites as http:// without issue, and network probes show no connection attempt to port 80 at all, so it isn’t a server-initiated redirect.]

The web servers (some of which are embedded, not apache/nginx etc) are not directly accessible from the net, the DNS is entirely local, and I don’t want to change either point. In general the OS is some flavour of Ubuntu, mostly 18.04.

What options should I be considering?

I have so far considered:

  • simple management tools like XCA (which is OK, but I found it clunky to use);
  • setting up an OSS web-interfaced CA to manage certs (but I can’t see a good one that runs on Ubuntu and has limited external requirements - needing Fedora’s 389 Directory Server for OpenCA is a blocker, for example);
  • manual openssl (but I really want something that remembers what, why and when certs were made – limited fuss).
1 Like

This shouldn't happen, with some minor exceptions (like the .dev gTLD, which enforces HTTP Strict Transport Security for every domain).

If your browser is redirecting to HTTPS for ordinary domains, it's because the server is sending a redirect, or because the browser saw a 301 "Permanent" redirect in the past for that domain, or because it saw an HSTS header for the base domain.

Clearing the browser cache and HSTS cache should take care of those, if that's the case.
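If it helps to pin down which of those is happening, a quick command-line check (other.mydomain.com here is just a placeholder for one of the internal hosts) will show whether the server itself is sending a redirect or an HSTS header:

    # Show only the response headers from the plain-HTTP site;
    # an empty result for Location/Strict-Transport-Security means
    # the redirect is coming from the browser, not the server.
    curl -sI http://other.mydomain.com/ | grep -iE 'HTTP/|location|strict-transport-security'

In Chrome you can also query and delete cached HSTS entries for a domain at chrome://net-internals/#hsts.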

2 Likes

Should all that fail and/or you would still like to encrypt the internal HTTP traffic:
You could try getting one wildcard cert via DNS authentication and copying it to all the internal servers that need TLS [presuming they can see one another - and repeating that every 90 days].
Or you could just use some sort of internal PKI system that could provide certs once (for many years).
Otherwise, each internal system might also be able to get its own LE cert via DNS authentication.
[not sure why you would need to go down this route]
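For the wildcard-via-DNS option, a minimal sketch of what the certbot run could look like, using a hypothetical internal.yourdomain.com zone and certbot's manual mode (a DNS plugin or hook script would be needed to automate renewals):

    # One-off wildcard issuance via the DNS-01 challenge; certbot prints a TXT
    # value to publish at _acme-challenge.internal.yourdomain.com before continuing.
    certbot certonly --manual --preferred-challenges dns -d '*.internal.yourdomain.com'

The resulting certificate and key would then have to be copied to each internal server, and the whole process repeated roughly every 90 days unless the DNS updates are scripted.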

1 Like

Thank you both; _az was right (very embarrassingly). I think the problem was a Header directive in the ssl.conf file on my proxy server, which was included globally. Fixing it now.
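[Illustrative only, not the actual config: an HSTS Header directive in a globally included ssl.conf applies to every name the proxy serves, whereas placing it inside the public site's VirtualHost confines it to that host.]

    # In a globally included ssl.conf this applies to *every* vhost on the proxy:
    # Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

    # Scoped to the public site only (requires mod_headers):
    <VirtualHost *:443>
        ServerName www.mydomain.com
        Header always set Strict-Transport-Security "max-age=31536000"
    </VirtualHost>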

Thank you rg305 - your info is very helpful. I did think of wildcard certs, and that would solve the problem, but my internal DNS is, well, internal, and my external DNS is at ISP Zen, which doesn’t seem to be supported by certbot :frowning: Is that correct?

Ruth

2 Likes

Hi @rivimey

how do you use these internal sites / services?

Isn't it possible to use a self-signed certificate with a very long duration (10 - 20 years)?

I have such an internal service, used by only one client (the db server, running my own code). So it was easy to create an exception and accept the self-signed certificate. Works perfectly; no renewal is required.
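A minimal sketch of creating such a long-lived self-signed certificate with openssl (the host name and lifetime are just examples):

    # 10-year self-signed cert and key for a hypothetical internal host
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
        -keyout internal.key -out internal.crt \
        -subj '/CN=squeezebox.internal'

Each client still has to accept an exception or be told to trust the certificate, which is the per-machine installation step discussed further down the thread.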

1 Like

If you can delegate a DNS zone to a system at your site, you could manage that zone from within your network.
i.e.
yourdomain.com is controlled at ISP Zen
[delegates yoursite.yourdomain.com to your site]
yoursite.yourdomain.com is controlled by DNS within your site and is allowed AXFR, IXFR, etc. with the ISP
[delegates internal.yoursite.yourdomain.com to your (truly) internal DNS]
internal.yoursite.yourdomain.com is controlled by DNS within your internal network and allows updates from local clients (acme.sh | certbot),
which can then authenticate issuance of:
serverX.internal.yoursite.yourdomain.com
or even:
*.internal.yoursite.yourdomain.com

[hope that was clear]

edit: actually change “delegates” to “becomes a secondary DNS of” and you can better understand it
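[Taking the plain delegation reading above (rather than the secondary-DNS variant in the edit), a rough sketch of the parent-zone records involved, with hypothetical names and example addresses:]

    ; In the yourdomain.com zone held at ISP Zen:
    yoursite.yourdomain.com.        IN NS  ns1.yoursite.yourdomain.com.
    ns1.yoursite.yourdomain.com.    IN A   203.0.113.10    ; glue record, example address

    ; In the yoursite.yourdomain.com zone on the site's own DNS server:
    internal.yoursite.yourdomain.com.     IN NS  ns1.internal.yoursite.yourdomain.com.
    ns1.internal.yoursite.yourdomain.com. IN A   192.168.1.53    ; internal resolver, example address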

1 Like

Juergen, I use the services from various systems including my MacBook, a Win10 desktop and various tablets/phones. It’s things like a Squeezebox server, a print server, the CCTV system, etc. No intention of making them even slightly public!

I could use a long-lived cert, but it would require me to install something on multiple machines, some of which are “hard”. Not keen, but I thought it might be the only way. I do have a couple of such certs, in fact, but am aware I will have to start from scratch when they expire - I was hoping not to repeat that.

rg305 - delegation might be possible. I would have to ask Zen, I think, whether they support it. Good thought, anyway.

1 Like

Just had a horrible thought. I have an external webserver running on www.mydomain.com, which uses SSL and has a cert. It therefore has an HSTS directive as well.

I have other sites on other.mydomain.com which are http-only, as discussed. Would the external site's HSTS directive "poison" the http-only sites?

[perhaps that's why it seemed to work and then broke?]

1 Like

Having an SSL certificate doesn’t necessarily mean you have HSTS. That’s an opt-in header that a server administrator might add: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security

If you did have an HSTS header on mydomain.com, and it used the includeSubDomains directive, then yes, it would poison all subdomains, if the browser had observed the header at least once.
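For reference, the two forms of the header differ only in that directive; only the second one would affect a subdomain such as other.mydomain.com:

    Strict-Transport-Security: max-age=31536000
    Strict-Transport-Security: max-age=31536000; includeSubDomains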

4 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.