Certificate issued but connection is secured only on the local network

My domain is: wgbis.ces.iisc.ernet.in

I ran this command: sudo certbot --apache

It produced this output:

Successfully deployed certificate for wgbis.ces.iisc.ernet.in to /etc/apache2/sites-available/000-default-le-ssl.conf
Redirecting vhost in /etc/apache2/sites-enabled/000-default.conf to ssl vhost in /etc/apache2/sites-available/000-default-le-ssl.conf
Congratulations! You have successfully enabled HTTPS on https://wgbis.ces.iisc.ernet.in

My web server is (include version): Apache/2.4.41 (Ubuntu)

The operating system my web server runs on is (include version): Ubuntu 20.04

My hosting provider, if applicable, is:

I can login to a root shell on my machine (yes or no, or I don't know): yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel):

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot): certbot 1.31.0

I see two IP addresses. Is your server behind some kind of reverse proxy? Do you have any control over that?
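For reference, both addresses show up in a plain DNS lookup of the hostname, so you can reproduce the check with something like:

dig +short wgbis.ces.iisc.ernet.in A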


No reverse proxy, but it is getting aliased to wgbis.ces.iisc.ac.in.
I do have control over the server.

14.139.128.76
14.139.128.75

Who controls these two IPs?

You need to tell the machine(s) that respond on those IPs to use your certificate (or obtain one themselves, for the same domain name).
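One way to see which certificate each address is actually presenting is to pin the TLS handshake to one IP at a time, roughly like this (a sketch; run it from somewhere outside your network):

openssl s_client -connect 14.139.128.76:443 -servername wgbis.ces.iisc.ernet.in </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
openssl s_client -connect 14.139.128.75:443 -servername wgbis.ces.iisc.ernet.in </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer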


I got some suggestions that self-signed certificates will work on subdomains, but in my case, since the URL had two subdomains, it was failing. Is that the case?
Do we have limits on the number of subdomains in a URL?

Nope. There's a limit on total length but you're not even close to it.

Wildcards only work on a single label, though:

A certificate for *.example.com will work for www.example.com and blog.example.com, but not for popeye.blog.example.com.
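If you ever need to check exactly which names a given certificate covers, the Subject Alternative Name extension lists them; a rough sketch against your live site (the -ext flag needs OpenSSL 1.1.1 or later):

echo | openssl s_client -connect wgbis.ces.iisc.ernet.in:443 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName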


Who controls your squid server?

curl -Ik https://wgbis.ces.iisc.ernet.in
HTTP/1.1 200 OK
Date: Wed, 12 Oct 2022 11:15:23 GMT
Server: Apache/2.4.41 (Ubuntu)

Via: 1.1 rp2.iisc.ernet.in (squid/3.5.20)

That I'm not aware of, though.

So will using a wildcard work in my case?

You should find out about squid, because HTTPS requests to your domain are using the cert below and not the one shown in your first post.

openssl s_client -connect wgbis.ces.iisc.ernet.in:443 

Certificate chain
 0 s:C = IN, ST = KARNATAKA, L = BANGALORE, O = IISc, CN = *.iisc.ernet.in, emailAddress = networksupport@(redacted)
   i:C = IN, ST = KARNATAKA, L = BANGALORE, O = IISc, CN = *.iisc.ernet.in, emailAddress = networksupport@(redacted)
   a:PKEY: rsaEncryption, 1024 (bit); sigalg: RSA-SHA256
   v:NotBefore: Oct 23 07:07:03 2019 GMT; NotAfter: Oct 20 07:07:03 2029 GMT
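If you want to confirm that your local Apache really is serving the new Let's Encrypt certificate (as opposed to what the outside world sees through the proxy), you can run the same kind of check from the server itself; a sketch, assuming Apache listens on 443 locally:

echo | openssl s_client -connect 127.0.0.1:443 -servername wgbis.ces.iisc.ernet.in 2>/dev/null | openssl x509 -noout -subject -issuer -dates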

No. It won't change anything in your current issue.


Sure, I'll check on it.

Thanks, I'll follow your previous suggestion.


Thanks all for the responses. The problem was that after the certificate was generated successfully, I had to pass it to the system admin who handles the domain and related infrastructure. After I passed the certificate along, it is now working, though I don't know what happened behind the scenes.
Now there is a new problem: since the secure connection was enabled, all the sites with login forms are returning a 403 Forbidden error.


You'd need to point to an example for more help there, but you'd need to investigate how the web application accepts and processes the supplied login information, as that is the part returning the 403 error. I assume this isn't the first time this site has been HTTPS enabled?
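A couple of things worth checking to narrow it down (a sketch; the log path is the Ubuntu/Apache default and may differ on your box):

sudo apache2ctl -S                        # shows which vhost and document root actually handle the requests
sudo tail -f /var/log/apache2/error.log   # watch the reason Apache logs when a 403 is returned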

You may also want to know that some of the other links don't work: https://ces.iisc.ernet.in/hpg/envis/ (https://ces.iisc.ernet.in/ is using the cert for *.iisc.ac.in which obviously doesn't match ces.iisc.ernet.in, so it won't work).

I recommend having a test environment for your systems so you can recreate your entire live website on a set of test servers. This allows you to test changes, improve your system administration practices, and figure out configuration problems before they become problems on your live servers.

It seems like you have a central squid server (which, incidentally, is very dated) handling traffic to your websites and then passing it to the internal servers that actually serve each website. This means the squid server is the one terminating SSL and presenting all of the HTTPS connections/certificates. It may be time to modernise that approach.

As an aside, I'd also be very confident that some of your website content dates back about 25 years (.htm files and table-based layouts are the clue), so it might be worth having a project to modernise everything (content included) if these websites are important. In doing so you could also consider moving to modern web servers, content management platforms, etc. This has the advantage of up-skilling the current people who support the system, rather than having them maintain old systems they don't understand.

