Nginx Proxy Manager GUI / Setting up new SSL cert

It should not have much of an effect on your problem, but we should disable IPv6 within the docker container - so that we can focus on only IPv4.
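
For what it's worth, one way to do that (just a sketch, assuming the stock jc21/nginx-proxy-manager image started with a plain docker run; the "..." stands in for whatever ports/volumes you actually use) is to set the IPv6 sysctl on the container itself:

docker run --sysctl net.ipv6.conf.all.disable_ipv6=1 ... jc21/nginx-proxy-manager

With docker-compose, the equivalent is a sysctls: entry under the service definition.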

That said, did you do the traceroutes?


also:
traceroute -I4 acme-v02.api.letsencrypt.org
traceroute -I6 acme-v02.api.letsencrypt.org

It didn't accept the "-I4" argument. So I ran the following:

:~$ traceroute acme-v02.api.letsencrypt.org
traceroute to ca80a1adb12a4fbdac5ffcbc944e9a61.pacloudflare.com (172.65.32.248), 64 hops max
  1   192.168.1.1  0.247ms  0.004ms  0.134ms 

And it stops there...
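
If the combined -I4 form isn't accepted, the options can usually be given separately (a sketch, assuming a reasonably standard Linux traceroute; BSD/macOS versions ship a separate traceroute6 instead of -6):

traceroute -4 -I acme-v02.api.letsencrypt.org
traceroute -6 -I acme-v02.api.letsencrypt.org

Here -I switches to ICMP probes and -4/-6 force the address family.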

That statement conflicts with the attempts:

It should not have much of an effect on your problem, but we should disable IPv6 within the docker container - so that we can focus on only IPv4.

Certainly. You have a good point there. If IPv6 is somehow enabled in the Docker VM, it won't get very far in this environment anyway.

I shall have a look... Then I'll post an update when I've checked/disabled all IPv6 networking on the VM. Thank you very much! :wink:

Well, that's hit a brick wall!
Check firewalls... routing... Docker settings...
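
For the routing and Docker-settings part, a couple of quick things to dump from the Docker VM itself (a sketch, assuming a Linux VM with iproute2; adjust for your distro):

ip route show     # default gateway and any static routes the VM knows about
ip -6 addr show   # whether any IPv6 addresses are still configured
docker network ls # which Docker networks exist (inspect one to see its subnet)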


It's interesting that these both worked from the Docker VM while traceroute failed.


Yeah, that's starting to look like a firewall problem.


Thanks for the help so far.
Interestingly enough, the Nginx container lost its internet connection when I disabled IPv6 on the Docker VM.
I shall check the container config and disable any IPv6 settings there, if there are any. It would be strange if I don't find any.

What defines the traffic initiated by the curl command, network-wise? I shall try to add a static rule for outbound traffic and see if that helps. Maybe just rebooting the firewall will help, too. I haven't done that yet. I've never ever had a problem with it that was solved by a reboot... but who knows.
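
On the Linux side, what "defines" that traffic is essentially the VM's routing table plus the source address it picks for the destination. A quick way to see both for this particular endpoint (assuming iproute2 is available on the VM):

ip route get 172.65.32.248

The output names the gateway, outgoing interface and source IP that the curl traffic will use, i.e. what pfSense will actually see arriving.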

@rg305, I disabled IPv6 on both the container and the VM. The container etc. are still working as they should, just without IPv6. :slight_smile: So far so good.
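
For reference, on a typical Linux VM that boils down to something like this (a sketch; exact files differ per distro):

sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

plus the same two keys in /etc/sysctl.conf (applied with sysctl -p) so the change survives a reboot.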

But the problem still exists. It times out, but not on the hypervisor host, only from the VM behind pfSense. When I run the curl command on the host, this is the whole output:

root@host ~ # curl -vvv https://acme-v02.api.letsencrypt.org/directory
*   Trying 172.65.32.248:443...
* Connected to acme-v02.api.letsencrypt.org (172.65.32.248) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=acme-v02.api.letsencrypt.org
*  start date: Apr 26 18:59:11 2022 GMT
*  expire date: Jul 25 18:59:10 2022 GMT
*  subjectAltName: host "acme-v02.api.letsencrypt.org" matched cert's "acme-v02.api.letsencrypt.org"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5574ecefc5c0)
> GET /directory HTTP/2
> Host: acme-v02.api.letsencrypt.org
> user-agent: curl/7.74.0
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200 
< server: nginx
< date: Sun, 22 May 2022 16:26:56 GMT
< content-type: application/json
< content-length: 658
< cache-control: public, max-age=0, no-cache
< x-frame-options: DENY
< strict-transport-security: max-age=604800
< 
{
  "keyChange": "https://acme-v02.api.letsencrypt.org/acme/key-change",
  "meta": {
    "caaIdentities": [
      "letsencrypt.org"
    ],
    "termsOfService": "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf",
    "website": "https://letsencrypt.org"
  },
  "newAccount": "https://acme-v02.api.letsencrypt.org/acme/new-acct",
  "newNonce": "https://acme-v02.api.letsencrypt.org/acme/new-nonce",
  "newOrder": "https://acme-v02.api.letsencrypt.org/acme/new-order",
  "revokeCert": "https://acme-v02.api.letsencrypt.org/acme/revoke-cert",
  "sSXqDHmwoog": "https://community.letsencrypt.org/t/adding-random-entries-to-the-directory/33417"
* Connection #0 to host acme-v02.api.letsencrypt.org left intact

I am looking at "* Trying 172.65.32.248:443..."

Isn't that an RFC 1918 address? Or at least a typical LAN address?
Could it be that pfSense blocks it because it's recognized as a LAN address, instead of forwarding the query out on the WAN side? I'll try to make an allow rule for outbound traffic to that whole subnet, and look for the setting, I know it exists somewhere, that blocks RFC 1918 addresses. Maybe that's it.

No, just a regular IP address, currently pointing at Cloudflare. You're probably confusing it with 172.16.0.0/12, which only covers 172.16.0.0 to 172.31.255.255.
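
For reference, the RFC 1918 private ranges are:

10.0.0.0/8      (10.0.0.0 to 10.255.255.255)
172.16.0.0/12   (172.16.0.0 to 172.31.255.255)
192.168.0.0/16  (192.168.0.0 to 192.168.255.255)

172.65.32.248 falls outside all three, so it wouldn't be caught by any private-network/RFC 1918 filtering anyway.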


Yeah, you're right. sigh.

Start there.

And this is very likely (good eye @Osiris):

And using 172.16.0.0 with a /8 to /11 mask instead.


Hey everyone.
I have located the problem.

This.

I had some issues with traffic between some containers and the WAN IP at some point, so this route was one of the things I tried back then. The traffic towards Let's Encrypt was effectively being routed back to the Docker VM, lol.

I deleted this route, and now the VM can reach the server with curl.
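
If you want to double-check from the pfSense side that the old entry is really gone, the routing table is visible under Diagnostics > Routes, or from a shell with:

netstat -rn | grep '^172'

A 172.0.0.0/8 line pointing at the Docker VM should no longer show up (a generic sanity check only, nothing specific to your setup).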


Thanks, to everyone helping me out !


You can also just use the correct route/mask: 172.16.0.0/12
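
That also lines up with what Docker normally hands out: by default the bridge network is 172.17.0.0/16, and additional networks are typically carved out of the 172.16.0.0/12 space (falling back to 192.168.0.0/16 blocks once that's exhausted), so a 172.16.0.0/12 route covers the usual container subnets without spilling into public address space. The subnets actually in use can be checked with, for example:

docker network inspect bridge | grep Subnet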


The screenshot does not mention this private address space at all. OP was just having a separate brainfart when looking at the Cloudflare IP address, not related to pfSense.


The screenshot shows a route entry that is described as the networks expected to be found within "Docker containers via Docker VM".

So my statement "You can also just use the correct route/mask: 172.16.0.0/12" may have overstated "correct", but it would nonetheless "work", as only non-routable IPs are normally expected inside any Docker container.

Although it is not really a question of "correctness", 172.0.0.0/8 clearly overlaps a very large chunk of real Internet space, and that is what created the problem.
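
To put rough numbers on it: a /8 leaves 24 host bits, so 172.0.0.0/8 spans 2^24 (about 16.8 million) addresses, while the private 172.16.0.0/12 is only 2^20 (about 1 million) of those. The remaining ~15.7 million addresses, including Cloudflare's 172.65.32.248, are ordinary public Internet space that the route was dragging back to the Docker VM.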


Hm, I must confess I have looked at the screenshot multiple times now, and every time I saw 192.0.0.0/8. Looking at it for the fifth or so time, I see it's actually 172.0.0.0/8 :scream: My bad and my apologies.

