Breach and migrating to a new server

My domains are: ioan.blog, travel.charity

My server instance is Ubuntu 24.04 (Noble Numbat) running the nginx-proxy and acme-companion Docker containers. I've followed their docs and named the private key travel.charity.dhparam.pem, as per their documentation.

I think the issue is with HSTS.

I've moved hosts from 51.15.82.94 to 51.159.171.79

Trying to get travel.charity running again now.

Certain agents have cut off my access to the original server.

acme-companion ran fine while the website was on the domain temp.travel.charity; however, when I changed the DNS entry for travel.charity, it stopped working.

I can login to a root shell on my machine (yes or no, or I don't know): yes

The version of my client Certbot is: 2.11.0

After I recover the certificate on this domain, I will move over ioan.blog and make it friendlier to the user by showing an up-front prompt that it's my blog and my thoughts and that I'm not liable for anything, similar to your T&Cs, but from an individual's perspective. I will then use my domain ioan.dev to introduce myself and give the user a key to access the blog.

Thank you for your assistance

The instructions I've followed are:

You can disable the certificate for ioan.blog until I instantiate another one; it's already expired anyway.

my docker-compose for my website is:

services:
  travel-charity-nitro:
    container_name: travel-charity-nitro
    image: rg.fr-par.scw.cloud/ioan-pari/travel-charity-nitro:1.0.1
    environment:
      - VIRTUAL_HOST=travel.charity
      - VIRTUAL_PORT=3000
    restart: unless-stopped
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external: true
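
(Side note: per the acme-companion README, if I've read it right, the companion only requests certificates for containers that also set a LETSENCRYPT_HOST variable; VIRTUAL_HOST alone just configures nginx-proxy routing. A sketch of what the environment section would look like with it added:)

    environment:
      - VIRTUAL_HOST=travel.charity
      - VIRTUAL_PORT=3000
      # acme-companion keys off this variable when deciding which
      # containers to request certificates for
      - LETSENCRYPT_HOST=travel.charity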

and this one is for nginx-proxy:

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:latest
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
    networks:
      - nginx-proxy

  acme-companion:
    image: nginxproxy/acme-companion:latest
    container_name: nginx-proxy-acme
    environment:
      - DEFAULT_EMAIL=ioanb7code@gmail.com
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
    networks:
      - nginx-proxy

volumes:
  conf:
  vhost:
  html:
  certs:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /root/server-1/nginx-proxy/
  acme:

networks:
  nginx-proxy:
    external: true
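
(While debugging, it may help to check what the proxy actually sees; something like this, using the container names above:)

# list the certificate files from inside the proxy container
docker exec nginx-proxy ls -l /etc/nginx/certs

# and watch why the companion is (or isn't) requesting certs
docker logs --tail 100 nginx-proxy-acme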

I will symlink files before their expiration date (90 days); I just want to see both domains working for now.
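
(Longer term, instead of re-linking by hand every 90 days, I could let certbot renew on a schedule; a rough sketch, assuming certbot's default paths:)

# rehearse the renewal first
certbot renew --dry-run

# then e.g. a daily cron entry
# 0 3 * * * certbot renew --quiet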

Hello @ioan, welcome to the Let's Encrypt community. :slightly_smiling_face:

What is your question or issue?

Also you mention:

yet here:

:man_shrugging:


Hi @Bruce5051,

thanks for the warm welcome and for taking the time to answer.

My question is: How long does it take for me to validate against the authority?

I am using acme-companion with a custom config, as I've mentioned.

Maybe I should use the fullchain.pem instead of privkey.pem?

Here is the tree and the logs:

├── certs
│   └── travel.charity
│       ├── README
│       ├── cert.pem -> ../../archive/travel.charity/cert1.pem
│       ├── chain.pem -> ../../archive/travel.charity/chain1.pem
│       ├── fullchain.pem -> ../../archive/travel.charity/fullchain1.pem
│       └── travel.charity.dhparam.pem -> ../../archive/travel.charity/privkey1.pem
└── docker-compose.yml
dockergen.1 | 2024/06/11 20:39:31 Received event stop for container b93b0cdaa981
dockergen.1 | 2024/06/11 20:39:31 Received event die for container b93b0cdaa981
SIGQUIT: quit
PC=0x7c1ac m=0 sigcode=0

goroutine 0 [idle]:
runtime.futex()
	/usr/local/go/src/runtime/sys_linux_arm64.s:651 +0x1c fp=0xffffdf93add0 sp=0xffffdf93add0 pc=0x7c1ac
runtime.futexsleep(0xffffdf93ae58?, 0x57b0c?, 0xffffdf93ae58?)
	/usr/local/go/src/runtime/os_linux.go:69 +0x2c fp=0xffffdf93ae20 sp=0xffffdf93add0 pc=0x44c2c
runtime.notesleep(0x2ec0c8)
	/usr/local/go/src/runtime/lock_futex.go:160 +0x8c fp=0xffffdf93ae60 sp=0xffffdf93ae20 pc=0x1db1c
runtime.mPark(...)
	/usr/local/go/src/runtime/proc.go:1632
runtime.stopm()
	/usr/local/go/src/runtime/proc.go:2536 +0x84 fp=0xffffdf93ae90 sp=0xffffdf93ae60 pc=0x4ef14
runtime.findRunnable()
	/usr/local/go/src/runtime/proc.go:3229 +0xd34 fp=0xffffdf93af90 sp=0xffffdf93ae90 pc=0x50b14
runtime.schedule()
	/usr/local/go/src/runtime/proc.go:3582 +0x98 fp=0xffffdf93afd0 sp=0xffffdf93af90 pc=0x51b98
runtime.park_m(0xffffdf93b058?)
	/usr/local/go/src/runtime/proc.go:3745 +0x10c fp=0xffffdf93b020 sp=0xffffdf93afd0 pc=0x520fc
traceback: unexpected SPWRITE function runtime.mcall
runtime.mcall()
	/usr/local/go/src/runtime/asm_arm64.s:192 +0x54 fp=0xffffdf93b030 sp=0xffffdf93b020 pc=0x788a4

goroutine 1 [chan receive]:
runtime.gopark(0x4000102260?, 0x4000102267?, 0x0?, 0x4?, 0x400001a4c0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x400012bc90 sp=0x400012bc70 pc=0x4aeb8
runtime.chanrecv(0x4000222000, 0x0, 0x1)
	/usr/local/go/src/runtime/chan.go:583 +0x414 fp=0x400012bd10 sp=0x400012bc90 pc=0x18db4
runtime.chanrecv1(0x4000120cb0?, 0x1?)
	/usr/local/go/src/runtime/chan.go:442 +0x14 fp=0x400012bd40 sp=0x400012bd10 pc=0x18964
main.runStart(0x2e70e8?, {0x40000100b0, 0x0, 0x4aa58?})
	/build/start.go:333 +0x3fc fp=0x400012beb0 sp=0x400012bd40 pc=0x15030c
main.main()
	/build/main.go:33 +0x28c fp=0x400012bf30 sp=0x400012beb0 pc=0x14d68c
runtime.main()
	/usr/local/go/src/runtime/proc.go:267 +0x2bc fp=0x400012bfd0 sp=0x400012bf30 pc=0x4aa8c
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x400012bfd0 sp=0x400012bfd0 pc=0x7ad54

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000030f90 sp=0x4000030f70 pc=0x4aeb8
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
	/usr/local/go/src/runtime/proc.go:322 +0xb8 fp=0x4000030fd0 sp=0x4000030f90 pc=0x4ad48
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000030fd0 sp=0x4000030fd0 pc=0x7ad54
created by runtime.init.6 in goroutine 1
	/usr/local/go/src/runtime/proc.go:310 +0x24

goroutine 3 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000031760 sp=0x4000031740 pc=0x4aeb8
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
	/usr/local/go/src/runtime/mgcsweep.go:280 +0xa0 fp=0x40000317b0 sp=0x4000031760 pc=0x370d0
runtime.gcenable.func1()
	/usr/local/go/src/runtime/mgc.go:200 +0x28 fp=0x40000317d0 sp=0x40000317b0 pc=0x2bbe8
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x40000317d0 sp=0x40000317d0 pc=0x7ad54
created by runtime.gcenable in goroutine 1
	/usr/local/go/src/runtime/mgc.go:200 +0x6c

goroutine 4 [GC scavenge wait]:
runtime.gopark(0x4000050000?, 0x1dbf68?, 0x1?, 0x0?, 0x4000002d00?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000031f50 sp=0x4000031f30 pc=0x4aeb8
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x2eb6a0)
	/usr/local/go/src/runtime/mgcscavenge.go:425 +0x5c fp=0x4000031f80 sp=0x4000031f50 pc=0x3499c
runtime.bgscavenge(0x0?)
	/usr/local/go/src/runtime/mgcscavenge.go:653 +0x44 fp=0x4000031fb0 sp=0x4000031f80 pc=0x34ee4
runtime.gcenable.func2()
	/usr/local/go/src/runtime/mgc.go:201 +0x28 fp=0x4000031fd0 sp=0x4000031fb0 pc=0x2bb88
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000031fd0 sp=0x4000031fd0 pc=0x7ad54
created by runtime.gcenable in goroutine 1
	/usr/local/go/src/runtime/mgc.go:201 +0xac

goroutine 5 [finalizer wait]:
runtime.gopark(0x40000305a8?, 0x76884?, 0x1?, 0x5?, 0x80de4?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000030580 sp=0x4000030560 pc=0x4aeb8
runtime.runfinq()
	/usr/local/go/src/runtime/mfinal.go:193 +0x108 fp=0x40000307d0 sp=0x4000030580 pc=0x2ac98
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x40000307d0 sp=0x40000307d0 pc=0x7ad54
created by runtime.createfing in goroutine 1
	/usr/local/go/src/runtime/mfinal.go:163 +0x80

goroutine 6 [chan receive]:
runtime.gopark(0x4000068e78?, 0x14bc5c?, 0x1?, 0x0?, 0x17f800?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000068e50 sp=0x4000068e30 pc=0x4aeb8
runtime.chanrecv(0x400008c000, 0x4000068f58, 0x1)
	/usr/local/go/src/runtime/chan.go:583 +0x414 fp=0x4000068ed0 sp=0x4000068e50 pc=0x18db4
runtime.chanrecv2(0x400008c000?, 0x4000068f68?)
	/usr/local/go/src/runtime/chan.go:447 +0x14 fp=0x4000068f00 sp=0x4000068ed0 pc=0x18984
main.(*Forego).monitorInterrupt(0x4000120cb0)
	/build/start.go:159 +0xbc fp=0x4000068fb0 sp=0x4000068f00 pc=0x14f04c
main.runStart.func2()
	/build/start.go:288 +0x28 fp=0x4000068fd0 sp=0x4000068fb0 pc=0x150658
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000068fd0 sp=0x4000068fd0 pc=0x7ad54
created by main.runStart in goroutine 1
	/build/start.go:288 +0x190

goroutine 7 [IO wait]:
runtime.gopark(0x4000186c98?, 0x2799c?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000186c20 sp=0x4000186c00 pc=0x4aeb8
runtime.netpollblock(0x0?, 0xffffffff?, 0xff?)
	/usr/local/go/src/runtime/netpoll.go:564 +0x158 fp=0x4000186c60 sp=0x4000186c20 pc=0x43f08
internal/poll.runtime_pollWait(0xef9863686730, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0xa0 fp=0x4000186c90 sp=0x4000186c60 pc=0x75810
internal/poll.(*pollDesc).wait(0x40000523c0?, 0x4000180000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x4000186cc0 sp=0x4000186c90 pc=0xd7788
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40000523c0, {0x4000180000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x200 fp=0x4000186d60 sp=0x4000186cc0 pc=0xd8ad0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000340a8, {0x4000180000?, 0x400?, 0x170740?})
	/usr/local/go/src/os/file.go:118 +0x70 fp=0x4000186da0 sp=0x4000186d60 pc=0xe2550
bufio.(*Reader).Read(0x4000186f18, {0x400018a000, 0x400, 0x0?})
	/usr/local/go/src/bufio/bufio.go:244 +0x1b4 fp=0x4000186de0 sp=0x4000186da0 pc=0x100844
main.(*OutletFactory).LineReader(0x0?, 0x0?, {0x400000e300, 0xb}, 0x0?, {0x1dddf8?, 0x40000340a8?}, 0x0?)
	/build/outlet.go:45 +0x1d8 fp=0x4000186f80 sp=0x4000186de0 pc=0x14d8e8
main.(*Forego).startProcess.func3()
	/build/start.go:214 +0x44 fp=0x4000186fd0 sp=0x4000186f80 pc=0x14fed4
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000186fd0 sp=0x4000186fd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:214 +0x2a0

goroutine 8 [IO wait]:
runtime.gopark(0x4000182c58?, 0xea02c?, 0x20?, 0x2f?, 0x4000140000?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000182c20 sp=0x4000182c00 pc=0x4aeb8
runtime.netpollblock(0x0?, 0xffffffff?, 0xff?)
	/usr/local/go/src/runtime/netpoll.go:564 +0x158 fp=0x4000182c60 sp=0x4000182c20 pc=0x43f08
internal/poll.runtime_pollWait(0xef9863686540, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0xa0 fp=0x4000182c90 sp=0x4000182c60 pc=0x75810
internal/poll.(*pollDesc).wait(0x4000052480?, 0x4000200000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x4000182cc0 sp=0x4000182c90 pc=0xd7788
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000052480, {0x4000200000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x200 fp=0x4000182d60 sp=0x4000182cc0 pc=0xd8ad0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000340b8, {0x4000200000?, 0x400?, 0x170740?})
	/usr/local/go/src/os/file.go:118 +0x70 fp=0x4000182da0 sp=0x4000182d60 pc=0xe2550
bufio.(*Reader).Read(0x4000182f18, {0x400014fc00, 0x400, 0x400001840a?})
	/usr/local/go/src/bufio/bufio.go:244 +0x1b4 fp=0x4000182de0 sp=0x4000182da0 pc=0x100844
main.(*OutletFactory).LineReader(0x0?, 0x0?, {0x400000e300, 0xb}, 0x0?, {0x1dddf8?, 0x40000340b8?}, 0x0?)
	/build/outlet.go:45 +0x1d8 fp=0x4000182f80 sp=0x4000182de0 pc=0x14d8e8
main.(*Forego).startProcess.func4()
	/build/start.go:215 +0x44 fp=0x4000182fd0 sp=0x4000182f80 pc=0x14fe54
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000182fd0 sp=0x4000182fd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:215 +0x364

goroutine 17 [select, locked to thread]:
runtime.gopark(0x40000327a0?, 0x2?, 0x78?, 0x26?, 0x400003279c?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000032630 sp=0x4000032610 pc=0x4aeb8
runtime.selectgo(0x40000327a0, 0x4000032798, 0x0?, 0x0, 0x1de318?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x608 fp=0x4000032740 sp=0x4000032630 pc=0x5b6e8
runtime.ensureSigM.func1()
	/usr/local/go/src/runtime/signal_unix.go:1014 +0x198 fp=0x40000327d0 sp=0x4000032740 pc=0x72308
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x40000327d0 sp=0x40000327d0 pc=0x7ad54
created by runtime.ensureSigM in goroutine 6
	/usr/local/go/src/runtime/signal_unix.go:997 +0xd8

goroutine 33 [syscall]:
runtime.notetsleepg(0x400000e300?, 0xb?)
	/usr/local/go/src/runtime/lock_futex.go:236 +0x34 fp=0x4000033790 sp=0x4000033750 pc=0x1dda4
os/signal.signal_recv()
	/usr/local/go/src/runtime/sigqueue.go:152 +0x30 fp=0x40000337b0 sp=0x4000033790 pc=0x77460
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x1c fp=0x40000337d0 sp=0x40000337b0 pc=0x14c14c
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x40000337d0 sp=0x40000337d0 pc=0x7ad54
created by os/signal.Notify.func1.1 in goroutine 6
	/usr/local/go/src/os/signal/signal.go:151 +0x28

goroutine 9 [semacquire]:
runtime.gopark(0x2f56e0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000033ec0 sp=0x4000033ea0 pc=0x4aeb8
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.semacquire1(0x400000e318, 0x0?, 0x1, 0x0, 0x0?)
	/usr/local/go/src/runtime/sema.go:160 +0x208 fp=0x4000033f10 sp=0x4000033ec0 pc=0x5c518
sync.runtime_Semacquire(0x0?)
	/usr/local/go/src/runtime/sema.go:62 +0x2c fp=0x4000033f50 sp=0x4000033f10 pc=0x770dc
sync.(*WaitGroup).Wait(0x400000e310)
	/usr/local/go/src/sync/waitgroup.go:116 +0x74 fp=0x4000033f70 sp=0x4000033f50 pc=0x80d64
main.(*Forego).startProcess.func1()
	/build/start.go:232 +0x8c fp=0x4000033fd0 sp=0x4000033f70 pc=0x14fcec
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000033fd0 sp=0x4000033fd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:229 +0x554

goroutine 10 [select]:
runtime.gopark(0x4000183fa0?, 0x2?, 0x60?, 0x0?, 0x4000183f0c?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000183d70 sp=0x4000183d50 pc=0x4aeb8
runtime.selectgo(0x4000183fa0, 0x4000183f08, 0x0?, 0x0, 0x0?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x608 fp=0x4000183e80 sp=0x4000183d70 pc=0x5b6e8
main.(*Forego).startProcess.func2()
	/build/start.go:240 +0x100 fp=0x4000183fd0 sp=0x4000183e80 pc=0x14fa40
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000183fd0 sp=0x4000183fd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:237 +0x660

goroutine 11 [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x400002cc20 sp=0x400002cc00 pc=0x4aeb8
runtime.netpollblock(0x0?, 0xffffffff?, 0xff?)
	/usr/local/go/src/runtime/netpoll.go:564 +0x158 fp=0x400002cc60 sp=0x400002cc20 pc=0x43f08
internal/poll.runtime_pollWait(0xef9863686448, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0xa0 fp=0x400002cc90 sp=0x400002cc60 pc=0x75810
internal/poll.(*pollDesc).wait(0x40000525a0?, 0x4000181000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x400002ccc0 sp=0x400002cc90 pc=0xd7788
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40000525a0, {0x4000181000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x200 fp=0x400002cd60 sp=0x400002ccc0 pc=0xd8ad0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000340d0, {0x4000181000?, 0x400?, 0x170740?})
	/usr/local/go/src/os/file.go:118 +0x70 fp=0x400002cda0 sp=0x400002cd60 pc=0xe2550
bufio.(*Reader).Read(0x400002cf18, {0x400018a400, 0x400, 0x0?})
	/usr/local/go/src/bufio/bufio.go:244 +0x1b4 fp=0x400002cde0 sp=0x400002cda0 pc=0x100844
main.(*OutletFactory).LineReader(0x0?, 0x0?, {0x400000e378, 0x7}, 0x0?, {0x1dddf8?, 0x40000340d0?}, 0x0?)
	/build/outlet.go:45 +0x1d8 fp=0x400002cf80 sp=0x400002cde0 pc=0x14d8e8
main.(*Forego).startProcess.func3()
	/build/start.go:214 +0x44 fp=0x400002cfd0 sp=0x400002cf80 pc=0x14fed4
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x400002cfd0 sp=0x400002cfd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:214 +0x2a0

goroutine 12 [IO wait]:
runtime.gopark(0x4000187c58?, 0xea02c?, 0x20?, 0x2f?, 0x4000140000?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x4000187c20 sp=0x4000187c00 pc=0x4aeb8
runtime.netpollblock(0x0?, 0xffffffff?, 0xff?)
	/usr/local/go/src/runtime/netpoll.go:564 +0x158 fp=0x4000187c60 sp=0x4000187c20 pc=0x43f08
internal/poll.runtime_pollWait(0xef9863686350, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0xa0 fp=0x4000187c90 sp=0x4000187c60 pc=0x75810
internal/poll.(*pollDesc).wait(0x4000052660?, 0x400018c000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28 fp=0x4000187cc0 sp=0x4000187c90 pc=0xd7788
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4000052660, {0x400018c000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x200 fp=0x4000187d60 sp=0x4000187cc0 pc=0xd8ad0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000340e0, {0x400018c000?, 0x400?, 0x170740?})
	/usr/local/go/src/os/file.go:118 +0x70 fp=0x4000187da0 sp=0x4000187d60 pc=0xe2550
bufio.(*Reader).Read(0x4000187f18, {0x400014f400, 0x400, 0x400010470a?})
	/usr/local/go/src/bufio/bufio.go:244 +0x1b4 fp=0x4000187de0 sp=0x4000187da0 pc=0x100844
main.(*OutletFactory).LineReader(0x0?, 0x0?, {0x400000e378, 0x7}, 0x0?, {0x1dddf8?, 0x40000340e0?}, 0x0?)
	/build/outlet.go:45 +0x1d8 fp=0x4000187f80 sp=0x4000187de0 pc=0x14d8e8
main.(*Forego).startProcess.func4()
	/build/start.go:215 +0x44 fp=0x4000187fd0 sp=0x4000187f80 pc=0x14fe54
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x4000187fd0 sp=0x4000187fd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:215 +0x364

goroutine 13 [semacquire]:
runtime.gopark(0x2f5ee0?, 0x0?, 0x80?, 0xc1?, 0x0?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x400002dec0 sp=0x400002dea0 pc=0x4aeb8
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:404
runtime.semacquire1(0x400000e398, 0x0?, 0x1, 0x0, 0x0?)
	/usr/local/go/src/runtime/sema.go:160 +0x208 fp=0x400002df10 sp=0x400002dec0 pc=0x5c518
sync.runtime_Semacquire(0x0?)
	/usr/local/go/src/runtime/sema.go:62 +0x2c fp=0x400002df50 sp=0x400002df10 pc=0x770dc
sync.(*WaitGroup).Wait(0x400000e390)
	/usr/local/go/src/sync/waitgroup.go:116 +0x74 fp=0x400002df70 sp=0x400002df50 pc=0x80d64
main.(*Forego).startProcess.func1()
	/build/start.go:232 +0x8c fp=0x400002dfd0 sp=0x400002df70 pc=0x14fcec
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x400002dfd0 sp=0x400002dfd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:229 +0x554

goroutine 14 [select]:
runtime.gopark(0x400002e7a0?, 0x2?, 0x0?, 0x0?, 0x400002e70c?)
	/usr/local/go/src/runtime/proc.go:398 +0xc8 fp=0x400006cd70 sp=0x400006cd50 pc=0x4aeb8
runtime.selectgo(0x400006cfa0, 0x400002e708, 0x0?, 0x0, 0x0?, 0x1)
	/usr/local/go/src/runtime/select.go:327 +0x608 fp=0x400006ce80 sp=0x400006cd70 pc=0x5b6e8
main.(*Forego).startProcess.func2()
	/build/start.go:240 +0x100 fp=0x400006cfd0 sp=0x400006ce80 pc=0x14fa40
runtime.goexit()
	/usr/local/go/src/runtime/asm_arm64.s:1197 +0x4 fp=0x400006cfd0 sp=0x400006cfd0 pc=0x7ad54
created by main.(*Forego).startProcess in goroutine 1
	/build/start.go:237 +0x660

r0      0x2ec0c8
r1      0x80
r2      0x0
r3      0x0
r4      0x0
r5      0x0
r6      0x3
r7      0x0
r8      0x62
r9      0x0
r10     0x0
r11     0x0
r12     0x0
r13     0x0
r14     0x16
r15     0x400002e75f
r16     0xffffdf92b410
r17     0x0
r18     0x0
r19     0x0
r20     0xffffdf93ae30
r21     0x2ebf80
r22     0x4000004000
r23     0x0
r24     0x0
r25     0x0
r26     0x1af208
r27     0x2e1000
r28     0x2eb8e0
r29     0xffffdf93adc8
lr      0x44c2c
sp      0xffffdf93add0
pc      0x7c1ac
fault   0x0
Info: running nginx-proxy version 1.6.0-4-ge6c301a
Warning: A custom dhparam.pem file was provided. Best practice is to use standardized RFC7919 DHE groups instead.
Warning: TRUST_DOWNSTREAM_PROXY is not set; defaulting to "true". For security, you should explicitly set TRUST_DOWNSTREAM_PROXY to "false" if there is not a trusted reverse proxy in front of this proxy.
Warning: The default value of TRUST_DOWNSTREAM_PROXY might change to "false" in a future version of nginx-proxy. If you require TRUST_DOWNSTREAM_PROXY to be enabled, explicitly set it to "true".
forego      | starting dockergen.1 on port 5000
forego      | starting nginx.1 on port 5100
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: using the "epoll" event method
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: nginx/1.27.0
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: built by gcc 12.2.0 (Debian 12.2.0-14) 
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: OS: Linux 6.8.0-35-generic
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker processes
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 24
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 25
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 26
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 27
dockergen.1 | 2024/06/11 20:40:42 Generated '/etc/nginx/conf.d/default.conf' from 1 containers
dockergen.1 | 2024/06/11 20:40:42 Running 'nginx -s reload'
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 1 (SIGHUP) received from 29, reconfiguring
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: reconfiguring
dockergen.1 | 2024/06/11 20:40:42 Watching docker events
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: using the "epoll" event method
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker processes
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 35
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 36
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 37
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: start worker process 38
dockergen.1 | 2024/06/11 20:40:42 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1     | 2024/06/11 20:40:42 [notice] 26#26: gracefully shutting down
nginx.1     | 2024/06/11 20:40:42 [notice] 25#25: gracefully shutting down
nginx.1     | 2024/06/11 20:40:42 [notice] 25#25: exiting
nginx.1     | 2024/06/11 20:40:42 [notice] 26#26: exiting
nginx.1     | 2024/06/11 20:40:42 [notice] 25#25: exit
nginx.1     | 2024/06/11 20:40:42 [notice] 27#27: gracefully shutting down
nginx.1     | 2024/06/11 20:40:42 [notice] 26#26: exit
nginx.1     | 2024/06/11 20:40:42 [notice] 27#27: exiting
nginx.1     | 2024/06/11 20:40:42 [notice] 24#24: gracefully shutting down
nginx.1     | 2024/06/11 20:40:42 [notice] 24#24: exiting
nginx.1     | 2024/06/11 20:40:42 [notice] 27#27: exit
nginx.1     | 2024/06/11 20:40:42 [notice] 24#24: exit
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 17 (SIGCHLD) received from 25
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: worker process 25 exited with code 0
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 29 (SIGIO) received
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 17 (SIGCHLD) received from 26
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: worker process 26 exited with code 0
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: worker process 24 exited with code 0
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 29 (SIGIO) received
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 17 (SIGCHLD) received from 24
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 17 (SIGCHLD) received from 27
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: worker process 27 exited with code 0
nginx.1     | 2024/06/11 20:40:42 [notice] 20#20: signal 29 (SIGIO) received
dockergen.1 | 2024/06/11 20:40:42 Received event start for container b93b0cdaa981
dockergen.1 | 2024/06/11 20:40:42 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification 'nginx -s reload'
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: signal 1 (SIGHUP) received from 50, reconfiguring
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: reconfiguring
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: using the "epoll" event method
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: start worker processes
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: start worker process 51
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: start worker process 52
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: start worker process 53
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: start worker process 54
nginx.1     | 2024/06/11 20:40:43 [notice] 38#38: gracefully shutting down
nginx.1     | 2024/06/11 20:40:43 [notice] 35#35: gracefully shutting down
nginx.1     | 2024/06/11 20:40:43 [notice] 37#37: gracefully shutting down
nginx.1     | 2024/06/11 20:40:43 [notice] 38#38: exiting
nginx.1     | 2024/06/11 20:40:43 [notice] 35#35: exiting
nginx.1     | 2024/06/11 20:40:43 [notice] 37#37: exiting
nginx.1     | 2024/06/11 20:40:43 [notice] 36#36: gracefully shutting down
nginx.1     | 2024/06/11 20:40:43 [notice] 36#36: exiting
nginx.1     | 2024/06/11 20:40:43 [notice] 38#38: exit
nginx.1     | 2024/06/11 20:40:43 [notice] 35#35: exit
nginx.1     | 2024/06/11 20:40:43 [notice] 36#36: exit
nginx.1     | 2024/06/11 20:40:43 [notice] 37#37: exit
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: signal 17 (SIGCHLD) received from 37
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: worker process 36 exited with code 0
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: worker process 37 exited with code 0
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: worker process 38 exited with code 0
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: signal 29 (SIGIO) received
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: signal 17 (SIGCHLD) received from 38
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: worker process 35 exited with code 0
nginx.1     | 2024/06/11 20:40:43 [notice] 20#20: signal 29 (SIGIO) received

I've fixed the symlink now; still no luck. Maybe because it's owned by root?

lrwxrwxrwx 1 root root   50 Jun 11 21:09 travel.charity.dhparam.pem -> /etc/letsencrypt/live/travel.charity/fullchain.pem

Hi @ioan,

The server needs both privkey.pem, which is the SECRET and should never be shared with anyone, and fullchain.pem, which contains the issued certificate as well as all the intermediates that lead up to, but do not include, the trusted Root Certificate (the Trust Anchor).

The issued certificate contains the Public Key, which is derived from privkey.pem. The Public Key can be shared with anyone, and so it does get shared with the Certificate Authority. As a simplified description: the Certificate Authority takes that Public Key, wraps it with some information such as the Not Before and Not After times plus some other details, and then signs all of that; the result is basically the issued certificate that gets returned to you.
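
(If you want to check that a certificate and private key actually belong together, you can compare the public keys; a minimal sketch, where the paths are illustrative:)

# hash of the public key inside the certificate
openssl x509 -in /etc/letsencrypt/live/travel.charity/fullchain.pem -noout -pubkey | sha256sum

# hash of the public key derived from the private key; the two should match
openssl pkey -in /etc/letsencrypt/live/travel.charity/privkey.pem -pubout | sha256sum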

Generally it's on the order of a few minutes, assuming the DNS records are proper, nice, and clean, and the Name Servers are properly replying for the Domain Name when requested. Also, generally (but not always required; see the DNS-01 challenge), the server is up and properly responding to the Internet, with no firewall preventing connections from around the world.
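
(A hedged example of the kind of pre-flight checks meant here, run from outside the server's network:)

# does the A record point at the new host yet?
dig +short travel.charity A

# is port 80 reachable? HTTP-01 fetches a token under this well-known
# path; the token name below is made up, so a 404 is fine, a timeout is not
curl -i http://travel.charity/.well-known/acme-challenge/probe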


Hi mate,

Based on the last part of your answer, it seems I can't use nginx-proxy and certbot at the same time, because nginx-proxy wants to listen on port 80 and certbot does as well.

The DNS records are not clean; I've only just changed the IP today. I'm wondering if I can manually verify again with nginx-proxy? How often does the authority check on port 80?

It seems I have some more reading to do: GitHub - nginx-proxy/acme-companion: Automated ACME SSL certificate generation for nginx-proxy; this has been updated recently.

That's not always true; it depends on the plugins you choose and the Certbot options.
Certbot's --webroot option, for example, places the challenge file in nginx's content-serving directory when set up properly; thus Certbot does the asking of Let's Encrypt and nginx does the replying.
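
(A minimal sketch of that pattern; the webroot path is an assumption, and should be whatever directory nginx serves, e.g. the html volume in your compose file:)

# certbot only writes token files into the webroot; nginx keeps port 80
certbot certonly --webroot \
  -w /usr/share/nginx/html \
  -d travel.charity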

That could still be clean and just not propagated; however, nicely, Let's Encrypt queries the authoritative Name Servers, so it works as long as they are all in agreement. Not clean could be DNSSEC issues, etc.


That looks like it would only be the DH Parameters, not the Private Key nor the Certificate.
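
(For reference, going by the nginx-proxy docs rather than your exact setup: the proxy looks in /etc/nginx/certs for files named after the virtual host, so the links would be roughly:)

# <VIRTUAL_HOST>.crt = certificate (fullchain works), <VIRTUAL_HOST>.key = private key;
# the .dhparam.pem file is only for optional Diffie-Hellman parameters
ln -s /etc/letsencrypt/live/travel.charity/fullchain.pem travel.charity.crt
ln -s /etc/letsencrypt/live/travel.charity/privkey.pem travel.charity.key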

Also, testing and debugging are best done using the Staging Environment, as the rate limits there are much higher.

Next time you run certbot, try adding the verbosity option -vv (or -vvv for more) to the command-line parameters.
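
(Combined, a rehearsal run might look like this; the webroot path is again assumed:)

# staging certs aren't browser-trusted but don't burn production rate limits
certbot certonly --webroot -w /usr/share/nginx/html \
  -d travel.charity --staging -vv

# certbot also writes a full log here by default
less /var/log/letsencrypt/letsencrypt.log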

Also feel free to share the log file; you can use the upload button in the reply toolbar to attach it.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.