I successfully generated my certs; however, I want to set up a cron job for renewal and get an error on the dry run: "('Connection aborted.', OSError(107, 'Transport endpoint is not connected')). Skipping."
It produced this output:
Cert not due for renewal, but simulating renewal for dry run
Plugins selected: Authenticator nginx, Installer nginx
Attempting to renew cert (www.twsinternet.co.uk) from /etc/letsencrypt/renewal/www.twsinternet.co.uk.conf produced an unexpected error: ('Connection aborted.', OSError(107, 'Transport endpoint is not connected')). Skipping.
All renewal attempts failed. The following certs could not be renewed:
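To rule out certbot itself, I can reproduce the request from the same Python 3 the renewal runs under. A minimal sketch using only the standard library (the URL is the real ACME v2 directory endpoint; the rest is just illustrative):

```python
#!/usr/bin/env python3
# Reproduce the failing HTTPS request outside of certbot, using only the
# standard library so it runs on a stock Ubuntu 18.04 python3.
import ssl
import urllib.request

URL = "https://acme-v02.api.letsencrypt.org/directory"

try:
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(URL, context=ctx, timeout=10) as resp:
        print("HTTP status:", resp.status)
        print(resp.read(200).decode("utf-8", errors="replace"))
except OSError as exc:
    # The same OSError 107 (ENOTCONN) here would mean the problem is in
    # the network/socket layer, not in certbot.
    print("Failed:", exc)
```

If that succeeds while certbot fails, the problem is more likely in certbot's stack; if it fails the same way, it sits below certbot.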
My web server is: nginx/1.14.0 (Ubuntu)
OS: Ubuntu 18.04.3
I have full root access to the machine and am using certbot 0.31.0.
It's a fresh installation used only for certbot, still using nginx.
I started having problems renewing on my old Ubuntu 14.04 installations in the last few weeks:
~# curl -4 https://acme-v02.api.letsencrypt.org
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to acme-v02.api.letsencrypt.org:443
So I made this fresh install for the sole purpose of creating and renewing the certs.
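Since curl -4 dies during the TLS handshake, I can also try separating the plain TCP connect from the TLS layer, and IPv4 from IPv6. A rough sketch along the same lines (standard library only; the host and port are the real endpoint, the structure is just one way to slice the test):

```python
#!/usr/bin/env python3
# Separate the TCP connect from the TLS handshake, per address family,
# to narrow down where SSL_ERROR_SYSCALL / ENOTCONN comes from.
import socket
import ssl

HOST = "acme-v02.api.letsencrypt.org"
PORT = 443

for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror as exc:
        print(f"{label}: no address ({exc})")
        continue
    addr = infos[0][4]
    try:
        with socket.create_connection(addr[:2], timeout=10) as sock:
            print(f"{label}: TCP connect to {addr[0]} OK")
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{label}: TLS handshake OK, {tls.version()}")
    except OSError as exc:
        print(f"{label}: failed via {addr[0]}: {exc}")
```

If TCP connects but the handshake fails, that would point more toward a firewall or TLS interception somewhere in the path than toward certbot.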
I wouldn't go there just yet...
Can you check the drive subsystem?
Any alarms/failures/etc. related to disk access?
Can you inspect them visually?
[a lot of times you can see rows of green blinking lights and the one red/yellow/orange light crying for help!]
In a perfect world, you'd be using a hot-swappable drive subsystem.
And if only one drive is going bad in a RAID 5 array, it's just a matter of pulling one out and putting a new one in.
But you may be right, I may be crazy.
[Billy Joel in my head now]
I am using hot-swappable drives, but the world still ain't perfect.
I don’t have physical access to the server at the moment but I used “hpacucli” to check the condition of the 8 drives and the array, all reported “OK”.
Can't see anything relevant in the logs.
If it's a mount-point issue, a reboot of the VM should have resolved it.
I will play Google for a little longer and, if no joy, I will reboot the server later.
I tested all the drives today (removed them one by one, created some large files, and allowed the RAID to rebuild, all without issue).
Rebooted now and issue remains.
What a shame that I broke this record ->
20:30:34 up 1196 days, 20:26,
From my Google searches I did not find anything saying it's hardware or mount-point related; all I found is that it's a problem in the Python code.
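For what it's worth, errno 107 is ENOTCONN, which Linux returns when something uses a socket that has no connected peer. A tiny illustration (purely for reference, not certbot's actual code path):

```python
#!/usr/bin/env python3
# Show where OSError 107 (ENOTCONN) comes from: using a socket that has
# no connected peer, e.g. calling getpeername() before any connect().
import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.getpeername()  # no connect() was ever made
except OSError as exc:
    print(exc.errno, errno.errorcode[exc.errno], exc)
    assert exc.errno == errno.ENOTCONN  # errno 107 on Linux
finally:
    sock.close()
```

So the renewal error suggests the connection to the ACME server was torn down (or never fully established) before the HTTP layer tried to use it, which fits a network-level cause rather than a disk one.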