I'm currently running a few Docker containers: one for nginx, one for my Node.js app, and a last one for creating and renewing certificates from Let's Encrypt. This worked well for some time, but now I'm running into an issue.
This server hosts 5 domains, all of which have SSL set up. But now I can't install any more certificates. I get this:
- The following errors were reported by the server:
Domain: domain1.com
Type: unauthorized
Detail: Incorrect validation certificate for tls-sni-01 challenge.
Requested
c7a966714b5363c594f152b27f947722.f767e462430a051872cd4eaab3969248.acme.invalid
from MY_IP:443. Received 2 certificate(s), first
certificate had names "domain0.com"
While looking into this, I noticed that if I visit "https://domain1.com/" I'm greeted by an error message that reads:
Safari can't verify the identity of the website "domain1.com".
When I click "Show Certificate", it shows me the certificate for domain0.com.
Here is the config for domain0.com:
server {
    listen 80;
    server_name domain0.com;

    location /.well-known/acme-challenge {
        proxy_pass http://certbot:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
server {
    listen 443;
    server_name domain0.com;

    client_max_body_size 50M;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/domain0.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain0.com/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_dhparam /etc/ssl/private/dhparams.pem;

    location /.well-known/acme-challenge {
        proxy_pass http://certbot:443;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }

    location / {
        proxy_pass http://cldapi1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Protocol https;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
certbot is the Docker link to the Let's Encrypt container and cldapi1 is the link to the Node application.
I assume domain0.com acts as the default server because it is the first server block in the nginx config. But when I try to create an SSL default server, it doesn't work because I have no certificate for a default server, and if I had one, it wouldn't play well with Let's Encrypt.
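For reference, the kind of catch-all I experimented with looks roughly like this (the snakeoil paths are placeholders for the certificate I don't actually have, which is exactly the problem):

```nginx
# Hypothetical default server: nginx refuses to accept a TLS handshake on a
# default_server without *some* certificate, so these placeholder paths would
# have to point at a real (e.g. self-signed) cert and key.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/ssl/certs/snakeoil.pem;   # placeholder
    ssl_certificate_key /etc/ssl/private/snakeoil.key; # placeholder
    return 444;  # close the connection without responding
}
```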
This is the config for domain1.com:
server {
    listen 80;
    server_name domain1.com www.domain1.com;
    root /var/web/domain1.com;

    location /.well-known/acme-challenge {
        proxy_pass http://certbot:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto http;
    }

    location = / {
        include /etc/nginx/mime.types;
        try_files /pages/home /pages/home.html =404;
    }

    location / {
        include /etc/nginx/mime.types;
        try_files /pages/$uri /pages/$uri.html =404;
    }

    location ~ \.(js|css|png|jpeg|ico|jpg|gif|bmp|sql|ttf|otf|torrent|dmg|iso|zip|rar|7z|woff|woff2|svg|eot|txt|doc|docx|mp3|mp4) {
        include /etc/nginx/mime.types;
        sendfile on;
        sendfile_max_chunk 1m;
        try_files /template/public/$uri /files/$uri =404;
    }
}
I think I need a way to keep nginx from answering with its own SSL certificate when Let's Encrypt probes port 443 for the challenge.
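One idea I've been toying with (completely untested, so treat it as a sketch) is nginx's stream module with ssl_preread: route TLS connections by SNI before any certificate is presented, so the *.acme.invalid validation handshakes would reach certbot untouched, while the regular HTTPS server blocks move to an internal port such as 8443:

```nginx
# Sketch only: route incoming TLS by SNI *before* termination. The acme.invalid
# names used by the tls-sni-01 challenge go straight to the certbot container;
# everything else goes to the normal vhosts, relocated to port 8443.
stream {
    map $ssl_preread_server_name $backend {
        ~\.acme\.invalid$  certbot:443;
        default            127.0.0.1:8443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

As far as I can tell, ssl_preread needs nginx 1.11.5+ built with the stream_ssl_preread module, so my 1.11.9 should qualify, but I haven't verified that this actually works with my setup.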
If you have any suggestions or questions please let me know!
I'm running nginx 1.11.9 and certbot 0.9.3.