Certbot failing in AWS Elastic Beanstalk nginx extension

Hello, first of all, apologies for being a noob on the subject, but I'm stuck and need some help :slight_smile:

I have an EC2 instance behind Elastic Beanstalk, and I found a script online, an nginx extension, that should generate the certificates for the domain in question…

First of all, I went to Route 53 and added the information for my domain (sdk.bigfootgaming.net), and in GoDaddy added an A record to point sdk.bigfootgaming.net to my EC2 instance's IP. That is working.

Now, in my nginx script, I'm running the following:

command: "mkdir /opt/certbot || true"
command: "wget https://dl.eff.org/certbot-auto -O /opt/certbot/certbot-auto"
command: "chmod a+x /opt/certbot/certbot-auto"
command: "sudo /opt/certbot/certbot-auto certonly --debug --non-interactive --email gaston@bigfootgaming.net --agree-tos --standalone --domains sdk.bigfootgaming.net --keep-until-expiring"
command: "ln -sf /etc/letsencrypt/live/sdk.bigfootgaming.net /etc/letsencrypt/live/ebcert"
command: "mv /etc/nginx/conf.d/https_custom.pre /etc/nginx/conf.d/https_custom.conf"
command: "cat .ebextensions/certificate_renew.txt > /etc/cron.d/certificate_renew && chmod 644 /etc/cron.d/certificate_renew"

The GetCert part of this is failing, with the following message:

  • The following errors were reported by the server:

    Domain: sdk.bigfootgaming.net
    Type:   unauthorized
    Detail: Invalid response from
    2.0//EN\">\n<html><head>\n<title>500 Internal Server

I'm not sure what it's doing here. I know it can't seem to access the files, and if I FTP into the site, I don't see them created there.

Any help with this is greatly appreciated


Hi @gclaret

looks like that doesn’t work.

Your dns entries ( https://check-your-website.server-daten.de/?q=sdk.bigfootgaming.net ):

Host T IP-Address is auth. ∑ Queries ∑ Timeout
sdk.bigfootgaming.net A yes 1 0
AAAA yes
www.sdk.bigfootgaming.net Name Error yes 1 0

A different ip address.

Your name servers:

	•  ns73.domaincontrol.com / p21
	•  ns74.domaincontrol.com / p06

But there is a new certificate, created today:

CertSpotter-Id Issuer not before not after Domain names LE-Duplicate next LE
927605971 CN=Let’s Encrypt Authority X3, O=Let’s Encrypt, C=US 2019-05-23 13:25:50 2019-08-21 13:25:50 sdk.bigfootgaming.net
1 entries duplicate nr. 1

This certificate isn't used; instead, this one is:

expires in 83 days	bigfootgaming.net, www.bigfootgaming.net - 2 entries

Looks like you have created a certificate and then changed your settings.

Thanks for the super quick reply!

I'm not quite sure I follow though :frowning:

There is a new certificate, which I created today, but I created that by hand on ZeroSSL.com… That works perfectly when my A record for sdk.bigfootgaming.net points to my own hosting, but when I make that record point to the EC2 instance, it gives me insecure again… that's why I wanted to run this from a script inside the deployment of my EC2 instance.

Sorry again if it's obvious what's wrong, but I don't quite understand, and have been stuck for weeks on this :confused:

Ok, then this certificate isn’t relevant.

That's the ip address I see. But why does the error show a different ip address -

The main configuration looks ok:

Domainname Http-Status redirect Sec. G
http://sdk.bigfootgaming.net/ 404 0.370 M
Not Found
https://sdk.bigfootgaming.net/ 404 1.833 N
Not Found
Certificate error: RemoteCertificateNameMismatch
http://sdk.bigfootgaming.net/.well-known/acme-challenge/check-your-website-dot-server-daten-dot-de 404 0.363 A
Not Found
Visible Content: Cannot GET /.well-known/acme-challenge/check-your-website-dot-server-daten-dot-de

If there is a running instance, webroot (not standalone) should always work. Find your nginx root (in your vHost definition), then use it.

certbot run -a webroot -i nginx -w yourRoot -d sdk.bigfootgaming.net

standalone is hard to debug, webroot is easier.
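Webroot validation can also be dry-run by hand before involving certbot: place a token file where the challenge URL should resolve, then fetch it the way the CA would. A sketch (WEBROOT defaults to a scratch directory so it can be tried anywhere; on the server, point it at your actual nginx root, e.g. the /var/www/acme-challenge path used later in this thread):

```shell
# Simulate an http-01 challenge file by hand.
WEBROOT="${WEBROOT:-$(mktemp -d)}"          # on the server: your nginx root
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "test-token" > "$WEBROOT/.well-known/acme-challenge/test"
cat "$WEBROOT/.well-known/acme-challenge/test"
# Then verify from outside, the way Let's Encrypt would:
#   curl http://sdk.bigfootgaming.net/.well-known/acme-challenge/test
```

If the curl from outside returns the token, webroot validation with `-w` set to the same directory should succeed.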

Which error has a different IP? And that's the original domain's IP (the one from GoDaddy, and my actual website, www.bigfootgaming.net)

I don't really understand how to change this to use webroot… I have a file that tries to do all of this for me; is it possible to do it from there, just by changing the command in get_cert?

The problem I'm seeing is not that it's not finding the files; it's that it's not creating them at all… :thinking:

Here is my code, in case it helps you (I can't thank you enough, Juergen!!!)

# Don't forget to set the env variable "certdomain", and either fill in your email below or use an env variable for that too.
# Also note that this config is using the LetsEncrypt staging server, remove the flag when ready!


  # The Nginx config forces https, and is meant as an example only.
    mode: "000644"
    owner: root
    group: root
    content: |
      server {
        listen 8080;
        return 301 https://$host$request_uri;
      }

  # The Nginx config forces https, and is meant as an example only.
    mode: "000644"
    owner: root
    group: root
    content: |
      # HTTPS server
      server {
        listen       443 default ssl;
        server_name  localhost;
        error_page  497 https://$host$request_uri;

        ssl_certificate      /etc/letsencrypt/live/ebcert/fullchain.pem;
        ssl_certificate_key  /etc/letsencrypt/live/ebcert/privkey.pem;

        ssl_session_timeout  5m;
        ssl_protocols  TLSv1.1 TLSv1.2;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_prefer_server_ciphers   on;

        location / {
            proxy_pass  http://nodejs;
            proxy_set_header   Connection "";
            proxy_http_version 1.1;
            proxy_set_header        Host            $host;
            proxy_set_header        X-Real-IP       $remote_addr;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        Upgrade         $http_upgrade;
            proxy_set_header        Connection      "upgrade";
        }
      }

    epel-release: []

    command: "mkdir /opt/certbot || true"
    command: "wget https://dl.eff.org/certbot-auto -O /opt/certbot/certbot-auto"
    command: "chmod a+x /opt/certbot/certbot-auto"
    command: "sudo /opt/certbot/certbot-auto certonly --debug --non-interactive --email gaston@bigfootgaming.net --agree-tos --standalone --domains sdk.bigfootgaming.net --keep-until-expiring"
    command: "ln -sf /etc/letsencrypt/live/sdk.bigfootgaming.net /etc/letsencrypt/live/ebcert"
    command: "mv /etc/nginx/conf.d/https_custom.pre /etc/nginx/conf.d/https_custom.conf"
    command: "cat .ebextensions/certificate_renew.txt > /etc/cron.d/certificate_renew && chmod 644 /etc/cron.d/certificate_renew"
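As posted, the repeated `command:` keys would collide in YAML (later keys overwrite earlier ones). In an .ebextensions file, each command normally gets its own uniquely named key under `container_commands`; they run in alphabetical order and already run as root, so `sudo` is not needed. A sketch of the same steps, with illustrative key names:

```yaml
container_commands:
  00_createdir:
    command: "mkdir -p /opt/certbot"
  10_getcertbot:
    command: "wget https://dl.eff.org/certbot-auto -O /opt/certbot/certbot-auto && chmod a+x /opt/certbot/certbot-auto"
  20_getcert:
    command: "/opt/certbot/certbot-auto certonly --debug --non-interactive --email gaston@bigfootgaming.net --agree-tos --standalone --domains sdk.bigfootgaming.net --keep-until-expiring"
  30_link:
    command: "ln -sf /etc/letsencrypt/live/sdk.bigfootgaming.net /etc/letsencrypt/live/ebcert"
  40_config:
    command: "mv /etc/nginx/conf.d/https_custom.pre /etc/nginx/conf.d/https_custom.conf"
  50_cronjob:
    command: "cat .ebextensions/certificate_renew.txt > /etc/cron.d/certificate_renew && chmod 644 /etc/cron.d/certificate_renew"
```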


That's the error message Letsencrypt sees and sends back.

There the ip address is visible. Checking that ip - https://check-your-website.server-daten.de/?q=

there is a different certificate.

CN=*.prod.phx3.secureserver.net, OU=Domain Control Validated (2440)
expires in 484 days	
*.prod.phx3.secureserver.net, prod.phx3.secureserver.net - 2 entries

So I don’t understand why Letsencrypt sees that ip address.

But I don't think your instance can create a file under that path. So the result must be invalid.

Found a way to do the webroot command

sudo ./certbot-auto certonly --debug --non-interactive --email gaston@bigfootgaming.net --agree-tos --authenticator webroot --webroot-path /var/www/acme-challenge --domains sdk.bigfootgaming.net --keep-until-expiring --installer nginx

but now I'm getting the following…

Activity execution failed, because: ./certbot-auto has insecure permissions!

Any idea why that is? I looked at the suggested help (Certbot-auto deployment best practices), but I'm not sure what most of it means.
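certbot-auto performs a self-check and refuses to run as root when the script file could be modified by a non-root user (group- or world-writable, or not owned by root). The usual fix, assuming the install path from earlier in the thread, is run on the instance with sudo; the chmod below is demonstrated on a scratch file so it can be tried anywhere:

```shell
# On the real instance (requires sudo; path from the wget command above):
#   sudo chown root:root /opt/certbot/certbot-auto
#   sudo chmod 0755 /opt/certbot/certbot-auto
#
# The mode change, shown on a scratch file:
SCRIPT="$(mktemp)"
chmod 0777 "$SCRIPT"   # world-writable: the "insecure permissions" state
chmod 0755 "$SCRIPT"   # writable by owner only: passes the check
stat -c %a "$SCRIPT" 2>/dev/null || stat -f %Lp "$SCRIPT"
```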

Is that

really your webroot?



should be visible via


Let's go back a bit if we can, Juergen, to make sure we can move forward and I don't keep running in circles…

My server functionality is in my EC2 instance (
I also have a domain/hosting, in GoDaddy, for my website (bigfootgaming.net,

What Im trying to achieve, is to:

  1. call sdk.bigfootgaming.net, and get the functionality inside the EC2 instance
  2. make this a secure connection

To make the "redirection" work, I created a DNS record in GoDaddy, type A, with host: sdk, and value: my EC2 instance's IP. This seems to work ok

The part where I'm lost now is the following:

I'm running the certbot-auto commands when I deploy the EC2 instance, BUT it asks me for a domain… should I use sdk.bigfootgaming.net? Is this the correct thing to do?

This isn’t a redirect, it’s a normal A-record pointing to a server. That’s the ip Letsencrypt must check.

Yes. Certbot runs on this ip, so Certbot can create the validation file.

After another whole day of searching, I'm still stuck, but at least I understand where the problem is…

Relevant info in my nginx extension file:

    mode: "000644"
    owner: root
    group: root
    content: |
      server {
        location /.well-known/acme-challenge {
          allow all;
          root /var/www/acme-challenge/;
        }
      }

command: "sudo /usr/local/bin/certbot-auto certonly --debug --non-interactive --email gaston@bigfootgaming.net --agree-tos --authenticator webroot --webroot-path /var/www/acme-challenge --domains sdk.bigfootgaming.net --keep-until-expiring --installer nginx"

From what I understand of the above, it will try to put the challenge files in /var/www/acme-challenge/, and then when it tries to fetch them for the challenge, using the above rule, it replaces /.well-known/acme-challenge with /var/www/acme-challenge

This, of course, does not work. So I followed your advice, added a file by hand in /var/www/acme-challenge, and tried to get it this way:


That was giving me a "cannot GET …" error. So I kept on reading and found that maybe it's an issue of static files… I tried to set them up this way; it still doesn't work, but now I get a 404 error.

path: /acme-challenge
directory: var/www/acme-challenge

(I tried like 10 variations of the above, with var/www, without, with trailing /, nothing seems to work)

Any ideas come to mind?


That's the wrong path.

You must use


these are the two subfolders Letsencrypt checks.

That doesn't make sense…

First, I put that file BY HAND in /var/www/acme-challenge, why would I look for it in .well-known/…?

And second, like I said, doesn't this directive, when it goes looking for the actual file, replace .well-known/acme-challenge with /var/www/acme-challenge?

server {
        location /.well-known/acme-challenge {
          allow all;
          root /var/www/acme-challenge/;
        }
}

Anyway, putting the file by hand in /var/www/acme-challenge is not working, so I'm not sure what's going on anymore.

/.well-known/acme-challenge/ is fixed, if you want to use http - 01 validation.



The path at which the resource is provisioned is comprised of the
fixed prefix “/.well-known/acme-challenge/”, followed by the “token”
value in the challenge. The value of the resource MUST be the ASCII
representation of the key authorization.

GET /.well-known/acme-challenge/LoqXcYV8…jxAjEuX0
Host: example.org



must answer with the correct value. This isn’t optional, this isn’t something you can select.

If your configuration doesn’t work, then your configuration is buggy. Remove all these additional location definitions.
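To make the path semantics concrete: nginx's `root` appends the entire request URI to the configured path; it does not swap the location prefix for the root. With the location block from earlier in the thread:

```nginx
location /.well-known/acme-challenge {
  allow all;
  root /var/www/acme-challenge;
  # Request:  GET /.well-known/acme-challenge/TOKEN
  # Served:   /var/www/acme-challenge/.well-known/acme-challenge/TOKEN
  # (root + full URI, not root + remainder). A hand-placed test file
  # therefore belongs in those nested subfolders, not directly in
  # /var/www/acme-challenge.
}
```

This also means certbot's `--webroot-path /var/www/acme-challenge` lines up with this block, since certbot itself writes its tokens under `.well-known/acme-challenge/` inside the webroot.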


Ok, I was able to get everything created, but now I seem to have a different problem :man_facepalming:

Every time I try to read the created certs from my NodeJS app, I get the following:

Error: EACCES: permission denied, open '/etc/letsencrypt/live/sdk.bigfootgaming.net/privkey.pem'

I've read a ton of posts that mention it's a permissions issue, and that if I grant permissions to /etc/letsencrypt/live/ and /etc/letsencrypt/archive it should work, but it's not.

These are my permissions:

This is all running in AWS, on Elastic Beanstalk & EC2; that's why the user is ec2-user.


Happy to read that you have created a certificate.

Not only one, 4 identical.

Yes, now it's only a permission problem. It depends on your NodeJS app; it may not have enough rights.
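Certbot creates privkey.pem readable by root only, and the Node process runs as ec2-user, hence the EACCES. One common approach (not the only one; the group name "sslcerts" is an invention for this sketch) is to let a dedicated group read the key directories and add ec2-user to it. The real commands need sudo and a re-login so the group membership takes effect; the mode change itself is demonstrated on a scratch file so it can be tried anywhere:

```shell
# On the real instance:
#   sudo groupadd -f sslcerts
#   sudo usermod -aG sslcerts ec2-user
#   sudo chgrp -R sslcerts /etc/letsencrypt/live /etc/letsencrypt/archive
#   sudo chmod -R g+rX /etc/letsencrypt/live /etc/letsencrypt/archive
# (both live/ and archive/ matter: live/ holds symlinks into archive/)
#
# The mode change, shown on a scratch file:
KEY="$(mktemp)"
chmod 0600 "$KEY"   # certbot's default: owner (root) only
chmod 0640 "$KEY"   # group members may now read
stat -c %a "$KEY" 2>/dev/null || stat -f %Lp "$KEY"
```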
