AWS API Gateway + Let's Encrypt?


As serverless services are getting more and more popular, I think it would be good to find a solution for creating certificates for them.

When you create Lambda functions at AWS and combine them with API Gateway, you receive URLs like

When you have the certificate key, body, and chain data, you can create custom domain names for this URL. Afterwards you receive a URL that must be used for CNAME forwarding. This URL accepts the provided certificate data for HTTPS connections…

Did anyone find a solution to create a certificate for API Gateway and Lambda functions?


I’m probably in for a world of hurt come next year’s renewal (maybe by then the process will improve), but here is what I did in order to get a cert for API Gateway.

  1. Spun up an EC2 micro instance (and installed git)

  2. Changed the domain’s DNS settings to point to the micro (for @ and * to cover the root and the subdomains)

  3. Cloned the letsencrypt repo and ran the following:

./letsencrypt-auto certonly --manual -d -d -d

  4. Followed the instructions on creating the Python web server. NOTE: I needed to make 3 challenge files (one for each domain) but only 1 Python server.

  5. On success, I located the symlinked certs in the live directory and changed the permissions so I could scp them down.

  6. Input the cert, chain, and key into API Gateway’s custom domain form.
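The steps above can be sketched as shell commands. This is a hedged sketch: example.com, the token, and the thumbprint are placeholders, not values from this thread.

```shell
# Steps 1-3: on the EC2 micro, clone the client and request certs for three domains
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
sudo ./letsencrypt-auto certonly --manual \
  -d example.com -d www.example.com -d api.example.com

# Step 4: the manual plugin prints a token per domain; create each challenge file
TOKEN=placeholder-token                 # value displayed by the client
mkdir -p /tmp/acme/.well-known/acme-challenge
echo "$TOKEN.placeholder-thumbprint" > "/tmp/acme/.well-known/acme-challenge/$TOKEN"
# one Python server covers all three challenge files
(cd /tmp/acme && sudo python -m SimpleHTTPServer 80)

# Step 5: on success, fetch the symlinked certs from the live directory
sudo scp -r /etc/letsencrypt/live/example.com user@workstation:~/certs/
```

Note this requires port 80 to be open on the instance and the domains already pointed at it, per steps 1 and 2.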


Defeats a large swath of LE’s objectives, doesn’t it.


For the initial creation of the certificates this approach should work.

But when you point the DNS of a running service at this EC2 instance, you will have to plan downtime for your service during the renewal of the certificates.

If you don’t want this downtime, the instance could act as a proxy during the process and forward the traffic to API Gateway. But will the old certificate still be valid for forwarded connections?


I just tested the gateway, and you can make an endpoint for .well-known/acme-challenge/, so the API might not have to experience any downtime for future challenges.


Can you explain how you created this endpoint?
And: did it really work with LE?

For me it did not work.

During the deployment, API Gateway forces me to enter a stage name, which is injected into the final URL path like this: [stagename]/myfunctionname
So the only way to add “.well-known” would be the stage name. But this is not allowed:
"Stage name only allows a-zA-Z0-9_"

Do you have another solution?

(If there is a way to create this URL path,) the next question would be: does the LE server that checks the acme-challenge follow HTTP redirects?

The CLI command only accepts domains without the http or https scheme:
letsencrypt-auto certonly --register-unsafely-without-email --manual --agree-tos -d

LE will call http instead of https. In this case, API Gateway returns a 301 with another location to forward the request to.
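One quick way to check this behavior yourself (a sketch: the execute-api URL and stage are placeholders, not values from this thread):

```shell
# Fetch only the response headers for the plain-http URL that LE would call;
# per the observation above, API Gateway answers with a 301 and a Location header.
curl -sI http://abcdef1234.execute-api.us-east-1.amazonaws.com/prod/myfunctionname | head -n 3
```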


Just now seeing this. The only way to set up the Let’s Encrypt TLS cert with a custom domain seems to be altering the DNS to point to a temporary server that you run. In the future, you can make the challenge endpoints via API Gateway, but you cannot do that for the initial setup. After I set up the cert with API Gateway, I made the challenge endpoint to make sure it would be possible in the future.

Regarding the stage name being in the URL: I have 2 stages, called development and production. I have mapped production to a base path of (none) and development to a base path of /dev. You can do this in the custom domain settings page (under the second tab in the API Gateway section). This way, the production API is at the root and dev is at /dev.
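If you prefer the CLI over the console, the same base path mappings can be created like this (a sketch: the domain name and REST API ID are placeholders):

```shell
# production at the root: omitting --base-path corresponds to "(none)" in the console
aws apigateway create-base-path-mapping \
  --domain-name api.example.com \
  --rest-api-id abcdef1234 \
  --stage production

# development mapped under /dev
aws apigateway create-base-path-mapping \
  --domain-name api.example.com \
  --rest-api-id abcdef1234 \
  --stage development \
  --base-path dev
```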

Hope that helps, but for the initial setup, you’ll still have to spin up a server.


I think I found a solution that solves this problem: CloudFront

When you set it up, you receive a new URL that can direct the traffic to API Gateway and S3.
This URL needs to be used as the CNAME target for your domain: domain -> CloudFront

  • create a new distribution
  • add the API Gateway URL as an origin
  • route all the HTTPS traffic to this origin
  • create an S3 bucket for the check file
  • add this bucket as an origin
  • route all traffic for .well-known/acme-challenge/* to the S3 origin

It’s important to know that the S3 bucket also needs to have a .well-known/acme-challenge/ folder. That is where you have to store the check file.
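Placing the check file can be scripted with the AWS CLI (a sketch: the bucket name, token, and thumbprint are placeholders, not values from this thread):

```shell
TOKEN=placeholder-token                    # provided by the LE client during the challenge
KEYAUTH="$TOKEN.placeholder-thumbprint"    # the expected contents of the check file

echo "$KEYAUTH" > "$TOKEN"
# the object key must mirror the path CloudFront routes to this origin
aws s3 cp "$TOKEN" "s3://my-acme-bucket/.well-known/acme-challenge/$TOKEN" \
  --content-type text/plain
```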


Your best bet is probably to use the DNS challenge type instead of the HTTP (or TLS-SNI) types, because it won’t require you to worry about the responses coming back from the actual service.

The DNS challenge type has just recently been restored to STAGING, and I believe it may even be in BETA/PROD now. So once you complete the challenge strictly through DNS entry manipulation, you can issue the certs and install them to AWS to be used in any capacity.

The official LE client won’t help with installing to AWS, but you can generate the cert and then install it manually or script something up using the AWS CLI. Similarly, the LE client can’t help out with the DNS challenge either; you’ll need to manually adjust the DNS settings before submitting the challenge response.
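For the "script something up" part, uploading a generated cert so it can be assigned to AWS front-ends looks roughly like this (a sketch: the certificate name and file paths are placeholders):

```shell
# --path /cloudfront/ is required if the cert will be attached to a
# CloudFront distribution; the three files come from the LE client's output
aws iam upload-server-certificate \
  --server-certificate-name example-le-cert \
  --certificate-body file://cert.pem \
  --private-key file://privkey.pem \
  --certificate-chain file://chain.pem \
  --path /cloudfront/
```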

If you have a Windows environment, the ACMESharp project actually does support AWS providers for both DNS challenge handling (if your DNS is hosted in Route 53) and for “installing” the generated cert to AWS IAM. Once it’s in IAM, it can be assigned to any AWS front-end (ELB, CF, etc.).