Dear comrades
I am new to the forum and hope you can offer some valuable ideas on how to solve this problem.
I wanted to migrate my Nextcloud environment from my Raspberry Pi at home to AWS.
For security reasons I intend to put the server running Nextcloud AIO behind an Application Load Balancer with AWS WAF in front.
To do so, I need to import the automatically issued Let's Encrypt certificate from the Nextcloud server into AWS Certificate Manager for use in the AWS ALB.
It is easy to get the certificate and the key from the Nextcloud EC2 instance but it seems that AWS does not like the certificate chain.
My domain is leviathan.cbdk.ch
This is the server certificate I paste into the AWS Certificate Manager "Certificate body" field:
Of course I also enter the private key, but when I click Next in the import certificate wizard I get the following error: "The certificate chain provided is not in a valid PEM format."
I have also tried adding the server certificate to the chain, and reversing the order of the root and intermediate certificates, but to no avail. Now I'm at my wits' end and counting on you!
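In case it helps, this is roughly how I sanity-check the file before pasting it (a sketch; `chain.pem` and the self-signed demo cert are stand-ins so the commands run anywhere, the real file is the Let's Encrypt chain copied off the instance):

```shell
# Stand-in: create a throwaway self-signed cert so this runs anywhere.
# (In reality chain.pem is the Let's Encrypt chain from the Nextcloud host.)
openssl req -x509 -newkey rsa:2048 -nodes -keyout tmp.key -out chain.pem \
  -days 1 -subj "/CN=demo.test" 2>/dev/null

# Every PEM block in the file must parse; this pipeline fails loudly if any
# block is malformed (BOM, CRLF endings, truncated base64, stray characters)
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout
```

If the second command prints a subject/issuer pair for every certificate and exits cleanly, the file itself is well-formed PEM.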
Thanx in advance for your feedback.
Best regards
Cyrill
Is there a reason you want to use a Let's Encrypt cert? AWS ACM issues certs that are easy to integrate with their services, like the ALB.
As to your specific question, you may need to post on AWS re:Post or maybe Stack Overflow to find out why ACM rejects your chain. Unless the error just means it does not like the expired DST Root CA X3 in the chain. You could try removing the second cert from the chain and see if that helps. (or just use an AWS ACM cert <g>)
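To spot that expired root, you can split the fullchain into its individual certificates and print each one's subject and expiry. A sketch (the two throwaway self-signed certs here only stand in for your real `fullchain.pem` from Let's Encrypt):

```shell
# Stand-in chain: two throwaway self-signed certs concatenated.
# (Assumption: in your case fullchain.pem is the Let's Encrypt fullchain.)
openssl req -x509 -newkey rsa:2048 -nodes -keyout a.key -out a.pem \
  -days 1 -subj "/CN=leaf.test" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout b.key -out b.pem \
  -days 1 -subj "/CN=root.test" 2>/dev/null
cat a.pem b.pem > fullchain.pem

# Split fullchain.pem into cert-01.pem, cert-02.pem, ...
awk '/-----BEGIN CERTIFICATE-----/ {n++; f=sprintf("cert-%02d.pem", n)}
     n {print > f}' fullchain.pem

# Print subject and expiry of each cert; an expired DST Root CA X3
# (notAfter in Sep 2021) would stand out here
for c in cert-*.pem; do
  echo "== $c"
  openssl x509 -in "$c" -noout -subject -enddate
done
```

You can then paste only the non-expired intermediates into the ACM chain field.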
Thanx for the welcome.
If I weren't tied to Let's Encrypt I would use a native AWS certificate, but that would mean replacing the automatically generated certificate on my Nextcloud AIO backend server. I would rather not meddle with all the automated scripts that come with this flavor of Nextcloud.
I guess if I replaced the Let's Encrypt certificate on the server with an AWS certificate, it would get overwritten on the next renewal cycle of the Let's Encrypt script.
I assume it is easier to handle an AWS ALB and certificates there than in a highly automated appliance.
With the ALB you (probably) have two different HTTPS connections: one between the client and the ALB, and another from the ALB to your server. You could use an ACM cert on the ALB for HTTPS between the client and the ALB, and retain your LE cert in Nextcloud for the TLS connection between the ALB and your server.
You cannot use AWS ACM certs on your own servers. They can only be used in certain AWS services like the load balancers, CloudFront, and such.
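If you do end up importing the LE cert instead, the same import can be done with the AWS CLI, which often gives a more specific error than the console wizard. A sketch, assuming the three files were copied off the Nextcloud host (file names here are placeholders):

```shell
# Import a certificate into ACM from the CLI. cert.pem is the leaf only,
# chain.pem the intermediates (no leaf, no expired root), privkey.pem the key.
# fileb:// makes the CLI pass the raw file bytes.
aws acm import-certificate \
  --certificate fileb://cert.pem \
  --certificate-chain fileb://chain.pem \
  --private-key fileb://privkey.pem \
  --region eu-central-1
```

Note the region must be the one where your ALB lives, since ACM certs are regional.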
Hi @rg305
Pura vida, and onward to victory
Unfortunately this did not work out as expected.
No combination (one, one-two, two-one, or only two; I hope you get my meaning) changed the outcome: "The certificate chain provided is not in a valid PEM format."
I'll try the workaround of @MikeMcQ and let you know the outcome.
If nothing works, I will check on AWS support general guidance too. Let's see what they say.
I doubt this is the issue as you're coming from a Raspberry Pi, but have you ensured your certificates have the correct newline characters?
Some clients/libraries can only handle a specific newline character, while others do not care. DOS/Windows uses CR+LF (\r\n), while Linux/Mac use LF (\n). This is often the cause of weird "not valid" issues.
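A quick way to check and fix this from the shell (the demo file here just stands in for the chain text you would paste into the ACM wizard):

```shell
# Demo file with Windows (CRLF) line endings, standing in for the chain file
printf 'LINE-ONE\r\nLINE-TWO\r\n' > chain-crlf.txt

# Strip the carriage returns so only Unix \n newlines remain
tr -d '\r' < chain-crlf.txt > chain-unix.txt
```

`dos2unix`, if installed, does the same conversion in place; `file chain-crlf.txt` will also report "CRLF line terminators" so you can tell which variant you have.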
I think there is a misunderstanding. The Nextcloud AIO server is completely new as well as its certificate.
Thank you for the hint though.
I have tried converting the certificate and its chain using Notepad++, entering the certificate chain with Mac, Linux and Windows newline characters in any combination of one, one-two, two-one or only two. AWS still does not like it and shows the same error for all versions.
I finally tried copy/pasting the certificate and chain directly from the Linux shell, but that was not successful either.
Good start into the day for all of you. I appreciate your feedback.
Hopefully I find the time to test the workaround that Mike has proposed.