I'm working on a new ACME client that internally uses a streaming API to generate the JWS bodies for POST requests. Since the body length isn't known up front, I enabled chunked transfer encoding in the HTTP client, but I get a 400 response from pebble with the body:
I think this boulder PR introduced the requirement, and this issue mentions that it was added to emulate Akamai's behavior in local development. I can't find anything in the Akamai docs about what the check does or why it exists (zip-bomb prevention?), and the response from both staging & production seems to come from boulder, since it's application/problem+json and there is a ContentLengthRequired prom counter. RFC 8555 doesn't mention any transfer-encoding or content-length requirements.
Can you please shed some light on why chunked request bodies are not allowed?
I'm honestly not sure! Especially since we're requiring the presence of a Content-Length header, but not rejecting requests whose header indicates that they are too large, nor rejecting requests whose length doesn't match what the header says.
OK, any chance you could confirm that there's no Akamai rule intercepting these requests? If it's no longer there, the check shouldn't be needed for dev/prod parity.
And if that's the case, would you accept a boulder/pebble PR to drop this check? I'm thinking the request body could be wrapped in a LimitReader instead.