We're developing an ACME server for in-house use and recently got to the point of testing with the old, but still functional, acmeshell client. Orders are being rejected because field names in the client-generated JSON contain capital letters.
The examples and specifications in the RFC are all lowercase, with no mention made one way or the other about case. JWS makes no mention of it either, except to say that parameter names in the header objects (e.g. kid, alg) are case-sensitive. JSON itself is of course case-sensitive.
So is this an error in this particular client that we can safely ignore and find another test client, or should we be pre-processing incoming requests to ensure consistent casing?
I think that the references to JSON in the RFC mean that everything is case-sensitive. I haven't explored that tool, but as far as I understand it's made by someone who was instrumental in getting ACME off the ground, so I'd be surprised if it produced something non-conforming. Interesting. I'm curious what various existing server implementations do.
Boulder parses all incoming json object keys case-insensitively, simply because that's the behavior of the go json standard library package. That behavior is changed by the new json/v2 package, but we'll likely have to preserve our case-insensitive behavior when we update for the sake of backwards compatibility.
Hmm, does this mean that Pebble (which also uses the Go json standard library) accepts arbitrary casing as well? I would have assumed that Pebble is stricter in this regard (but then, I never tested). It would probably be great if Pebble stuck to case-sensitive parsing, at least by default, with a switch to case-insensitive so that clients that don't use the strict casing can keep testing, especially now that json/v2 is there.
There really are a lot of clients out there that have only tested against Let's Encrypt.
Yeah, I was going to say the same thing. It'd also be great if there were a client of some sort analogous to Pebble, so that ACME server implementations can test against some, uh, looser interpretations of the standards that are out there.
It might be nice if ACME profiles could help with this. I don't even know that one needs a specific "strict" profile; clients that pass a profile at all are hopefully staying more up-to-date, so requests that specify a profile probably imply that the client won't be surprised by asynchronous order finalization, case-sensitive handling of keys, requiring POST-as-GET, and so forth. It's also a little more robust than changing behavior based on user-agent. Though I totally get that trying to support both "strict" and "loose" clients might expand your test matrices more than desired, too.
Ok; if that's what Boulder does, then our server should probably follow that lead and lowercase all member names before processing. I find it very surprising that a JSON implementation in a fairly big language like Go could get such a basic detail wrong! Relevant historical discussion: json
It's certainly led to some grief. But it's because capitalization is meaningful in Go structs: only fields whose names start with an uppercase letter are exported. If you have
```go
type Identifier struct {
	typ   string // "type" is a Go keyword, so the field is named "typ" here
	value string
}
```
then those two fields are unexported and can't be seen by the json package when it's time to serialize the struct to a JSON string, or to fill the struct by parsing one. For parsing/serialization to work, you're required to write
```go
type Identifier struct {
	Type  string
	Value string
}
```
If the json package were case-sensitive, all Go programs would default to requiring capitalized JSON keys. But the most popular JSON style guides recommend that all keys be camelCase with a lowercase first letter. So the Go authors instead chose for the json package to be case-insensitive by default, to make it easy for uppercased struct fields to be converted to and from lowercased JSON keys.
Anyway, none of that is particularly relevant here, and they've clearly realized that case sensitivity is the better tradeoff in the long run with json/v2. All of that is just to say that they "got it wrong" very much on purpose, for understandable reasons.
JSON's supposed advantage of being "simple" means there's actually a lot of inconsistency between parsing implementations once one starts looking at corner cases. There are no fewer than seven specifications, and even the most recent leaves plenty of ambiguity around what counts as an allowed "extension" or a "parsing limit".