I have to say I sympathise more with jvanasco’s position on this. The reality is that Let’s Encrypt is not just a generic install of Boulder: its CA function is tied very closely to a lot of (expensive) infrastructure and personnel. So it’s not enough for a serious client to check that its code works against its own copy of Boulder (though I’d certainly advise doing that); it must also be verified against the actual Let’s Encrypt system.
I’d liken this to payment gateways. It’s useful to test code against your own implementation of a gateway’s APIs, where you can fake responses timing out or returning unusual errors. But it’s very valuable when the gateway’s own pre-production environment offers a way to see its system produce some errors for real, e.g. a credit card number that appears valid but always fails verification. I would always prefer a provider that offers me this over one that says I should just test my own systems and then cross my fingers.
It may be that this is a lot of work for Let’s Encrypt, but I’d hope that if they’re amenable to the feature but just don’t have the manpower to deliver it, they’d be open to client maintainers like jvanasco pitching in to do the initial development and testing in the Boulder code.
One thing though, jvanasco: I don’t think your unit tests ought to rely on the Internet if possible. To me a unit test ought to be very self-contained. End-to-end tests are valuable, but should be the last step in testing, finding only the subtle problems that occur when the whole system is assembled and used, typically as a result of a design mistake rather than a typo or numerical error.
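To illustrate the self-contained style I mean, here’s a minimal sketch in Python. `fetch_directory` is a hypothetical helper (not from any real ACME client) that takes its HTTP getter as a parameter, so the unit test can hand it a canned ACME directory document instead of ever touching the network:

```python
import json
from unittest import mock

def fetch_directory(http_get):
    """Fetch and parse an ACME directory using an injected HTTP getter.

    `http_get` is any callable taking a URL and returning a response body;
    injecting it is what lets tests stay off the Internet.
    """
    body = http_get("https://acme-v02.api.letsencrypt.org/directory")
    data = json.loads(body)
    if "newOrder" not in data:
        raise ValueError("directory missing newOrder endpoint")
    return data

# Unit test: fake the network layer entirely -- no Internet needed.
fake_get = mock.Mock(return_value=json.dumps(
    {"newOrder": "https://example.invalid/acme/new-order"}))
directory = fetch_directory(fake_get)
assert directory["newOrder"] == "https://example.invalid/acme/new-order"
fake_get.assert_called_once()

# The error path is just as easy to exercise with a canned bad response.
try:
    fetch_directory(mock.Mock(return_value="{}"))
    raise AssertionError("expected ValueError for malformed directory")
except ValueError:
    pass
```

The end-to-end test against the real (staging) endpoint still has its place, but it comes last and catches a different class of problem.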