Issuing a large number of certs for browser-accessible P2P clients

I have various use cases where I'd like to internetwork web browser clients with a large pool of smart clients that also act as web servers, including handling things like hole-punching and UPnP port forwarding automatically.

Imagine some central domain, my-p2pish-service.com, with a subdomain, clients.my-p2pish-service.com. The domain has some directory functionality where a client can map a data request to another client, and the response to that request is a "somehash.clients.my-p2pish-service.com", where "somehash" is a dynamic DNS label updated to correspond to the IP address of the client possessing some key that can generate "somehash".
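Concretely, something like the following rough sketch is what I have in mind for deriving the label. The hash, encoding, and function names are just illustrative, not a fixed design:

```python
import base64
import hashlib

def client_label(public_key_bytes: bytes) -> str:
    """Hypothetical: derive a DNS-safe 'somehash' label from a client's public key."""
    digest = hashlib.sha256(public_key_bytes).digest()
    # Base32 keeps the label within the DNS alphabet; truncating to 20 bytes
    # keeps it well under the 63-character label limit.
    return base64.b32encode(digest[:20]).decode("ascii").rstrip("=").lower()

def client_fqdn(public_key_bytes: bytes) -> str:
    """The name the directory would point at the client's current IP via dynamic DNS."""
    return f"{client_label(public_key_bytes)}.clients.my-p2pish-service.com"

print(client_fqdn(b"example public key bytes"))
```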

Is there any mechanism in Let's Encrypt that would permit issuing certificates to a potentially large number of clients with DNS names under "clients.my-p2pish-service.com"? What about policies for sharing certificates between clients? For example, I know up to 100 names may appear in a certificate. Is there any policy that would prevent sharing one certificate between multiple clients, assuming the certificate provides no additional benefit to the protocol running in the application layer, which has separate authentication mechanisms of its own?

The goal of this is simply to expose P2P servers directly to browser clients using modern web protocols, all of which require TLS, such that the traffic more or less resembles regular web traffic, and without depending on expensive and centralized signalling boxes like STUN and TURN servers, or on complex networking and client software, as in the case of a solution like WebRTC.

Hi,

I think you might want to look at the Public Suffix List, which will allow Let's Encrypt and other services to treat those subdomains as separate websites (thus different cookies, different rate limits). I personally think it's horrible to distribute one central certificate and key to your clients, since it's still possible for it to leak.

If you are using the PSL, maybe it's better to adopt a new domain for those purposes (I heard there might be some issues if you have a main website at the apex and the domain is designated as a public suffix... not sure).


Registering the domain on the PSL hadn't even crossed my mind! It almost feels like cheating, but indeed it does fit within the rate limits. Great idea, thanks for this.

Mostly this is just thinking out loud so far. I just keep bumping into the same desire while thinking about designs for various systems.


Well, if they're all "sharing one certificate", then they'd be sharing the private key too, meaning that each system could impersonate the others.

It's not clear to me what problem you're actually trying to solve here by adding Let's Encrypt to the mix. If you're forming your own network of systems talking to each other, it's probably easier for them to make their own keys (maybe doing something Tor-hidden-service-like, with the public key being part of how you address them), or to use your own private CA, than to try to involve a public CA. You'd only need a public CA if you're trying to have "strangers" connect to these systems directly over the web in their web browser and need it to be trusted by them by default. I think that the reason for the complexity behind STUN/TURN/WebRTC/etc. is more about default firewall rules (and how IPv4 is usually deployed behind NAT) than about authentication.
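For illustration, here is a rough sketch of the private-CA route using the Python `cryptography` library. The names, key types, and lifetimes are just placeholders, and of course browsers won't trust this CA by default, which is exactly the gap a public CA fills:

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

now = datetime.datetime.now(datetime.timezone.utc)

# Self-signed root for your own network of clients.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "p2pish private CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Leaf certificate for one client, keyed to that client's own private key,
# so no key is ever shared between clients.
client_key = ec.generate_private_key(ec.SECP256R1())
client_fqdn = "somehash.clients.my-p2pish-service.com"
client_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, client_fqdn)]))
    .issuer_name(ca_name)
    .public_key(client_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(client_fqdn)]), critical=False)
    .sign(ca_key, hashes.SHA256())
)
```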


The certificate-sharing option is definitely a terrible one; I mention it only as one possibility.

There are a few interesting elements:

  • Simplified networking. Hosting the WebRTC stack outside a browser is a bit of a mess. There are a few good libraries, but generally I would not wish that much complexity on anyone if it can be avoided; it is inherent in the design of WebRTC rather than specific to any particular implementation. It is also significantly harder to diagnose a failed WebRTC connection, and connections require signalling infrastructure beyond simply knowing a peer's name.

  • Filtering resistance. QUIC in particular looks like it will become a go-to protocol over the next 5-10 years in P2P applications, due to explicitly designed-in resistance to various kinds of identification/filtering middleboxes. There is a vague promise that at some point in the near future all web traffic will look like QUIC, and any protocol that can live within QUIC will in turn look like web traffic, such that a middlebox's choices are (mostly) reduced to allowing or denying all traffic. That's desirable for many kinds of P2P application.

  • Browser accessibility. This is probably self-explanatory, but applications reach a much, much wider audience when they require no installation and can run in a browser.

As for a specific use case, one area I've been curious about is social tools where the service itself doesn't know much of anything about end-user identities or relationships, and mostly just provides a nice UI and the ability for that UI to find the other folks it wants to talk to.

Can't you get a different wildcard certificate for each client?
*.clients.my-p2pish-service.com
