Round-robins and IRC

Based on the discussion of the proposed IRCv3.3 STS specification, I started wondering how Let's Encrypt would work with IRC, where round-robin DNS is commonly used.

  • Will one operator have to generate a certificate for irc.example.net, and will the others have to sync it to their servers at least every 90 days?
  • Or can all operators validate their right to irc.example.net by adding DNS records to it?
    • If yes, could the server operators also generate a certificate that is valid for servername.example.net in addition to irc.example.net?

STS does not pin to a specific public key, so there are two options:
a) Each server has its own FQDN (e.g. srv.example.net) with its own keys.
b) They use round-robin DNS, in which case they should all use the same private key.

IRC networks with multiple servers almost always use round-robin DNS, so that users connect to irc.example.net and, if one server goes down, they reconnect until they reach a working server.
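For illustration, such a round-robin is just multiple A records under one name (hypothetical zone fragment; names and addresses are made up):

```
; hypothetical BIND zone fragment: irc.example.net round-robins
; across three servers, each of which also has its own name
irc.example.net.    300  IN  A  192.0.2.10   ; irc1.example.net
irc.example.net.    300  IN  A  192.0.2.11   ; irc2.example.net
irc.example.net.    300  IN  A  192.0.2.12   ; irc3.example.net
```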

So it’s not possible to have a different valid key on each server of the round robin?

Hi,
yes, it is possible. But you have to decide who the “key-master” is, because you need one person who holds the account key for issuing the certificates. You can always request the certificate again. In this use case, though, I would suggest using the same key/cert for all servers in one “cluster”. Alternatively, irc.example.net resolves to multiple A records, each server with its own individual key. But I am not sure how clients will handle that case.
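If one operator acts as the key-master, the others only need to pull the renewed key/cert periodically and reload their ircd. A minimal sketch, assuming the key-master is irc1.example.net and the material lives in /etc/ssl/irc (hostname, user, paths, and schedule are all made up):

```
# hypothetical crontab entry on each secondary server:
# pull the shared key/cert from the key-master once a day,
# then reload the ircd so it picks up the new certificate
30 4 * * * rsync -a ircadmin@irc1.example.net:/etc/ssl/irc/ /etc/ssl/irc/ && systemctl reload inspircd
```

Since Let's Encrypt certificates are valid for 90 days and renewed around day 60, a daily pull leaves plenty of slack.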

And the automation that Let's Encrypt recommends so strongly doesn't seem to be possible in this case, or at least not easily.

I've set up a test bed using InspIRCd and NGINX, with NGINX acting as a reverse proxy to handle round-robined incoming challenge requests.

  • Each node is available as irc.example.com as well as under its own name ircN.example.com, and MUST NOT answer challenge requests for any names other than those two
  • Each node requests certificates on its own, using the subjectAltName "irc.example.com" as well as its own "ircN.example.com"
  • dehydrated (a lightweight ACME client) uses the local file system to place the webroot challenge into a directory where NGINX can serve it
  • Each NGINX tries to answer an incoming challenge request on its own; if that fails with a 404, it proxies the request to the next node, and so on...
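With dehydrated, that per-node SAN list is a single line in domains.txt (hypothetical example for the node irc2.example.com; the first name becomes the certificate's common name, the rest become additional subjectAltNames):

```
# /etc/dehydrated/domains.txt on irc2.example.com (hypothetical path)
irc.example.com irc2.example.com
```

Running dehydrated in cron mode (`dehydrated -c`) then requests and renews one certificate covering both names.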

NGINX common webroot config

location /.well-known/acme-challenge {
    alias /var/www/dehydrated;
    try_files $uri @acme-challenge;
}

location @acme-challenge {
    proxy_set_header Connection "";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass https://acme-challenge;
    proxy_next_upstream http_404;
    proxy_intercept_errors on;
}

NGINX upstream config irc1.example.com

  upstream acme-challenge {
    server irc2.example.com:443;
    server irc3.example.com:443;
  }

NGINX upstream config irc2.example.com

  upstream acme-challenge {
    server irc1.example.com:443;
    server irc3.example.com:443;
  }

NGINX upstream config irc3.example.com

  upstream acme-challenge {
    server irc1.example.com:443;
    server irc2.example.com:443;
  }

This is working well so far. Another, as yet unsolved, problem is fingerprinting: you can't just replace your certs without informing every other node about the new fingerprint beforehand.
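One way to soften the fingerprint problem might be to pin the public key (SPKI) rather than the whole certificate: if each renewal reuses the same private key, the SPKI hash stays stable even though the certificate changes. A sketch of computing both hashes with OpenSSL (a throwaway self-signed cert stands in for a real one here; filenames are made up):

```shell
# generate a throwaway key + self-signed cert for demonstration
openssl req -x509 -new -newkey rsa:2048 -nodes -subj "/CN=irc.example.com" \
    -keyout key.pem -out cert.pem -days 1 2>/dev/null

# SHA-256 hash of the certificate's public key (SPKI) --
# stays the same across renewals as long as the private key is reused
openssl x509 -in cert.pem -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256

# SHA-256 fingerprint of the whole certificate --
# changes with every renewal
openssl x509 -in cert.pem -noout -fingerprint -sha256
```

Whether pinning SPKI instead of the certificate fingerprint is viable depends on what the ircd and the other nodes actually check.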