Subject Alternative Name (SAN): field type ipAddress (IP in a CSR)

First off, my hat is off and kudos to the Internet Security Research Group (ISRG) for bringing some sanity to the encryption-certificate aspect of the web. Thank you.

I recognize this post's topic is policy and politics. Technically, it looks like the code already exists in Boulder (if I am wrong about that, point me at any references and I would gladly work up and propose patches for review and test), and it was clearly intended by the specification (RFC 5280).

This is not a request for an IP in field type dNSName; that goes against the RFC.

This is not a request for an IP in the CN.

This is not a request for an IP-only cert.

This is a request to allow and verify subject alternative names of type ipAddress when the name returned by a DNS query for the PTR resource record (RR) of the IP address (a reverse lookup) matches the requested certificate's top-level domain name (TLD) and/or fully qualified domain name (FQDN), where both the name and the IP are present and intended to be included in the certificate being requested (CSR SAN field type ipAddress).

I ask for this because a site that has multiple IP addresses provided by different internet service providers (multi-homed) has no other usable method to let the user select the best connection except to form an initial connection to the preferred IP address. This establishes the user's preferred route, which is cached by the user's browser and which the browser will continue to use. This has a significant impact on web applications that interact in real time with real-world activities.

I believe the rationale used to restrict (recommend that) a certification authority (CA) not issue a certificate where the subject alternative name of an X.509 v3 certificate contains a field of type ipAddress is based on FUD:

  1. It implies that operating systems are affected, yet only one browser on those operating systems is known to be negatively impacted (two other browsers on the same operating systems handle this without problems).

  2. The affected browser does not fail when the certificate is accessed using the FQDN (the presence of the ipAddress in the SAN does not cause failure); only when the browser attempts to access the site via the IP address does it fail to acknowledge the valid certificate (the browser does not crash).

  3. There is a workaround available (configuration of the browser) that does not involve misuse of the dNSName field type of the CSR SAN.

  4. The topic is dated; this has been known for over 5 years. The software vendor has had ample opportunity to resolve the issue.

  5. The stated requirement is that a certificate authority (CA) not publish a cert with an IP address in the SAN field type dNSName (the last line on the page, and poorly written, which is part of why I claim FUD).

It is curious that the recommendation was not allowed into the published rules; it sits as a linked page floating without identifying authority or any expectation of expiration, just a roadblock placed there for one browser which the page itself denotes as legacy software.

RFC 5280 explicitly acknowledges that an IP address is appropriately present in the subject alternative name (page 35). (The RFC 6818 update does not appear to change this.)

almost advocates the use thereof, and certainly does not conflict with

(almost a clone of the baseline), nor

which is very vague on the topic.

If it is actually the intent of ISRG to “never allow” the use of SAN field type ipAddress, it really should be stated as such in one of the above-mentioned documents (CP/CPS). I read everywhere I thought might contain a clue in this regard before I started crafting this post. The reference is the only item I found (posted on the pages, thank you for that).

The legacy product referred to on the page is the browser, not the operating system; other browsers available for these operating systems work fine.

By the way, that one is dated 2011 and conveniently there is no date on

The product in question does not fail when the browser is pointed at the domain name, only when referencing the IP address.

The defect has been known for a long time; a Google search for “microsoft subject alternative name ipAddress” turns up reports from 2011 (over 5 years ago) that describe the problem and offer browser-configuration solutions that work for IPv4 (but, interestingly, not IPv6).

Those individuals using the legacy Microsoft product have had no trouble with the self-signed root and daily cert that I have been providing for our site's users for the past ten (10) years; I have all of the IP addresses incorporated in the subject alternative name.

Any security-conscious entity will not be using the legacy Microsoft product.

Qualifying the authority to use an IP address is possible through the DNS PTR resource record; any site that needs an IP address in a certificate probably has enough knowledge to get a PTR record pointing at the same fully qualified domain name as is being requested for the certificate (CN), or at least the same TLD, with the FQDN present in the SAN.
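To make the proposed check concrete, here is a minimal sketch in Python. The `ptr_authorizes` helper is a hypothetical name, and the `reverse_lookup` callable is injectable so the logic can be read (and tested) without live DNS; a real CA would also need the Public Suffix List rather than the naive last-two-labels comparison used here.

```python
import socket

def ptr_authorizes(ip, fqdn, reverse_lookup=None):
    """Sketch of the proposed validation: the PTR record for `ip`
    must resolve to the requested FQDN, or at least share its
    registered domain.  `reverse_lookup` defaults to the system
    resolver but can be replaced for offline testing."""
    if reverse_lookup is None:
        reverse_lookup = lambda addr: socket.gethostbyaddr(addr)[0]
    try:
        ptr_name = reverse_lookup(ip).rstrip(".").lower()
    except OSError:
        return False  # no PTR record: no authorization
    fqdn = fqdn.rstrip(".").lower()
    if ptr_name == fqdn:
        return True
    # Fall back to comparing the last two labels (the "TLD" match the
    # post describes); a production CA would consult the Public Suffix
    # List here instead of assuming two labels.
    return ptr_name.split(".")[-2:] == fqdn.split(".")[-2:]
```

With a fake resolver, `ptr_authorizes("203.0.113.10", "www.example.com", reverse_lookup=lambda a: "mail.example.com")` succeeds on the shared registered domain, while a PTR pointing at an unrelated name fails.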

The section “Authentication for an IP Address”

explains about the same thing in pretty plain language.

Why this is an issue for me:

I work with web application software that provides user interaction with real-world events in real time; latency from every source is an issue (yes, I staple OCSP).

The certificate handshake of a client browser with an HTTPS server occurs prior to any configuration control (name rewrite / redirect) by the site/server owner.

A site owner has little control over route selection; the site may provide direct connections to more than one internet service provider (multi-homed) but cannot directly advertise this routing information to a client browser. The only method to advertise the existence of multiple IP addresses is through the domain name server, providing a “round robin” IP address selection.

When the subject alternative name acknowledges the available IP addresses, the client browser's user can point the browser at an IP address, conduct the certificate handshake, then allow the site's HTTPS server to coordinate the naming convention, letting the browser's cached IP address and associated route provide the anticipated level of service.

If we believe the polls (searched: “browser market share”, July 2017), less than 3% of the browsers in the wild will be unable to take advantage of this technique and will be stuck with a round-robin / hit-or-miss route.

Removing the IP address from the subject alternative name causes 100% of browsers to fail the certificate handshake, leaving degraded service as the only alternative.

One alternative method to allow access to the web server where selection of the IP address is done by name would be to list each IP as a uniquely named host, which would lead to the TLD being referenced as a CNAME. This would probably work for Let's Encrypt but would be an absolute failure for SMTP/email. And no, promoting one of these internet service providers as the primary service provider (assigning the TLD to that IP) causes problems of its own, as simple as Google claiming duplicate content and the inability to refer to the site as a whole.

Please consider enabling the ability to follow the spec and allow placing the reverse path in the cert.

Thank you.

P.S. This may also offer some temporary relief and alternatives for those fighting with IPv6-related accessibility issues.

I think you may be operating under a bit of a misunderstanding. It sounds like you want to put a set of IP addresses in a certificate so that browsers will choose one of those IP addresses to connect to. There are two problems with this idea:

  1. By the time a browser gets a certificate, it has already chosen an IP address (from the DNS) and connected to it. Reconnecting to a different IP address would almost always make things slower.
  2. No browser actually implements this logic today.


The intended usage is that the user selects their preferred route / internet service provider by entering, bookmarking, or following a link to a specific address (known to work best for them); the role of the certificate in this respect is simply to acknowledge that the IP address is acceptable for the use of the certificate.

The HTTP 301 redirect from IP address to domain name does not cause a reconnection / renegotiation; the browser continues to use the cached route to conduct the communication.

These internet service providers are in direct competition for the same customers and provide disparate routes to their competitors' addresses.

That's fair, but it shouldn't be necessary to explicitly type the IP addresses into a browser's URL bar in order to achieve that goal. If a web site wants to facilitate this (which it would also have to do in order to want to apply for certificates with IP addresses as subject names), it can provide DNS names that reflect those two or more different routes, e.g. route-specific hostnames under the site's domain. The web server can be configured to serve the same content under each name and to set cookies at the parent-domain level so that they will work regardless of which way the site is accessed. And, as with the IP address example, internal site links and references can be made relative (like <A HREF="../foo"> instead of an absolute URL) so that they also work regardless of how the server is being accessed.
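To illustrate the relative-link point, Python's standard `urllib.parse.urljoin` shows how a relative href resolves against whatever base the page was fetched under; the hostnames below are placeholders, not real sites.

```python
from urllib.parse import urljoin

# The same relative link works no matter how the page was reached;
# an absolute link would pin every click to one hostname.
bases = [
    "https://route1.example.com/app/page",  # hypothetical route-specific name
    "https://203.0.113.10/app/page",        # same page reached by bare IP
]
for base in bases:
    print(urljoin(base, "../foo"))
```

In both cases the link resolves to `/foo` on whatever host the browser is already talking to, so the cached route is preserved.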

I think this would provide all of the same user benefits as the scenario you describe, while not being much more work for the server operator, and it's achievable today with Let's Encrypt simply by listing all of these DNS names as SANs in the certificate. In fact, cookie authentication will work better in this model than in the IP address case, because you can choose to preserve a session across alternate names for the service, while you can't do so easily when the alternate names are bare IP addresses.

I recognize your point that no requirement other than an A or AAAA record on the domain name server is needed to satisfy Let's Encrypt.

The concern that raises is that you leave a single point of failure that can knock out the HTTPS sites. We have seen a DDoS against the Dyn name servers knock out DNS service to a significant portion of the North American east coast. Natural events and server outages all leave DNS vulnerable.

Explaining to a user that a non-local event has made a local event inaccessible is difficult, especially when they can get to the unencrypted website by typing in the IP address but cannot access the encrypted site due to the certificate being used.

Sounds like FUD against DNS. If your DNS is DDoS-prone, you need more DNS servers. Everything you need can be accomplished with DNS as schoen laid out. You don’t need TLS to accommodate layer 3 for this.

It was not intended as FUD. Any DNS outage, whether isolated, local, or widespread, leaves access available only via IP address (i.e., layer 3), and TLS-enabled sites using a certificate that is not layer-3 aware will respond with a “cert not for this site” (my words) message that the user cannot get past. This issue is not specific to certificates.

As schoen brought out, adding DNS records for route-specific fully qualified domain names and incorporating them into the subject alternative name will accomplish what I perceive as needed for my site, and what I was asking for in the original post.

Of course you can. You'll get a certificate error, of course, but you can access it. If your users are savvy enough to browse to an IP address in the event of a widespread DNS outage, they should understand what's going on with the certificate.

I understand what you are getting at. This has been a suggestion in security circles recently.

Search “multiple dns server” in Google and there are a few suggestions.

What I haven’t figured out is how you would keep the records in sync across multiple providers. This is more me being out of practice with DNS.


It depends. :smile: For basic, old fashioned DNS providers, you may be able to use standard zone transfers.

Some providers only support bespoke APIs. Stack Overflow, as an example, wrote DNSControl to drive multiple providers with one configuration file and one tool.

If you like suffering, you can copy and paste between every provider’s web interface. :upside_down:


I think I have another use case that requires adding an IP address to the SAN field, along with a dNSName, but I would be very grateful to be shown another solution: it is client-side load-balancing.

The driver’s load-balancing policy goes like this (think of a public Cassandra cluster where it is the driver that chooses which IP to connect to):

  • request the list of server IPs from DNS, using the domain name;
  • randomly select an IP from the list on each connection (and remove obsolete IPs as they are found);
  • update the list on a regular basis from any node, so nodes can be dynamically added or removed without service interruption (as with continuous deployment).
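The three steps above can be sketched roughly as follows (Python; `NodeList` is a hypothetical class name, and `resolver` stands in for the DNS lookup so the selection and refresh logic can be exercised offline):

```python
import random

class NodeList:
    """Sketch of the driver's load-balancing policy.  `resolver` is
    any callable returning the current list of node IPs (in reality,
    a DNS query for the cluster's domain name)."""

    def __init__(self, resolver, rng=random):
        self.resolver = resolver
        self.rng = rng
        self.nodes = list(resolver())  # step 1: initial list from DNS

    def pick(self):
        # step 2: randomly select an IP for each new connection
        return self.rng.choice(self.nodes)

    def mark_dead(self, ip):
        # step 2 (cont.): drop obsolete IPs as they are found
        self.nodes = [n for n in self.nodes if n != ip]

    def refresh(self):
        # step 3: periodically re-resolve so nodes can be added or
        # removed without service interruption
        self.nodes = list(self.resolver())
```

The certificate question is orthogonal to this logic: whichever IP `pick()` returns, the TLS layer still needs a name (or an IP SAN) to validate against.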

Do you have an idea how one could implement this logic with Let’s Encrypt’s certificates?

Thank you in advance,


IP addresses cannot be used as SAN entries in Let’s Encrypt certificates.

You may need to use self-signed certificates, but it should be OK as these are database backends and are accessed by drivers.


Thank you Andrei. It is a public API though, and I would prefer not to bypass SSL certificate checking.

I forgot to mention that I was hoping to use a SAN entry like this:,IP:X.X.X.X
with the IP address declared in the DNS record.

Nowhere in that scenario do you need the IP address in a certificate. When a browser connects to a website, that connection is to an IP address as well, but the browser knows which DNS name it originally wanted.

Even if you bypass DNS at some point to retrieve more IP addresses, you still know which name you originally wanted. I fail to see the problem at all.


Hi @admin1,

There are a number of ways to solve your problem. Since these certificates are purely internal to your system, I think the best would be to generate your own root certificate and use it to sign end-entity certificates that meet your needs (for instance, by having IP addresses in them). This may sound daunting, but it’s pretty straightforward; I’d recommend Then you can use that root certificate when validating the server certificates you receive from your Cassandra cluster.

I think this is a fairly common problem with RPC frameworks: they don’t necessarily handle custom load balancing and certificate validation nicely together. For instance, in Boulder we use the gRPC framework, and we wound up having to implement our own Balancer in order to get the validation behavior we wanted:


This is for a public REST API, but let’s say I want to implement exactly the client-side load-balancing described here.

  1. retrieve the list of servers:
  2. have the client randomly select an IP from the list on each connection:
curl https://X.X.X.X/...

How is this?

--resolve <host:port:address>

Provide a custom address for a specific host and port pair. Using this, you can make the curl requests(s) use a
specified address and prevent the otherwise normally resolved address to be used. Consider it a sort of /etc/hosts
alternative provided on the command line. The port number should be the number used for the specific protocol the
host will be used for. It means you need several entries if you want to provide address for the same host but different
ports.

The provided address set by this option will be used even if -4, --ipv4 or -6, --ipv6 is set to make curl use another IP version.

This option can be used many times to add many host names to resolve.

Added in 7.21.3.

Try the request to the normal DNS name, but provide an alternative IP address via this option.

Thank you very much for the links @jsha! This is exactly what I am trying to solve and I will look at them, but the API is public, and I thought users could implement their own driver. However, if the solution is that it is mandatory to use our driver, that is fine.

If you're willing to accept an intermediate step where the client gets a list of servers and has to explicitly handle the result, you could provide the servers by name instead of IP. For instance:

$ curl

# Client randomly chooses server2:
$ curl

However, a more typical way to do this, for a public API where you don't control the clients, would be with round-robin DNS, which doesn't require special client behavior. You would set up your authoritative resolver to respond either with a random IP address from your pool, or with all IP addresses randomly sorted (under the assumption that clients will attempt the first IP first). This gives you less granular control, and is subject to caching at the client's recursive resolver. You can also mitigate the caching problem by setting TTL=0 on your responses.
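The shuffled-answer variant can be sketched in a few lines of Python (`dns_answer` is a hypothetical name; a real authoritative server would do this inside its response logic rather than in application code):

```python
import random

def dns_answer(pool, rng=random):
    """Return all A-record IPs in random order, on the assumption
    that clients will try the first entry first.  `pool` is the
    authoritative set of server IPs."""
    answer = list(pool)   # never mutate the authoritative pool
    rng.shuffle(answer)
    return answer

# A client that simply takes answer[0] ends up on a different server
# over time, spreading load without any special client behavior.
```

Because every response contains the full pool, clients that implement happy-eyeballs-style fallback can still reach the later entries if the first IP is down.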

@WinstonSmith Amazing, it works! Thank you :slight_smile: I will check, but by any chance would you know if this feature is available in JVM languages, Python, etc.?
