BlueCoat Not Trusting Let's Encrypt

I’m having trouble with people who are behind a BlueCoat Proxy not being able to access sites using Let’s Encrypt certificates.

An example site would be www.i7media.net

Users are getting an “invalid certificate” error in their browser. Upon inspecting the certificate, I notice that the Issuer information is different from that of the certificate actually installed on the site; for example, “O = Blue Coat SG900 Series”. The certification path shows “Not Found” for the issuer of the certificate.

While I know this isn’t an issue with Let’s Encrypt necessarily, I was wondering if anyone else has run into issues with BlueCoat and Let’s Encrypt.

To clarify, I am not using BlueCoat and I have no access to modify the BlueCoat device in question.

Is there a way to specify which root or intermediate CA is used for the certificates I create using Let’s Encrypt? Maybe that would help the Blue Coat device trust the certificate.

Thank you,
Joe

Hi @JMDAVIS,

I think what's happening is that this is a BlueCoat intercepting proxy that doesn't allow users to connect directly to HTTPS sites, but instead tries to proxy them. The design of HTTPS and the web PKI means that this should generate a certificate error in the user's browser, because the connection is not secure: it's being intercepted by the BlueCoat proxy. When the proxy operator controls the end-user devices, it can install a certificate on them to indicate that they should accept this interception and trust the BlueCoat certificate. But the general public's devices, and devices that haven't been specifically configured, will not do so.

However, that doesn't explain the observation that the devices are apparently not seeing the same error for other web sites. So, perhaps the device isn't trying to intercept connections to other sites?

It would be interesting to save the certificate associated with browsing to other sites from behind this proxy. One possibility is that it's a policy issue where the proxy is trying to block certain sites (hence returning its own certificate and some kind of "Blocked" HTML page, which can't be seen without accepting the associated certificate). Then receiving this certificate error would be symptomatic of accessing a site that the proxy doesn't permit, rather than necessarily a site that's using a particular certificate authority.
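For what it's worth, here's a rough Python sketch (just an illustration, not an official tool) of how you could save the certificate that's actually presented from behind the proxy. Verification is deliberately turned off so the proxy's own, untrusted certificate can still be retrieved; the hostname is the example site from this thread and the output filename is arbitrary. If the saved certificate's issuer names the Blue Coat device rather than Let's Encrypt, the connection is being intercepted.

```python
# Illustrative sketch only: fetch whatever certificate the network path
# actually presents for a site and save it for inspection. Verification
# is disabled on purpose so an intercepting proxy's certificate (which
# the client doesn't trust) can still be retrieved.
import socket
import ssl

HOST = "www.i7media.net"    # example site from this thread
OUT = "presented-cert.pem"  # arbitrary output filename

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der_cert = tls.getpeercert(binary_form=True)

pem_cert = ssl.DER_cert_to_PEM_cert(der_cert)
with open(OUT, "w") as f:
    f.write(pem_cert)

# Open the saved file in your OS certificate viewer (or any inspection
# tool) to see whose name appears as the issuer.
print(f"Saved presented certificate to {OUT}")
```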

Nope! Let's Encrypt currently only uses the Let's Encrypt Authority X3 intermediate for all issuance.
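If you want to double-check which intermediate actually signed your certificate, here's a small sketch that prints the subject and issuer of each certificate in a chain file. It assumes the third-party `cryptography` package is installed, and the chain-file path is just a typical certbot location, not necessarily yours.

```python
# Illustrative sketch: print subject/issuer for every certificate in a
# chain file so you can confirm which intermediate signed the leaf.
# Requires the third-party "cryptography" package; the path below is a
# typical certbot location and just an example.
from cryptography import x509

CHAIN_PATH = "/etc/letsencrypt/live/example.com/fullchain.pem"

with open(CHAIN_PATH, "rb") as f:
    pem_data = f.read()

# Split the bundle into individual PEM blocks and parse each one.
END = b"-----END CERTIFICATE-----"
blocks = [b + END + b"\n" for b in pem_data.split(END) if b.strip()]

for block in blocks:
    cert = x509.load_pem_x509_certificate(block)
    print("subject:", cert.subject.rfc4514_string())
    print("issuer: ", cert.issuer.rfc4514_string())
    print()
```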

@schoen,

Thank you for your quick response. I am not sure what to do with this at this point. Ordinarily, I would contact the IT department that's running the Blue Coat device, but it's a very large entity, and I don't think they're going to be all that eager to do anything about this.

I will post here if I am able to get anywhere with this.

Thanks,
Joe

SSL inspection breaks the client-server connection and splits it into two (client-proxy & proxy-server) connections.
I’ve checked the site from behind another large BlueCoat customer’s proxy (with SSL inspection enabled), and the site seems operational (at this time).
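One way to see that split directly is to compare certificate fingerprints, along the lines of this rough sketch (just an illustration; the hostname is the example site from this thread). Run it from behind the proxy and again from an uninspected connection; if the SHA-256 fingerprints differ, the proxy is presenting its own certificate.

```python
# Illustrative sketch: compute the SHA-256 fingerprint of the leaf
# certificate the client actually receives on a given network path.
import hashlib
import socket
import ssl

def presented_cert_fingerprint(host: str, port: int = 443) -> str:
    """Return the SHA-256 fingerprint of the presented leaf certificate.
    Verification is disabled on purpose so an intercepting proxy's
    certificate can still be examined."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    digest = hashlib.sha256(der).digest()
    return ":".join(f"{b:02X}" for b in digest)

# Compare the value printed from behind the proxy with the value
# printed from an uninspected network path.
print(presented_cert_fingerprint("www.i7media.net"))
```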

@rg305,

I’ve removed the reference I had. While I’ve not worked for that entity specifically, I’ve worked for others that are of the same scale and sensitivity. I don’t think I posted anything that would be a security issue, but it’s best to not mention them specifically, you’re right about that.

Thanks,
Joe

Hi. Post edit history is normally visible on this forum* – for example, click on the pencil icon at the top-right corner of rg305’s first post – but I’ve hidden the history for @JMDAVIS’s edited post.

* Except when edits are made very quickly.

Just out of curiosity, if you are in the EFF, why are there no articles on eff.org condemning intercepting proxies?

Do you think browser manufacturers should be required to display a warning when known intercepting proxy devices are decrypting HTTPS?

I first learned about these nasties here: https://www.grc.com/fingerprints.htm

I think there was a long thread about this on mozilla.dev.security.policy a few years ago. If I remember correctly, the conclusion was roughly that browser vendors do want end users to always know when their connections are being monitored or intercepted, but they don't think they can win an arms race against corporate IT departments that have full administrative control of the user's device and are trying to make the monitoring more clandestine, so they can't force those departments to allow a browser UI element that discloses it. (For example, the monitoring could also be performed from outside of the browser itself, like with a hardware or software keylogger or screen recorder, or maybe by patching a dynamically-linked TLS library to save copies of session keys into a file, or by patching the OS CSPRNG to produce predictable values from a seed known to the IT department.)
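As an aside, the "save copies of session keys into a file" technique doesn't necessarily require patching a library these days, since mainstream TLS stacks support key logging natively. Here's a toy Python illustration (the URL and filename are just examples, and this obviously isn't something to enable on a machine you care about): anyone holding the resulting key log plus a packet capture of the connection can decrypt it offline, e.g. in Wireshark.

```python
# Toy illustration of native session-key export (the "SSLKEYLOGFILE"
# mechanism). Requires Python 3.8+ built against a modern OpenSSL.
# Anyone with this key log plus a packet capture of the connection can
# decrypt the traffic offline.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.keylog_filename = "tls-session-keys.log"  # example filename

with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
    print(resp.status, len(resp.read()), "bytes fetched")

print("Session secrets appended to tls-session-keys.log")
```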

Also, there are non-interception use cases for custom browser-trusted CAs and it's currently difficult for browsers to clearly distinguish "custom CA for internal services" from "custom CA for interception of external services" in all cases (although there are some cases that they could distinguish and don't).

The strongest counterargument I'm aware of is that many environments never really change the defaults, so a UI indicator flagging a custom CA or a transparency/pinning violation might be left intact much of the time in intercepting environments.

I haven't followed the progress of this issue closely in a few years, so I'd be glad to be updated on what the browser vendors are really saying.
