Google’s Safe Browsing FAQ lists three types of sites which receive advisories: Phishing, Malware, and Unwanted Software. Two of the three types provide an appeals/delisting process, but “Unwanted Software” does not. However, it may be that LE’s proposed use of the Safe Browsing API is only looking at the first two types.
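For reference, those categories map onto the threat types in the current Safe Browsing Lookup API (v4), which a client queries by POSTing a JSON body to `https://safebrowsing.googleapis.com/v4/threatMatches:find?key=API_KEY`. A minimal sketch of building such a request body (the client ID and URL are placeholders, not anything LE has published):

```python
import json

def build_lookup_request(urls):
    """Build a Safe Browsing Lookup API (v4) threatMatches:find body."""
    return {
        "client": {"clientId": "example-ca", "clientVersion": "1.0"},
        "threatInfo": {
            # MALWARE and SOCIAL_ENGINEERING (phishing) have appeal
            # processes per the FAQ; UNWANTED_SOFTWARE does not.
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING",
                            "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["http://example.com/"])
print(json.dumps(body, indent=2))
```

An empty `matches` field in the response means none of the submitted URLs are currently listed.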
I think it is not the job of a CA (especially for a DV certificate) to judge how a certificate will be used. It doesn’t matter whether it is LE, Google, or anybody else. People shouldn’t rely on this to tell how genuine a site is. In the short term it sounds like a good idea… but in the long term it is not.
As TLS becomes as pervasive as I think we all anticipate, the UX metaphor for it should probably change quite a bit. Perhaps it would be inverted, and browsers would only show a message if you’re not encrypted. Regardless, I think the future is going to look quite a bit different than the present, and looking forward, we should not be so hung up on the “lock” metaphor.
I agree the UI metaphor needs to change. I’d propose browser vendors do something like below.
- Extended Validation certs: Green lock
- Let’s Encrypt generated/Standard certs: Gray padlock (or nothing at all)
- No encryption or broken certs: Red warning
I don’t think the CA should really be doing anything but domain validation. Why not just have the browser vendors subscribe to Google’s Safe Browsing (or whatever service) if they choose?
I can see an issue with using the Google Safe Browsing API: it might discourage websites from switching to HTTPS, because they would be left inaccessible if wrongly flagged. Users can bypass the Safe Browsing warning, but they won’t be able to bypass a certificate error if the site is using HSTS.
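To make the HSTS point concrete: HSTS (RFC 6797) is just a `Strict-Transport-Security` response header, and once a browser has cached it, certificate errors for that host become hard failures the user cannot click through for the duration of `max-age`. A minimal sketch of a handler that sends the header (the one-year `max-age` is an example value; in reality the header is only honored when received over HTTPS):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    """Toy handler that sends an HSTS policy with every response."""

    def do_GET(self):
        self.send_response(200)
        # One year, applied to this host and all of its subdomains.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello over TLS\n")

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

This is why a wrongly flagged site that has deployed HSTS would be completely unreachable rather than reachable behind a dismissible warning.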
I agreed with this blog post throughout until the conclusion, which came out of nowhere and surprised me.
I can’t say I support this decision. But if the CA industry needs more time to wean itself off of security theatre practices, I suppose LE will have to play along for the time being.
I’m fairly sure sites have been wrongly blacklisted by Google in the past. Moreover, it seems relevant to point out that, as far as I’m aware, browsers check these blacklists anyway - which raises the question of why Let’s Encrypt needs to check them as well. Optionally performing these checks in the browser seems the right place to do this sort of thing.
That is a serious issue. I suggest checking Safe Browsing only when a domain is new to Let’s Encrypt; there is no reason to check an old domain.
I do agree with a change, but I don’t agree with automatically triggering all sorts of warnings on unencrypted sites, since a lot of hosters don’t offer HTTPS or don’t allow you to set a cert, leaving you with either plain HTTP or an invalid cert. I know that from a friend of mine.
Unless everyone can get HTTPS at no extra cost, there should be no huge warning. There should maybe be a small speech-bubble-like warning, similar to when Firefox wants you to confirm an add-on install, but only on HTTP pages with a password field, because a blog doesn’t really need HTTPS, especially if the admins are the only ones who can log in.
Unless that domain was recently compromised.
The Google malware and phishing check is a good starting point but shouldn’t be absolute. If a domain is marked suspicious by one or multiple sources, then don’t reject it but mark it for review.
Also, have you considered the fact that it’s really easy to register a new domain, get a certificate, and then put malware on the domain? Currently certificates are valid for 90 days, more than enough time to do some damage. Unless you’re planning on daily checks of all the domains, the whole malware check is easily bypassed.
As long as you’re not offering extended validation certificates I don’t see any problem. The purpose of a standard certificate is encryption; the purpose of an extended certificate is establishing identity.
The purpose of a “standard” certificate (Domain Validation certificate I think is what you are referring to there), is to validate that the server being accessed is under the control of the specified domain, and to encrypt the contents of communication while in transit between the client and server.
EV certificates do the same but in addition include identity of entity operating the domain.
The problem with MiCRo’s argument is that every certificate (DV, OV, or EV) is branded with the name of the CA. If a certificate is used for illegal purposes, it could undermine the reputation of the CA. For example, Netcraft has called out Comodo for providing about 3/4 of the certificates used on phishing sites, half of which came via CloudFlare.
If the CA is not prudent with anti-fraud measures then it could be in the next Netcraft report, or have angry comments and ratings placed on Web of Trust, etc.
I’ve been watching this space lately, as I am starting to consider getting Let’s Encrypt for my own domain, and just want assurances that I can trust the service.
LE issues only DV certificates, and that’s the only thing they should check: ownership of a domain. Everything else should really not be a CA task, nor should any trust be lost when a certificate is used for phishing. CAs are the wrong party here; all these checks should be client-only.
Good policy and explanation. I feel like it’s an appropriate measure that doesn’t bog down the effort.
As a research opportunity, consider tracking malware status for domains over time. Who makes it onto a malware list after a cert is issued, and who falls off? And in which malware lists?
At the very least it would be interesting, but it could prove useful for everyone, since LE does not represent the typical Google malware API user. I also doubt that you’ll find a better service than Google’s, but having the numbers to back up the decision would be nice.
Time permitting… If this distracts from the mission, skip it/save it for later.
Please stick to Domain Validation. You cannot assure the content of sites that are compromised, for example.
Using Google’s Safe Browsing API is not the best approach either.
Let’s get the world on SSL/TLS, then tackle the problem of malware/phishing attacks.
While I agree with your posted draft policy to check with Google before issuing a domain certificate, and while I understand your reasoning that CAs should not be in the business of certifying domains or websites as being free of phishing and malware, I think you must have a customer interface for complaining and checking on complaints. Verified complaints must lead to revocations and other actions. The interface must also support creating a trusted user category to help with policing. By soliciting feedback from users, and by elevating some users to trusted roles, much of the burden of policing can be lifted from Let’s Encrypt staff and handled by the users themselves.

Most of the cost of such a scheme will be in doing a really good design for this complaint and management interface, but again the users can be involved in reviewing the design, much like Internet RFCs are reviewed, although perhaps less formally. I would suggest having an initial design for this functionality, with a lifetime of perhaps a year, and an overlapping design effort for a second-generation interface that would replace the first. My reasoning is that all significant software designs need at least a second version to fix the limitations of the first. It was Don Knuth, the computer scientist, who suggested rewriting large systems after they had been in use (at least for alpha testing) for some time.
In general, the trusted and expert user community will probably start small, but within a year or two may become quite large. This is another reason why a planned redesign will likely work well.
I hope these suggestions are useful to stimulate thinking, planning, and policy.
I’m not sure that Let’s Encrypt necessarily intends to offer users any warranty that the sites that receive certs are trustworthy or not malicious. Part of the idea that I took from Josh’s post, and from other related discussions that I’ve been in, is that the idea that a CA is “vouching for” a site or saying that the site is “trustworthy” is a layering confusion, and one that, if it persists, could permanently prevent us from having a 100% encrypted Internet.
That doesn’t mean that there’s no such role to be performed as telling people about whether sites are trustworthy and whether sites are likely to harm them, but if people expect that function to be in the same service as TLS name/key bindings, we may not get that much ubiquity and automation for the name/key binding because human beings are getting put back in the loop.
This might be a reason that Let’s Encrypt may not want to have a brand in the same way that some other CAs have.
Yes, CAs who issue EV certs certainly check this. However, as LE obviously does not issue EV certs, this is out of scope for the article and this thread. (This may also be why it was not mentioned.)
Major browser vendors already consider this. For example Mozilla has already simplified the GUI in Firefox 42 a bit:
Chrome/Chromium considers this too: https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure
@rugk I mean: because LE doesn’t issue EV certs, EV is always available for those companies and organisations which need it, since EV certificates can’t be phished.
That is why it would be good if HTTP will be marked as insecure as developers of two browsers have already suggested: Chrome/Chromium - (‘Marking HTTP As Non-Secure’) and Firefox (‘Deprecating Non-Secure HTTP’).
And HTTPS as the new normal.
As soon as DNSSEC and DANE are fully supported, this won’t be much of a problem, but until then the CAs are a problem, because any CA can issue a certificate for anyone, and HPKP relies on the user having already visited the page.
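To make the HPKP point concrete: a pin is just the base64 of the SHA-256 hash of a certificate’s DER-encoded SubjectPublicKeyInfo (RFC 7469), and the browser only learns it on a first successful visit, which is exactly the trust-on-first-use weakness mentioned above. A minimal sketch (the SPKI bytes here are a placeholder, not a real key):

```python
import base64
import hashlib

def hpkp_pin(spki_der: bytes) -> str:
    """Compute a pin-sha256 value as defined in RFC 7469:
    base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes standing in for a real certificate's SPKI.
pin = hpkp_pin(b"placeholder-spki-bytes")

# The server would then send something like (a backup pin is required
# in practice so a lost key doesn't brick the site):
#   Public-Key-Pins: pin-sha256="<pin>"; pin-sha256="<backup>"; max-age=5184000
print(pin)
```

Until the header has been seen once over a valid connection, a mis-issued certificate from any trusted CA is accepted, which is the gap DNSSEC/DANE would close.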
But HTTP should only trigger a warning if there are actual forms on the page, because simple information pages shouldn’t need to involve themselves with “complex” stuff like certificates…