Malware test site: Considered an unsafe domain by a third-party API


#1

Hello,

Our website just triggered the “considered an unsafe domain by a third-party API” rejection.
In this case, that is correct, broadly speaking.

Is it possible to request an exemption from the API result? Issuance Policy sounds like the right area.

The denied certificate is for https://malware.wicar.org/, which hosts safe test “undesirable payloads” for http://wicar.org/ - a website designed to be the web equivalent of the antivirus EICAR test file, hence the W :wink:

Examples include old browser, Flash and Java exploits, and JavaScript cryptocurrency mining.

I don’t know which APIs are checked, but I know Google recently flagged it, and my request to delist it was ignored.

https://transparencyreport.google.com/safe-browsing/search?url=wicar.org
https://transparencyreport.google.com/safe-browsing/search?url=malware.wicar.org

Technically, wicar.org itself is safe and should not be blocked by Google, while malware.wicar.org matters less, as it does indeed host dangerous content.

The purpose of WICAR is to allow safe testing of network defences. For example, a corporation might have firewall IPS signatures, then an AV-scanning HTTP proxy, then a content filter, then desktop AV. Users can visit the WICAR website to see where their security controls “see” the threat and choose to block it. Or, if all of them fail, the test payload may actually reach a desktop and execute. Given that possibility, it is desirable to have safe WICAR payloads that merely execute calc.exe, rather than testing against a public blacklist of actually infected websites and e.g. CryptoLocker ransomware, which might infect the organisation if newly deployed security technology doesn’t work as expected.

I might have to chase Google to at least resolve the wicar.org vs malware.wicar.org block.

However, the malware host must continue to host fake payloads, and SSL via Let’s Encrypt was added so that visitors can also test their SSL-inspection anti-malware defences. SSL/TLS is often abused to deliver browser attacks that sneak past IDS/IPS/AV appliances unable to strip SSL for analysis, and (thanks to Let’s Encrypt!) encrypted delivery is becoming the norm.

Any help? Thanks!
-Patrick


#2

Hi,

Checking the previous history on the same issue… I don’t think there’s any way to bypass this API check - at least there hasn’t been before. https://community.letsencrypt.org/search?q=safe%20browsing

Please also see The CA's Role in Fighting Phishing and Malware

Quoting a paragraph from old posts (not an official opinion, but I think it makes sense):

Although you can’t request a certificate from Let’s Encrypt (at least not until your domain is cleared from Google Safe Browsing), you could (might) still get a certificate from commercial CAs…

Thank you


#3

Hi, I went ahead and flagged the post for attention by staff. Hopefully one of the LE members who posts here can offer an official or even unofficial response.


#4

That’s a very interesting policy question which we should bring to the attention of @lestaff.

(Thanks for the flag, @motoko.)

If anyone isn’t familiar with the EICAR test mechanism, see
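For anyone who wants to try the mechanism locally: the standard EICAR test file is a 68-byte ASCII string that every mainstream AV engine is expected to detect, even though it is completely harmless. A minimal sketch in Python (the filename is arbitrary):

```python
# The standard 68-byte EICAR anti-virus test string. It is harmless,
# but any functioning desktop AV should flag or quarantine a file
# containing it as its first bytes.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

# The EICAR convention fixes the test string at exactly 68 bytes.
assert len(EICAR) == 68

# Writing it to disk is what typically triggers on-access scanners.
with open("eicar_test.txt", "w") as f:
    f.write(EICAR)
```

WICAR applies the same idea to web-delivered threats: payloads that exercise the detection path without carrying a real exploit.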


#5

Hi all,

Many thanks for the responses - first post, but obviously a great community!

@stevenzhu I agree with all points. Happy to use a commercial CA too - I assumed this eventuality and I’m sure they won’t care (though it’s good that LE does some basic checks!). But the inability to bypass the check evidently produces unusual edge cases such as this one. I am also happy to let the certificate expire - but my issue is that an organisation might have a strict browser policy under which users cannot bypass an expired cert, yet still want to test their anti-malware technology. Sysadmins telling management “we are immune” merely because of an expired cert doesn’t validate that in-line appliances / signatures / heuristics are working.

Thanks for the flag @motoko and @schoen, I wasn’t aware we can ping @lestaff :slight_smile:

I did follow up with Google and am awaiting a human response, though I’m doubtful. Safe Browsing marks them as “Send visitors to harmful websites”, which is amusing: the parent domain is okay yet receives the same verdict as the subdomain, which actually hosts the questionable content. Seems like a failure of automated classification.

An alternative solution might be to register an additional domain, so that the clean and unclean sites are distinct and not flagged by the wildcard inheritance that appears to have occurred here.

The other question is whether EICAR itself should be flagged, versus WICAR. EICAR is well known globally, but shouldn’t working AV and Googlebot still list it as a problem? I suspect it is exempted manually by vendors.

FWIW, wicar.org being blocked is a good thing™. But it has taken years for Google / Chrome / AV to catch up. When it was a newly registered URL that hadn’t yet been classified, it was surprising/worrying how many desktop AV solutions either didn’t notice at all or only picked up the HTML files once written to disk. For large corporations protecting thousands of users, that is the last thing you want to see after tens of millions have been invested in cutting-edge packet inspection and in-line appliances. It demonstrated that HTTP(S) defence is primarily URL or domain blacklisting rather than malicious content inspection, and won’t protect in a real-world scenario against a skilled adversary using selective new domain registration to achieve compromise.

Thanks,
-Patrick


#6

Very interesting case @aushack! Thanks for bringing it to our attention. Presently there is no mechanism in Boulder (the Let’s Encrypt CA software) to exempt domains from the Google Safe Browsing check. I think the bar to justify building this into Boulder will be pretty high. We’re generally nervous about features that aren’t broadly applicable (this is a pretty narrow corner case) or that change decisions in the issuance pipeline.
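For context on what such a check involves: a lookup against the Google Safe Browsing v4 API is essentially a JSON POST to the `threatMatches:find` endpoint. A minimal sketch in Python - the client id/version are placeholder values, and this only builds the request body rather than sending it (a real call needs an API key):

```python
import json


def gsb_lookup_payload(url: str) -> dict:
    """Build a Google Safe Browsing v4 threatMatches:find request body.

    A client would POST this as JSON to
    https://safebrowsing.googleapis.com/v4/threatMatches:find?key=API_KEY
    An empty response body means no match; otherwise the matched
    threat entries are returned.
    """
    return {
        "client": {
            # Placeholder identifiers - use your own registered client.
            "clientId": "example-client",
            "clientVersion": "1.0",
        },
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }


body = json.dumps(gsb_lookup_payload("https://malware.wicar.org/"))
```

The point being that the check is a simple match/no-match lookup - there is no per-domain override field a CA could set on its side.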

@aushack Do you think there is value in having wicar.org considered a true positive by Google Safe Browsing, or is the challenge simply finding someone to get it removed? It seems to me that it’s perhaps better to work on getting GSB to consider this a benign website and let subsequent systems make a determination based on the content of the site. If you agree, we can try to work together to escalate this with a human being inside of Google. Out of curiosity, when did you submit your follow-up with Google? Has it gone without a reply for more than a week?

I’ll discuss this case with the other Boulder engineers this afternoon & see what other folks think.

Thanks!


#7

The consensus here was that this should be addressed by GSB. If that doesn’t work out, I think your best path forward will be acquiring a certificate from a CA that has the staffing/process in place to handle exceptional circumstances with manual overrides. I know that’s an unfortunate outcome, so I’m hopeful that Google Safe Browsing will consider your request to be whitelisted.


#8

Hi @cpu,

Thanks for that. Yes I can see your point.

I originally appealed the GSB block via the automated form, with details, but the appeal was rejected 24-48 hours later. I suspect they maybe didn’t read / understand it and so maintained the block. I’m sure the GSB code just checks whether the URLs still return 200 with an unchanged hash, and defaults to maintaining the block.
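Purely to illustrate that guess (this is speculation, not Google’s actual code): an automated re-review that only checks “still reachable and unchanged” might look something like this sketch, with the fetch left to the caller:

```python
import hashlib


def keep_block(status_code: int, body: bytes, flagged_sha256: str) -> bool:
    """Speculative sketch of a naive automated re-review.

    If the flagged URL still returns HTTP 200 and its content hash
    matches what was originally flagged, maintain the block. Such a
    reviewer never reconsiders *why* the content exists, so a test
    site that keeps serving its payloads stays blocked forever.
    """
    return status_code == 200 and hashlib.sha256(body).hexdigest() == flagged_sha256
```

Under that logic, a delisting request for a site whose whole purpose is to keep serving the flagged payloads can never succeed without human review.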

I sent an email to Google as well, but no response yet. If you know someone that is willing to take a few minutes to actually consider the situation, that would be really great.

Thanks,
-Patrick


#9

@cpu bump?

Is there a DM/PM so we can ask someone from Google Safe Browsing about it?


#10

Hi @aushack, apologies for the delay in response. I’ve been out of town for the past few days.

I’m going to see if I have a contact I can reach out to and will follow up with you via DM.

Is it safe to assume you haven’t heard anything back from the contact form you filled out previously?


#11

Bump. Many thanks, sorry about the delay.

I shall look into this and report back, should it be of value to others.