A lot of SSL_do_handshake() failed errors in nginx logs

My domain is:

I ran this command:

I didn’t run a command, I checked my nginx error logs and noticed the below error being recorded (fairly regularly).

Further to this, my site’s nginx config (relating to SSL) is as follows:

server {
    #other config relating to the site here
    #location {} and all that fun stuff

    listen 443 ssl http2; # managed by Certbot
    #listen [::]:443 ssl http2 ipv6only=off;
    ssl_certificate /etc/letsencrypt/live/willstocks.co.uk/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/willstocks.co.uk/privkey.pem; # managed by Certbot
    #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

I have a feeling it relates to the ssl_ciphers, but I’m not 100% familiar with what should be here… As a matter of fact, I’m not 100% sure where this line came from :worried:

It produced this output:

[crit] 6048#6048: *4119 SSL_do_handshake() failed (SSL: error:10067066:elliptic curve routines:ec_GFp_simple_oct2point:invalid encoding error:1419C010:SSL routines:tls_process_cke_ecdhe:EC lib) while SSL handshaking, client: *ip address here*, server: 0.0.0.0:443

My web server is:

Nginx 1.15.8 (SSL termination + reverse proxy)

The operating system my web server runs on is:

Ubuntu 18.04.2

My hosting provider is:

N/A as far as I can tell - if it is relevant, DigitalOcean

I can login to a root shell on my machine:

Yes

I’m using a control panel to manage my site:

No control panel - just SSH

The version of my client is:

~# certbot --version
certbot 0.31.0

Which version of OpenSSL are you using?
[maybe you should upgrade to the latest version]

@rg305 please see below

~# openssl version -a
OpenSSL 1.1.1b  26 Feb 2019
built on: Thu Feb 28 08:51:51 2019 UTC
platform: debian-amd64
options:  bn(64,64) rc4(16x,int) des(int) blowfish(ptr)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-ZQLgGc/openssl-1.1.1b=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPADLOCK_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
OPENSSLDIR: "/usr/lib/ssl"
ENGINESDIR: "/usr/lib/x86_64-linux-gnu/engines-1.1"
Seeding source: os-specific

Nothing pending when running:

sudo apt-get update
sudo apt-get upgrade

And I can confirm that, per https://www.openssl.org/, I am running the latest production-ready version :slight_smile:

Is it possible it’s just down to the browser/TLS version? IIRC I’ve nuked support for TLS 1.1 and older. Also, looking at https://www.ssllabs.com/ssltest/analyze.html?d=willstocks.co.uk&s=2606%3A4700%3A30%3A0%3A0%3A0%3A681b%3Aac27 I only see:

Chrome 49 / XP SP3 Server sent fatal alert: handshake_failure

Everything else seems OK?
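For what it’s worth, here is a rough way to double-check that from the shell (a sketch only: it assumes hitting the origin directly on port 443 with Cloudflare paused, and an RSA certificate, which was Certbot’s default):

# Should be rejected if TLS 1.1 is really disabled on the server
openssl s_client -connect willstocks.co.uk:443 -servername willstocks.co.uk -tls1_1 </dev/null

# Should complete cleanly with a modern ECDHE suite over TLS 1.2
openssl s_client -connect willstocks.co.uk:443 -servername willstocks.co.uk \
    -tls1_2 -cipher ECDHE-RSA-AES256-GCM-SHA384 </dev/null

If the second handshake succeeds while the logged clients keep failing with the EC error, that would point at broken clients rather than the server-side cipher config.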

The site runs through CloudFlare.
Are you seeing the problem through CloudFlare?
(or are you hitting your server directly)

Sorry - forgot to disable Cloudflare while I ran that test! I’ll disable and will run again :slight_smile:

Yes, I use Cloudflare, however I’m seeing the error on my actual server itself - when hit directly (i.e. when I’m making changes to the site, I disable Cloudflare so I don’t get any weird caching/weirdness!)

You hid the source IP.
Is it always the same one (with the problem)?
Do you recognize the source IP?
If so, does that side show anything else in the logs or screen(shot)?

Hi @willstocks-tech

are you sure this isn't only the result of an SSLLabs test?

Check your website

and look in your logs to see whether the same error appears there.

SSLLabs deliberately tries a lot of protocol and cipher combinations, so some failed handshakes are expected during a test.

And check the IP / user agent to see whether it is a real user or a bot.
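For example, something like this can pull the user agent and any other activity for a given client out of the logs (a rough sketch: it assumes the default nginx combined log format and paths, and uses a placeholder IP):

# Placeholder IP taken from an error-log entry; substitute the real one
IP=198.51.100.7
# User agents this client sent (field 6 when splitting the combined log format on double quotes)
grep "$IP" /var/log/nginx/access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn
# Any other errors logged for the same client
grep "$IP" /var/log/nginx/error.log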

Hi @JuergenAuer

I can confirm that the errors were from a period when I was not doing any testing at all - I will, however, try to confirm whether it was a bot or not (the error log did not provide UA info)

@rg305 - I did strip it out, but can provide it if necessary? If I look back through older log files (not just today's) there are more of these errors, and they all have different IPs, so I'm inclined to think it's not a single user?

I assume it’s possibly either a bot or a service, as it must be bypassing Cloudflare to hit my server directly for SSL termination?

There are a lot of bots that use only the IP address, so Cloudflare isn't relevant.

If it is bypassing CloudFlare, then it must be hitting your IP directly.
Which means they are NOT using the FQDN.
This sounds very much like a bot [scanning or actively hacking].

If you know the CloudFlare IPs that are used, you can "whitelist" them and block all others.
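A rough sketch of that (assuming ufw is the firewall in use and that Cloudflare's published edge ranges at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6 are current) could look like:

# Allow HTTPS only from Cloudflare's published IPv4 and IPv6 ranges
# (the extra echo guards against a missing trailing newline between the two lists)
for range in $(curl -s https://www.cloudflare.com/ips-v4; echo; curl -s https://www.cloudflare.com/ips-v6); do
    sudo ufw allow proto tcp from "$range" to any port 443
done
# ...then drop direct hits on 443 from everyone else
# (note: this also blocks your own direct testing of the origin)
sudo ufw deny 443/tcp

ufw evaluates rules in order, so the specific allows above match before the final deny.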

It seems very possible that this problem is a problem with a client (whether a bot or a browser). I’m tempted to say that unless you know that any legitimate users are encountering it, you can ignore it—especially if mainstream testing tools (and mainstream browsers) seem to regard your site as working properly.

It could be that a bot or a browser is implementing one of the supported ciphersuites incorrectly for some reason.

I don't know for sure whether Cloudflare publishes this information either, but if they do, then this should also work well: your origin server would effectively be hidden from Internet-wide scans.


Is there true benefit in doing this, other than ensuring traffic is all going via Cloudflare? Also, are you able to provide any resource for accomplishing such a task, as it's not actually one I'm familiar with!

I haven't set up anything to use the IP directly, so it must just be some form of scraping/scanning?


@schoen

IP-wise though, right? Your "standard" bots such as Google, Bing etc. would all be using FQDN, so there's no risk of impacting any SEO?


@JuergenAuer

How would a bot get my IP if I'm using Cloudflare, as in theory my server IP is hidden (I "orange cloud" my server for all but a minute or so a month)?


For reference, here are a couple of the IPs I've seen over the last couple of days (a good few of them coming from Iran, based on https://www.ultratools.com/tools/ipWhoisLookupResult; there's a quick whois check after the list):

25/03:
89.199.4.4
198.36.23.252
172.80.245.233
178.250.251.27
5.218.44.136
89.199.202.123
5.218.54.231

26/03:
91.133.154.35
185.87.32.198
198.36.23.252
79.127.45.13
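The same sort of lookup can also be done from a shell, assuming the whois package is installed:

# Show the registry's country / network-owner fields for one of the addresses above
whois 89.199.4.4 | grep -iE 'country|netname|org-name'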

I'm inclined to go with the "ignore it" option, unless you guys advise restricting all my traffic to Cloudflare's IPs?

Not really. If you rely on particular Cloudflare (security) features, you might want to make it impossible for them to be bypassed -- but IP address blocking doesn't entirely accomplish that, and regardless anyone who knows your IP can always DDoS it.

Right.

You should probably make sure that the web server doesn't have other virtual hosts -- or the default virtual host -- serving the same site.
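One quick way to review that (a sketch, assuming root access and nginx 1.9.2+, which supports the config-dump flag):

# Dump the full effective nginx config and list every listen / server_name line,
# to spot a default or catch-all server block that still serves the site
sudo nginx -T 2>/dev/null | grep -E '^\s*(listen|server_name)'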

Well, they might happen to access it during that minute. :smile:

Especially if you didn't always use Cloudflare, some services will have archived the association between that IP and that domain, and might continue to obnoxiously scan it for whatever reason.

And there are only 4 billion IPv4 addresses; they all get HTTP requests, whether or not they're known to be associated with particular domains.


IP addresses are not hidden: a scanner can simply start with x.x.0.0, go through x.x.0.255, then continue with x.x.1.0, and so on.

I see a lot of requests to my IP with PHP / WordPress etc. checks, although I have never installed PHP or WordPress.

So you can ignore these requests.

My blog sometimes gets requests like

/de/tags/Geometric-Light-Projection'A=0

I checked the IP: it's always from the same country.


Not sure which response to pick as the resolving response now!

Thanks for all your help/responses, guys - I'm thinking about just ignoring it, and if it keeps nagging at me for an extended period then I'll look at ensuring all traffic comes in via CF.


In the past, scans with tools like nmap commonly took a couple of days or so to contact every IPv4 address, while ZMap can do the same in minutes:

ZMap is a fast single packet network scanner designed for Internet-wide network surveys. On a computer with a gigabit connection, ZMap can scan the entire public IPv4 address space in under 45 minutes. With a 10gigE connection and PF_RING, ZMap can scan the IPv4 address space in 5 minutes.

My updated joke about this: "An American thinks 100 years is a long time, an Englishman thinks 100 miles is a long distance, a human thinks 2³³ individuals is a large population."

