Looking for sample nginx configs that use default_server

As part of our preparation for the 0.9.0 release (which should ship with some support for autoconfiguring nginx servers), we’re trying to test Certbot automatically against a variety of sample configurations. It would be helpful to get a few interesting configurations contributed (although we already have a library mostly based on the sample configurations on the nginx home page).

However, one thing that would be particularly useful to test, and that we don’t have, is realistic configurations where one server uses the default_server option (and other servers exist but aren’t so marked). If you have such a configuration and would like to contribute it, it would be great if you could send it to me. Note that it will become part of our public code base, so if there are hostnames or filenames that you don’t want to be public, you could simply change them to example.com or /var/log or whatever.

If you have an nginx config that you think is particularly unusual or interesting and that you’d like to become part of our test suite, you’re also welcome to send it in. The goal of these tests is to confirm that each version of Certbot can successfully modify these configuration files without human intervention, and that nginx can still parse and use the resulting modified config files.
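For the second check, nginx’s own config test is handy; a sketch (the path is just an example):

```shell
# Ask nginx to parse and validate a modified config without applying it;
# the exit status is nonzero if the file is invalid.
nginx -t -c /etc/nginx/nginx.conf
```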


This is the config we use for all of our sites:


There are a couple of variable placeholders in there, but you should be able to get the idea.

(Sorry, this is really off-topic, but I can’t think of a better place to put it.)

Using a virtual host fall-back (default_server, or its apache2 equivalent, for example) increases the risk of virtual host confusion attacks.

One way to prevent these attacks is to make sure the web server only answers for the domains it is supposed to serve. (If it can answer for a domain it isn’t supposed to serve, but that domain is covered by the certificate presented in the default vhost, then it may be vulnerable.)
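In nginx, one way to express that restriction is a catch-all default_server that refuses unmatched hostnames outright; a minimal sketch (the certificate paths are hypothetical):

```nginx
# Catch-all: any request whose Host header matches no other server_name
# lands here and the connection is closed without a response (status 444).
server {
    listen 80 default_server;
    listen 443 ssl default_server;

    # A placeholder cert is required for the :443 listener to accept
    # connections at all; it is never used for legitimately configured names.
    ssl_certificate     /etc/nginx/ssl/placeholder.crt;
    ssl_certificate_key /etc/nginx/ssl/placeholder.key;

    return 444;
}
```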




In my opinion, since these attacks/vulnerabilities aren’t widely known, it would be an improvement if Certbot warned about these potentially vulnerable configurations.

Here's my nginx setup.


location /.well-known/acme-challenge {
    alias /var/www/letsencrypt;
    location /.well-known/acme-challenge {
        #header seems unnecessary - it works without it in default host
        #add_header Content-Type application/jose+json;
        try_files $uri $uri/ =444;
    }
}


server {
    listen 80 default_server;
    server_name _;
    access_log /var/log/nginx/access_log main;
    error_log /var/log/nginx/error_log info;
    root /var/www/letsencrypt;
    try_files $uri $uri/ =444;
}


server {
    listen [::]:80 default_server;
    server_name _;
    access_log /var/log/nginx/access_log main;
    error_log /var/log/nginx/error_log info;
    root /var/www/letsencrypt;
    try_files $uri $uri/ =444;
}


server {
    listen [hidden]:80;
    server_name hidden.net;
    include templates/letsencrypt.inc;
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen [hidden]:443 ssl http2;
    server_name hidden.net;
    ssl_certificate /etc/letsencrypt/live/hidden.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hidden.net/privkey.pem;
    root /dev/null;
    include templates/letsencrypt.inc;
    location / {
        proxy_buffering off;
        proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

nginx.conf includes *.conf.
/var/www/letsencrypt contains an empty .well-known directory.

The reason default.conf and default6.conf are separate server blocks is that, at some point while I was setting this up, some nginx version had weird bugs when both the IPv4 and IPv6 default_server listeners were in the same block.

When I request/renew certs, I use --webroot -w /var/www/letsencrypt.
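For reference, the full invocation looks roughly like this (the domain name is a placeholder):

```shell
# Webroot authentication: Certbot writes challenge files under the -w path,
# which the default_server blocks above serve from /var/www/letsencrypt.
certbot certonly --webroot -w /var/www/letsencrypt -d example.com
```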

Thanks to everyone who contributed these. I am going to add them to our test corpus but haven’t done so yet. The nginx configuration language and what you can do with it can get pretty complex.

We did put out our 0.9.0 Certbot release this morning (@bmw is about to make a forum post announcing it). It includes officially-enabled nginx integration for the first time via --nginx. Like the Apache plugin, this will attempt to modify your web server configuration files directly in the course of obtaining (if you use TLS-SNI authentication) and installing your certs.

Since this is a totally new feature, it would be great to have people try it out and give us feedback about possible bugs. I’m sure there will be nginx configurations that aren’t handled properly yet, but we’re optimistic that the integration will work for most users in this version.

You can get 0.9.0 right away if you install it yourself using pip or certbot-auto rather than with an OS package.

Hope it’s not too late to chip in. My project basically turns NGINX into a full-on traffic manager, so I make use of a default_server config in the main server declaration and issue a 444 for any inbound hostnames that don’t match a defined config (I have no easy way to know ahead of time which hostnames are/aren’t valid, so this is a catch-all). This means I can effectively drop connections when, for example, an IaaS vendor IP that was previously used by another company has been reused for one of my traffic manager servers, so I won’t serve our content on their hostname.

Here’s an excerpt:

user tmdaemon;
worker_processes auto;
worker_priority -15;
worker_cpu_affinity auto;
worker_rlimit_nofile 50000;

events {
    # worker_connections benefits from a large value in that it reduces error counts
    worker_connections 20000;
    multi_accept on;
}

http {
	# Default includes
	include       /etc/nginx/current/mime.types;
	default_type  application/octet-stream;

	include /etc/nginx/current/proxy.conf;

	# Tuning options - these are mainly quite project-specific
	server_tokens off;
	keepalive_requests 1024;
	keepalive_timeout 120s 120s;
	sendfile on;
	tcp_nodelay on;
	tcp_nopush on;
	client_header_timeout 5s;
	open_file_cache max=16384 inactive=600s;
	open_file_cache_valid 600s;
	open_file_cache_min_uses 0;
	open_file_cache_errors on;
	output_buffers 64 128k;
	aio on;
	directio 512;

	# For small files and heavy load, this gives ~5-6x greater throughput (avoids swamping workers with one request)
	postpone_output 0;
	reset_timedout_connection on;
	send_timeout 3s;
	sendfile_max_chunk 1m;
	large_client_header_buffers 8 8k;
	connection_pool_size 4096;

	# DNS resolver (for pool nodes AKA upstream servers and also for OCSP)
	resolver_timeout 10s;

	# client_body_buffer_size - Sets buffer size for reading client request body. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file
	client_body_buffer_size 8k;
	client_header_buffer_size 8k;

	# client_max_body_size - Sets the maximum allowed size of the client request body, specified in the “Content-Length” request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client
	client_max_body_size 8m;

	# We need to increase the hash bucket size as the R53 names are long!
	server_names_hash_bucket_size 128;

# Logging
	# NOTE: $host may need to in fact be $hostname
	log_format standard '"$remote_addr" "$time_iso8601" "$request_method" "$scheme" "$host" "$request_uri" "$server_protocol" "$status" "$bytes_sent" "$http_referer" "$http_user_agent" "$request_time" "$request_time" "$upstream_cache_status" "-" "-" "$sent_http_vary" "$sent_http_cache_control" "$upstream_status" "$ssl_protocol" "$ssl_cipher" "$ssl_server_name" "$ssl_session_reused" "$tcpinfo_rtt"';
	access_log /var/log/nginx/main-access.log standard;
	error_log /var/log/nginx/main-error.log debug;

# GeoIP config
	# This appears to need an absolute path, despite the docs suggesting it doesn't.
	# Path is defined in bake script so changes to that will break this
	geoip_country /var/lib/GeoIP/GeoLiteCountry.dat;
	geoip_city /var/lib/GeoIP/GeoLiteCity.dat;

# Global-scope HTTP Headers
	# Request headers
	include /etc/nginx/current/standard-request-headers.conf;

	# Response headers - we need these here for defaults, in case no listener/route is selected
	include /etc/nginx/current/standard-response-headers.conf;

# Proxy global configuration
	proxy_cache_path /mnt/cache_data levels=1:2 use_temp_path=on keys_zone=shared_cache:32m;
	proxy_temp_path /mnt/cache_tmp;

	# SSL session cache - for server-side TLS sessions
	ssl_session_cache shared:global_ssl_cache:128m;

	# Generic/common server for listen port configs
	server {
		listen *:80 deferred  so_keepalive=120s:30s:20 reuseport default_server;
		listen *:443 ssl http2 deferred so_keepalive=120s:30s:20 reuseport default_server;

		# This cert & key will never actually be used but are needed to allow the :443 operation - without them the connection will be closed
		ssl_certificate     /etc/nginx/current/tls/certs/default.crt;
		ssl_certificate_key /etc/nginx/current/tls/private/default.key;

		# Return an HTTP 444 for any unmatched hostnames
		return 444;
	}

	# Top-level includes for pool and listeners. Listeners then include relevant routes, pools incorporate healthchecks

	include /etc/nginx/current/pools/<pool-group-a>/*.conf;
	include /etc/nginx/current/pools/<pool-group-n>/*.conf;
	include /etc/nginx/current/pools/<pool-group-c>/*.conf;

	include /etc/nginx/current/listeners/*.conf;

	# include naxsi rules
	include /etc/nginx/current/naxsi.rules;
}

I have stripped out most of the bits which are custom, as they’re not of great interest, I guess. For reference, each listener conf file contains one or more server declarations which listen on TCP :80 or :443 with a defined hostname; these listeners don’t use default_server (obviously :-)).

Hope that helps

