Compared to PHP & MySQL, Nginx needs the least tweaking. Still, to get the most out of Nginx, you can tweak a few directives in /etc/nginx/nginx.conf.
This is very important. It controls the number of worker processes Nginx runs. A good rule of thumb:
worker_processes = number of processors in your system.
To find out how many processors you have on your server, run the following command:
grep processor /proc/cpuinfo | wc -l
It will display a number only.
Set that number as the value for worker_processes.
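The grep output plugs straight into the main (top-level) context of nginx.conf. A minimal sketch, where 4 is just an example count:

```nginx
# main (top-level) context of /etc/nginx/nginx.conf
worker_processes 4;       # the number printed by the grep command above
# worker_processes auto;  # newer Nginx versions can detect the core count themselves
```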
If you are running very high-traffic sites, it's better to increase the value of
worker_connections. The default is 768.
Theoretically, nginx can handle
max clients = worker_processes * worker_connections
worker_connections = 10240
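In the stock nginx.conf this directive sits inside the events block. A sketch with the value suggested above:

```nginx
events {
    worker_connections 10240;  # default is 768
}
# theoretical capacity: max clients = worker_processes * worker_connections
# e.g. 4 workers * 10240 connections = 40960 clients
```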
Increase the number of files that can be opened by worker processes.
This directive is not present by default. You can add it in
/etc/nginx/nginx.conf in the main section (below worker_processes).
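A minimal sketch of adding it, assuming the directive meant here is worker_rlimit_nofile (the value is illustrative, and it should stay within the OS hard limit shown by `ulimit -Hn`):

```nginx
# main context of /etc/nginx/nginx.conf, below worker_processes
worker_rlimit_nofile 100000;  # example value; keep within the OS hard limit
```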
I have come across many definitions for this one. Apart from traffic, the response time of a FastCGI-backed application is a factor to consider while tweaking this directive. Nginx's compiled-in default is 75 seconds, though some distro configs ship 65.
keepalive_timeout = 30s
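In actual Nginx syntax there is no equals sign; the directive usually goes in the http block (it is also valid in server and location blocks):

```nginx
http {
    keepalive_timeout 30s;  # default is 75s
}
```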
The following two directives rarely need tweaking. When the time is right, Nginx itself will tell you to make a change! 😉
Nginx will throw a friendly error like below:
Reloading nginx configuration: nginx: [emerg] could not build the server_names_hash, you should increase either server_names_hash_max_size: 512 or server_names_hash_bucket_size: 64
When you have a domain name longer than 64 characters, you will need to change it.
By default, its value is 64. If you have a domain 80 characters long, do not set it to 80. Instead, use the next power of 2: 128, 256, and so on.
Its default value is 512. You will likely need to change it if you are hosting hundreds of sites on your server.
As the Nginx error suggests, you can change either server_names_hash_max_size or
server_names_hash_bucket_size to accommodate a large number of sites, but I prefer keeping
server_names_hash_bucket_size as it is and increasing
server_names_hash_max_size in powers of 2 until the error disappears.
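When the [emerg] error shows up, the fix sketched above lands in the http block; 8192 here is just an example value:

```nginx
http {
    server_names_hash_max_size 8192;     # default 512; double until the error disappears
    # server_names_hash_bucket_size 64;  # left at its default, per the preference above
}
```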
On a server where we host 300+ sites, we needed to change it to 8192!
I used a trick to find the correct size, using the following command:
ls /etc/nginx/sites-available/ | wc -c
The above command lists the sites' names, which I pipe to the
wc command to get the total number of characters all
server_name values use together. In my case, the command returned 7414, for which the next power of 2 is 8192.
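As a sketch, the rounding-up step can be scripted too. The 7414 below stands in for the `wc -c` output from the article, so replace it with the output of the `ls ... | wc -c` command above:

```shell
# Start from the nginx default (512) and keep doubling until it covers
# the total server_name character count.
total=7414   # replace with: total=$(ls /etc/nginx/sites-available/ | wc -c)
size=512
while [ "$size" -lt "$total" ]; do
  size=$((size * 2))
done
echo "server_names_hash_max_size $size;"   # prints: server_names_hash_max_size 8192;
```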
In each site config, I was using only one
server_name, without any wildcard or regex. I guess wildcards & regexes will also affect the value needed here.
WordPress Multi-site Subdomain/Domain-Mapping & server_name hashing
As you know, you can host millions of subdomains/mapped domains using a WordPress multisite setup. But as far as my experience goes, this doesn't put any load on Nginx's server_name hashing.
Thanks. Whatever is described in the article I have already done, but my CPU load is still above 1.15+, with around 250 members active at any time. Any suggestions?
Try using object-cache/database-cache with a memcache backend. https://rtcamp.com/tutorials/php/memcache/
Nice article. I have a locally hosted site, available only locally. Ours is a media site, containing mostly games and videos, which we deliver through the Nginx web server. The files are hosted in a local Hadoop cluster managed by Cloudera. The web server mounts HDFS in FUSE mode and is connected to the cluster by two bonded gigabit Ethernet links, and it delivers to users through another two bonded gigabit interfaces.
Our problem is that nginx is very very slow. Can you advise us something on this?
There are many things that could go wrong in your setup.
Replicate the same setup in an isolated environment to verify whether the issue is with Nginx only.
Test with a large file directly to verify that the issue is not with the Hadoop cluster.
Finally, on the Nginx end – https://calomel.org/nginx.html (mainly: sendfile off, enable aio, and set large output buffers)
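Translated into config, that advice might look roughly like this hypothetical location block (the path and buffer sizes are illustrative, and aio on Linux requires Nginx built with file-AIO support):

```nginx
location /media/ {
    sendfile off;            # skip sendfile for files served via the FUSE mount
    aio on;                  # asynchronous file I/O
    output_buffers 1 512k;   # larger output buffers for big video/game files
}
```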
I found I had to increase my server_names_hash_bucket_size to 64 when I added a third site (a subdomain) to my list of sites available. The site name length was only 4+7+4 characters. Does that seem right?
It doesn't feel intuitive, but it has happened to me many times.
Maybe you can read where it was discussed at length – https://github.com/rtCamp/easyengine/issues/27
We use keepalive in our nginx.conf, but for some reason it seems to be having a detrimental effect on TTFB. The Force SSL config in Nginx does a return 301 to the HTTPS version of the site. Once the browser gets a 301 (with keepalive), it waits a bit before making the next request, and this causes high TTFB.