Tuning a Linux Server for High Performance Applications


By John Cavazos, Senior Performance Test Engineer

If you recently installed a new OS on a brand new high-performance server, for production or even just for performance testing, and are not seeing the results you expected from your application, there is likely a good explanation. If you have not tuned the server beyond its default settings, you are probably hitting some of the OS's default limits. Regardless of which OS you have installed, you still need to tune the server to get the best performance out of the machine.

Typically, this job is performed by a systems administrator. However, in smaller companies or start-ups that may not have such a role, the task may fall to others. If that's you, and you're looking for a little guidance in getting it done, keep reading. This is by no means an exhaustive list of the tuning that can be done, but rather a general guide to the most common settings we typically adjust for performance testing. It is also limited to Linux systems for space and time considerations, saving Solaris and Windows for later discussions.

Understanding Limits

File descriptors: In Linux, most operations, from accessing network sockets to opening files for writing, use a 'file descriptor.' The system has a setting that limits how many file descriptors each user can have open at a time. Once these descriptors are all in use, new network connections cannot be made, new files cannot be opened, and many other operations fail. This causes problems when your application is performing a lot of these operations at once or is under heavy load. By default, this limit is set very low and can quickly become an issue on a busy server.

Setting these limits higher is recommended, but be careful not to exceed the hard OS limit. If one or more users tie up all the available descriptors, you may not be able to log into the server remotely and will have to either log on at the console or reboot the machine. The 'fs.file-max' setting also affects this on Linux machines and needs to be adjusted; this is addressed under 'Kernel Parameters' below.
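
To see how close a running process is to its descriptor limit, you can count the entries in its /proc file descriptor directory. (The PID 1234 below is just a placeholder; substitute the PID of your application.)

# count the file descriptors currently in use by process 1234
ls /proc/1234/fd | wc -l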

Processes: Much like file descriptors, the number of processes a user can run is limited by default. Depending on what kind of applications you are running on the server, this may or may not be a problem. Applications that spawn a lot of processes and threads, such as Cassandra, can suffer if this limit is set too low.
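
For a rough check of how many processes (and threads, which count against the same limit) a user is running, you can have ps list them. ('myuser' below is just a placeholder for the account running your application.)

# count the processes and threads owned by 'myuser'
ps -u myuser -L --no-headers | wc -l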

Viewing and Configuring File Descriptor and Process Settings

You can check and adjust these settings in several different ways. To see what the process and file descriptor settings are by default, log into your server as the user that runs your application and run 'ulimit -a.' This shows the file descriptor limit (open files) and the process limit (max user processes), as well as several other settings. You can adjust these settings on the fly using the option listed (for example, 'ulimit -n 100000' to raise the number of open files). You may need root access to do this, and you may need to restart your application for the change to take effect.

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 514933
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 514933
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
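
Because these limits only apply to new login sessions, it is also worth verifying what a process that is already running actually sees. Every process exposes its effective limits under /proc. (Again, 1234 is a placeholder PID.)

# show the limits the running process was started with
cat /proc/1234/limits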

To make these settings permanent, add them to the /etc/security/limits.conf file as shown below. (Note: this example applies the settings only to 'myuser' and no other user. You may substitute an asterisk (*) for the user name to apply the settings to all users, but be cautious when doing so. Use 'nofile' for the number of file descriptors and 'nproc' for the number of processes. You will have to log out and back in for the settings to take effect, and then restart the process once you are logged back in.)

myuser soft nofile 294180
myuser hard nofile 294180
myuser soft nproc 32768
myuser hard nproc 32768

In newer versions of Linux, a settings file has been added that severely limits the number of processes for all users except root. This file overrides whatever is set in limits.conf and will also need to be adjusted or removed. The file is located at /etc/security/limits.d/90-nproc.conf.
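
For reference, an adjusted version of that file might look like the following. The exact default contents vary by distribution (on RHEL/CentOS 6, for example, the stock file caps non-root users at 1024 processes), so check yours before editing.

# /etc/security/limits.d/90-nproc.conf
*          soft    nproc     32768
root       soft    nproc     unlimited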

Kernel Parameters

In addition to the limit configurations, there are several common kernel parameters that should be tuned. These, too, may be tuned on the fly or set permanently in a startup file like the limits. Below are the settings that we typically use, but you may want to research these settings and tune them to match your needs.

Viewing and Configuring Kernel Parameter Settings

To view the current value, log onto the server with root access and type 'sysctl setting_name'. To set it on the fly, type 'sysctl -w setting_name=value'. To set it permanently, add the key/value pair to /etc/sysctl.conf (for example, net.ipv4.tcp_tw_reuse = 1) and reload the file with 'sysctl -p'.
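
Putting those steps together, a typical workflow looks like this, using net.ipv4.tcp_tw_reuse as the example setting:

# view the current value
sysctl net.ipv4.tcp_tw_reuse
# change it immediately, without a reboot
sysctl -w net.ipv4.tcp_tw_reuse=1
# make it permanent, then reload /etc/sysctl.conf
echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/sysctl.conf
sysctl -p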

TCP Kernel Settings: TCP settings are beneficial to tune if you are running an application that uses a lot of connections, such as a web server processing a large number of requests. To adjust these settings on the fly you can use the sysctl command.

sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_fin_timeout=60
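
One caution: net.ipv4.tcp_tw_recycle is known to break connections from clients behind NAT and has been removed from newer kernels (4.12 and later), so verify that it is appropriate for your environment before enabling it. The persistent equivalents of the commands above go into /etc/sysctl.conf:

# allow reuse of sockets in TIME_WAIT for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# fast TIME_WAIT recycling; see the NAT caution above
net.ipv4.tcp_tw_recycle = 1
# how many seconds a socket may stay in FIN-WAIT-2
net.ipv4.tcp_fin_timeout = 60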

Miscellaneous Kernel Settings: Below are some other settings you may consider tuning…

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
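
For context, 'rmem' and 'wmem' are the receive and send socket buffer sizes in bytes: the net.core values set the maximum buffer any socket may request, and the three-value net.ipv4.tcp_* entries give the minimum, default, and maximum buffer a TCP connection may use. In /etc/sysctl.conf the same settings look like this:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216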

File Limit Kernel Setting: Regardless of what you set the 'nofile' setting to in 'limits.conf,' the 'fs.file-max' setting takes precedence and needs to be adjusted in unison with the 'nofile' setting. It must be equal to or larger than the number of open files in 'limits.conf.' I recommend making it larger so that a single user cannot consume all of the available file handles.

sysctl -w fs.file-max=294180
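
To see how many file handles are actually allocated system-wide, check /proc/sys/fs/file-nr, which reports allocated handles, free handles, and the current fs.file-max value. The permanent form of the setting goes into /etc/sysctl.conf like any other kernel parameter.

# allocated, free, and maximum file handles
cat /proc/sys/fs/file-nr
# persistent entry in /etc/sysctl.conf
fs.file-max = 294180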

As mentioned above, this is a fairly basic overview of the tuning that can be done on Linux systems. Deeper tuning to match your specific needs is certainly possible, and is recommended to get the full benefit out of your new OS.

