Don’t let the OOM killer stop MySQL

Many of our customers run MySQL (or MariaDB) databases on their servers, and these usually work fine as installed, without any special attention. Occasionally, though, problems occur with MySQL that indicate some manual tuning is required. One scenario: when the system is short of memory, MySQL, often the biggest memory user, may be killed by the system.

Usually the first you know about this is that your website stops working, perhaps displaying a message saying the database is unavailable. If this has happened, the system logs will contain messages like "mysqld invoked oom-killer", and since MySQL has stopped it will need to be restarted. Whenever the OOM ("Out Of Memory") killer has been active, it's best to restart the entire server and not just MySQL, since it's not always clear what other flow-on effects may result from the OOM condition.
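A quick way to confirm this is to search the logs for that message. Log locations vary by distribution (/var/log/syslog on Debian/Ubuntu, /var/log/messages on RHEL-family systems), so here is a small sketch using a hypothetical helper function:

```shell
# Hypothetical helper: return success if a log file records OOM killer
# activity. Adjust the log path for your distribution.
check_oom() {
    grep -qi 'invoked oom-killer' "$1"
}

# Typical usage (Debian/Ubuntu):
#   check_oom /var/log/syslog && echo "OOM killer was active"
# Recent kernel messages can also be searched directly:
#   dmesg | grep -i 'oom-killer'
```

On systemd-based systems, `journalctl -k | grep -i oom` searches the kernel log as well.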

Even though restarting the server will most likely fix the immediate problem, it is a sure sign that the system could do with some tuning. When MySQL is unceremoniously killed like this, it's possible to lose some data or corrupt database tables, so it's best to avoid it happening if at all possible.

So how can a server's configuration be changed to prevent the OOM killer from killing MySQL?

Firstly, check what applications other than MySQL might be using lots of memory. As installed, our servers come with a script which keeps track of memory usage and logs it to the file /var/log/memmon.txt. This is useful for determining what other processes were using memory when the OOM killer was in action.
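Alongside reviewing the memmon log, you can see the current heaviest memory users directly:

```shell
# Show the ten processes using the most resident memory (RSS, in KB).
ps -eo pid,rss,comm --sort=-rss | head -n 10

# Then review what memmon recorded around the time of the incident:
#   less /var/log/memmon.txt
```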

A common culprit is Apache. If your server is short of RAM, it's better to leave some requests unserviced during high load than to try to service them all and have MySQL or other processes killed when memory runs out. So the upper limit on the number of Apache processes which can run (150 by default on Debian) should be lowered. This is handled by the MaxClients (Apache 2.2) or MaxRequestWorkers (Apache 2.4) directive in Apache's configuration.
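For example, on a Debian/Ubuntu system running the prefork MPM, the relevant stanza looks something like the following. The values here are purely illustrative; a rough rule of thumb is MaxRequestWorkers ≈ (RAM you can spare for Apache) ÷ (average size of one Apache process):

```apache
# /etc/apache2/mods-available/mpm_prefork.conf (Debian/Ubuntu path;
# values are illustrative -- size them to your own RAM)
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers       60
    MaxConnectionsPerChild   0
</IfModule>
```

On Apache 2.2 the directive is spelled MaxClients instead of MaxRequestWorkers. Reload Apache after changing it.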

Other processes may also be using a lot of RAM; checking what is run from cron is a good start. You can also use our HOWTO located at https://rimuhosting.com/howto/memory.jsp

The second step is to tune MySQL itself. Try running mysqltuner (apt-get install mysqltuner on Debian and Ubuntu); it will estimate MySQL's maximum memory usage. If that is higher than the RAM you have available for MySQL to use, you should consider reconfiguring MySQL to use less buffer space (e.g. key_buffer_size, innodb_buffer_pool_size, read_buffer_size, sort_buffer_size, max_heap_table_size, tmp_table_size).
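Such a reconfiguration might look like the fragment below. These numbers are only an illustration for a small VPS; size them from mysqltuner's report and your actual workload, and restart MySQL after changing them:

```ini
# e.g. /etc/mysql/conf.d/tuning.cnf (path varies by distribution;
# values are illustrative, not recommendations)
[mysqld]
key_buffer_size         = 32M
innodb_buffer_pool_size = 128M
sort_buffer_size        = 2M
read_buffer_size        = 1M
tmp_table_size          = 32M
max_heap_table_size     = 32M
```

Note that some buffers (such as sort_buffer_size) are allocated per connection, so lowering max_connections can also reduce MySQL's worst-case footprint.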

Next, you can ask the kernel to give MySQL less attention when it is looking for processes to kill (e.g. echo '-20' > /proc/$(pidof mysqld)/oom_score_adj). This only lasts until the next time the server is restarted; to make it permanent you would need to adjust the init configuration. Options to do this are available for the upstart (oom score) and systemd (OOMScoreAdjust) init systems used in modern distributions.

You can also tune the kernel's overcommit settings (/proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio) so that it doesn't overcommit as much memory in the first place, and is therefore less likely to invoke the OOM killer at all.
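You can inspect and change these settings as follows (the ratio of 80 is illustrative):

```shell
# Inspect the current policy: 0 = heuristic overcommit (the default),
# 1 = always overcommit, 2 = strict accounting, where the commit limit
# is swap + overcommit_ratio% of RAM.
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio

# To tighten it at runtime (requires root):
#   sysctl -w vm.overcommit_memory=2 vm.overcommit_ratio=80
# To persist across reboots, add to /etc/sysctl.conf:
#   vm.overcommit_memory = 2
#   vm.overcommit_ratio = 80
```

Be aware that strict accounting (mode 2) can cause allocations to fail in applications that rely on overcommit, so test this change before relying on it.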

Finally, if tuning hasn't made a difference, maybe your VPS simply needs more memory. Start at https://rimuhosting.com/cp/vps/upgrade.jsp to add more RAM.

If you need assistance with any of these tasks, our sysadmins are here to help. Just put in a ticket at https://rimuhosting.com/ticket/startticket.jsp

