Memory leak @ read_view_open_now (read0read.c:152/166) in Percona Server 5.5.30-30.2 GA

Bug #1170103 reported by Albertas Sileika
This bug affects 7 people
Affects                                          Status        Importance  Assigned to      Milestone
Percona Server (moved to https://jira.percona.com/projects/PS)
                                                 Invalid       Undecided   Unassigned
  5.1                                            Invalid       Undecided   Unassigned
  5.5                                            Fix Released  Critical    Alexey Kopytov
  5.6                                            Invalid       Undecided   Unassigned

Bug Description

After upgrading to Percona Server 5.5.30-30.2, the server started to use more RAM. After 4-5 hours it uses all RAM and swap, and then the OOM killer kills the mysqld process.

The server runs only MySQL.
Version before the upgrade: Percona Server 5.5.30-30.1.
The new version was installed from RPM. OS: CentOS release 6.4, x86_64.

With 5.5.30-30.1 the server was running fine, and there was no problem with RAM.

The server has 13GB of RAM and 4GB of swap.

Variables changed from defaults:

max_connections = 1024
skip-name-resolve
table_cache = 1024
thread_cache = 128
thread_cache_size = 128
innodb_buffer_pool_size = 5G
innodb_additional_mem_pool_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_table_locks = 0
innodb_flush_method = O_DIRECT
max_tmp_tables = 10M
tmp_table_size = 512M
max_heap_table_size = 512M
tmpdir = /dev/shm/
key_buffer_size = 1G
sort_buffer = 2M
join_buffer = 32M
query-cache-size = 256M

mysqladmin status:
Uptime: 15382 Threads: 190 Questions: 11064491 Slow queries: 226905 Opens: 681 Flush tables: 1 Open tables: 591 Queries per second avg: 719.314

Output of show engine innodb status is attached.


Valerii Kravchuk (valerii-kravchuk) wrote :

Do you have jemalloc or any other non-default memory allocation library preloaded?

These settings are too high, I'd say:

tmp_table_size = 512M
max_heap_table_size = 512M

Please upload the output of:

show global status;

to this report.
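
For example (a minimal sketch, assuming the standard mysql command-line client can connect locally), the output can be captured to a file for attaching:

mysql -e "SHOW GLOBAL STATUS" > global_status.txt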

Albertas Sileika (a-sileika) wrote :

We use the standard memory allocation library.

Valerii Kravchuk (valerii-kravchuk) wrote :

Based on these:

Connections 92344
Created_tmp_disk_tables 147778
Created_tmp_tables 464737
Uptime_since_flush_status 16608

about 6 connections per second are made to your system, on average:

mysql> select 92334/16608;
+-------------+
| 92334/16608 |
+-------------+
|      5.5596 |
+-------------+
1 row in set (0.67 sec)

up to 200+ of them run concurrently:

Max_used_connections 203

and they use temporary tables a lot, 28 per second:

mysql> select 464737/16608;
+--------------+
| 464737/16608 |
+--------------+
|      27.9827 |
+--------------+
1 row in set (0.00 sec)

and even though you let them grow up to 500M in memory, some seem too big and are created on disk:

mysql> select 147778/16608;
+--------------+
| 147778/16608 |
+--------------+
|       8.8980 |
+--------------+
1 row in set (0.00 sec)

about 9 per second, on average.

So I'd say up to 8G of memory could easily be used by implicit temporary tables alone. Is this enough to explain the OOM in your case?
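
To make that estimate explicit (a rough illustration only, not exact accounting): with tmp_table_size and max_heap_table_size set to 512M, around 16 implicit in-memory temporary tables open at the same time at that limit would already add up to 8G:

mysql> select 16 * 512 / 1024 as approx_gb;
+-----------+
| approx_gb |
+-----------+
|    8.0000 |
+-----------+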

Please also send the output of:

free
ps aux | grep mysqld

Albertas Sileika (a-sileika) wrote :

I still think there is a bug in 5.5.30-30.2, because with the previous version the server was running fine.

On the morning of 2013-04-18 I added 4GB of RAM to the server. Still, after 24h the server started to swap:

             total       used       free     shared    buffers     cached
Mem:      17298188   17083796     214392          0     116888     752440
-/+ buffers/cache:   16214468    1083720
Swap:      4095992    2982920    1113072

root 960 0.0 0.0 108296 276 ? S Apr18 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/15min-newdb1.pid
mysql 1483 102 89.4 22635872 15478480 ? Sl Apr18 1519:51 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/lib/mysql/15min-newdb1.pid
root 14441 0.0 0.0 103236 864 pts/0 S+ 07:25 0:00 grep --colour=auto mysqld

I have also attached a Temporary Objects graph for one month. There is no significant increase in created tmp tables, so I think the increased memory usage is due to the MySQL version.

Olaf van Zandwijk (olafz) wrote :

I'm having the same issue. I have a machine with 60GB of memory and 2GB of swap. Before the upgrade to 5.5.30-30.2, the swap usage was stable for weeks. After the upgrade, the swap usage is as shown in the attached image. The load on the machine did not change, and neither did anything else. As in the original report of this bug, the kernel kills the mysqld process (currently every 1-2 days).

Brian Morton (rokclimb15) wrote :

Similar situation here. No issue on 5.5.30-30.1. The system has 48GB of RAM. MySQL is configured to use only 80% of it, but is now using 92.4% of system memory on Ubuntu 12.04 64-bit.

I'm monitoring the swap usage closely and have not yet had an OOM-killer situation, but I will have to restart the process every few days. The attachments show my config and status.

Brian Morton (rokclimb15) wrote :

InnoDB Status

Brian Morton (rokclimb15) wrote :

Global Status

Brian Morton (rokclimb15) wrote :

Free output

Brian Morton (rokclimb15) wrote :

ps aux | grep mysqld

Alexey Kopytov (akopytov) wrote :

The performance improvements in 5.5.30-30.2 are the main suspect. We've been trying to reproduce the memory leak in internal tests, but have not succeeded so far. So I have a request for everyone affected by this issue: if you can test a custom-built binary with additional diagnostics, please help us track down this problem.

The additional diagnostics are currently only available in a source branch, but we are going to provide a tarball shortly.

Instructions to build:

bzr branch lp:~percona-core/percona-server/5.5.30-30.2-memory-diagnostics
cd 5.5.30-30.2-memory-diagnostics/Percona-Server
cmake . -DBUILD_CONFIG=mysql_release -DFEATURE_SET=community -DWITH_EMBEDDED_SERVER=OFF -DEXTRA_VERSION="-30.2-memory-diagnostic"

make -j6

You can temporarily replace the installed mysqld binary with the newly built one located in sql/ (don't forget to back up the original binary); a sketch of the steps follows below. The additional diagnostics should not have any measurable overhead. They add 2 status variables: innodb_read_views_memory and innodb_descriptors_memory.
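
For example (a rough sketch only; the exact service name and paths depend on the distribution and packaging), the swap could look like this on an RPM install:

/etc/init.d/mysql stop
cp /usr/sbin/mysqld /usr/sbin/mysqld.orig    # back up the packaged binary
cp sql/mysqld /usr/sbin/mysqld               # install the diagnostics build
/etc/init.d/mysql start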

So when the excessive memory usage can be observed after a few hours, please get the output of

SHOW GLOBAL STATUS LIKE 'innodb%memory';
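
For example (a minimal sketch, assuming the mysql client can log in from a local shell), the counters can be sampled periodically into a log while the memory usage grows:

while true; do
    date >> innodb_memory.log
    mysql -e "SHOW GLOBAL STATUS LIKE 'innodb%memory'" >> innodb_memory.log
    sleep 600    # one sample every 10 minutes
done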

Thank you.

Roel Van de Paar (roel11) wrote :

341,336 (310,992 direct, 30,344 indirect) bytes in 3,534 blocks are definitely lost in loss record 7 of 7
   at 0x4C278FE: malloc (vg_replace_malloc.c:270)
   by 0x859D82: ut_malloc_low (ut0mem.c:102)
   by 0x8073FE: read_view_open_now (read0read.c:152)
   by 0x81DB98: row_search_check_if_query_cache_permitted (row0sel.c:4901)
   by 0x7F7B5B: innobase_query_caching_of_table_permitted(THD*, char*, unsigned int, unsigned long long*) (ha_innodb.cc:2156)
   by 0x55B7D2: Query_cache::ask_handler_allowance(THD*, TABLE_LIST*) (sql_cache.cc:4014)
   by 0x55EB7B: Query_cache::store_query(THD*, TABLE_LIST*) (sql_cache.cc:1468)
   by 0x588A71: execute_sqlcom_select(THD*, TABLE_LIST*) (sql_parse.cc:4796)
   by 0x58D402: mysql_execute_command(THD*) (sql_parse.cc:2300)
   by 0x590C0A: mysql_parse(THD*, char*, unsigned int, Parser_state*) (sql_parse.cc:5867)
   by 0x59227C: dispatch_command(enum_server_command, THD*, char*, unsigned int) (sql_parse.cc:1062)
   by 0x62FC1E: do_handle_one_connection(THD*) (sql_connect.cc:1487)
   by 0x62FD00: handle_one_connection (sql_connect.cc:1402)
   by 0x4E36850: start_thread (in /lib64/libpthread-2.12.so)
   by 0x5F0211C: clone (in /lib64/libc-2.12.so)

Roel Van de Paar (roel11) wrote :

Also:
   at 0x4C278FE: malloc (vg_replace_malloc.c:270)
   by 0x4C27A72: realloc (vg_replace_malloc.c:662)
   by 0x8074A7: read_view_open_now (read0read.c:166)
otherwise the same stack.

summary: - Possible memory leak in Percona Server 5.5.30-30.2
+ Memory leak @ read_view_open_now (read0read.c:152/166) in Percona Server
+ 5.5.30-30.2 GA
Roel Van de Paar (roel11) wrote :

One more:

23,056 (18,128 direct, 4,928 indirect) bytes in 206 blocks are definitely lost in loss record 16 of 18
   at 0x4C278FE: malloc (vg_replace_malloc.c:270)
   by 0x859D82: ut_malloc_low (ut0mem.c:102)
   by 0x8073FE: read_view_open_now (read0read.c:152)
   by 0x81DB98: row_search_check_if_query_cache_permitted (row0sel.c:4901)
   by 0x7F7B5B: innobase_query_caching_of_table_permitted(THD*, char*, unsigned int, unsigned long long*) (ha_innodb.cc:2156)
   by 0x55B7D2: Query_cache::ask_handler_allowance(THD*, TABLE_LIST*) (sql_cache.cc:4014)
   by 0x55EB7B: Query_cache::store_query(THD*, TABLE_LIST*) (sql_cache.cc:1468)
   by 0x588A71: execute_sqlcom_select(THD*, TABLE_LIST*) (sql_parse.cc:4796)
   by 0x58D402: mysql_execute_command(THD*) (sql_parse.cc:2300)
   by 0x76E61D: sp_instr_stmt::exec_core(THD*, unsigned int*) (sp_head.cc:3211)
   by 0x76E7A6: sp_lex_keeper::reset_lex_and_exec_core(THD*, unsigned int*, bool, sp_instr*) (sp_head.cc:2988)
   by 0x77384B: sp_instr_stmt::execute(THD*, unsigned int*) (sp_head.cc:3138)
   by 0x77138E: sp_head::execute(THD*, bool) (sp_head.cc:1433)
   by 0x772716: sp_head::execute_procedure(THD*, List<Item>*) (sp_head.cc:2189)
   by 0x58E9C5: mysql_execute_command(THD*) (sql_parse.cc:4251)
   by 0x590C0A: mysql_parse(THD*, char*, unsigned int, Parser_state*) (sql_parse.cc:5867)

Roel Van de Paar (roel11) wrote :

Extra compile option used: -DWITH_VALGRIND=ON
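
For reference (a sketch only; the exact options, paths and server arguments will vary), leak reports like the ones above can be obtained by running mysqld directly under Valgrind, for example:

valgrind --leak-check=full --num-callers=40 --log-file=/tmp/mysqld.valgrind.log \
    /usr/sbin/mysqld --user=mysql --basedir=/usr --datadir=/var/lib/mysql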

Roel Van de Paar (roel11) wrote :

And another:

44,928 (33,968 direct, 10,960 indirect) bytes in 386 blocks are definitely lost in loss record 19 of 24
   at 0x4C278FE: malloc (vg_replace_malloc.c:270)
   by 0x859D82: ut_malloc_low (ut0mem.c:102)
   by 0x8073FE: read_view_open_now (read0read.c:152)
   by 0x81DB98: row_search_check_if_query_cache_permitted (row0sel.c:4901)
   by 0x7F7B5B: innobase_query_caching_of_table_permitted(THD*, char*, unsigned int, unsigned long long*) (ha_innodb.cc:2156)
   by 0x55B7D2: Query_cache::ask_handler_allowance(THD*, TABLE_LIST*) (sql_cache.cc:4014)
   by 0x55EB7B: Query_cache::store_query(THD*, TABLE_LIST*) (sql_cache.cc:1468)
   by 0x588A71: execute_sqlcom_select(THD*, TABLE_LIST*) (sql_parse.cc:4796)
   by 0x58D402: mysql_execute_command(THD*) (sql_parse.cc:2300)
   by 0x59E3B8: Prepared_statement::execute(String*, bool) (sql_prepare.cc:4110)
   by 0x5A1DAA: Prepared_statement::execute_loop(String*, bool, unsigned char*, unsigned char*) (sql_prepare.cc:3769)
   by 0x5A207A: mysql_sql_stmt_execute(THD*) (sql_prepare.cc:2791)
   by 0x58E7EB: mysql_execute_command(THD*) (sql_parse.cc:2310)
   by 0x590C0A: mysql_parse(THD*, char*, unsigned int, Parser_state*) (sql_parse.cc:5867)
   by 0x59227C: dispatch_command(enum_server_command, THD*, char*, unsigned int) (sql_parse.cc:1062)
   by 0x62FC1E: do_handle_one_connection(THD*) (sql_connect.cc:1487)

Alexey Kopytov (akopytov) wrote :

Confirmed as a regression introduced by the fix for bug #1131187. The workaround is to disable the query cache.
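
For example (a sketch; both variables are dynamic in 5.5, and adding them to my.cnf keeps the change across restarts), the query cache can be disabled at runtime with:

mysql> SET GLOBAL query_cache_type = OFF;
mysql> SET GLOBAL query_cache_size = 0;

and made permanent in my.cnf with:

query_cache_type = 0
query_cache_size = 0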

Changed in percona-xtradb-cluster:
milestone: none → 5.5.31-25
status: New → Triaged
no longer affects: percona-xtradb-cluster
Shahriyar Rzayev (rzayev-sehriyar) wrote :

Percona now uses JIRA for bug reports, so this bug report has been migrated to: https://jira.percona.com/browse/PS-2940
