Hello,

OS: 13.0-STABLE stable/13-n246859-416194c9af84
mariadb103-server-10.3.31

MariaDB crashes when it tries to access one of the DB tables:

2021-08-20 06:14:01 0x1681847a00 InnoDB: Assertion failure in file /usr/ports/databases/mariadb103-server/work/mariadb-10.3.31/storage/innobase/btr/btr0pcur.cc line 527
InnoDB: Failing assertion: page_is_comp(next_page) == page_is_comp(page)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
InnoDB: about forcing recovery.
210820  6:14:01 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.3.31-MariaDB-log
key_buffer_size=17179869184
read_buffer_size=262144
max_used_connections=8
max_threads=65537
thread_count=13
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2181073303 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x1857a95848
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fffdbbbfeb0 thread_stack 0x49000
0x115139c <my_print_stacktrace+0x3c> at /usr/local/libexec/mysqld
0xb32ec9 <handle_fatal_signal+0x299> at /usr/local/libexec/mysqld
0x801812e62 <pthread_sigmask+0x532> at /lib/libthr.so.3

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x1857ab4520): SELECT /*!40001 SQL_NO_CACHE */ `id`, `user_id`, `user_login`, `failed_login_date`, `login_attempt_ip` FROM `wp_aiowps_failed_logins`

Connection ID (thread ID): 38
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on

-----------------------------------------------

Here is the .core backtrace:

Core was generated by `/usr/local/libexec/mysqld --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/'.
Program terminated with signal SIGABRT, Aborted.
#0  kill () at kill.S:4
4       RSYSCALL(kill)
[Current thread is 1 (LWP 181500)]
#0  kill () at kill.S:4
#1  0x0000000000b330ea in handle_fatal_signal ()
#2  0x0000000801812e62 in handle_signal (actp=actp@entry=0x7fffdbbbce40, sig=sig@entry=6, info=info@entry=0x7fffdbbbd230, ucp=ucp@entry=0x7fffdbbbcec0) at /usr/src/lib/libthr/thread/thr_sig.c:303
#3  0x000000080181236e in thr_sighandler (sig=6, info=0x7fffdbbbd230, _ucp=0x7fffdbbbcec0) at /usr/src/lib/libthr/thread/thr_sig.c:246
#4  <signal handler called>
#5  thr_kill () at thr_kill.S:4
#6  0x00000008018d5f84 in __raise (s=s@entry=6) at /usr/src/lib/libc/gen/raise.c:52
#7  0x000000080198cc89 in abort () at /usr/src/lib/libc/stdlib/abort.c:67
#8  0x0000000001027c7b in ?? ()
#9  0x0000000000efaf0c in ?? ()
#10 0x0000000000fd4912 in ?? ()
#11 0x0000000000e2b7b3 in ?? ()
#12 0x0000000000a2bdf2 in handler::ha_rnd_next(unsigned char*) ()
#13 0x0000000000b5cb1a in rr_sequential(READ_RECORD*) ()
#14 0x0000000000c83cdc in sub_select(JOIN*, st_join_table*, bool) ()
#15 0x0000000000c70f62 in JOIN::exec_inner() ()
#16 0x0000000000c582f5 in mysql_select(THD*, TABLE_LIST*, unsigned int, List<Item>&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsigned long long, select_result*, st_select_lex_unit*, st_select_lex*) ()
#17 0x0000000000c57fc9 in handle_select(THD*, LEX*, select_result*, unsigned long) ()
#18 0x0000000000c294b3 in ?? ()
#19 0x0000000000c23afb in mysql_execute_command(THD*) ()
#20 0x0000000000c213d3 in mysql_parse(THD*, char*, unsigned int, Parser_state*, bool, bool) ()
#21 0x0000000000c1ea69 in dispatch_command(enum_server_command, THD*, char*, unsigned int, bool, bool) ()
#22 0x0000000000c20b05 in do_command(THD*) ()
#23 0x0000000000d81184 in tp_callback(TP_connection*) ()
#24 0x0000000000e16dc0 in ?? ()
#25 0x0000000801809768 in thread_start (curthread=0x1681847a00) at /usr/src/lib/libthr/thread/thr_create.c:292
#26 0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x7fffdbbc0000

I don't know whether this is a problem with the OS or with MariaDB. Should I report this bug to https://jira.mariadb.org/ ?
Created attachment 227335 [details] my.cnf
Have you submitted a report upstream on jira.mariadb.org? Does this happen on any query, or just for this table? Have you run mariadb-upgrade on your database?
(In reply to Bernard Spil from comment #2) I haven't submitted a bug report to jira.mariadb.org yet because I don't know for sure whether it is an OS problem or a MariaDB problem. When it happened the first time, I wiped the data dir, started mysql-server (to initialize the data dirs), and restored all databases from backup. But after ~12 hours I started getting the same crashes again.
(In reply to iron.udjin from comment #3) Currently mysqld crashes when accessing only one table of the database. After restoring the full backup, it started crashing when another table was accessed. So I think the problem is not tied to a single table.
I'm also unable to use mariadb103-server after updating the OS to 13.0-RELEASE-p3 a few days ago. It was working fine after the update, until I started to add new databases and adjust users and permissions, which fairly quickly caused the server to exit with:

ERROR 2013 (HY000) at line 1093: Lost connection to MySQL server during query

I rebuilt via ports and also alternatively installed the pkg, but there was no change in behavior once installed.

2021-08-21 13:49:36 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-08-21 13:49:36 0 [Note] InnoDB: Uses event mutexes
2021-08-21 13:49:36 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-08-21 13:49:36 0 [Note] InnoDB: Number of pools: 1
2021-08-21 13:49:36 0 [Note] InnoDB: Using SSE2 crc32 instructions
2021-08-21 13:49:36 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2021-08-21 13:49:36 0 [Note] InnoDB: Completed initialization of buffer pool
2021-08-21 13:49:36 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=53936269
2021-08-21 13:49:37 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2021-08-21 13:49:37 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2021-08-21 13:49:37 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2021-08-21 13:49:37 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2021-08-21 13:49:38 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2021-08-21 13:49:38 0 [Note] InnoDB: Waiting for purge to start
2021-08-21 13:49:38 0 [Note] InnoDB: 10.3.31 started; log sequence number 53936278; transaction id 68864
2021-08-21 13:49:38 0 [Note] InnoDB: Loading buffer pool(s) from /var/db/mysql/ib_buffer_pool
2021-08-21 13:49:38 0 [Note] Plugin 'FEEDBACK' is disabled.
2021-08-21 13:49:38 0 [Note] Recovering after a crash using tc.log
2021-08-21 13:49:38 0 [Note] Starting crash recovery...
2021-08-21 13:49:38 0 [Note] Crash recovery finished.
2021-08-21 13:49:38 0 [Note] Server socket created on IP: '::'.
2021-08-21 13:49:38 0 [Note] Reading of all Master_info entries succeeded
2021-08-21 13:49:38 0 [Note] Added new Master_info '' to hash table
2021-08-21 13:49:38 0 [Note] /usr/local/libexec/mysqld: ready for connections.
Version: '10.3.31-MariaDB'  socket: '/tmp/mysql.sock'  port: 3306  FreeBSD Ports
210821 13:49:39 [ERROR] mysqld got signal 10 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.3.31-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=1
max_threads=153
thread_count=6
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467364 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x81622d548
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fffdc747f38 thread_stack 0x49000
0x116f27c <my_print_stacktrace+0x3c> at /usr/local/libexec/mysqld
0xb507d5 <handle_fatal_signal+0x295> at /usr/local/libexec/mysqld
0x80179fe00 <_pthread_sigmask+0x530> at /lib/libthr.so.3

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x8162537e0): SHOW FULL COLUMNS FROM `wp_wfblocks7`

Connection ID (thread ID): 8
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on

The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Core pattern: %N.core
(In reply to Bernard Spil from comment #2)

mysql_upgrade
Phase 1/7: Checking and upgrading mysql database
Processing databases
mysql
mysql.column_stats OK
mysql.columns_priv OK
mysql.db OK
mysql.event OK
mysql.func OK
mysql.gtid_slave_pos OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.index_stats OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.roles_mapping OK
mysql.servers OK
mysql.table_stats OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.transaction_registry OK
mysql.user OK
Phase 2/7: Installing used storage engines... Skipped
Phase 3/7: Fixing views
Phase 4/7: Running 'mysql_fix_privilege_tables'
ERROR 2013 (HY000) at line 643: Lost connection to MySQL server during query
ERROR 2006 (HY000) at line 644: MySQL server has gone away
ERROR 2006 (HY000) at line 645: MySQL server has gone away
ERROR 2006 (HY000) at line 646: MySQL server has gone away
ERROR 2006 (HY000) at line 647: MySQL server has gone away
ERROR 2006 (HY000) at line 648: MySQL server has gone away
ERROR 2006 (HY000) at line 649: MySQL server has gone away
ERROR 2006 (HY000) at line 650: MySQL server has gone away
ERROR 2006 (HY000) at line 651: MySQL server has gone away
ERROR 2006 (HY000) at line 652: MySQL server has gone away
ERROR 2006 (HY000) at line 653: MySQL server has gone away
ERROR 2006 (HY000) at line 654: MySQL server has gone away
ERROR 2006 (HY000) at line 655: MySQL server has gone away
ERROR 2006 (HY000) at line 656: MySQL server has gone away
ERROR 2006 (HY000) at line 657: MySQL server has gone away
ERROR 2006 (HY000) at line 658: MySQL server has gone away
ERROR 2006 (HY000) at line 659: MySQL server has gone away
ERROR 2006 (HY000) at line 660: MySQL server has gone away
ERROR 2006 (HY000) at line 661: MySQL server has gone away
ERROR 2006 (HY000) at line 662: MySQL server has gone away
ERROR 2006 (HY000) at line 663: MySQL server has gone away
ERROR 2006 (HY000) at line 664: MySQL server has gone away
ERROR 2006 (HY000) at line 665: MySQL server has gone away
ERROR 2006 (HY000) at line 666: MySQL server has gone away
ERROR 2006 (HY000) at line 667: MySQL server has gone away
ERROR 2006 (HY000) at line 668: MySQL server has gone away
ERROR 2006 (HY000) at line 669: MySQL server has gone away
ERROR 2006 (HY000) at line 670: MySQL server has gone away
ERROR 2006 (HY000) at line 671: MySQL server has gone away
ERROR 2006 (HY000) at line 672: MySQL server has gone away
ERROR 2006 (HY000) at line 673: MySQL server has gone away
ERROR 2006 (HY000) at line 674: MySQL server has gone away
ERROR 2006 (HY000) at line 675: MySQL server has gone away
ERROR 2006 (HY000) at line 676: MySQL server has gone away
ERROR 2006 (HY000) at line 677: MySQL server has gone away
ERROR 2006 (HY000) at line 678: MySQL server has gone away
ERROR 2006 (HY000) at line 679: MySQL server has gone away
ERROR 2006 (HY000) at line 680: MySQL server has gone away
ERROR 2006 (HY000) at line 681: MySQL server has gone away
ERROR 2006 (HY000) at line 682: MySQL server has gone away
ERROR 2006 (HY000) at line 683: MySQL server has gone away
ERROR 2006 (HY000) at line 684: MySQL server has gone away
ERROR 2006 (HY000) at line 685: MySQL server has gone away
ERROR 2006 (HY000) at line 686: MySQL server has gone away
ERROR 2006 (HY000) at line 687: MySQL server has gone away
ERROR 2006 (HY000) at line 688: MySQL server has gone away
ERROR 2006 (HY000) at line 689: MySQL server has gone away
ERROR 2006 (HY000) at line 690: MySQL server has gone away
ERROR 2006 (HY000) at line 691: MySQL server has gone away
ERROR 2006 (HY000) at line 692: MySQL server has gone away
ERROR 2006 (HY000) at line 693: MySQL server has gone away
ERROR 2006 (HY000) at line 694: MySQL server has gone away
ERROR 2006 (HY000) at line 695: MySQL server has gone away
ERROR 2006 (HY000) at line 696: MySQL server has gone away
ERROR 2006 (HY000) at line 697: MySQL server has gone away
ERROR 2006 (HY000) at line 698: MySQL server has gone away
ERROR 2006 (HY000) at line 699: MySQL server has gone away
ERROR 2006 (HY000) at line 700: MySQL server has gone away
ERROR 2006 (HY000) at line 701: MySQL server has gone away
ERROR 2006 (HY000) at line 702: MySQL server has gone away
FATAL ERROR: Upgrade failed
I seem to be hitting roughly the same issue here. I upgraded on 08-18-21 and everything continued to run as normal. I am using MariaDB as the backend of a Nextcloud server. I ran mysql_upgrade and it reported success. Then this morning, when trying to log in to Nextcloud, I got "Internal Server Error". After doing some research I found that MariaDB will not start, although all other services are running as they should. Nextcloud throws errors, but only because MariaDB is not running. I have even tried setting innodb_force_recovery, but it still will not start. There is also a .core file, but it is about 500 MB. Last part of the log file below:

Server version: 10.3.31-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467364 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
0x116f27c <my_print_stacktrace+0x3c> at /usr/local/libexec/mysqld
0xb507d5 <handle_fatal_signal+0x295> at /usr/local/libexec/mysqld
0x80179fe00 <_pthread_sigmask+0x530> at /lib/libthr.so.3
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Core pattern: %N.core

2021-08-25 10:22:21 0 [Note] InnoDB: !!! innodb_force_recovery is set to 1 !!!
2021-08-25 10:22:21 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-08-25 10:22:21 0 [Note] InnoDB: Uses event mutexes
2021-08-25 10:22:21 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-08-25 10:22:21 0 [Note] InnoDB: Number of pools: 1
2021-08-25 10:22:21 0 [Note] InnoDB: Using SSE2 crc32 instructions
2021-08-25 10:22:21 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2021-08-25 10:22:21 0 [Note] InnoDB: Completed initialization of buffer pool
2021-08-25 10:22:21 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=15995163577
2021-08-25 10:22:21 0x802012000 InnoDB: Assertion failure in file /wrkdirs/usr/ports/databases/mariadb103-server/work/mariadb-10.3.31/storage/innobase/log/log0recv.cc line 1585
InnoDB: Failing assertion: !page || (ibool)!!page_is_comp(page) == dict_table_is_comp(index->table)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
InnoDB: about forcing recovery.
210825 10:22:21 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
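For reference, innodb_force_recovery (mentioned above) is set in the [mysqld] section of whichever my.cnf your server reads (e.g. the /var/db/mysql/my.cnf seen in the traces above). A minimal sketch, with the value shown only as an example; levels run 1-6 and get progressively more destructive, so raise them cautiously and only with a backup in hand:

[mysqld]
# Start with the least invasive level; 4 and above can permanently
# lose data, per the MariaDB innodb-recovery-modes documentation.
innodb_force_recovery = 1

Remember to remove the setting again once the data has been dumped or repaired, since the server stays partially read-only while it is active.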
(In reply to John Anderson from comment #7) Are you running FreeBSD 13.0? Was the system recently upgraded from an earlier version of FreeBSD?
(In reply to Oclair from comment #8) The system was installed as 13.0; the only updates applied have been security patches. Also, I should have noted: this is running in a jail that also began as version 13. jda
After I reported this bug, all DBs were restored from backup. After that I migrated from mariadb103 to mariadb104. It was working fine until now. Today I was editing my.cnf and restarted MariaDB a few times. After a few restarts one of the tables got broken:

210828  0:45:20 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.4.21-MariaDB-log
key_buffer_size=17179869184
read_buffer_size=262144
max_used_connections=10
max_threads=65537
thread_count=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2181073419 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x18312b5748
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fffdb340ed8 thread_stack 0x49000
0x1232c3c <my_print_stacktrace+0x3c> at /usr/local/libexec/mysqld
0xbdb439 <handle_fatal_signal+0x299> at /usr/local/libexec/mysqld
0x8018ffe62 <pthread_sigmask+0x532> at /lib/libthr.so.3

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x1831306220): UPDATE `wp_wrc_caches` SET `expiration` = '1970-01-01 00:00:01' WHERE `cache_type` = 'endpoint' AND `object_type` = 'de_opinion' AND `is_single` = 0

Connection ID (thread ID): 76
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on

The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Core pattern: /var/tmp/%U.%N.core

.core backtrace:

Core was generated by `/usr/local/libexec/mysqld --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  kill () at kill.S:4
4       RSYSCALL(kill)
[Current thread is 1 (LWP 153459)]
#0  kill () at kill.S:4
#1  0x0000000000bdb65a in handle_fatal_signal ()
#2  0x00000008018ffe62 in handle_signal (actp=actp@entry=0x7fffdbc9be40, sig=sig@entry=11, info=info@entry=0x7fffdbc9c230, ucp=ucp@entry=0x7fffdbc9bec0) at /usr/src/lib/libthr/thread/thr_sig.c:303
#3  0x00000008018ff36e in thr_sighandler (sig=11, info=0x7fffdbc9c230, _ucp=0x7fffdbc9bec0) at /usr/src/lib/libthr/thread/thr_sig.c:246
#4  <signal handler called>
#5  0x000000000107c62d in ?? ()
#6  0x0000000000fc5ad7 in ?? ()
#7  0x0000000000fdbcd4 in ?? ()
#8  0x0000000000fdc51f in ?? ()
#9  0x0000000000f10fa6 in ?? ()
#10 0x0000000000f06ead in ?? ()
#11 0x0000000000aca7c7 in handler::ha_open(TABLE*, char const*, int, unsigned int, st_mem_root*, List<String>*) ()
#12 0x0000000000db427e in open_table_from_share(THD*, TABLE_SHARE*, st_mysql_const_lex_string const*, unsigned int, unsigned int, unsigned int, TABLE*, bool, List<String>*) ()
#13 0x0000000000c6e4ae in open_table(THD*, TABLE_LIST*, Open_table_context*) ()
#14 0x0000000000c70c3d in open_tables(THD*, DDL_options_st const&, TABLE_LIST**, unsigned int*, unsigned int, Prelocking_strategy*) ()
#15 0x0000000000da1bcb in mysql_update(THD*, TABLE_LIST*, List<Item>&, List<Item>&, Item*, unsigned int, st_order*, unsigned long long, bool, unsigned long long*, unsigned long long*) ()
#16 0x0000000000ce1053 in mysql_execute_command(THD*) ()
#17 0x0000000000cdc3e6 in mysql_parse(THD*, char*, unsigned int, Parser_state*, bool, bool) ()
#18 0x0000000000cda406 in dispatch_command(enum_server_command, THD*, char*, unsigned int, bool, bool) ()
#19 0x0000000000cdc715 in do_command(THD*) ()
#20 0x0000000000ef42c4 in tp_callback(TP_connection*) ()
#21 0x0000000000ef6b90 in ?? ()
#22 0x00000008018f6768 in thread_start (curthread=0x1824e2f000) at /usr/src/lib/libthr/thread/thr_create.c:292
#23 0x0000000000000000 in ?? ()
Backtrace stopped: Cannot access memory at address 0x7fffdbca1000

I guess the problem may be related to MariaDB restarts. Possibly MariaDB doesn't properly close tables when the process shuts down, or something like that. I hope this information helps somehow in finding the problem.
*** Bug 258068 has been marked as a duplicate of this bug. ***
Apparently there are more issues with MariaDB 10.3; search Bugzilla to find them. These issues have not been reported for later versions of MariaDB. If you can, please upgrade MariaDB to a later version.
I think we see more issues with mariadb103 because more people use that version; in my case I still had the issue with mariadb104 and mariadb105. Running mysqlcheck --all-databases stops just after mysql.index_stats with: Lost connection to MySQL server. Maybe not related, but looking at the upstream bug tracker, the latest mariadb105 is also crashing. https://jira.mariadb.org/browse/MDEV-26388
I likewise had the same experience with MariaDB104-server
I had the same issue with mariadb 10.4.21. In my case I noticed that it is somehow related to ZFS. I have two kinds of MariaDB servers (more than 20 in total), and 6 of them keep MariaDB on ZFS. After the upgrade, the MariaDB instances on UFS work fine, but all 6 instances on ZFS crashed sooner or later (within 12 hours), probably depending on workload. The UFS instances have been working fine for 5 days. The crash report is in general the same:

2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] /usr/local/libexec/mysqld (mysqld 10.4.21-MariaDB-log) starting as process 25505 ...
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Warning] The parameter innodb_large_prefix is deprecated and has no effect. It may be removed in future releases. See https://mariadb.com/kb/en/library/xtradbinnodb-file-format/
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] InnoDB: Uses event mutexes
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] InnoDB: Number of pools: 1
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] InnoDB: Using SSE2 crc32 instructions
2021-08-26T00:18:25+03:00 heze mysqld[25511]: 2021-08-26  0:18:25 0 [Note] InnoDB: Initializing buffer pool, total size = 8G, instances = 8, chunk size = 128M
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: Completed initialization of buffer pool
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: Waiting for purge to start
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: 10.4.21 started; log sequence number 4536500070931; transaction id 452653255
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] InnoDB: Loading buffer pool(s) from /www/db/mysql/ib_buffer_pool
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] Plugin 'FEEDBACK' is disabled.
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] Server socket created on IP: '0.0.0.0'.
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] Reading of all Master_info entries succeeded
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] Added new Master_info '' to hash table
2021-08-26T00:18:26+03:00 heze mysqld[25511]: 2021-08-26  0:18:26 0 [Note] /usr/local/libexec/mysqld: ready for connections.
2021-08-26T00:18:26+03:00 heze mysqld[25511]: Version: '10.4.21-MariaDB-log'  socket: '/tmp/mysql.sock'  port: 3306  FreeBSD Ports
2021-08-26T00:18:30+03:00 heze mysqld[25511]: 2021-08-26  0:18:30 16 [Warning] IP address '192.168.6.248' could not be resolved: Name does not resolve
2021-08-26T00:18:32+03:00 heze mysqld[25511]: 2021-08-26  0:18:32 0 [Note] InnoDB: Buffer pool(s) load completed at 210826  0:18:32
2021-08-26T00:18:57+03:00 heze mysqld[25511]: 2021-08-26  0:18:57 86 [Warning] IP address '91.219.62.240' could not be resolved: Name does not resolve
2021-08-26T00:25:26+03:00 heze mysqld[25511]: 2021-08-26 00:25:26 0xa671b4100  InnoDB: Assertion failure in file /wrkdirs/usr/ports/databases/mariadb104-server/work/mariadb-10.4.21/storage/innobase/btr/btr0pcur.cc line 524
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: Failing assertion: page_is_comp(next_page) == page_is_comp(page)
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: We intentionally generate a memory trap.
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: If you get repeated assertion failures or crashes, even
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: immediately after the mysqld startup, there may be
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: corruption in the InnoDB tablespace. Please refer to
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
2021-08-26T00:25:26+03:00 heze mysqld[25511]: InnoDB: about forcing recovery.
2021-08-26T00:25:26+03:00 heze mysqld[25511]: 210826  0:25:26 [ERROR] mysqld got signal 6 ;
2021-08-26T00:25:26+03:00 heze mysqld[25511]: This could be because you hit a bug. It is also possible that this binary
2021-08-26T00:25:26+03:00 heze mysqld[25511]: or one of the libraries it was linked against is corrupt, improperly built,
2021-08-26T00:25:26+03:00 heze mysqld[25511]: or misconfigured. This error can also be caused by malfunctioning hardware.
2021-08-26T00:25:26+03:00 heze mysqld[25511]:
2021-08-26T00:25:26+03:00 heze mysqld[25511]: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
2021-08-26T00:25:26+03:00 heze mysqld[25511]:
2021-08-26T00:25:26+03:00 heze mysqld[25511]: We will try our best to scrape up some info that will hopefully help
2021-08-26T00:25:26+03:00 heze mysqld[25511]: diagnose the problem, but since we have already crashed,
2021-08-26T00:25:26+03:00 heze mysqld[25511]: something is definitely wrong and this may fail.
2021-08-26T00:25:26+03:00 heze mysqld[25511]:
2021-08-26T00:25:26+03:00 heze mysqld[25511]: Server version: 10.4.21-MariaDB-log
2021-08-26T00:25:26+03:00 heze mysqld[25511]: key_buffer_size=67108864
2021-08-26T00:25:26+03:00 heze mysqld[25511]: read_buffer_size=262144
2021-08-26T00:25:26+03:00 heze mysqld[25511]: max_used_connections=10
2021-08-26T00:25:26+03:00 heze mysqld[25511]: max_threads=602
2021-08-26T00:25:26+03:00 heze mysqld[25511]: thread_count=15
2021-08-26T00:25:26+03:00 heze mysqld[25511]: It is possible that mysqld could use up to
2021-08-26T00:25:26+03:00 heze mysqld[25511]: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 349008 K bytes of memory
2021-08-26T00:25:26+03:00 heze mysqld[25511]: Hope that's ok; if not, decrease some variables in the equation.
2021-08-26T00:25:26+03:00 heze mysqld[25511]:
2021-08-26T00:25:26+03:00 heze mysqld[25511]: Thread pointer: 0x0
2021-08-26T00:25:26+03:00 heze mysqld[25511]: Attempting backtrace. You can use the following information to find out
2021-08-26T00:25:26+03:00 heze mysqld[25511]: where mysqld died. If you see no messages after this, something went
2021-08-26T00:25:26+03:00 heze mysqld[25511]: terribly wrong...
2021-08-26T00:25:26+03:00 heze mysqld[25511]: stack_bottom = 0x0 thread_stack 0x49000
2021-08-26T00:25:26+03:00 heze mysqld[25511]: 0x121d79c <my_print_stacktrace+0x3c> at /usr/local/libexec/mysqld
2021-08-26T00:25:26+03:00 heze mysqld[25511]: 0xbe5bd5 <handle_fatal_signal+0x295> at /usr/local/libexec/mysqld
2021-08-26T00:25:26+03:00 heze mysqld[25511]: 0x8018cab70 <_pthread_sigmask+0x530> at /lib/libthr.so.3
2021-08-26T00:25:26+03:00 heze mysqld[25511]: The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
2021-08-26T00:25:26+03:00 heze mysqld[25511]: information that should help you find out what is causing the crash.
2021-08-26T00:25:26+03:00 heze mysqld[25511]: Core pattern: %N.core
2021-08-26T00:25:26+03:00 heze mysqld_safe[20091]: mysqld restarted
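For the ZFS-backed servers it may be worth recording the dataset's recordsize alongside the crash reports, since it is the main I/O-size property that differs from UFS; a one-line check (the dataset name here is only an example, substitute the dataset that holds your datadir):

zfs get recordsize zroot/var/db/mysql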
Hello, The change of the locking behavior in https://jira.mariadb.org/browse/MDEV-24393 may also be related; would rolling back that part of the code help with debugging? Regards
For info, I rolled back MDEV-24393 on 10.5.12 but the crash still happens. None of my MariaDB servers running 10.5.11 ever had the issue, but the 3 servers I upgraded to 10.5.12 do have it, so it looks like it comes from another patch between 10.5.11 and 10.5.12.
Hi All, Can you please check if https://jira.mariadb.org/browse/MDEV-26388 is the fix for the issues in 10.3/10.4 as well?
It appears it's not even fixed for 105... https://jira.mariadb.org/browse/MDEV-26388?focusedCommentId=197921&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-197921
Created attachment 227833 [details] patch for MariaDB 10.3 and 10.4

Drop this patch into databases/mariadb103-server/files, rebuild and reinstall, and please report whether it fixes the issue (a rough command sequence follows below).
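For anyone testing, a sketch of the sequence, assuming a ports tree under /usr/ports and the attachment saved locally as patch-MDEV-26537 (the file name and paths are illustrative; adjust them to your setup and stop the server before swapping binaries):

# stop the running server first
service mysql-server stop
# place the patch where the port's framework picks it up
cp patch-MDEV-26537 /usr/ports/databases/mariadb103-server/files/
cd /usr/ports/databases/mariadb103-server
make clean build
make deinstall reinstall
service mysql-server start

Checking the server log after the restart for the old assertion failure is the quickest way to confirm whether the patched build behaves differently under the workload that used to crash.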
(In reply to Oclair from comment #19) My bad, wrong MDEV number; the appropriate patch is MDEV-26537 (https://jira.mariadb.org/browse/MDEV-26537). That's the source for the attached patch.
Thank you Bernard. I compiled the port you committed and installed it. I replayed the same scenario that crashed every time before, and the very good news is that it didn't crash with this patch. I will keep replaying it to be sure. Thank you again.
I can confirm that 10.5 no longer crashes for me. Thank you very much.
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/ports/commit/?id=c7054cfdf84443f275ef3979680e21c5ee61dee1

commit c7054cfdf84443f275ef3979680e21c5ee61dee1
Author:     Bernard Spil <brnrd@FreeBSD.org>
AuthorDate: 2021-09-12 12:02:39 +0000
Commit:     Bernard Spil <brnrd@FreeBSD.org>
CommitDate: 2021-09-12 12:04:19 +0000

    databases/mariadb103-server: Fix DB corruption

    * InnoDB corrupts files due to incorrect st_blksize calculation

    PR:            257728, 257958
    Reported by:   mfechner, iron udjin gmail com
    Obtained from: https://jira.mariadb.org/projects/MDEV/issues/MDEV-26537
    MFH:           2021Q3

 databases/mariadb103-server/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/ports/commit/?id=15c1622ad8fa4271d5f18831528af4d5b215e79e

commit 15c1622ad8fa4271d5f18831528af4d5b215e79e
Author:     Bernard Spil <brnrd@FreeBSD.org>
AuthorDate: 2021-09-12 12:00:07 +0000
Commit:     Bernard Spil <brnrd@FreeBSD.org>
CommitDate: 2021-09-12 12:04:19 +0000

    databases/mariadb104-server: Fix DB corruption

    * InnoDB corrupts files due to incorrect st_blksize calculation

    PR:            257728, 257958
    Reported by:   mfechner, iron udjin gmail com
    Obtained from: https://jira.mariadb.org/projects/MDEV/issues/MDEV-26537
    MFH:           2021Q3

 .../mariadb103-server/files/patch-MDEV-26537 (new) | 126 +++++++++++++++++++++
 databases/mariadb104-server/Makefile               |   1 +
 .../mariadb104-server/files/patch-MDEV-26537 (new) | 126 +++++++++++++++++++++
 3 files changed, 253 insertions(+)
A commit in branch 2021Q3 references this bug:

URL: https://cgit.FreeBSD.org/ports/commit/?id=56dddf512791ce4c73a9ecb62f822732699afeaf

commit 56dddf512791ce4c73a9ecb62f822732699afeaf
Author:     Bernard Spil <brnrd@FreeBSD.org>
AuthorDate: 2021-09-12 12:00:07 +0000
Commit:     Bernard Spil <brnrd@FreeBSD.org>
CommitDate: 2021-09-12 12:06:33 +0000

    databases/mariadb104-server: Fix DB corruption

    * InnoDB corrupts files due to incorrect st_blksize calculation

    PR:            257728, 257958
    Reported by:   mfechner, iron udjin gmail com
    Obtained from: https://jira.mariadb.org/projects/MDEV/issues/MDEV-26537
    MFH:           2021Q3

    (cherry picked from commit 15c1622ad8fa4271d5f18831528af4d5b215e79e)

 .../mariadb103-server/files/patch-MDEV-26537 (new) | 126 +++++++++++++++++++++
 databases/mariadb104-server/Makefile               |   1 +
 .../mariadb104-server/files/patch-MDEV-26537 (new) | 126 +++++++++++++++++++++
 3 files changed, 253 insertions(+)
A commit in branch 2021Q3 references this bug:

URL: https://cgit.FreeBSD.org/ports/commit/?id=6b48974b1abf27d0c2f51c3bb0b730ac58b7bb4f

commit 6b48974b1abf27d0c2f51c3bb0b730ac58b7bb4f
Author:     Bernard Spil <brnrd@FreeBSD.org>
AuthorDate: 2021-09-12 12:02:39 +0000
Commit:     Bernard Spil <brnrd@FreeBSD.org>
CommitDate: 2021-09-12 12:07:53 +0000

    databases/mariadb103-server: Fix DB corruption

    * InnoDB corrupts files due to incorrect st_blksize calculation

    PR:            257728, 257958
    Reported by:   mfechner, iron udjin gmail com
    Obtained from: https://jira.mariadb.org/projects/MDEV/issues/MDEV-26537
    MFH:           2021Q3

    (cherry picked from commit c7054cfdf84443f275ef3979680e21c5ee61dee1)

 databases/mariadb103-server/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
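For context on the root cause named in the commit messages above ("incorrect st_blksize calculation"), here is a minimal, hypothetical C++ sketch; it is not the actual InnoDB code or the MDEV-26537 patch, just an illustration of why deriving a write granularity from fstat(2)'s st_blksize misbehaves on ZFS, where st_blksize reflects the dataset recordsize (commonly 128K) rather than a small sector size:

#include <sys/stat.h>
#include <cstdio>

// Hypothetical helper: round a write size up to a multiple of "block".
// This kind of rounding is only safe when "block" is a real sector size.
static size_t round_up(size_t n, size_t block)
{
    return ((n + block - 1) / block) * block;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <file-on-the-datadir-filesystem>\n", argv[0]);
        return 1;
    }
    struct stat st;
    if (stat(argv[1], &st) != 0) {
        std::perror("stat");
        return 1;
    }
    const size_t page = 16 * 1024;               // InnoDB default page size
    const size_t blk  = (size_t) st.st_blksize;  // ~4K on UFS, typically 128K on ZFS

    std::printf("st_blksize          = %zu bytes\n", blk);
    std::printf("16K page rounded up = %zu bytes (%zu pages)\n",
                round_up(page, blk), round_up(page, blk) / page);
    // If a server sized its writes this way, the ZFS case would touch
    // 8 pages for every 1-page write and clobber the neighbouring pages.
    return 0;
}

On a UFS-backed datadir the reported value is small and the rounding is harmless, while on a ZFS-backed one a 16K page rounded up to 128K spans eight pages, which fits the earlier observation in this thread that only the ZFS servers corrupted their tablespaces.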
Finally closing this, thanks for your help in tickets and testing!
I guess we need this patch for 10.5 as well, according to the MDEV-26388 report.
(In reply to iron.udjin from comment #29) Read the comments there: the fix is this one.