Bug #109633: Display DB (Schema) name the thread is using in the logs during a crash
I confirm the code being submitted is offered under the terms of the OCA, and that I am authorized to contribute it.
Bug Reference: https://bugs.mysql.com/bug.php?id=109633
If a thread in MySQL causes a segfault, the signal handler outputs information about that thread, including the query it was executing, but not the DB it was using. Below is an example of this message:
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fd94c421e50): select city,state,sleep(10) from offices
Connection ID (thread ID): 13
Status: NOT_KILLED
This information does not contain the DB name the thread was using. If the MySQL instance has many DBs with identical table names, it can be difficult to determine which DB the query was executing against. As a result, identifying a corrupted table or index cannot be done easily.
Testing:
Ran a slow query with a sleep in it:
select city,state,sleep(10) from offices;
Found the OS thread ID in performance_schema.threads
Next sent a SIGSEGV to the thread with kill
Then captured the output in the error log.
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3) [0x7fd9c619bfa3]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7fd9c596d06f]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fd94c421e50): select city,state,sleep(10) from offices
DB (7fd94c324ce0): classicmodels
Connection ID (thread ID): 13
Status: NOT_KILLED
Revert "Bug #33897859: Unexpected behaviour seen with revoking a privilege from role."
The bugfix breaks replication from an older server version to a patched server in specific cases.
This reverts commit 894c8e8fdef98c2b0e83c5444ff8932dce402832.
RB #27795
Merge branch 'mysql-8.0' into mysql-trunk Change-Id: Ie39ce6b168a1ae073650553610ba7907e57205f7
Bug#34163987 enable dev-entitlements on macosx
On MacOS 11, specifying --core-file doesn't generate a core-dump on crash even with:

$ ulimit -c unlimited
$ chmod a+rw /cores/
$ sysctl kern.coredump=1
Coredumps are limited to processes which have the get-task-allow entitlement.
The entitlement allows other processes to attach and read/modify the processes memory.
On MacOS, add the 'get-task-allow' entitlement to all executables if the new cmake option WITH_DEVELOPER_ENTITLEMENTS is explicitly enabled.
Change-Id: Ia6eb8dc86f7996d52667010c9df79c715a0856ed
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I2d63652e6b1b6fc05e3856560edf5394ed208233
Bug#34190004 Add support for el9 RPMS
Change-Id: I1a45d25aaff9b31ff03dce611aa37ae28d36e540
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: If823c25bc295e1d3726284fa0c7e9124f2a47657
Bug#34131402: uint Filesort::make_sortorder(ORDER *, bool): Assertion `count > 0' failed.
An assert failure was seen in some aggregated queries that had an ORDER BY clause that was made redundant by an equality predicate, and the equality predicate could not be pushed down as a table predicate. (In practice, this means the predicate had to be an IS NULL predicate on a column in the inner table of an outer join, and the ORDER BY clause referenced the same column.)
The assert failure was raised because the query plan contained a SORT path for the ORDER BY clause, but the sort key of that path was empty.
The SORT path was added because LogicalOrderings::DoesFollowOrder(), when called in ApplyDistinctAndOrder(), was not capable of seeing that the AGGREGATE path already had the order required by the ORDER BY clause. This happens because, during construction of the state machines in LogicalOrderings, it is assumed that a grouping never needs to become an ordering. And it is true that, in general, a grouping cannot become an ordering without sorting. It is, however, possible that a grouping satisfies an ordering when an equality predicate ensures that one of the columns has only one specific value.
LogicalOrderings is usually capable of seeing such trivial orderings, even after grouping, because these predicates tend to be pushed down as table predicates which result in always active functional dependencies in the interesting orderings machinery. Orderings that come from always active functional dependencies are baked into the initial state of the resulting DFSM. The problem here was that the predicate could not be pushed down, and its corresponding functional dependency is therefore not always active.
The assert failure is fixed by not adding a SORT node when we have found that the entire ORDER BY clause is redundant, and not even checking what DoesFollowOrder() says.
One might argue that a more correct fix would have been to improve the state machine in LogicalOrderings so that DoesFollowOrder() doesn't give a false negative in this case. And it is entirely possible to add edges from the interesting groupings to these trivial orderings in the state machine in order to get the correct answer. It is not done in this patch because:
It's not immediately obvious that it would have benefits for anything other than corner cases, so it's better to keep the complexity of the state machine lower, at least until we find a case where the benefit is more obvious.
The extra check added in this patch is needed even if we improve the state machine. This is because the size of the state machine is capped (see kMaxNFSMStates and kMaxDFSMStates) to avoid runaway computation if the query is very complex. We must therefore always account for the possibility of a false negative answer from DoesFollowOrder(), simply because the state machine might be incomplete.
Change-Id: Ie81b4ad6e7b69766c982ccf628282eb6fbdbcca3
Bug #34155862 comp_err could check if the output content is changed before rewriting the file
comp_err generates the message files from source txt files. It is invoked on any change in these source files, and it then generates new data and always overwrites the output files. This triggers a lot of objects to be recompiled. However, C++ files do not use the error message texts, only the headers with error names. If comp_err checked whether the output differs from the current content of a file before rewriting it, it could save a lot of incremental compilation time for developers when only message texts are changed.
Change-Id: I5f2a4b0455ae7687952018de9f5f61edeead8854
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I2babd349e978e574c266d03fd56d4b2575de5525
Bug#34226926 CertificateHandlerTest.create_fail segfaults with openssl 3.0.3
Ubuntu 22.04 and Debian ship with openssl 3.0.x and may include a patch which breaks EVP_gen_rsa() if the library isn't initialized.
Ref:
Change-Id: Ib4aa6f6c9a9a1f0f0c8983ec2d2427b48146a815
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I98e808084aba0cdbc97afb497b3ddc31129432f9
Bug#34227321 make mysqlrouter_all does not build integration tests
The integration tests (routertest_integration_*) are not built by the "mysqlrouter_all" target, which may lead to build failures before push, as an extra step is needed to build the integration tests.
Change-Id: Ib26c0dec4a828f6eccc863ec601cd6933db8bc25
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I056c61858226bc5d851bdb72165bed9eb1204e94
Bug#33824058 NDB_BUG17624736 FAILS IN PB2 #1 [noclose]
The test "ndb.ndb_bug17624736" was constantly failing in [daily|weekly]-8.0-cluster branches in PB2, whether on ndb-ps or ndb-default-big profile test runs. The high-level reason for the failure was the installation of a duplicate entry in the Data Dictionary with respect to the engine-se_private_id pair, even when the previous table definition should have been dropped.

NDB reuses the least available ID for the dictionary table ID. The ID is then used by the NDB plugin to install as the SE private ID field of the MySQL Data Dictionary table definition. If a problem occurs during the synchronization of NDB table definitions in the Data Dictionary (i.e., a previous definition was not successfully removed), then an attempt to install a table using an already installed SE private ID can occur. If that ID was inadvertently cached as missing, then the function acquire_uncached_table_by_se_private_id will return fast without retrieving the table definition. Therefore, the old table definition on that ID can never be retrieved for that Data Dictionary client instance, the new one won't be installed, and errors will be raised.
For the NDB plugin to query a table definition by SE private ID without causing a missing entry to be cached forever for that client instance, this patch adds a flag argument to the function, allowing the caller to request that the fast cache be skipped.
Change-Id: I45eef594ee544000fe6b30b86977e5e91155dc80
Bug#33824058 NDB_BUG17624736 FAILS IN PB2 #2
The test "ndb.ndb_bug17624736" was constantly failing in [daily|weekly]-8.0-cluster branches in PB2, whether on ndb-ps or ndb-default-big profile test runs. The high-level reason for the failure was the installation of a duplicate entry in the Data Dictionary with respect to the engine-se_private_id pair, even when the previous table definition should have been dropped.

When data nodes fail and need to reorganize, the connected MySQL servers start to synchronize the schema definitions in their own Data Dictionary. The se_private_id for NDB tables installed in the DD is the same as the NDB table ID, hereafter referred to as just the ID, and thus an engine-se_private_id pair is installed in tables.engine. It is common for tables to be updated with different IDs, such as when an ALTER TABLE or a DROP/CREATE occurs. The previous table definition, fetched by the table's fully qualified name ("schema.table" format), is usually sufficient to be dropped so that the new table can be installed with the new ID, since it is assumed that no other table definition is installed with that ID. However, in the synchronization phase, if the data node failure caused a previous table definition of a different table than the one to be installed to still exist with the ID to be installed, then that old definition won't be dropped and a duplicate entry warning will be logged on the THD.
Example:
t1 - id=13, version=1
t2 - id=15, version=1
t1 - id=9, version=2
t2 - id=13, version=2 (previous def=15, but ndbcluster-13 still exists)
One of the reasons for the error is that Ndb_dd_client::install_table uses the name to fetch the previous definition, while Ndb_dd_client::store_table uses the ID instead. Also, Ndb_dd_client::install_table should be able to drop the required table definitions in the DD in order to install the new one, as dictated by the data nodes. It was just dropping the one found by the name of the table to be installed.
The solution was to add a check: if the ID to be installed differs from the previous one, check whether an old table definition already exists with that ID, and if so, drop it as well.
Additionally, some renaming (object_id to spi, referring to se_private_id) and a new struct were employed to make it simpler to keep the pair (ID, VERSION) together and to install both on the new table definition's SE fields.
Change-Id: Ie671a5fc58646e02c21ef1299309303f33173e95
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ie1a49e2c1afa158d680d7d72ab2953bd486e202f
Bug25364178: XA PREPARE INCONSISTENT WITH XTRABACKUP
Post push fix for sporadic failure of flush_read_lock.test
BUG#28022129: NOW() DOESN'T HONOR NO_ZERO_DATE SQL_MODE
CREATE TABLE..SELECT generates zero date as default value for temporal columns in STRICT, NO_ZERO_DATE mode when the columns are not based on source table columns directly.
CREATE TABLE..SELECT generates implicit default values for columns without a default value, which in the case of temporal columns contain a zero date component. The implicit defaults generated are marked as explicit default values (because the flag 'NO_DEFAULT_VALUE_FLAG' is not set), thus skipping the checks which validate the default value based on the SQL mode. Hence tables with zero dates are created in strict mode.
The function "create_table_from_items()" has been modified to mark columns having a date component which are based on expressions, and not on source table columns directly, as having no default value in strict mode, i.e., the 'NO_DEFAULT_VALUE_FLAG' flag is set in such cases. This ensures that tables are created without a zero date as the default value.
Bug#28483283: BACKPORT BUGFIX 26929724 TO MYSQL 5.5 AND 5.6
Description: MYISAM index corruption occurs for bulk insert or repair table which involves "repair by sorting" algorithm.
Analysis: The index corruption happens because of the incorrect sorting done by "my_qsort2()". This happens for a bulk insert with more than 450001959 rows or repair table with more than 450001959 rows. In 5.7, "my_qsort2()" is replaced by std::sort() as part of Bug#26929724.
Fix:- Backported the Bug#26929724 fix to ensure the MyISAM repair by sorting algorithm uses std::sort().
Merge branch 'mysql-5.6' into mysql-5.7
BUG#28022129: NOW() DOESN'T HONOR NO_ZERO_DATE SQL_MODE
Post push fix for test failure.
Bug#28897799 BACKPORT TO 5.7: WORKAROUND ASAN BUG FOR TIRPC
This is a backport of: Bug#28785835 WORKAROUND ASAN BUG FOR TIRPC
Sun RPC and XDR are being removed from glibc into a separate libtirpc library. This is not compatible with libasan: the interceptor functions inserted into the code will segfault. As a workaround, do LD_PRELOAD=/lib64/libtirpc.so. For a dynamically linked libasan (the default for gcc), we must preload that as well.
This currently affects fedora, but will likely also affect other linux variants in the future.
This patch also enables ASAN suppressions in asan.supp and LSAN suppressions in lsan.supp.
Change-Id: I1bc777482535ce21595a48ae12679c325667d722 (cherry picked from commit aa9698313b6d412265608fe9f1ffc05938448f81)
Merge branch 'mysql-5.7' into mysql-5.7-wl12005
Merge branch 'mysql-5.7-wl12005' of myrepo.no.oracle.com:mysql into mysql-5.7-wl12005
Bug#26997096: RELAY_LOG_SPACE IS INACCURATE AND LEAKS
The Relay_Log_Space variable shown in SHOW SLAVE STATUS is sometimes much higher than the actual disk space used by relay logs.
This is because we were not writing to Relay_log_info::log_space_total in a synchronized manner, i.e., no lock was taken by the IO thread while updating the variable.
Relay_log_info::log_space_total is now guarded by Relay_log_info::log_space_lock, which protects against concurrent updates of Relay_log_info::log_space_total.
Merge branch 'mysql-5.6' into mysql-5.7
Bug#28900691 BACKPORT BUG#28443958 TO MYSQL-5.7
When the certification information was too big to transmit, an event was generated that caused failures in all group members. To avoid this, we no longer send this information when its size is too big; instead we encode an error that will make the joiner leave the group.
Bug #27595603: SETTING SYSTEM VARIABLE CAN CAUSE SERVER EXIT
The fix rejects malformed assignments with a syntax error.
Merge branch 'mysql-5.5' into mysql-5.6
Merge branch 'mysql-5.6' into mysql-5.7
Bug #25633994: WRONG OOM CHECK
The patch has fixed wrong OOM checks.
Change-Id: I19c7c3cc54a0aac8996c101aa5d88278d35914e3
Bug #28531922: KEYRING TESTS ARE FAILING FOR VALGRIND RUN ON PB2 5.7
Description :- Keyring aws tests are failing for valgrind runs on PB2 daily-5.7 and weekly-5.7
Fix :- Valgrind suppressions were added to resolve the keyring aws valgrind test failures.
Bug#33674059: Inconsistency in P_S COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE BUG#33602354: COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE doesn't show actual delay
Description: View_change_log_event, despite being queued on the applier channel, is applied through the recovery channel. There is a race between the decrement in decrement_transactions_waiting_apply and the increment done by the applier_module. Most of the time the decrement happens first and is therefore lost, leaving transactions_waiting_apply positive because of the lost decrement.
Resolution: The decrement for the recovery channel has been moved to handle_recovery_message. This adds the network delay, allowing the applier channel to increment transactions_waiting_apply first. Additionally, when the applier_module is not busy, the counter is reset to 0.
ReviewBoard: 27627
Merge branch 'mysql-8.0' into mysql-trunk
Bug #33732838 Bundle the openssl command line binary into the test archive & expose it in mtr
For builds with "custom" OpenSSL: -DWITH_SSL=<path/to/custom/openssl> we now look for the 'openssl' binary, and copy it into the build tree.
We don't want to install a binary called 'openssl' in a public bin/ directory, so we rename it to 'my_openssl'. 'my_openssl' is also INSTALLed as part of the Test component.
Extend mysql-test-run.pl to look for my_openssl in the build/install directories, and set environment variable OPENSSL_EXECUTABLE. Use this variable in all mtr tests that need to invoke openssl.
Fix regexps in check_openssl_version.inc. The pattern my $search_pattern_1= "0.9.*" matched '019' in the output of './bin/openssl version' (OpenSSL 1.1.1d 10 Sep 2019), and several tests were incorrectly skipped.
Remove all usage of 'have_openssl_binary.inc'.
Change-Id: Ib7f48acbc8f604e493cc84dcf9315f6c9abf1ea5
Bug #33986826 Clean up the 'package' target for Xcode builds
This command for an Xcode build:

cmake --build . --config Debug --target package

may, or may not, fail depending on CMAKE_BUILD_TYPE.
Clean up the INSTALL(FILES ...) commands for copying the crypto/ssl libraries on macOS.
Change-Id: I5852266666b782c2bb1d34a2e35f8de7655dcfa4
WL#14611 BACKPORT SHOW PROCESSLIST TO 5.7
Improve test robustness for:
Approved by: Chris Powers chris.powers@oracle.com
Merge branch 'mysql-5.7' into mysql-8.0
Merge branch 'mysql-8.0' into mysql-trunk
Bug #33643149: mysqltest crashes with async client and shutdown commands
The following asserts in mysql_send_query_nonblocking fail in some cases:

assert(async_context->async_qp_data == nullptr);
assert(async_context->async_qp_data_length == 0);
The buffer async_context->async_qp_data is expected to be empty before being allocated again in mysql_prepare_com_query_parameters. Looking more generally, the buffer is not needed when async_query_state == QUERY_IDLE. Sometimes the buffer is cleaned together with setting QUERY_IDLE, and sometimes it is not.
A dirty fix would be to clean the buffer just before the mysql_prepare_com_query_parameters call, or just after setting QUERY_IDLE. The implemented solution couples setting the flag and cleaning the buffer in a single function. A possible further enhancement is to use this function throughout the whole file (at the moment only the failing mysql_prepare_com_query_parameters was fixed). The implemented functions are inline to suggest that the compiler optimize away the extra function calls.
RB #27763 Approved by Bharathy X Satish Approved by Marek Szymczak
Merge branch 'mysql-8.0' into mysql-trunk
Bug#33797357 ndbapi [-Wclass-memaccess] [noclose]
Instead of using memset to initialize some objects, make sure the default constructor initializes all members.
In cases where the object is already constructed and is reset to a newly constructed state, use assignment from a temporary default-constructed object instead of memset.
Also removed the unused trp_node::operator==, since it did not compare the m_info members, for an unknown reason, and also compared m_state using memcmp, which is not valid since NodeState has a union part that is only defined for some start levels.
Removed some unused members in NodeState::stopping, and added a compat member to keep the size of signal GSN_API_REGCONF unchanged.
Change-Id: I2dc977a568ac46e2bb906dec0416fe92c8dfbfe9
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: I23cb00b7fe93b6345003e4ca243192cc767bfc50
Testcase for BUG#33001701: VERIFY STATEMENTS THAT GENERATE GTID IF THE SERVER IS IN SUPER_READ_ONLY MODE. RB#26691
Bug#33952115: Semijoin may give wrong result Bug#33957233: Incorrect inner hash join when using materialization
These two bugs have the same cause. The symptom of the problem is incorrect results from execution of a semi-join with materialization. It may happen when there is an equality in the WHERE clause of the subquery. In some cases, such as when one side of the equality is an IN or NOT IN subquery, the equality is neither pushed down to the materialized subquery nor evaluated as part of the semi-join. This happens because the equality remains in the semi-join evaluation, and its field references are only partially substituted with field references from the materialized subquery. The problem with this is that the equality gets a table map that contains tables from both the outer tables and the subquery tables, and the optimizer is unable to push such a condition into any table condition.
The solution, however, is to realize that such conditions can be evaluated when materializing the subquery: if the condition's table map only contains tables from the subquery, no substitution should take place, and the condition will be properly pushed down to the subquery materialization.
Another small fix was also included: for a triggered condition, we had forgotten to add the triggered function's tables in fix_after_pullout().
Change-Id: I4fc1a2d0a280e3675ea1824ef88f078ee667b23e
Bug #33451101: Plugins can delete system variables not owned by them
When a plugin tried to register a system variable using a duplicate name of an existing variable, it could cause two issues:
A plugin deleted an existing static (compiled-in) system variable with a duplicate name. For example, if the plugin name is "sql" and the variable name is "mode", the INSTALL PLUGIN command hid the @@sql_mode variable.
If the duplicate was not the first variable in the list of registered variables, the UNINSTALL PLUGIN command left dangling pointers into the freed memory of the unloaded plugin. For example, if the plugin "sql" registers variables "mode2" and "mode", the sequence of commands "UNINSTALL PLUGIN..." and "SELECT @@sql_mode2" caused a failure.
Change-Id: I41f61824d65bd6c9ed3d4dd9a6165348f5ad28eb
Bug#33996132 option 'metadata_cache.ssl_ca' is not supported
When the Router is bootstrapped with any of the following parameters: --ssl-mode=disabled, --ssl-cipher=some, --tls-version=TLSv1.2, --ssl-ca=some, --ssl-capath=some, --ssl-crl=some, --ssl-crlpath=some, it adds a corresponding option to the metadata_cache section of the configuration file. The problem is that the Router no longer allows any of those options for metadata_cache after introducing WL#14823.
This patch adds those config options back as valid options for the metadata_cache section.
Change-Id: Ib9ade3e5d471ef83a7b8f5818931122c51527e81
Merge branch 'mysql-8.0' into mysql-trunk
Change-Id: Ief0fb0eda77ff7f8ba4b5d1489e0a2647a8a1ac7