Primary source of truth for the Docker "Official Images" program
auth-request allows you to add access control to your HTTP services based on a subrequest to a configured HAProxy backend.
Fix strlen error message param name
Fix [-Wstrict-prototypes] in DBA
I've missed this case while doing all the other ones.
Remove unnecessary workaround for the true type
mb_encode_mimeheader does not crash if provided encoding has no MIME name set
Merge branch 'PHP-8.1' into PHP-8.2
Merge branch 'PHP-8.2'
Enable GitHub actions cancel-in-progress for PRs
Pushing many commits to a pull request in a short amount of time can stall the merge builds and also wastes energy unnecessarily. Enable concurrency to cancel workflows of outdated commits in pull requests. The group name is derived from github.event.pull_request.url for pull requests, falling back to github.run_id for branch builds; the latter is unique and always available.
Closes GH-10799
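The described behavior maps onto the workflow-level concurrency setting roughly like this (a sketch of the configuration described above, not necessarily the verbatim committed workflow):

```yaml
concurrency:
  # All pushes to the same PR share its URL, so a newer push cancels the
  # older run; branch builds fall back to the unique run_id and are
  # therefore never cancelled by each other.
  group: ${{ github.event.pull_request.url || github.run_id }}
  cancel-in-progress: true
```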
Merge branch 'PHP-8.1' into PHP-8.2
Merge branch 'PHP-8.2'
Fix readonly+clone JIT issues
Closes GH-10748
*.m4: update main() signatures.
The next generation of C compilers is going to enforce the C standard more strictly:
https://wiki.gentoo.org/wiki/Modern_C_porting
One warning that will soon become an error is -Wstrict-prototypes. This is relatively easy to catch in most code (it will fail to compile), but inside of autoconf tests it can go unnoticed because many feature-test compilations fail by design. For example,
$ export CFLAGS="$CFLAGS -Werror=strict-prototypes"
$ ./configure
...
checking if iconv supports errno... no
configure: error: iconv does not support errno
(this is on a system where iconv does support errno). If errno support were optional, that test would have "silently" disabled it. The underlying issue here, from config.log, is
conftest.c:211:5: error: function declaration isn't a prototype [-Werror=strict-prototypes]
  211 | int main() {
This commit goes through all of our autoconf tests, replacing main() with main(void). Up to equivalent types and variable renamings, that's one of the two valid signatures, and satisfies the compiler (gcc-12 in this case).
Fixes GH-10751
ext/iconv/config.m4: add missing stdio.h include.
The next generation of C compilers is going to enforce the C standard more strictly:
https://wiki.gentoo.org/wiki/Modern_C_porting
One warning that will eventually become an error is -Wimplicit-function-declaration. This is relatively easy to catch in most code (it will fail to compile), but inside of autoconf tests it can go unnoticed because many feature-test compilations fail by design. For example,
AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <iconv.h>]], [[iconv_ccs_init(NULL, NULL);]])]...
is designed to fail if iconv_ccs_init() is not in iconv.h. On the other hand,
AC_RUN_IFELSE([AC_LANG_SOURCE([[
  #include <iconv.h>
  int main() {
    printf("%d", _libiconv_version);
    return 0;
  }
should pass if _libiconv_version is defined. If the user has -Werror=implicit-function-declaration in his CFLAGS, however, it will not:
$ export CFLAGS="$CFLAGS -Werror=implicit-function-declaration"
$ ./configure
...
checking if using GNU libiconv... no
This is because the stdio.h header that defines printf() is missing:
conftest.c:240:3: error: implicit declaration of function 'printf' [-Werror=implicit-function-declaration]
  240 |   printf("%d", _libiconv_version);
      |   ^~~~~~
conftest.c:239:1: note: include '<stdio.h>' or provide a declaration of 'printf'
This commit adds the include, correcting the test for any compiler that balks at implicit function declarations.
Closes GH-10751
RFC: Saner array_(sum|product)() (#10161)
RFC: https://wiki.php.net/rfc/saner-array-sum-product
Moreover, the internal fast_add_function() function was removed.
[skip ci] Update UPGRADING for saner array_(sum|product)() RFC
Imply UTF8 validity in implode function (#10780)
Sets the UTF-8 valid flag if all parts are valid, or numeric (which are valid UTF-8 by definition).
remove useless comments
Imply UTF8 validity in implode function
revert zend_string_dup change
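The flag propagation described above can be sketched as follows (a toy model with illustrative names, not the Zend API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: a string carries a "known valid UTF-8" flag, as zend_strings
 * do. implode() may mark its result valid when the glue and every part
 * are already known valid; numeric parts are ASCII and therefore valid
 * UTF-8 by construction. */
struct str { const char *s; int valid_utf8; };

static int implode_result_valid(const struct str *glue,
                                const struct str *parts, size_t n)
{
    if (!glue->valid_utf8)
        return 0;
    for (size_t i = 0; i < n; i++)
        if (!parts[i].valid_utf8)
            return 0;   /* one part of unknown validity: leave flag unset */
    return 1;           /* concatenation of valid UTF-8 is valid UTF-8 */
}
```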
Fix GH-8646: Memory leak PHP FPM 8.1
Fixes GH-8646. See https://github.com/php/php-src/issues/8646 for a thorough discussion.
Interned strings that hold class entries can get a corresponding slot in map_ptr for the CE cache. map_ptr works like a bump allocator: there is a counter which increases to allocate the next slot in the map.
For class name strings in non-opcache we have:
Notice that the map_ptr layout always has the permanent strings first, and the request strings after. In non-opcache, a request string may get a slot in map_ptr, and that interned request string gets destroyed at the end of the request. The corresponding map_ptr slot can thereafter never be used again. This causes map_ptr to keep reallocating to larger and larger sizes.
We solve it as follows: We can check whether we had any interned request strings, which only happens in non-opcache. If we have any, we reset map_ptr to the last permanent string. We can't lose any permanent strings because of map_ptr's layout.
Closes GH-10783.
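The mechanism above can be modeled with a toy bump allocator (illustrative names, not the real Zend map_ptr API):

```c
#include <assert.h>

/* Toy model of the fix: map_ptr is a bump allocator. Permanent
 * (interned-at-startup) strings get their slots first, request-time
 * interned strings after them. */
static int map_ptr_count = 0;       /* next free slot */
static int map_ptr_permanent = 0;   /* high-water mark of permanent slots */
static int had_request_strings = 0;

static int map_ptr_new(int permanent)
{
    if (permanent)
        map_ptr_permanent = map_ptr_count + 1;
    else
        had_request_strings = 1;    /* only happens without opcache */
    return map_ptr_count++;
}

/* End of request: the request strings owning the trailing slots are
 * destroyed, so roll the counter back to the last permanent slot rather
 * than leaking the slots and reallocating the map ever larger. */
static void map_ptr_end_request(void)
{
    if (had_request_strings) {
        map_ptr_count = map_ptr_permanent;
        had_request_strings = 0;
    }
}
```

After two permanent and two request slots, map_ptr_end_request() brings the counter back to 2, so the next request reuses the freed slots.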
Fix GH-8065: opcache.consistency_checks > 0 causes segfaults in PHP >= 8.1.5 in fpm context
Disable opcache.consistency_checks.
This feature does not work right now and leads to memory leaks and other problems. For analysis and discussion see GH-8065. In GH-10624 it was decided to disable the feature to prevent problems for end users. If end users wish to get some consistency guarantees, they can rely on opcache.protect_memory instead.
Closes GH-10798.
Merge branch 'PHP-8.1' into PHP-8.2
Re-add some CTE functions that were removed from being CTE by mistake
These functions were accidentally removed from being CTE in GH-7780. This patch brings them back.
Closes GH-10768.
Merge branch 'PHP-8.2'
Fix GH-10248: Assertion `!(zval_get_type(&(*(property))) == 10)' failed.
The assertion failure was triggered in a debug code-path that validates property types for internal classes. zend_verify_internal_read_property_type was called with retval being a reference, which is not allowed because that function eventually calls to i_zend_check_property_type, which does not expect a reference. The non-debug code-path already takes into account that retval can be a reference, as it optionally dereferences retval.
Add a dereference in zend_verify_internal_read_property_type just before the call to zend_verify_property_type, which is how other callers often behave as well.
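The crash and the fix can be modeled like this (a toy model, not the real zval layout; deref mirrors the engine's ZVAL_DEREF idiom):

```c
#include <assert.h>

/* Toy model: a value may be a reference cell, and the type check must
 * unwrap it first, exactly as the non-debug code path already did. */
enum vtype { T_LONG, T_REFERENCE };

struct val {
    enum vtype t;
    union { long lval; struct val *ref; } u;
};

/* Follow the reference, if any (the engine's ZVAL_DEREF idiom). */
static struct val *deref(struct val *v)
{
    return v->t == T_REFERENCE ? v->u.ref : v;
}

/* The debug check previously asserted on the raw value and aborted when
 * handed a reference; dereferencing before the check fixes that. */
static int check_long(struct val *v)
{
    v = deref(v);
    assert(v->t != T_REFERENCE);    /* the assertion that used to fire */
    return v->t == T_LONG;
}
```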
[ci skip] NEWS
Merge branch 'PHP-8.1' into PHP-8.2
[ci skip] NEWS
Merge branch 'PHP-8.2'
[ci skip] UPGRADING
GH-10149
Fix GH-10292: make the default value of the first parameter of srand() and mt_srand() nullable (#10380)
Co-authored-by: Tim Düsterhus timwolla@googlemail.com
[ci skip] random: Fix whitespace errors in randomizer.c
Use fast encoding conversion filters in mb_send_mail
Use smart_str as dynamic buffer for extra headers in mb_send_mail
Remove a duplicated length check and an always-false condition from exif
The latter condition will never trigger because otherwise the do-while loop wouldn't have exited.
Close GH-10402
mb_scrub does not attempt to scrub known-valid UTF-8 strings
Use RETURN_STR_COPY in mb_output_handler
This means the same thing and makes the code read a tiny bit better.
Thanks to Nikita Popov for the tip.
Honor constant expressions instead of just taking the last constant encountered in stubs
As an example: should be translated to: ZVAL_LONG(&attribute_Attribute_class_test_arg0, ZEND_ATTRIBUTE_TARGET_FUNCTION | ZEND_ATTRIBUTE_TARGET_METHOD);
Add a couple clarifying comments
sockets add AF_DIVERT constant.
Allows binding a socket to a divert port without being concerned with its address, for ipfw filtering purposes (see the SO_USER_COOKIE constant). FreeBSD only.
Close GH-10415.
Fix GH-8329 Print true/false instead of bool in error and debug messages (#8385)
strtok warns if the string to split was not set.
Close GH-10016.
exif: add a simple assert to the JPEG header parsing as a safety net, mostly in the context of a possible future change. Follow-up to GH-10402.
Close GH-10416.
[ci skip] UPGRADING
when using it e.g. in a background job.
Background jobs should not run within the ACP. This PR is likely correct, but that explanation does not fit.
Also, it would likely make sense to call pool_gc(p->table->pool) instead of pool_gc(NULL), so that it only tries to release the current table's pools and not all pools, since the other ones have no reason to be affected by the previous operation.
I've seen the fix committed (thanks!), but this was apparently not yet done. Mentioning to make sure it isn't accidentally overlooked.
solid in NotificationSettings. This makes the visual difference between “disabled” and “enabled” clearer, especially since the colored outline is fairly thin. Mockup (middle is the suggested version):
Everything is ticked off, closing here. Anything else can be resolved with dedicated issues (like #5371).
Allow users that may manage own articles to actually create articles
Allow users that may manage own articles to view them independent of publication status
Unify default publication status for users that may manage articles and users that may manage their own articles
Allow users that may contribute articles to see their own articles independent of publication status in lists
Unify visibility of articles in ACP's article list with frontend
Check edit permissions before showing edit link in ACP's article list
Merge pull request #5372 from WoltLab/article-permissions
Fix several article related permissions
Merge branch '5.5'
ps: Actually I can't find any occurrence of stop_time elsewhere in the code, likely related to eb77824
Not sure if you meant me, but I can confirm that ->stop_time is never written to.
Most notably, the admin.content.article.canManageOwnArticles permission was effectively broken without these changes.
I'd like to avoid compiling a custom HAProxy if not necessary, but I'm happy to report back if the change makes it into a regular release (unfortunately too late for 2.7.6).
gcc 13.0.1
./haproxy -f .github/h2spec.config -c
Configuration file is valid
=================================================================
==180576==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 20 byte(s) in 1 object(s) allocated from:
#0 0x7f61704d8035 in __interceptor_realloc.part.0 (/lib64/libasan.so.8+0xd8035) (BuildId: a8eed40544b66c84b4dd6cb8b80129e58ebafc51)
#1 0x65fae2 in my_realloc2 include/haproxy/tools.h:1037
#2 0x65fae2 in memvprintf src/tools.c:4206
SUMMARY: AddressSanitizer: 20 byte(s) leaked in 1 allocation(s).
No response
haproxy -vv
HAProxy version 2.8-dev5-ac78c4-26 2023/03/17 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 6.2.6-300.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Mar 13 14:30:47 UTC 2023 x86_64
Build options :
TARGET = linux-glibc
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fsanitize=address -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment -Werror
OPTIONS = USE_OPENSSL=1
DEBUG = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS
Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH -MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT -PCRE -PCRE2 -PCRE2_JIT -PCRE_JIT +POLL +PRCTL -PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 -SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB
Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 3.0.8 7 Feb 2023
Running on OpenSSL version : OpenSSL 3.0.8 7 Feb 2023
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 13.0.1 20230310 (Red Hat 13.0.1-0) with address sanitizer
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG
fcgi : mode=HTTP side=BE mux=FCGI flags=HTX|HOL_RISK|NO_UPG
<default> : mode=HTTP side=FE|BE mux=H1 flags=HTX
h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG
<default> : mode=TCP side=FE|BE mux=PASS flags=
none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG
Available services : none
Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
I needed to reboot the box in question, because I messed up something while extracting data, but I believe the gdb stuff above confirms @Darlelet's suspicion.
$ gdb
GNU gdb (Debian 10.1-1.7) 10.1.90.20210103-git
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
(gdb) file /usr/sbin/haproxy
Reading symbols from /usr/sbin/haproxy...
Reading symbols from /usr/lib/debug/.build-id/36/a7d1d757be77f98cdca2bfac55f35b06928b59.debug...
(gdb) break manage_proxy
Breakpoint 1 at 0x189f00: file src/proxy.c, line 1985.
(gdb) commands 1
Type commands for breakpoint(s) 1, one per line.
End with a line saying just "end".
>silent
>bt 2
>c
>end
(gdb) break pool_gc
Breakpoint 2 at 0x1eed20: file src/pool.c, line 703.
(gdb) commands 2
Type commands for breakpoint(s) 2, one per line.
End with a line saying just "end".
>silent
>bt 2
>c
>end
(gdb) break malloc_trim
Breakpoint 3 at 0x50700
(gdb) commands 3
Type commands for breakpoint(s) 3, one per line.
End with a line saying just "end".
>silent
>bt 3
>c
>end
(gdb) attach 2668931
Attaching to program: /usr/sbin/haproxy, process 2668931
[New LWP 2668932]
[New LWP 2668933]
[New LWP 2668934]
[New LWP 2668935]
[New LWP 2668936]
[New LWP 2668937]
[New LWP 2668938]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f20b4dcbd56 in epoll_wait () from target:/lib/x86_64-linux-gnu/libc.so.6
(gdb) cont
Continuing.
[Switching to Thread 0x7f20ae771700 (LWP 2668935)]
#0 manage_proxy (t=0x55d28fa6b220, context=0x55d28eac4730, state=516) at src/proxy.c:1985
#1 0x000055d28daf544f in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:634
#0 pool_gc (pool_ctx=0x0) at src/pool.c:703
#1 0x000055d28daa70dc in manage_proxy (t=0x55d28fa6b220, context=0x55d28eac4730, state=<optimized out>) at src/proxy.c:2023
#0 0x00007f20b4d571f0 in malloc_trim () from target:/lib/x86_64-linux-gnu/libc.so.6
#1 0x000055d28db0bf0d in trim_all_pools () at src/pool.c:142
#2 pool_gc (pool_ctx=<optimized out>) at src/pool.c:724
[Switching to Thread 0x7f20acf6e700 (LWP 2668938)]
#0 manage_proxy (t=0x55d28fa6b220, context=0x55d28eac4730, state=516) at src/proxy.c:1985
#1 0x000055d28daf544f in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:634
#0 pool_gc (pool_ctx=0x0) at src/pool.c:703
#1 0x000055d28daa70dc in manage_proxy (t=0x55d28fa6b220, context=0x55d28eac4730, state=<optimized out>) at src/proxy.c:2023
#0 0x00007f20b4d571f0 in malloc_trim () from target:/lib/x86_64-linux-gnu/libc.so.6
#1 0x000055d28db0bf0d in trim_all_pools () at src/pool.c:142
#2 pool_gc (pool_ctx=<optimized out>) at src/pool.c:724
[Switching to Thread 0x7f20ae771700 (LWP 2668935)]
#0 manage_proxy (t=0x55d28fa6b220, context=0x55d28eac4730, state=516) at src/proxy.c:1985
#1 0x000055d28daf544f in run_tasks_from_lists (budgets=<optimized out>) at src/task.c:634
#0 pool_gc (pool_ctx=0x0) at src/pool.c:703
#1 0x000055d28daa70dc in manage_proxy (t=0x55d28fa6b220, context=0x55d28eac4730, state=<optimized out>) at src/proxy.c:2023
#0 0x00007f20b4d571f0 in malloc_trim () from target:/lib/x86_64-linux-gnu/libc.so.6
#1 0x000055d28db0bf0d in trim_all_pools () at src/pool.c:142
#2 pool_gc (pool_ctx=<optimized out>) at src/pool.c:724
^C
Thread 1 "haproxy" received signal SIGINT, Interrupt.
[Switching to Thread 0x7f20b4919180 (LWP 2668931)]
0x00007f20b4dcbd56 in epoll_wait () from target:/lib/x86_64-linux-gnu/libc.so.6
(gdb) detach
Detaching from program: /usr/sbin/haproxy, process 2668931
[Inferior 1 (process 2668931) detached]
(gdb) quit
$ strace -c -f -p 2668931
strace: Process 2668931 attached with 8 threads
^Cstrace: Process 2668931 detached
strace: Process 2668932 detached
strace: Process 2668933 detached
strace: Process 2668934 detached
strace: Process 2668935 detached
strace: Process 2668936 detached
strace: Process 2668937 detached
strace: Process 2668938 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
46.77 1.840549 3941 467 1 epoll_wait
43.82 1.724708 13 128509 sched_yield
8.82 0.346968 11 29696 madvise
0.58 0.023011 24 951 clock_gettime
0.01 0.000257 10 25 7 recvfrom
0.00 0.000033 11 3 sendto
0.00 0.000021 21 1 rt_sigreturn
0.00 0.000021 21 1 timer_settime
------ ----------- ----------- --------- --------- ----------------
100.00 3.935568 24 159653 8 total
$ nc -U /run/haproxy-master.sock
prompt
master> @!2668931
2668931> show sess
0x7f20904b3770: proto=tcpv6 src=*snip*:46250 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=9h42m calls=280 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=55] scb=[8,11h,fd=40] exp=42s rc=0 c_exp=
0x7f20906c05c0: proto=tcpv6 src=*snip*:46250 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=9h20m calls=269 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=55] scb=[8,11h,fd=78] exp=1m14s rc=0 c_exp=
0x7f2090189ad0: proto=tcpv6 src=*snip*:46250 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=9h20m calls=270 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=55] scb=[8,11h,fd=151] exp=1m14s rc=0 c_exp=
0x7f20a82a2520: proto=sockpair src=unix:2 fe=GLOBAL be=<NONE> srv=<none> ts=00 epoch=0 age=0s calls=1 rate=0 cpu=0 lat=0 rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008002h,i=0,an=00h,rx=,wx=,ax=] scf=[8,200h,fd=15] scb=[8,1h,fd=-1] exp= rc=0 c_exp=
0x7f209c3532c0: proto=tcpv6 src=*snip*:41584 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=10h47m calls=312 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=102] scb=[8,11h,fd=44] exp=1m14s rc=0 c_exp=
0x7f209c0ef150: proto=tcpv6 src=*snip*:39960 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=9h34m calls=277 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=305] scb=[8,11h,fd=279] exp=1m46s rc=0 c_exp=
Very strange, we don't see malloc_trim() here
That output is from strace, thus syscalls only. I don't expect to see malloc_trim there.
Do you use sticktables on one of your proxies?
Yes: https://www.haproxy.com/user-spotlight-series/using-haproxy-peers-for-real-time-quota-tracking/ :smile:
Does it last for a few seconds or for as long as it takes for the old process to exit?
The above snapshots were from a process that was soft-stopping for several hours by then. Here's another one. 2225288 is stopping for roughly 10 hours and only has a few connections left.
$ echo show proc |nc -U /run/haproxy-master.sock
*snip*
2668931 worker 11 0d09h51m26s 2.7.5-1~bpo11+1
2225288 worker 13 0d12h02m14s 2.7.5-1~bpo11+1
# programs
^C
$ strace -c -f -p 2225288
strace: Process 2225288 attached with 8 threads
^Cstrace: Process 2225288 detached
strace: Process 2225289 detached
strace: Process 2225290 detached
strace: Process 2225291 detached
strace: Process 2225292 detached
strace: Process 2225293 detached
strace: Process 2225294 detached
strace: Process 2225295 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
53.77 3.711778 5022 739 1 epoll_wait
37.20 2.568065 12 205264 sched_yield
8.43 0.581538 9 59044 madvise
0.58 0.039776 26 1496 clock_gettime
0.01 0.000990 24 40 12 recvfrom
0.00 0.000315 31 10 sendto
0.00 0.000033 16 2 rt_sigreturn
0.00 0.000032 16 2 timer_settime
------ ----------- ----------- --------- --------- ----------------
100.00 6.902527 25 266597 13 total
$ nc -U /run/haproxy-master.sock
prompt
master> @!2225288
2225288> show sess
0x7efd041a8cc0: proto=tcpv6 src=*snip*:40342 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=348 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=104] scb=[8,11h,fd=160] exp=1m46s rc=0 c_exp=
0x7efd04141e60: proto=tcpv6 src=*snip*:52280 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=11h45m calls=337 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m47s,wx=,ax=] scf=[8,0h,fd=669] scb=[8,11h,fd=1004] exp=1m15s rc=0 c_exp=
0x7efd0c0cac90: proto=tcpv6 src=*snip*:43266 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=348 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=436] scb=[8,11h,fd=251] exp=1m46s rc=0 c_exp=
0x7efd0c1c4cc0: proto=tcpv6 src=*snip*:43266 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=347 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m47s,wx=,ax=] scf=[8,0h,fd=436] scb=[8,11h,fd=94] exp=1m47s rc=0 c_exp=
0x7efd0c364960: proto=tcpv6 src=*snip*:50636 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=348 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=368] scb=[8,11h,fd=48] exp=1m46s rc=0 c_exp=
0x7efcf4064400: proto=tcpv6 src=*snip*:49812 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=346 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=468] scb=[8,11h,fd=309] exp=1m46s rc=0 c_exp=
0x7efd000621f0: proto=tcpv6 src=*snip*:38562 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=348 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=110] scb=[8,11h,fd=168] exp=1m15s rc=0 c_exp=
0x7efd0018db60: proto=tcpv6 src=*snip*:47358 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=11h44m calls=339 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=1340] scb=[8,11h,fd=825] exp=43s rc=0 c_exp=
0x7efcfc096010: proto=tcpv6 src=*snip*:49546 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=12h2m calls=348 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040000h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=439] scb=[8,11h,fd=380] exp=1m46s rc=0 c_exp=
0x7efcfc2d9660: proto=tcpv6 src=*snip*:54808 fe=https be=*snip* srv=*snip* ts=00 epoch=0 age=11h40m calls=336 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m46s,wx=,ax=] scf=[8,0h,fd=826] scb=[8,11h,fd=1076] exp=43s rc=0 c_exp=
0x7efcfc1ec840: proto=sockpair src=unix:2 fe=GLOBAL be=<NONE> srv=<none> ts=00 epoch=0 age=0s calls=1 rate=0 cpu=0 lat=0 rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008002h,i=0,an=00h,rx=,wx=,ax=] scf=[8,200h,fd=21] scb=[8,1h,fd=-1] exp= rc=0 c_exp=
This issue is not actionable, because it lacks the necessary information to debug it. The error message is part of an intentional check:
https://github.com/WoltLab/WCF/blob/6d269e22eaa69e3fc993ab97e9ca3aaa697b96c8/wcfsetup/install/files/lib/system/package/PackageInstallationDispatcher.class.php#L483
Consider dumping both of the values that are compared within the if()
condition. This will reveal how they are different.
@orlitzky You could probably just `cat` the `config.log` by adding it to the failing entry in push.yml after the configure step.
@iluuu1994 Should we add that as a step to all CI by default? It would automatically be collapsed anyway. Alternatively it could be uploaded as an artifact.
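For illustration, the two suggestions above might look roughly like the following workflow steps (the step names and the `config.log` path are assumptions, not taken from the actual push.yml):

```yaml
      # Hypothetical additions after the configure step in push.yml.
      - name: Show config.log on failure
        if: failure()
        run: cat config.log

      # Alternatively, keep the full log as a downloadable artifact.
      - name: Upload config.log
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: config-log
          path: config.log
```

The `cat` variant is collapsed by default in the Actions UI; the artifact variant keeps the full log available for download without bloating the job output.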
This is effectively a follow-up to #1874. Once the health checks and DNS lookups were no longer running, I noticed that old workers still require a non-trivial amount of CPU, despite handling only a single-digit number of almost-idle connections.
master> show proc
snip
1737902 worker 13 0d14h09m57s 2.7.5-1~bpo11+1
master> @!1737902
1737902> show sess
0x56334cfaf070: proto=tcpv6 src=snip:35124 fe=https be=snip srv=snip ts=00 epoch=0 age=13h44m calls=396 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m53s,wx=,ax=] scf=[8,0h,fd=465] scb=[8,11h,fd=528] exp=1m22s rc=0 c_exp=
0x7f22043065f0: proto=tcpv6 src=snip:48324 fe=https be=snip srv=snip ts=00 epoch=0 age=11h18m calls=326 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m53s,wx=,ax=] scf=[8,0h,fd=201] scb=[8,11h,fd=242] exp=1m22s rc=0 c_exp=
0x7f2204381a90: proto=sockpair src=unix:2 fe=GLOBAL be=<NONE> srv=<none> ts=00 epoch=0 age=0s calls=1 rate=0 cpu=0 lat=0 rq[f=c08000h,i=0,an=00h,rx=,wx=,ax=] rp[f=80008002h,i=0,an=00h,rx=,wx=,ax=] scf=[8,200h,fd=19] scb=[8,1h,fd=-1] exp= rc=0 c_exp=
0x7f21f82c9490: proto=tcpv6 src=snip:38754 fe=https be=snip srv=snip ts=00 epoch=0 age=12h20m calls=356 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m53s,wx=,ax=] scf=[8,0h,fd=120] scb=[8,11h,fd=77] exp=1m22s rc=0 c_exp=
0x7f21fc52c870: proto=tcpv6 src=snip:60942 fe=https be=snip srv=snip ts=00 epoch=0 age=11h52m calls=341 rate=0 cpu=0 lat=0 rq[f=48840080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m53s,wx=,ax=] scf=[8,0h,fd=58] scb=[8,11h,fd=143] exp=1m22s rc=0 c_exp=
0x7f21fc4a8f10: proto=tcpv6 src=snip:60942 fe=https be=snip srv=snip ts=00 epoch=0 age=11h52m calls=340 rate=0 cpu=0 lat=0 rq[f=48c40080h,i=0,an=8000h,rx=,wx=,ax=] rp[f=80040202h,i=0,an=4000000h,rx=1m53s,wx=,ax=] scf=[8,0h,fd=58] scb=[8,11h,fd=338] exp=1m22s rc=0 c_exp=
1737902> show info
Name: HAProxy
Version: 2.7.5-1~bpo11+1
Release_date: 2023/03/18
Nbthread: 8
Nbproc: 1
Process_num: 1
Pid: 1737902
Uptime: 0d 14h10m13s
Uptime_sec: 51013
Memmax_MB: 0
PoolAlloc_MB: 4
PoolUsed_MB: 4
PoolFailed: 0
Ulimit-n: 20457
Maxsock: 20457
Maxconn: 10000
Hard_maxconn: 10000
CurrConns: 4
CumConns: snip
CumReq: snip
MaxSslConns: 0
CurrSslConns: 9
CumSslConns: snip
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 0
ConnRateLimit: 0
MaxConnRate: 3392
SessRate: 0
SessRateLimit: 0
MaxSessRate: 3392
SslRate: 0
SslRateLimit: 0
MaxSslRate: snip
SslFrontendKeyRate: 0
SslFrontendMaxKeyRate: 45
SslFrontendSessionReuse_pct: 0
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 281
SslCacheLookups: snip
SslCacheMisses: snip
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
Tasks: 806
Run_queue: 0
Idle_pct: 99
node: 0004.host.woltlab.cloud
Stopping: 1
Jobs: 6
Unstoppable Jobs: 1
Listeners: 1
ActivePeers: 0
ConnectedPeers: 0
DroppedLogs: 231
BusyPolling: 0
FailedResolutions: 0
TotalBytesOut: snip
TotalSplicdedBytesOut: 0
BytesOutRate: 0
DebugCommandsIssued: 0
CumRecvLogs: 0
Build info: 2.7.5-1~bpo11+1
Memmax_bytes: 0
PoolAlloc_bytes: 4319168
PoolUsed_bytes: 4319168
Start_time_sec: 1679867736
Tainted: 0
The perf recording was created with `perf record -s -T -g -p 1737902`:
78.29% 1.31% haproxy libc-2.31.so [.] __sched_yield
56.95% 13.89% haproxy [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
25.26% 25.17% haproxy [kernel.kallsyms] [k] syscall_return_via_sysret
24.72% 0.50% haproxy [kernel.kallsyms] [k] do_syscall_64
18.69% 17.86% haproxy [kernel.kallsyms] [k] syscall_exit_to_user_mode
16.66% 0.54% haproxy [kernel.kallsyms] [k] __x64_sys_sched_yield
15.44% 0.36% haproxy libc-2.31.so [.] __madvise
13.58% 0.17% haproxy [kernel.kallsyms] [k] schedule
13.45% 3.37% haproxy [kernel.kallsyms] [k] __schedule
12.12% 10.96% haproxy [kernel.kallsyms] [k] entry_SYSCALL_64
6.60% 2.02% haproxy [kernel.kallsyms] [k] pick_next_task_fair
5.10% 0.82% haproxy [kernel.kallsyms] [k] entry_SYSCALL_64_safe_stack
4.32% 0.14% haproxy [kernel.kallsyms] [k] __x64_sys_madvise
4.10% 0.88% haproxy [kernel.kallsyms] [k] do_sched_yield
4.07% 0.39% haproxy [kernel.kallsyms] [k] do_madvise.part.0
3.50% 1.23% haproxy [kernel.kallsyms] [k] update_curr
2.49% 0.22% haproxy [kernel.kallsyms] [k] zap_page_range
2.47% 0.73% haproxy [kernel.kallsyms] [k] yield_task_fair
2.16% 0.03% haproxy libc-2.31.so [.] epoll_wait
1.82% 1.73% haproxy libc-2.31.so [.] malloc_trim
1.76% 0.59% haproxy [kernel.kallsyms] [k] update_rq_clock
1.72% 0.11% haproxy [kernel.kallsyms] [k] do_epoll_wait
1.72% 0.00% haproxy [kernel.kallsyms] [k] __x64_sys_epoll_wait
1.69% 1.65% haproxy [kernel.kallsyms] [k] _raw_spin_lock
1.49% 0.02% haproxy [kernel.kallsyms] [k] schedule_hrtimeout_range_clock
1.48% 1.36% haproxy [kernel.kallsyms] [k] cpuacct_charge
1.47% 0.99% haproxy [kernel.kallsyms] [k] unmap_page_range
strace:
strace: Process 1737902 attached with 8 threads [pid 1737909] 1679919113.775122 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737908] 1679919113.775404 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737907] 1679919113.775475 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737906] 1679919113.775546 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737905] 1679919113.775614 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737904] 1679919113.775773 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737903] 1679919113.775872 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737902] 1679919113.775942 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737909] 1679919113.776069 <... clock_gettime resumed>{tv_sec=707, tv_nsec=905721296}) = 0 [pid 1737908] 1679919113.776212 <... clock_gettime resumed>{tv_sec=756, tv_nsec=446952375}) = 0 [pid 1737907] 1679919113.776308 <... clock_gettime resumed>{tv_sec=730, tv_nsec=455350328}) = 0 [pid 1737906] 1679919113.776377 <... clock_gettime resumed>{tv_sec=743, tv_nsec=374513757}) = 0 [pid 1737905] 1679919113.776510 <... clock_gettime resumed>{tv_sec=805, tv_nsec=487638745}) = 0 [pid 1737904] 1679919113.776623 <... clock_gettime resumed>{tv_sec=781, tv_nsec=259112163}) = 0 [pid 1737903] 1679919113.776680 <... 
clock_gettime resumed>{tv_sec=818, tv_nsec=679144285}) = 0 [pid 1737909] 1679919113.776785 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737908] 1679919113.776867 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737907] 1679919113.776921 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737906] 1679919113.777015 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737905] 1679919113.777113 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737904] 1679919113.777165 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737903] 1679919113.777214 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737902] 1679919113.777261 <... clock_gettime resumed>{tv_sec=772, tv_nsec=256215944}) = 0 [pid 1737909] 1679919113.777384 <... clock_gettime resumed>{tv_sec=707, tv_nsec=905767135}) = 0 [pid 1737908] 1679919113.777444 <... clock_gettime resumed>{tv_sec=756, tv_nsec=447010304}) = 0 [pid 1737907] 1679919113.777529 <... clock_gettime resumed>{tv_sec=730, tv_nsec=455378316}) = 0 [pid 1737906] 1679919113.777599 <... clock_gettime resumed>{tv_sec=743, tv_nsec=374551443}) = 0 [pid 1737905] 1679919113.777669 <... clock_gettime resumed>{tv_sec=805, tv_nsec=487724634}) = 0 [pid 1737904] 1679919113.777720 <... clock_gettime resumed>{tv_sec=781, tv_nsec=259166961}) = 0 [pid 1737903] 1679919113.777769 <... 
clock_gettime resumed>{tv_sec=818, tv_nsec=679180340}) = 0 [pid 1737902] 1679919113.777839 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737909] 1679919113.777916 epoll_wait(32, <unfinished ...> [pid 1737908] 1679919113.778034 epoll_wait(29, <unfinished ...> [pid 1737907] 1679919113.778093 epoll_wait(35, <unfinished ...> [pid 1737906] 1679919113.778169 epoll_wait(14, <unfinished ...> [pid 1737905] 1679919113.778268 epoll_wait(21, <unfinished ...> [pid 1737904] 1679919113.778320 epoll_wait(17, <unfinished ...> [pid 1737903] 1679919113.778398 epoll_wait(9, <unfinished ...> [pid 1737902] 1679919113.778490 <... clock_gettime resumed>{tv_sec=772, tv_nsec=256291185}) = 0 [pid 1737902] 1679919113.778599 epoll_wait(5, <unfinished ...> [pid 1737908] 1679919113.787312 <... epoll_wait resumed>[], 200, 9) = 0 [pid 1737908] 1679919113.787522 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=756, tv_nsec=447120588}) = 0 [pid 1737908] 1679919113.787841 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=756, tv_nsec=447173683}) = 0 [pid 1737908] 1679919113.788139 epoll_wait(29, <unfinished ...> [pid 1737909] 1679919113.860274 <... epoll_wait resumed>[], 200, 82) = 0 [pid 1737909] 1679919113.860466 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=707, tv_nsec=905970117}) = 0 [pid 1737909] 1679919113.860873 clock_gettime(CLOCK_THREAD_CPUTIME_ID, {tv_sec=707, tv_nsec=906111390}) = 0 [pid 1737909] 1679919113.861230 epoll_wait(32, <unfinished ...> [pid 1737908] 1679919114.531212 <... epoll_wait resumed>[], 200, 742) = 0 [pid 1737909] 1679919114.531303 <... epoll_wait resumed>[], 200, 669) = 0 [pid 1737908] 1679919114.531392 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737909] 1679919114.531489 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737908] 1679919114.531584 <... clock_gettime resumed>{tv_sec=756, tv_nsec=447349219}) = 0 [pid 1737909] 1679919114.531696 <... 
clock_gettime resumed>{tv_sec=707, tv_nsec=906391477}) = 0 [pid 1737908] 1679919114.531854 madvise(0x56334cc23000, 12288, MADV_DONTNEED <unfinished ...> [pid 1737909] 1679919114.531968 sched_yield( <unfinished ...> [pid 1737908] 1679919114.532064 <... madvise resumed>) = 0 [pid 1737909] 1679919114.532151 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.532233 madvise(0x56334c934000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737909] 1679919114.532335 sched_yield( <unfinished ...> [pid 1737908] 1679919114.532428 <... madvise resumed>) = 0 [pid 1737909] 1679919114.532547 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.532623 <... epoll_wait resumed>[], 200, 754) = 0 [pid 1737909] 1679919114.532740 sched_yield( <unfinished ...> [pid 1737908] 1679919114.532825 madvise(0x56334cb0c000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.532916 <... epoll_wait resumed>[], 200, 754) = 0 [pid 1737906] 1679919114.532992 <... epoll_wait resumed>[], 200, 754) = 0 [pid 1737905] 1679919114.533070 <... epoll_wait resumed>[], 200, 754) = 0 [pid 1737904] 1679919114.533147 <... epoll_wait resumed>[], 200, 754) = 0 [pid 1737903] 1679919114.533228 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737909] 1679919114.533342 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.533417 <... madvise resumed>) = 0 [pid 1737907] 1679919114.533491 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737906] 1679919114.533575 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737905] 1679919114.533660 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737904] 1679919114.533746 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737909] 1679919114.533880 sched_yield( <unfinished ...> [pid 1737908] 1679919114.533968 madvise(0x56334c9ab000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.534055 <... clock_gettime resumed>{tv_sec=730, tv_nsec=455443488}) = 0 [pid 1737906] 1679919114.534153 <... 
clock_gettime resumed>{tv_sec=743, tv_nsec=374700826}) = 0 [pid 1737905] 1679919114.534244 <... clock_gettime resumed>{tv_sec=805, tv_nsec=487825575}) = 0 [pid 1737903] 1679919114.534332 <... clock_gettime resumed>{tv_sec=818, tv_nsec=679367811}) = 0 [pid 1737904] 1679919114.534426 <... clock_gettime resumed>{tv_sec=781, tv_nsec=259236980}) = 0 [pid 1737902] 1679919114.534520 <... epoll_wait resumed>[], 200, 754) = 0 [pid 1737909] 1679919114.534649 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.534727 <... madvise resumed>) = 0 [pid 1737907] 1679919114.534793 sched_yield( <unfinished ...> [pid 1737906] 1679919114.534880 sched_yield( <unfinished ...> [pid 1737905] 1679919114.534964 sched_yield( <unfinished ...> [pid 1737904] 1679919114.535048 sched_yield( <unfinished ...> [pid 1737903] 1679919114.535132 sched_yield( <unfinished ...> [pid 1737902] 1679919114.535214 clock_gettime(CLOCK_THREAD_CPUTIME_ID, <unfinished ...> [pid 1737909] 1679919114.535440 sched_yield( <unfinished ...> [pid 1737908] 1679919114.535533 madvise(0x56334c936000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.535692 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.535778 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.535851 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.536039 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.536113 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.536306 <... clock_gettime resumed>{tv_sec=772, tv_nsec=256364029}) = 0 [pid 1737909] 1679919114.536432 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.536626 <... 
madvise resumed>) = 0 [pid 1737907] 1679919114.536705 sched_yield( <unfinished ...> [pid 1737906] 1679919114.536903 sched_yield( <unfinished ...> [pid 1737905] 1679919114.536980 sched_yield( <unfinished ...> [pid 1737904] 1679919114.537062 sched_yield( <unfinished ...> [pid 1737903] 1679919114.537212 sched_yield( <unfinished ...> [pid 1737902] 1679919114.537370 sched_yield( <unfinished ...> [pid 1737909] 1679919114.537620 sched_yield( <unfinished ...> [pid 1737908] 1679919114.537856 madvise(0x56334d142000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.537989 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.538144 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.538244 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.538347 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.538426 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.538498 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.538617 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.538695 <... madvise resumed>) = 0 [pid 1737907] 1679919114.538770 sched_yield( <unfinished ...> [pid 1737906] 1679919114.538855 sched_yield( <unfinished ...> [pid 1737905] 1679919114.538940 sched_yield( <unfinished ...> [pid 1737904] 1679919114.539020 sched_yield( <unfinished ...> [pid 1737903] 1679919114.539102 sched_yield( <unfinished ...> [pid 1737902] 1679919114.539190 sched_yield( <unfinished ...> [pid 1737909] 1679919114.539316 sched_yield( <unfinished ...> [pid 1737908] 1679919114.539402 madvise(0x56334d054000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.539495 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.539569 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.539637 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.539712 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.539795 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.539867 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.539979 <... 
sched_yield resumed>) = 0 [pid 1737908] 1679919114.540091 <... madvise resumed>) = 0 [pid 1737907] 1679919114.540235 sched_yield( <unfinished ...> [pid 1737906] 1679919114.540326 sched_yield( <unfinished ...> [pid 1737905] 1679919114.540421 sched_yield( <unfinished ...> [pid 1737904] 1679919114.540530 sched_yield( <unfinished ...> [pid 1737903] 1679919114.540626 sched_yield( <unfinished ...> [pid 1737902] 1679919114.540709 sched_yield( <unfinished ...> [pid 1737909] 1679919114.540814 sched_yield( <unfinished ...> [pid 1737908] 1679919114.540874 madvise(0x56334ca86000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737906] 1679919114.540929 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.540980 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.541030 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.541077 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.541152 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.541201 <... madvise resumed>) = 0 [pid 1737906] 1679919114.541248 sched_yield( <unfinished ...> [pid 1737905] 1679919114.541310 sched_yield( <unfinished ...> [pid 1737904] 1679919114.541383 sched_yield( <unfinished ...> [pid 1737902] 1679919114.541463 sched_yield( <unfinished ...> [pid 1737909] 1679919114.541582 sched_yield( <unfinished ...> [pid 1737908] 1679919114.541668 madvise(0x56334c248000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.541751 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.541823 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.541901 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.541975 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.542050 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.542125 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.542242 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.542318 <... 
madvise resumed>) = 0 [pid 1737907] 1679919114.542395 sched_yield( <unfinished ...> [pid 1737906] 1679919114.542473 sched_yield( <unfinished ...> [pid 1737905] 1679919114.542559 sched_yield( <unfinished ...> [pid 1737904] 1679919114.542645 sched_yield( <unfinished ...> [pid 1737903] 1679919114.542728 sched_yield( <unfinished ...> [pid 1737902] 1679919114.542811 sched_yield( <unfinished ...> [pid 1737909] 1679919114.542946 sched_yield( <unfinished ...> [pid 1737908] 1679919114.543036 madvise(0x56334c9da000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.543145 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.543206 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.543272 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.543337 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.543394 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.543458 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.543553 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.543609 <... madvise resumed>) = 0 [pid 1737906] 1679919114.543675 sched_yield( <unfinished ...> [pid 1737905] 1679919114.543747 sched_yield( <unfinished ...> [pid 1737904] 1679919114.543818 sched_yield( <unfinished ...> [pid 1737903] 1679919114.543883 sched_yield( <unfinished ...> [pid 1737902] 1679919114.543953 sched_yield( <unfinished ...> [pid 1737909] 1679919114.544052 sched_yield( <unfinished ...> [pid 1737908] 1679919114.544117 madvise(0x56334cad8000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737906] 1679919114.544190 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.544256 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.544320 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.544375 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.544440 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.544555 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.544630 <... 
madvise resumed>) = 0 [pid 1737906] 1679919114.544706 sched_yield( <unfinished ...> [pid 1737905] 1679919114.544783 sched_yield( <unfinished ...> [pid 1737904] 1679919114.544867 sched_yield( <unfinished ...> [pid 1737903] 1679919114.544957 sched_yield( <unfinished ...> [pid 1737902] 1679919114.545043 sched_yield( <unfinished ...> [pid 1737909] 1679919114.545176 sched_yield( <unfinished ...> [pid 1737908] 1679919114.545263 madvise(0x56334ca31000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.545351 sched_yield( <unfinished ...> [pid 1737906] 1679919114.545434 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.545509 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.545585 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.545659 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.545735 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.545845 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.545921 <... madvise resumed>) = 0 [pid 1737907] 1679919114.545992 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.546065 sched_yield( <unfinished ...> [pid 1737905] 1679919114.546149 sched_yield( <unfinished ...> [pid 1737904] 1679919114.546236 sched_yield( <unfinished ...> [pid 1737903] 1679919114.546319 sched_yield( <unfinished ...> [pid 1737902] 1679919114.546402 sched_yield( <unfinished ...> [pid 1737909] 1679919114.546524 sched_yield( <unfinished ...> [pid 1737908] 1679919114.546610 madvise(0x56334d14e000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.546701 sched_yield( <unfinished ...> [pid 1737906] 1679919114.546778 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.546843 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.546916 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.546988 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.547108 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.547184 <... madvise resumed>) = 0 [pid 1737907] 1679919114.547259 <... 
sched_yield resumed>) = 0 [pid 1737906] 1679919114.547330 sched_yield( <unfinished ...> [pid 1737905] 1679919114.547418 sched_yield( <unfinished ...> [pid 1737904] 1679919114.547502 sched_yield( <unfinished ...> [pid 1737903] 1679919114.547586 sched_yield( <unfinished ...> [pid 1737902] 1679919114.547672 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.547787 sched_yield( <unfinished ...> [pid 1737908] 1679919114.547876 madvise(0x56334cb79000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.547962 sched_yield( <unfinished ...> [pid 1737906] 1679919114.548050 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.548142 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.548211 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.548276 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.548341 sched_yield( <unfinished ...> [pid 1737909] 1679919114.548444 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.548551 <... madvise resumed>) = 0 [pid 1737906] 1679919114.548636 sched_yield( <unfinished ...> [pid 1737905] 1679919114.548709 sched_yield( <unfinished ...> [pid 1737904] 1679919114.548780 sched_yield( <unfinished ...> [pid 1737903] 1679919114.548849 sched_yield( <unfinished ...> [pid 1737902] 1679919114.548920 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.549015 sched_yield( <unfinished ...> [pid 1737908] 1679919114.549083 madvise(0x56334d149000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737906] 1679919114.549161 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.549226 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.549290 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.549354 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.549419 sched_yield( <unfinished ...> [pid 1737909] 1679919114.549522 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.549579 <... 
madvise resumed>) = 0 [pid 1737906] 1679919114.549645 sched_yield( <unfinished ...> [pid 1737905] 1679919114.549718 sched_yield( <unfinished ...> [pid 1737904] 1679919114.549789 sched_yield( <unfinished ...> [pid 1737903] 1679919114.549859 sched_yield( <unfinished ...> [pid 1737902] 1679919114.549929 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.550023 sched_yield( <unfinished ...> [pid 1737908] 1679919114.550086 madvise(0x56334cdef000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737906] 1679919114.550146 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.550201 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.550265 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.550316 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.550380 sched_yield( <unfinished ...> [pid 1737909] 1679919114.550471 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.550520 <... madvise resumed>) = 0 [pid 1737906] 1679919114.550570 sched_yield( <unfinished ...> [pid 1737905] 1679919114.550633 sched_yield( <unfinished ...> [pid 1737904] 1679919114.550713 sched_yield( <unfinished ...> [pid 1737903] 1679919114.550790 sched_yield( <unfinished ...> [pid 1737902] 1679919114.550869 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.550990 sched_yield( <unfinished ...> [pid 1737908] 1679919114.551079 madvise(0x56334c9c3000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.551160 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.551229 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.551304 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.551376 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.551448 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.551524 sched_yield( <unfinished ...> [pid 1737909] 1679919114.551655 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.551733 <... 
madvise resumed>) = 0 [pid 1737907] 1679919114.551807 sched_yield( <unfinished ...> [pid 1737906] 1679919114.551891 sched_yield( <unfinished ...> [pid 1737905] 1679919114.551972 sched_yield( <unfinished ...> [pid 1737904] 1679919114.552058 sched_yield( <unfinished ...> [pid 1737903] 1679919114.552142 sched_yield( <unfinished ...> [pid 1737902] 1679919114.552219 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.552341 sched_yield( <unfinished ...> [pid 1737908] 1679919114.552427 madvise(0x56334cc1f000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.552537 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.552615 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.552691 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.552763 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.552840 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.552911 sched_yield( <unfinished ...> [pid 1737909] 1679919114.553036 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.553106 <... madvise resumed>) = 0 [pid 1737907] 1679919114.553179 sched_yield( <unfinished ...> [pid 1737906] 1679919114.553261 sched_yield( <unfinished ...> [pid 1737905] 1679919114.553343 sched_yield( <unfinished ...> [pid 1737904] 1679919114.553430 sched_yield( <unfinished ...> [pid 1737903] 1679919114.553513 sched_yield( <unfinished ...> [pid 1737909] 1679919114.553632 sched_yield( <unfinished ...> [pid 1737908] 1679919114.553716 madvise(0x56334d0f6000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.553799 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.553872 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.553943 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.554017 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.554086 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.554202 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.554278 <... 
madvise resumed>) = 0 [pid 1737907] 1679919114.554350 sched_yield( <unfinished ...> [pid 1737906] 1679919114.554434 sched_yield( <unfinished ...> [pid 1737905] 1679919114.554517 sched_yield( <unfinished ...> [pid 1737904] 1679919114.554601 sched_yield( <unfinished ...> [pid 1737903] 1679919114.554682 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.554758 sched_yield( <unfinished ...> [pid 1737909] 1679919114.554881 sched_yield( <unfinished ...> [pid 1737908] 1679919114.554990 madvise(0x56334ca9a000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.555079 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.555154 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.555225 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.555300 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.555372 sched_yield( <unfinished ...> [pid 1737902] 1679919114.555452 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.555572 <... sched_yield resumed>) = 0 [pid 1737908] 1679919114.555646 <... madvise resumed>) = 0 [pid 1737907] 1679919114.555719 sched_yield( <unfinished ...> [pid 1737906] 1679919114.555779 sched_yield( <unfinished ...> [pid 1737905] 1679919114.555837 sched_yield( <unfinished ...> [pid 1737904] 1679919114.555895 sched_yield( <unfinished ...> [pid 1737903] 1679919114.555950 <... sched_yield resumed>) = 0 [pid 1737902] 1679919114.556003 sched_yield( <unfinished ...> [pid 1737909] 1679919114.556091 sched_yield( <unfinished ...> [pid 1737908] 1679919114.556155 madvise(0x56334cd9c000, 4096, MADV_DONTNEED <unfinished ...> [pid 1737907] 1679919114.556213 <... sched_yield resumed>) = 0 [pid 1737906] 1679919114.556265 <... sched_yield resumed>) = 0 [pid 1737905] 1679919114.556320 <... sched_yield resumed>) = 0 [pid 1737904] 1679919114.556370 <... sched_yield resumed>) = 0 [pid 1737903] 1679919114.556425 sched_yield( <unfinished ...> [pid 1737902] 1679919114.556482 <... sched_yield resumed>) = 0 [pid 1737909] 1679919114.556576 <... 
sched_yield resumed>) = 0 [pid 1737908] 1679919114.556625 <... madvise resumed>) = 0
### Expected Behavior
I expected HAProxy to trim down memory usage once when stopping, or possibly whenever another connection exits, but not several times per second.
### Steps to Reproduce the Behavior
1. Look at a stopping worker process with `perf` and/or `strace`.
### Do you have any idea what may have caused this?
_No response_
### Do you have an idea how to solve the issue?
_No response_
### What is your configuration?
```haproxy
global
    log stdout format short daemon
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    set-dumpable

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base "$CONFIG_DIRECTORY"/tls

    tune.ssl.default-dh-param 2048
    # TLS 1.2-
    ssl-default-bind-ciphers ECDHE+CHACHA20:ECDHE+AES128:ECDHE+AES256:!MD5
    # TLS 1.3
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    # Require TLS 1.2 or higher
    ssl-default-bind-options ssl-min-ver TLSv1.2 prefer-client-ciphers

    maxconn 10000

    # 12 hours
    hard-stop-after 43200s

defaults
    timeout connect 5s
    timeout client 50s
    timeout server 130s
    timeout http-request 5s
    timeout check 1s
    timeout tarpit 3s
    unique-id-format %{+X}o\ %[hostname,field(1,.),upper]-%Ts%rt
    default-server init-addr libc,none resolvers unbound

resolvers unbound
    nameserver unbound 127.0.0.1:53

# A large number of backends, but I do not believe that those are relevant here.
```
haproxy -vv
HAProxy version 2.7.5-1~bpo11 1 2023/03/18 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2024.
Known bugs: http://www.haproxy.org/bugs/bugs-2.7.5.html
Running on: Linux 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64
Build options :
TARGET = linux-glibc
CPU = generic
CC = cc
CFLAGS = -O2 -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wall -Wextra -Wundef -Wdeclaration-after-statement -Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1 USE_SYSTEMD=1 USE_PROMEX=1
DEBUG = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS
Feature list : -51DEGREES ACCEPT4 BACKTRACE -CLOSEFROM CPU_AFFINITY CRYPT_H -DEVICEATLAS DL -ENGINE EPOLL -EVPORTS GETADDRINFO -KQUEUE LIBCRYPT LINUX_SPLICE LINUX_TPROXY LUA -MEMORY_PROFILING NETFILTER NS -OBSOLETE_LINKER OPENSSL -OPENSSL_WOLFSSL -OT -PCRE PCRE2 PCRE2_JIT -PCRE_JIT POLL PRCTL -PROCCTL PROMEX -PTHREAD_EMULATION -QUIC RT SHM_OPEN SLZ -STATIC_PCRE -STATIC_PCRE2 SYSTEMD TFO THREAD THREAD_DUMP TPROXY -WURFL -ZLIB
Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=8).
Built with OpenSSL version : OpenSSL 1.1.1n 15 Mar 2022
Running on OpenSSL version : OpenSSL 1.1.1n 15 Mar 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.36 2020-12-04
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 10.2.1 20210110
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
h2 : mode=HTTP side=FE|BE mux=H2 flags=HTX|HOL_RISK|NO_UPG
fcgi : mode=HTTP side=BE mux=FCGI flags=HTX|HOL_RISK|NO_UPG
<default> : mode=HTTP side=FE|BE mux=H1 flags=HTX
h1 : mode=HTTP side=FE|BE mux=H1 flags=HTX|NO_UPG
<default> : mode=TCP side=FE|BE mux=PASS flags=
none : mode=TCP side=FE|BE mux=PASS flags=NO_UPG
Available services : prometheus-exporter
Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Update esbuild
Update to CKEditor 37 Alpha
I was on vacation last week, but a co-worker performed the update to 2.7.5. It's much better now: the number of DNS queries per second is now pretty much constant and was reduced by approximately 90%. Thank you!
Old workers are still pretty busy with a large number of madvise(?, ?, MADV_DONTNEED) calls, which looks wrong, but that's something for a different issue.
Picked into 5.4, thank you.