I personally like to have health checks at both the container level and the application level, for better fault tolerance in production. They are complementary, IMO. However, I totally understand that you want to keep the project source simple. I'll update the PR soon.
I had to deploy my container on the serverless Google Cloud Run platform, where the health checks defined in the Dockerfile are not used. So traffic was being served while PHP-FPM wasn't ready yet. Using the Caddy health check could be a good last safety net, IMO.
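For reference, a minimal sketch of what that safety net could look like in the Caddyfile, assuming PHP-FPM listens on the unix socket path used later in this thread (the /ping path and the 1s interval are illustrative, not mandated):

```caddyfile
# Hypothetical Caddyfile fragment: active health checking on the PHP-FPM upstream.
# Caddy only considers the upstream healthy once the /ping probe succeeds.
php_fastcgi unix//var/run/php/php-fpm.sock {
	# PHP-FPM only serves /ping when ping.path is set in its pool config
	health_uri /ping
	# probe every second so the upstream is marked up quickly after startup
	health_interval 1s
}
```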
Thanks for the feedback @bzoks. I simplified the PR by keeping the Docker healthcheck as-is.
Use native caddy healthcheck
Resolves https://github.com/dunglas/symfony-docker/issues/377
caddy_1 | {"level":"info","ts":1677492269.4432561,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"HTTP request failed","host":"localhost","error":"Get \"http://localhost/ping\": dialing backend: dial unix /var/run/php/php-fpm.sock: connect: no such file or directory"}
php_1 | Executing script assets:install public [OK]
php_1 | [27-Feb-2023 10:04:29] NOTICE: fpm is running, pid 1
php_1 | [27-Feb-2023 10:04:29] NOTICE: ready to handle connections
caddy_1 | {"level":"info","ts":1677492270.444267,"logger":"http.handlers.reverse_proxy.health_checker.active","msg":"host is up","host":"localhost"}
Works like a charm, thanks for the tip!
Set lb_try_duration to 5s in Caddyfile
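A sketch of what that commit presumably changes, assuming the php_fastcgi block from the reproduction Dockerfile in this thread:

```caddyfile
php_fastcgi unix//var/run/php/php-fpm.sock {
	health_uri /ping
	health_interval 1s
	# hold incoming requests and keep retrying for up to 5s
	# instead of failing immediately with 503 "no upstreams available"
	lb_try_duration 5s
}
```

With lb_try_duration set, Caddy keeps trying to select an available upstream for the configured duration before giving up, which bridges the startup gap while PHP-FPM comes up.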
Environment: alpine 3.17.2
Caddy version (caddy version or paste commit SHA): v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=
Go version (go version): n/a
I'm trying https://github.com/caddyserver/caddy/issues/5281 on my production setup, and I get a 503 error if I access the page from the browser while Caddy is iterating health checks.
503 response
{
"ts": 1677436806.2188864,
"err_id": "iq5jrxxqh",
"logger": "http.log.error",
"duration": 0.00016863,
"status": 503,
"level": "error",
"err_trace": "reverseproxy.(*Handler).proxyLoopIteration (reverseproxy.go:547)",
"msg": "no upstreams available"
}
n/a
n/a
#syntax=docker/dockerfile:1.4
FROM php:8.2-fpm-alpine AS app_php
RUN mkdir -p /var/run/php
RUN apk add caddy
ADD https://github.com/just-containers/s6-overlay/releases/download/v3.1.2.1/s6-overlay-noarch.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
ADD https://github.com/just-containers/s6-overlay/releases/download/v3.1.2.1/s6-overlay-x86_64.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz
COPY <<EOF /etc/services.d/caddy/run
#!/command/with-contenv sh
caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
EOF
RUN chmod +x /etc/services.d/caddy/run
COPY <<EOF /etc/services.d/php-fpm/run
#!/command/with-contenv sh
sleep 10
php-fpm
EOF
RUN chmod +x /etc/services.d/php-fpm/run
COPY <<EOF /usr/local/etc/php-fpm.d/zz-docker.conf
[global]
daemonize = no
[www]
listen = /var/run/php/php-fpm.sock
listen.mode = 0666
ping.path = /ping
EOF
COPY <<EOF /etc/caddy/Caddyfile
{
	auto_https disable_redirects
}
:80,
:443
log
route {
	root * /srv/app/public
	php_fastcgi unix//var/run/php/php-fpm.sock {
		health_uri /ping
		health_interval 1s
	}
	file_server
}
EOF
COPY <<EOF /srv/app/public/index.php
<?php echo "hello";
EOF
ENTRYPOINT ["/init"]
Build and run:
$ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker build . -t caddy-healthcheck-php
$ docker run --rm -it -p 80:80 caddy-healthcheck-php sh
Run the curl command within the next 10 seconds:
curl http://localhost/index.php
I misunderstood the feature. I thought it worked like the Kubernetes readiness probe, which waits for the health check to be green before serving traffic to the service, so I thought the error in the log was unexpected. Sorry for the noise, and thanks for your help!
It doesn't look like there is any more useful information:
2023/02/26 20:43:35.326 INFO http.handlers.reverse_proxy.health_checker.active HTTP request failed {"host": "localhost", "error": "Get \"http://localhost/ping\": dialing backend: dial unix /var/run/php/php-fpm.sock: connect: no such file or directory"}
2023/02/26 20:43:35.854 ERROR http.log.error no upstreams available {"request": {"remote_ip": "172.17.0.1", "remote_port": "40306", "proto": "HTTP/1.1", "method": "GET", "host": "localhost:8080", "uri": "/index.php", "headers": {"User-Agent": ["curl/7.81.0"], "Accept": ["*/*"]}}, "duration": 0.000495611, "status": 503, "err_id": "4zye6nzsn", "err_trace": "reverseproxy.(*Handler).proxyLoopIteration (reverseproxy.go:547)"}
2023/02/26 20:43:35.854 ERROR http.log.access handled request {"request": {"remote_ip": "172.17.0.1", "remote_port": "40306", "proto": "HTTP/1.1", "method": "GET", "host": "localhost:8080", "uri": "/index.php", "headers": {"Accept": ["*/*"], "User-Agent": ["curl/7.81.0"]}}, "user_id": "", "duration": 0.000495611, "size": 0, "status": 503, "resp_headers": {"Server": ["Caddy"]}}
2023/02/26 20:43:36.326 DEBUG http.reverse_proxy.transport.fastcgi roundtrip {"request": {"remote_ip": "", "remote_port": "", "proto": "HTTP/1.1", "method": "GET", "host": "localhost", "uri": "", "headers": {}}, "env": {"CONTENT_LENGTH": "", "PATH_INFO": "", "REMOTE_PORT": "", "REQUEST_METHOD": "GET", "REQUEST_SCHEME": "http", "SERVER_NAME": "localhost", "SERVER_PROTOCOL": "HTTP/1.1", "HTTP_HOST": "localhost", "REMOTE_IDENT": "", "QUERY_STRING": "", "REMOTE_HOST": "", "REMOTE_USER": "", "SERVER_SOFTWARE": "Caddy/unknown", "DOCUMENT_URI": "/ping", "REQUEST_URI": "/ping", "SERVER_PORT": "80", "CONTENT_TYPE": "", "REMOTE_ADDR": "", "AUTH_TYPE": "", "GATEWAY_INTERFACE": "CGI/1.1", "DOCUMENT_ROOT": "/run/s6/legacy-services/caddy", "SCRIPT_FILENAME": "/run/s6/legacy-services/caddy/ping", "SCRIPT_NAME": "/ping"}, "dial": "/var/run/php/php-fpm.sock", "env": {"SERVER_SOFTWARE": "Caddy/unknown", "DOCUMENT_URI": "/ping", "REQUEST_URI": "/ping", "SERVER_PORT": "80", "REMOTE_IDENT": "", "QUERY_STRING": "", "REMOTE_HOST": "", "REMOTE_USER": "", "CONTENT_TYPE": "", "REMOTE_ADDR": "", "SCRIPT_NAME": "/ping", "AUTH_TYPE": "", "GATEWAY_INTERFACE": "CGI/1.1", "DOCUMENT_ROOT": "/run/s6/legacy-services/caddy", "SCRIPT_FILENAME": "/run/s6/legacy-services/caddy/ping", "REQUEST_SCHEME": "http", "SERVER_NAME": "localhost", "SERVER_PROTOCOL": "HTTP/1.1", "HTTP_HOST": "localhost", "CONTENT_LENGTH": "", "PATH_INFO": "", "REMOTE_PORT": "", "REQUEST_METHOD": "GET"}, "request": {"remote_ip": "", "remote_port": "", "proto": "HTTP/1.1", "method": "GET", "host": "localhost", "uri": "", "headers": {}}}
Thanks for your help @francislavoie, I just updated the PR description with the template.
The caddy version is v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=.
Use native caddy health check
Merge pull request #7 from maidmaid/caddy-healthcheck
Use native caddy health check
A workaround could be to use an intermediate null_resource depending on the real resources:
resource "null_resource" "depends_on_sleep_and_test" {
  depends_on = [time_sleep.wait_300_seconds, null_resource.test]
}

module "execute_create_admin" {
  source            = "terraform-google-modules/gcloud/google"
  module_depends_on = [null_resource.depends_on_sleep_and_test]
}