Hi all, I have deployed Airflow 2.9.3 using the Bitnami Helm chart, version 19.0.0. We are using the standard chart, customizing securityContext, fsGroup, UIDs, and GIDs to match our namespace, and we removed seccompProfile and readOnlyRootFilesystem (to allow installing external dependencies).
The current livenessProbe in the worker pod generates defunct processes even though there are no DAGs in our instance.
Bitnami Airflow worker pod image used: docker.io/bitnami/airflow-worker:2.9.3-debian-12-r4
The defunct processes in the process listing:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1111117777+ 1 0.2 0.0 2901500 204768 ? Ss 15:18 0:09 [celeryd: celery@bitnami-airflow-gaming-0:MainProcess] -active- (celery worker --pid /opt/bitnami/airflow/tmp/airflow-worker.pid)
1111117777+ 126 0.0 0.0 1254284 145740 ? S 15:20 0:00 gunicorn: master [gunicorn]
1111117777+ 127 0.0 0.0 1254284 144984 ? S 15:20 0:00 gunicorn: worker [gunicorn]
1111117777+ 128 0.0 0.0 1254284 144988 ? S 15:20 0:00 gunicorn: worker [gunicorn]
1111117777+ 177 0.0 0.0 2902596 173700 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-1]
1111117777+ 178 0.0 0.0 2902600 173672 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-2]
1111117777+ 179 0.0 0.0 2902604 173676 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-3]
1111117777+ 180 0.0 0.0 2902608 173692 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-4]
1111117777+ 181 0.0 0.0 2902612 173692 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-5]
1111117777+ 182 0.0 0.0 2902616 173708 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-6]
1111117777+ 183 0.0 0.0 2902620 170324 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-7]
1111117777+ 184 0.0 0.0 2902624 170296 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-8]
1111117777+ 185 0.0 0.0 2902628 170272 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-9]
1111117777+ 186 0.0 0.0 2902632 170304 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-10]
1111117777+ 187 0.0 0.0 2902636 170216 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-11]
1111117777+ 188 0.0 0.0 2902640 170296 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-12]
1111117777+ 189 0.0 0.0 2902644 170300 ? S 15:20 0:00 [celeryd: celery@bitnami-airflow-gaming-0:ForkPoolWorker-13]
1111117777+ 253 0.0 0.0 2576 928 pts/0 Ss 15:26 0:00 sh -i -c TERM=xterm sh
1111117777+ 259 0.0 0.0 2576 924 pts/0 S+ 15:26 0:00 sh
1111117777+ 582 0.0 0.0 2576 920 pts/1 Ss 15:42 0:00 sh -i -c TERM=xterm sh
1111117777+ 588 0.0 0.0 2576 944 pts/1 S+ 15:42 0:00 sh
1111117777+ 968 0.0 0.0 0 0 ? Z 16:00 0:00 [python] <defunct>
1111117777+ 969 0.0 0.0 0 0 ? Z 16:03 0:00 [python] <defunct>
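The `<defunct>` python entries fit the pattern of an exec liveness probe: each probe run forks a short-lived Python child, and if the container's PID 1 (here the Celery MainProcess) does not reap orphaned children, each run can leave a zombie behind. For reference, a worker liveness check for a Celery-based Airflow deployment typically looks roughly like the following; this is a hedged sketch for illustration, and the exact command shipped in the Bitnami chart may differ:

```yaml
livenessProbe:
  exec:
    command:
      - /bin/bash
      - -c
      # Pings this worker's Celery node; forks a python child on every probe run,
      # which shows up as <defunct> if PID 1 never reaps it
      - celery --app airflow.providers.celery.executors.celery_executor.app inspect ping -d "celery@${HOSTNAME}"
  initialDelaySeconds: 180
  periodSeconds: 60
  timeoutSeconds: 20
  failureThreshold: 5
```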
Are you using any custom parameters or values?
We customize securityContext, fsGroup, UIDs, and GIDs according to our namespace, and removed seccompProfile and readOnlyRootFilesystem (for adding external dependencies).
values.yaml:
## Global settings for the Bitnami Airflow Helm chart
global:
  # Override Bitnami images
  imageRegistry: ""
  imagePullSecrets: []

# Common settings for all Airflow containers
common:
  securityContext:
    runAsUser: 1111117777
    runAsGroup: 1111117777
    fsGroup: 1111117777

## Scheduler settings
scheduler:
  securityContext:
    runAsUser: 1111117777
    runAsGroup: 1111117777
    fsGroup: 1111117777
  # Specify additional Airflow scheduler container environment variables
  env: []
  # Customize the number of replicas for the Airflow scheduler Deployment
  replicas: 1

## Worker settings
workers:
  securityContext:
    runAsUser: 1111117777
    runAsGroup: 1111117777
    fsGroup: 1111117777

## Web server settings
web:
  securityContext:
    runAsUser: 1111117777
    runAsGroup: 1111117777
    fsGroup: 1111117777

# Airflow base settings
airflow:
  # (Optional) Extra arguments to be passed to Airflow
  extraArgs: ""
  # Airflow external dependencies (install additional packages)
  extraPipPackages: []
  # Security context settings
  securityContext:
    runAsUser: 1111117777
    runAsGroup: 1111117777
    fsGroup: 1111117777
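If the stock probe turns out to be the source of the zombies, Bitnami charts generally allow the built-in probe to be disabled and replaced wholesale. Below is a hedged sketch of such an override, assuming the chart exposes a `customLivenessProbe` key for the worker (check the chart's values reference for the exact key names). The pid-file path comes from the worker command line shown in the process listing above; the probe avoids forking a Python interpreter on every run by only checking that the pid file points at a live process:

```yaml
worker:
  # Disable the chart's built-in probe before supplying a custom one
  livenessProbe:
    enabled: false
  customLivenessProbe:
    exec:
      command:
        - /bin/bash
        - -c
        # Lightweight check (assumption: liveness == worker process alive):
        # succeed iff the pid recorded by the worker still exists in /proc
        - test -d "/proc/$(cat /opt/bitnami/airflow/tmp/airflow-worker.pid)"
    initialDelaySeconds: 180
    periodSeconds: 60
    timeoutSeconds: 20
```

Note the trade-off: this only verifies the process exists, not that the Celery worker is actually responsive, which is what `celery inspect ping` checks.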
What is the expected behavior?
No response
What do you see instead?
When I use the livenessProbe below instead, the defunct processes are gone:
After implementing that liveness probe in the worker pod, no defunct processes are to be seen.
The defunct processes are generated only in the worker pods, not in the rest of the Airflow components, and I see the liveness probe failing when I use the official Bitnami Helm chart's liveness probe configuration.
Kubernetes version
OpenShift Container Platform 4.14
What are the disadvantages of using the above livenessProbe in the worker pod, and how can the defunct processes be eliminated with the official Bitnami charts?
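A chart-agnostic way to get zombies reaped regardless of which probe command is used is to share the pod's process namespace: the pause container then becomes PID 1 and reaps any orphaned children that exec probes leave behind. A sketch of the relevant pod-spec field, using the worker image from this issue (whether the chart lets you set this field depends on the overrides it exposes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: airflow-worker-example
spec:
  # With a shared process namespace, the pause container is PID 1
  # and reaps <defunct> children left behind by exec liveness probes
  shareProcessNamespace: true
  containers:
    - name: airflow-worker
      image: docker.io/bitnami/airflow-worker:2.9.3-debian-12-r4
```

The trade-off is that all containers in the pod can then see (and signal) each other's processes, which may matter in a hardened OpenShift namespace like this one.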
Additional information
No response
javsalgar changed the title from "LivenessProbe creating defunct process in Bitnami/Airflow :19.0.0" to "[bitnami/airflow] LivenessProbe creating defunct process in Bitnami/Airflow :19.0.0" on Oct 8, 2024
Name and Version
Bitnami/Airflow 19.0.0
What architecture are you using?
amd64
What steps will reproduce the bug?
Hi All, I have deployed Airflow 2.9.3 using the bitnami helm chart version 19.0.0. We are using the standard charts and customizing the
,securityContext,fsGroup, uids, gids
according to our namespace removed theseccompProfile
,readOnlyRootFilesystem
(for adding external dependencies).The current LivenessProbe in the woker pod is generating defunct process even if there are no DAGs in our Instance.
Bitnami Airflow worker pod image used: docker.io/bitnami/airflow-worker:2.9.3-debian-12-r4
The defunct process in the process Logs:
Are you using any custom parameters or values?
customizing the
,securityContext,fsGroup, uids, gids
according to our namespace removed theseccompProfile
,readOnlyRootFilesystem
(for adding external dependencies).Value.yaml
What is the expected behavior?
No response
What do you see instead?
When I used the the below livenessProbe the defunct process is not there :
After implementing the above Liveness Probe in the worker Pod there is no defunct process to be seem.
The defunct process are only generated at the worker pods not in the rest of the airflow components and I'm seeing liveness probe failing when I'm using the official bitnami helm chart configs for the liveness probe.
Kubenetes version
OpenShift Container Platform 4.14
What are the disadvantages of using the above Livenessprobe in the worker pod and how to get rid of defunct process from the official bitnami charts.
Additional information
No response
The text was updated successfully, but these errors were encountered: