Fluentd Issue #4814
basuadrija asked this question in Q&A
What is the problem?
Hello Community,

We have implemented a monitoring system with EFK in which Fluentd processes the logs and ships them to Elasticsearch. Recently we have noticed that whenever the log volume increases, the Fluentd worker is killed with the error "Worker 0 exited unexpectedly with signal SIGKILL".

In the DaemonSet we have currently configured the memory limit to 3072Mi, the memory request to 1024Mi, and the CPU request to 1500m. In the Fluentd config we have set the buffer values to flush_interval 2s, flush_thread_count 8, chunk_limit_size 20M, total_limit_size 1G, retry_type exponential_backoff, retry_wait 1s, retry_max_interval 60, retry_forever true, queue_limit_length 20, and overflow_action block.

The worker node where the DaemonSet is running has 32 GB of memory and 8 CPU cores. Please suggest what configuration we can use to avoid this issue.
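For reference, the buffer settings listed above would look roughly like the sketch below when written as a Fluentd `<buffer>` section. This is reconstructed only from the values stated in the question; the buffer path is assumed to match the one in the configuration shown further down, and note that some of these values differ from the env-var defaults in that snippet.

```
<buffer>
  @type file
  path /var/log/fluentd-buffer   # assumed; matches the path in the config below
  flush_mode interval
  flush_interval 2s
  flush_thread_count 8
  chunk_limit_size 20M
  total_limit_size 1G
  queue_limit_length 20
  retry_type exponential_backoff
  retry_wait 1s
  retry_max_interval 60
  retry_forever true
  overflow_action block
</buffer>
```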
Describe the configuration of Fluentd
```
<buffer>
  @type file
  path /var/log/fluentd-buffer
  flush_mode interval
  flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
  flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
  chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
  queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
  retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
  retry_forever true
</buffer>
```
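For context, a buffer section like this normally sits inside the Elasticsearch output's `<match>` block. The sketch below shows that assumed surrounding structure; the match pattern, host, and port environment variables are placeholders and are not taken from the question.

```
<match **>
  @type elasticsearch
  # host/port use placeholder env vars; adjust to the actual deployment
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST'] || 'elasticsearch'}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT'] || '9200'}"
  <buffer>
    # ... buffer parameters as shown above ...
  </buffer>
</match>
```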
Describe the logs of Fluentd

```
failed to write data into buffer by buffer overflow action=block
Worker 0 exited unexpectedly with signal SIGKILL
```
Environment