Describe the bug:

Fluent Bit (tail input -> es output) repeatedly logs "failed to flush chunk" warnings while forwarding Kubernetes container logs to Elasticsearch:

[2022/03/25 07:08:44] [ warn] [engine] failed to flush chunk '1-1648192124.833819.flb', retry in 10 seconds: task_id=17, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:39] [debug] [retry] re-using retry for task_id=1 attempts=3

The Elasticsearch bulk responses show why the flush fails; individual records are rejected with status 400:

{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"5-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}

Background: Fluentd and Fluent Bit collect log data into a blob called a chunk. When a chunk is created, it is considered to be in the stage, where it gets filled with data. When the chunk is full, it is moved to the queue, where chunks are held before being flushed, i.e. written out to their destination. A flush can fail for a number of reasons, such as network issues or errors returned by the destination, and the engine then schedules a retry for the chunk.
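The mapping conflict in the error above can be illustrated with two minimal documents (a hypothetical example, not taken from the logs). The first indexes kubernetes.labels.app as text. In the second, if the dotted label key is expanded into nested objects, Elasticsearch must treat "app" as an object, which contradicts the existing text mapping and produces the mapper_parsing_exception:

```json
{"kubernetes": {"labels": {"app": "hello-world"}}}
{"kubernetes": {"labels": {"app.kubernetes.io/instance": "hello-world"}}}
```

Once the first shape has been indexed into a daily index, every later document carrying the dotted label key is rejected for that index, which is why whole chunks keep failing to flush.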
I am seeing this in the fluent-bit logs in Kubernetes. The debug output shows the tail input following container log files, new tasks being created, and the same retries being re-used over and over:

[2022/03/25 07:08:28] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/25 07:08:39] [debug] [task] created task=0x7ff2f183a840 id=13 OK
[2022/03/25 07:08:28] [debug] [retry] re-using retry for task_id=1 attempts=2
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=3076476 file has been deleted: /var/log/containers/hello-world-89skv_argo_wait-5d919c301d4709b0304c6c65a8389aac10f30b8617bd935a9680a84e1873542b.log
[2022/03/25 07:08:29] [ warn] [engine] failed to flush chunk '1-1648192108.829100670.flb', retry in 8 seconds: task_id=7, input=tail.0 > output=es.0 (out_id=0)

Every failed chunk carries the same bulk error:

{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"BOMmun8BI6SaBP9l_8rZ","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}
The retries are not transient. On a longer-running instance the retry counter and the back-off keep growing:

[2022/03/22 03:57:46] [ warn] [engine] failed to flush chunk '1-1647920587.172892529.flb', retry in 92 seconds: task_id=394, input=tail.0 > output=es.0 (out_id=0)
retry_time=5929

Alongside the mapping errors there is also a warning from the HTTP client when the Elasticsearch bulk response does not fit into the response buffer:

[2022/03/24 04:19:49] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
Are you still receiving some of the records on the ES side, or has it stopped receiving records altogether?

Some records still reach Elasticsearch; only the chunks whose bulk requests are rejected keep failing. I have also set Replace_Dots On in the es output:

Name es
Match kube.*
Replace_Dots On

But the situation is the same: the chunks keep failing to flush and are retried.
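For reference, a minimal es output section with dot replacement enabled might look like the following. This is a sketch assembled from the fragments above; the host and port come from the KA connection lines in the logs, and Logstash_Format is inferred from the logstash-2022.03.24 index name:

```ini
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Port            9200
    Logstash_Format On
    # Replace dots in record keys (e.g. app.kubernetes.io/instance)
    # so Elasticsearch does not expand them into nested objects.
    Replace_Dots    On
    # Print the Elasticsearch API errors for rejected records.
    Trace_Error     On
```

Note that Replace_Dots only affects records serialized after the option is applied; chunks already queued before the restart still carry the old field names and will keep being rejected until they are dropped.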
I am getting these errors too. Once the retries pile up, the tail input is paused, presumably because its buffer limit has been reached, and new records can no longer be appended:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records

Btw, despite the warn messages, I can still search specific app logs in Elasticsearch: the rejected chunks are retried while the rest go through.

One more detail worth noting: it is possible for the reported HTTP status to be zero because the response is unparseable (the source parses it with atoi()), yet flb_http_do() will still return successfully, so a broken response can masquerade as a successful flush.
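The "cannot increase buffer: current=512000 requested=544768 max=512000" warning corresponds to the es output's cap on the buffer used to read the Elasticsearch response. A sketch of raising that cap, assuming the Buffer_Size option behaves as documented for your Fluent Bit version:

```ini
[OUTPUT]
    Name        es
    Match       kube.*
    # Buffer used to read the Elasticsearch response.
    # "False" removes the limit so large bulk error responses
    # are not truncated while being parsed.
    Buffer_Size False
```

With the default cap, a bulk response full of per-record mapper_parsing_exception entries can exceed the buffer, so the error details get cut off.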
On the scheduler side, Retry_Limit controls how many times a chunk flush is retried: N must be >= 1 (default: 1), and when Retry_Limit is set to no_limits or False there is no limit to the number of retries the scheduler can perform. Under this scenario, what I believe is happening is that the buffer is filled with chunks Elasticsearch will never accept, so the same tasks are re-queued again and again:

[2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192121.87279162.flb', retry in 10 seconds: task_id=15, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:41] [ warn] [engine] failed to flush chunk '1-1648192113.5409018.flb', retry in 7 seconds: task_id=11, input=tail.0 > output=es.0 (out_id=0)
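Retry behaviour is configured per output. A sketch that caps retries so permanently-rejected chunks are eventually dropped instead of being retried forever (the specific limit value here is an assumption, not from the logs):

```ini
[OUTPUT]
    Name        es
    Match       kube.*
    # Give up on a chunk after 5 failed flush attempts instead of
    # retrying indefinitely (no_limits / False disables the cap).
    Retry_Limit 5
```

Dropping a chunk loses its records, so this is a trade-off: with mapping conflicts like the one above, the records would never be accepted anyway, and capping retries stops them from pausing the tail input.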