Description:
When using TTL to manage disk usage on system log tables (such as query_log and trace_log), ClickHouse creates new tables with numeric suffixes (e.g., query_log_2, query_log_3, trace_log_3) during upgrades, schema changes, or server restarts. These new tables do not inherit the TTL configuration of the original table, so disk usage grows without bound unless TTL is manually applied to each new table.
I have applied the TTL configuration via the Helm release:
configmap:
  keep_alive_timeout: "40"
  configOverride: |
    <yandex>
      <asynchronous_metric_log>
        <ttl>event_date + INTERVAL 90 DAY DELETE</ttl>
      </asynchronous_metric_log>
      <query_log>
        <ttl>event_date + INTERVAL 90 DAY DELETE</ttl>
      </query_log>
      <trace_log>
        <ttl>event_date + INTERVAL 90 DAY DELETE</ttl>
      </trace_log>
      <part_log>
        <ttl>event_date + INTERVAL 90 DAY DELETE</ttl>
      </part_log>
      <query_views_log>
        <ttl>event_date + INTERVAL 90 DAY DELETE</ttl>
      </query_views_log>
    </yandex>
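To see which rotated copies exist and how much disk each one holds, a query against system.parts like the following should work (a sketch; adjust the LIKE pattern for the tables you care about):

```sql
-- List all rotated system log tables matching a prefix, with on-disk size.
-- Assumes the default 'system' database; filter on 'active' to count only live parts.
SELECT
    table,
    formatReadableSize(sum(bytes_on_disk)) AS size_on_disk
FROM system.parts
WHERE database = 'system'
  AND table LIKE 'trace_log%'
  AND active
GROUP BY table
ORDER BY sum(bytes_on_disk) DESC;
```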

trace_log_3 is huge in size.
Has anyone else faced anything similar? Any workarounds to this?
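As a stopgap, the rotated tables can be handled by hand: either drop the old suffixed tables outright (ClickHouse only writes to the latest one), or attach the same TTL to them with ALTER TABLE. A hedged sketch, assuming the table names from above:

```sql
-- Option 1: apply the same TTL to an already-rotated table so it self-prunes.
ALTER TABLE system.trace_log_3 MODIFY TTL event_date + INTERVAL 90 DAY DELETE;

-- Option 2: drop a rotated table entirely if its history is no longer needed.
DROP TABLE IF EXISTS system.trace_log_2;
```

This still has to be repeated after every rotation, which is why an automatic solution (TTL inherited by the new table) would be preferable.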