From 570563e3cf0937d49470810a930042920fb8c4d8 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Mon, 2 Mar 2026 15:44:43 -0500 Subject: [PATCH 01/12] feat(ffe): add flag evaluation metrics E2E tests for Go Add system tests validating the feature_flag.evaluations OTel metric emitted by dd-trace-go's OpenFeature provider. - Enable DD_METRICS_OTEL_ENABLED and OTLP endpoint in FFE scenario - 4 test cases: basic metric, count, different flags, error tags - Update Go manifest for new test file --- manifests/golang.yml | 53 +--- tests/ffe/test_flag_eval_metrics.py | 335 ++++++++++++++++++++++++++ utils/_context/_scenarios/__init__.py | 2 + 3 files changed, 342 insertions(+), 48 deletions(-) create mode 100644 tests/ffe/test_flag_eval_metrics.py diff --git a/manifests/golang.yml b/manifests/golang.yml index da2c8241c23..0dd3823bbfd 100644 --- a/manifests/golang.yml +++ b/manifests/golang.yml @@ -6,7 +6,6 @@ manifest: tests/ai_guard/test_ai_guard_sdk.py::Test_Full_Response_And_Tags: missing_feature tests/ai_guard/test_ai_guard_sdk.py::Test_RootSpanUserKeep: missing_feature tests/ai_guard/test_ai_guard_sdk.py::Test_SDK_Disabled: missing_feature - tests/ai_guard/test_ai_guard_sdk.py::Test_SensitiveDataScanning: missing_feature tests/apm_tracing_e2e/test_otel.py::Test_Otel_Span: - weblog_declaration: "*": missing_feature (missing /e2e_otel_span endpoint on weblog) @@ -37,7 +36,6 @@ manifest: tests/appsec/api_security/test_custom_data_classification.py::Test_API_Security_Custom_Data_Classification_Scanner: v2.4.0 tests/appsec/api_security/test_endpoint_discovery.py::Test_Endpoint_Discovery: missing_feature tests/appsec/api_security/test_endpoint_fallback.py: missing_feature - tests/appsec/api_security/test_endpoints.py: irrelevant (language not implementing this feature) tests/appsec/api_security/test_schemas.py::Test_Scanners: v2.0.0 tests/appsec/api_security/test_schemas.py::Test_Schema_Request_Cookies: v1.60.0 
tests/appsec/api_security/test_schemas.py::Test_Schema_Request_FormUrlEncoded_Body: v1.60.0 @@ -178,7 +176,6 @@ manifest: tests/appsec/iast/source/test_multipart.py::TestMultipart: missing_feature tests/appsec/iast/source/test_parameter_name.py::TestParameterName: missing_feature tests/appsec/iast/source/test_parameter_value.py::TestParameterValue: missing_feature - tests/appsec/iast/source/test_parameter_value.py::TestParameterValue::test_source_reported: irrelevant tests/appsec/iast/source/test_path.py::TestPath: missing_feature tests/appsec/iast/source/test_path_parameter.py::TestPathParameter: missing_feature tests/appsec/iast/source/test_sql_row.py::TestSqlRow: missing_feature @@ -186,7 +183,7 @@ manifest: tests/appsec/iast/test_sampling_by_route_method_count.py::TestSamplingByRouteMethodCount: missing_feature tests/appsec/iast/test_security_controls.py::TestSecurityControls: missing_feature tests/appsec/iast/test_vulnerability_schema.py::TestIastVulnerabilitySchema: v0.0.0 # Compliant because no IAST support - tests/appsec/rasp/test_api10.py::Test_API10_all: v2.7.0-dev + tests/appsec/rasp/test_api10.py::Test_API10_all: bug (APPSEC-61152) tests/appsec/rasp/test_api10.py::Test_API10_downstream_request_tag: v2.5.0-dev tests/appsec/rasp/test_api10.py::Test_API10_downstream_ssrf_telemetry: v2.4.0 tests/appsec/rasp/test_api10.py::Test_API10_redirect: v2.7.0-dev @@ -594,7 +591,7 @@ manifest: tests/appsec/test_service_activation_metric.py::TestServiceActivationRemoteConfigurationConfigMetric: v2.4.0 tests/appsec/test_shell_execution.py::Test_ShellExecution: missing_feature tests/appsec/test_span_tags_headers.py: v2.4.0 - tests/appsec/test_span_tags_headers.py::Test_Headers_Event_Blocking: v2.7.0-dev + tests/appsec/test_span_tags_headers.py::Test_Headers_Event_Blocking: bug (APPSEC-61286) tests/appsec/test_suspicious_attacker_blocking.py::Test_Suspicious_Attacker_Blocking: v1.69.0 tests/appsec/test_trace_tagging.py::Test_TraceTaggingRules: v2.1.0-dev 
tests/appsec/test_trace_tagging.py::Test_TraceTaggingRulesRcCapability: v2.1.0-dev @@ -667,15 +664,13 @@ manifest: gin: v1.37.0 tests/appsec/waf/test_addresses.py::Test_gRPC: v1.36.0 tests/appsec/waf/test_blocking.py::Test_Blocking: v1.50.0-rc.1 - tests/appsec/waf/test_blocking.py::Test_Blocking::test_accept_full_html: - - declaration: bug (APPSEC-61196) - component_version: '<2.7.0' + tests/appsec/waf/test_blocking.py::Test_Blocking::test_accept_full_html: bug (APPSEC-61196) tests/appsec/waf/test_blocking.py::Test_Blocking::test_accept_partial_html: missing_feature (Support for partial html not implemented) tests/appsec/waf/test_blocking.py::Test_Blocking::test_html_template_v2: - declaration: missing_feature component_version: <1.52.0 - declaration: bug (APPSEC-61196) - component_version: '>=1.52.0 <2.7.0' + component_version: '>=1.52.0' tests/appsec/waf/test_blocking.py::Test_Blocking::test_json_template_v1: - declaration: missing_feature component_version: <1.52.0 @@ -774,14 +769,8 @@ manifest: tests/auto_inject/test_auto_inject_install.py::TestContainerAutoInjectInstallScriptAppsec: v2.0.0 tests/auto_inject/test_auto_inject_install.py::TestHostAutoInjectInstallScriptAppsec: v2.0.0 tests/auto_inject/test_auto_inject_install.py::TestSimpleInstallerAutoInjectManualAppsec: v2.0.0 - tests/debugger/test_debugger_capture_expressions.py::Test_Debugger_Line_Capture_Expressions: missing_feature (Not yet implemented) - tests/debugger/test_debugger_capture_expressions.py::Test_Debugger_Method_Capture_Expressions: missing_feature (Not yet implemented) tests/debugger/test_debugger_code_origins.py::Test_Debugger_Code_Origins: missing_feature (feature not implemented) tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay: missing_feature (feature not implemented) - tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_firsthit: missing_feature (Implemented only for dotnet) - 
tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_outofmemory: missing_feature (Implemented only for dotnet) - tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_recursion_inlined: irrelevant (Test for specific bug in dotnet) - tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_stackoverflow: missing_feature (Implemented only for dotnet) tests/debugger/test_debugger_expression_language.py::Test_Debugger_Expression_Language: missing_feature (feature not implemented) tests/debugger/test_debugger_inproduct_enablement.py::Test_Debugger_InProduct_Enablement_Code_Origin: missing_feature tests/debugger/test_debugger_inproduct_enablement.py::Test_Debugger_InProduct_Enablement_Dynamic_Instrumentation: missing_feature @@ -789,36 +778,20 @@ manifest: tests/debugger/test_debugger_pii.py::Test_Debugger_PII_Redaction: missing_feature (feature not implemented) tests/debugger/test_debugger_pii.py::Test_Debugger_PII_Redaction_Excluded_Identifiers: missing_feature (feature not implemented) tests/debugger/test_debugger_probe_budgets.py::Test_Debugger_Probe_Budgets: missing_feature (feature not implemented) - tests/debugger/test_debugger_probe_budgets.py::Test_Debugger_Probe_Budgets::test_log_line_budgets: missing_feature (Not yet implemented) tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots: missing_feature (feature not implemented) - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_process_tags_snapshot: missing_feature (Not yet implemented) tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_process_tags_snapshot_svc: missing_feature - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_span_decoration_line_snapshot: missing_feature (Not yet implemented) 
tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots_With_SCM: missing_feature (feature not implemented) - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots_With_SCM::test_span_decoration_line_snapshot: missing_feature (Not yet implemented) tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots: v2.2.3 - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots::test_mix_snapshot: missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots::test_span_decoration_method_snapshot: missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots::test_span_method_snapshot: missing_feature (Not yet implemented) tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM: v2.2.3 - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM::test_mix_snapshot: missing_feature (Not yet implemented) - ? 
tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM::test_span_decoration_method_snapshot - : missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM::test_span_method_snapshot: missing_feature (Not yet implemented) tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses: missing_feature (feature not implemented) - tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses::test_log_line_status: missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses::test_metric_line_status: missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses::test_span_decoration_line_status: missing_feature (Not yet implemented) tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses: v2.2.3 - tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses::test_metric_status: missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses::test_span_decoration_method_status: missing_feature (Not yet implemented) - tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses::test_span_method_status: missing_feature (Not yet implemented) tests/debugger/test_debugger_symdb.py::Test_Debugger_SymDb: v2.2.3 tests/debugger/test_debugger_telemetry.py::Test_Debugger_Telemetry: missing_feature tests/docker_ssi/test_docker_ssi_appsec.py::TestDockerSSIAppsecFeatures::test_telemetry_source_ssi: v2.0.0 tests/ffe/test_dynamic_evaluation.py::Test_FFE_RC_Down_From_Start: v2.4.0 tests/ffe/test_dynamic_evaluation.py::Test_FFE_RC_Unavailable: v2.4.0 tests/ffe/test_exposures.py: v2.6.0-dev # Easy win for chi, echo, gin, net-http, net-http-orchestrion, uds-echo and version 2.5.0 + 
tests/ffe/test_flag_eval_metrics.py: v2.7.0-dev tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: - weblog_declaration: @@ -858,7 +831,6 @@ manifest: "*": irrelevant net-http: missing_feature (Endpoint not implemented) net-http-orchestrion: missing_feature (Endpoint not implemented) - - declaration: irrelevant (Localstack SQS does not support AWS Xray Header parsing) tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS::test_consume_trace_equality: missing_feature (Expected to fail, Golang does not propagate context) tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS::test_produce_trace_equality: missing_feature (Expected to fail, Golang does not propagate context) tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: @@ -872,11 +844,8 @@ manifest: tests/integrations/crossed_integrations/test_sqs.py::_BaseSQS::test_produce_trace_equality: missing_feature (Expected to fail, Golang does not propagate context) tests/integrations/test_cassandra.py::Test_Cassandra: missing_feature (Endpoint is not implemented on weblog) tests/integrations/test_db_integrations_sql.py::Test_MsSql: missing_feature - tests/integrations/test_db_integrations_sql.py::Test_MsSql::test_db_jdbc_drive_classname: missing_feature (Apply only java) tests/integrations/test_db_integrations_sql.py::Test_MySql: missing_feature - tests/integrations/test_db_integrations_sql.py::Test_MySql::test_db_jdbc_drive_classname: missing_feature (Apply only java) tests/integrations/test_db_integrations_sql.py::Test_Postgres: missing_feature - tests/integrations/test_db_integrations_sql.py::Test_Postgres::test_db_jdbc_drive_classname: missing_feature (Apply only java) 
tests/integrations/test_db_integrations_sql.py::_BaseDatadogDbIntegrationTestClass::test_db_jdbc_drive_classname: missing_feature (Apply only java) tests/integrations/test_dbm.py::Test_Dbm: missing_feature tests/integrations/test_dbm.py::Test_Dbm_Comment_Batch_Python_Aiomysql: irrelevant (These are python only tests.) @@ -916,7 +885,6 @@ manifest: "*": irrelevant net-http: missing_feature (Endpoint not implemented) net-http-orchestrion: missing_feature (Endpoint not implemented) - tests/integrations/test_dsm.py::Test_DsmRabbitmq::test_dsm_rabbitmq_dotnet_legacy: irrelevant (legacy dotnet behavior) tests/integrations/test_dsm.py::Test_DsmRabbitmq_FanoutExchange: - weblog_declaration: "*": irrelevant @@ -974,7 +942,6 @@ manifest: tests/integrations/test_inferred_proxy.py::Test_AWS_API_Gateway_Inferred_Span_Creation_v2: missing_feature tests/integrations/test_mongo.py::Test_Mongo: missing_feature (Endpoint is not implemented on weblog) tests/integrations/test_otel_drop_in.py::Test_Otel_Drop_In: missing_feature - tests/integrations/test_service_overrides.py::Test_SqlServiceNameSource: irrelevant (Only implemented for Java) tests/integrations/test_sql.py::Test_Sql: missing_feature (Endpoint is not implemented on weblog) tests/otel/test_context_propagation.py::Test_Otel_Context_Propagation_Default_Propagator_Api: - weblog_declaration: @@ -985,7 +952,6 @@ manifest: tests/otel_tracing_e2e/test_e2e.py::Test_OTelTracingE2E: irrelevant tests/parametric/test_128_bit_traceids.py::Test_128_Bit_Traceids: v1.50.0 tests/parametric/test_config_consistency.py::Test_Config_Dogstatsd: v1.72.0-dev - tests/parametric/test_config_consistency.py::Test_Config_Dogstatsd::test_dogstatsd_default: incomplete_test_app (PHP parameteric app can not access the dogstatsd default values, this logic is internal to the tracer) tests/parametric/test_config_consistency.py::Test_Config_RateLimit: v1.67.0 
tests/parametric/test_config_consistency.py::Test_Config_RateLimit::test_setting_trace_rate_limit_strict: bug (APMAPI-1030) tests/parametric/test_config_consistency.py::Test_Config_Tags: v1.70.1 @@ -1071,13 +1037,10 @@ manifest: - declaration: missing_feature (Not implemented) component_version: <1.64.0 tests/parametric/test_headers_tracestate_dd.py::Test_Headers_Tracestate_DD::test_headers_tracestate_dd_propagate_propagatedtags: "missing_feature (\"False Bug: header[3,6]: can't guarantee the order of strings in the tracestate since they came from the map. BUG: header[4,5]: w3cTraceID shouldn't be present\")" - tests/parametric/test_library_tracestats.py::Test_Library_Tracestats::test_relative_error_TS008: missing_feature (relative error test is broken) tests/parametric/test_llm_observability/: incomplete_test_app tests/parametric/test_otel_api_interoperability.py: missing_feature tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars: v1.66.0 tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_dd_trace_otel_enabled_takes_precedence: irrelevant (does not support enabling opentelemetry via DD_TRACE_OTEL_ENABLED) - tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_dd_trace_sample_ignore_parent_false: missing_feature (dd_trace_sample_ignore_parent requires an RFC, this feature is not implemented in any language) - tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_dd_trace_sample_ignore_parent_true: missing_feature (dd_trace_sample_ignore_parent requires an RFC, this feature is not implemented in any language) tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_otel_log_level_env: missing_feature (DD_LOG_LEVEL is not supported in go) tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_otel_sdk_disabled_set: irrelevant (does not support enabling opentelemetry via DD_TRACE_OTEL_ENABLED) tests/parametric/test_otel_logs.py: '>=2.5.0' # Modified by easy win activation script @@ -1192,9 +1155,6 @@ 
manifest: tests/parametric/test_trace_sampling.py::Test_Trace_Sampling_With_W3C::test_distributed_headers_synthetics_sampling_decision: bug (APMAPI-1563) tests/parametric/test_tracer.py::Test_ProcessTags_ServiceName: missing_feature tests/parametric/test_tracer.py::Test_TracerSCITagging: v1.48.0 - tests/parametric/test_tracer.py::Test_TracerServiceNameSource::test_tracer_manual_service_name_sets_srv_src: irrelevant (Only implemented for Java) - tests/parametric/test_tracer.py::Test_TracerServiceNameSource::test_tracer_no_srv_src_when_service_not_manually_set: irrelevant (Only implemented for Java) - tests/parametric/test_tracer.py::Test_TracerUniversalServiceTagging::test_tracer_service_name_environment_variable: "missing_feature (FIXME: library test client sets empty string as the service name)" tests/parametric/test_tracer_flare.py::TestTracerFlareV1: '>=2.5.0' # Modified by easy win activation script tests/parametric/test_tracer_flare.py::TestTracerFlareV1::test_flare_log_level_order: missing_feature # Created by easy win activation script tests/parametric/test_tracer_flare.py::TestTracerFlareV1::test_no_tracer_flare_for_other_task_types: missing_feature # Created by easy win activation script @@ -1339,7 +1299,6 @@ manifest: tests/test_library_conf.py::Test_HeaderTags_Colon_Leading: v1.53.0 tests/test_library_conf.py::Test_HeaderTags_Colon_Trailing: v1.70.0 tests/test_library_conf.py::Test_HeaderTags_DynamicConfig: v1.70.0 - tests/test_library_conf.py::Test_HeaderTags_DynamicConfig::test_tracing_client_http_header_tags_apm_multiconfig: missing_feature (APM_TRACING_MULTICONFIG is not supported in any language yet) tests/test_library_conf.py::Test_HeaderTags_Long: v1.53.0 tests/test_library_conf.py::Test_HeaderTags_Short: v1.53.0 tests/test_library_conf.py::Test_HeaderTags_Whitespace_Header: v1.53.0 @@ -1359,7 +1318,6 @@ manifest: net-http: v2.4.0 net-http-orchestrion: v2.4.0 tests/test_rum_injection.py: irrelevant (RUM injection only supported for Java) - 
tests/test_sampling_rate_capping.py::Test_SamplingRateCappedIncrease: v2.7.0-dev tests/test_sampling_rates.py::Test_SampleRateFunction: v1.72.1 # real version unknown tests/test_sampling_rates.py::Test_SamplingDecisionAdded: v1.72.1 # real version unknown tests/test_sampling_rates.py::Test_SamplingDecisions: v1.72.1 # real version unknown @@ -1493,7 +1451,6 @@ manifest: tests/test_telemetry.py::Test_Telemetry::test_api_still_v1: irrelevant tests/test_telemetry.py::Test_Telemetry::test_app_dependencies_loaded: irrelevant tests/test_telemetry.py::Test_Telemetry::test_app_product_change: missing_feature (Weblog GET/enable_product and app-product-change event is not implemented yet.) - tests/test_telemetry.py::Test_Telemetry::test_telemetry_message_has_datadog_container_id: "irrelevant (cgroup in weblog is 0::/, so this test can't work)" tests/test_telemetry.py::Test_TelemetryEnhancedConfigReporting: missing_feature tests/test_telemetry.py::Test_TelemetrySCAEnvVar: missing_feature tests/test_telemetry.py::Test_TelemetryV2: v1.49.1 diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py new file mode 100644 index 00000000000..1f01a3e2e06 --- /dev/null +++ b/tests/ffe/test_flag_eval_metrics.py @@ -0,0 +1,335 @@ +"""Test feature flag evaluation metrics via OTel Metrics API.""" + +import time + +from utils import ( + weblog, + interfaces, + scenarios, + features, + remote_config as rc, +) + + +RC_PRODUCT = "FFE_FLAGS" +RC_PATH = f"datadog/2/{RC_PRODUCT}" + +# Wait time in seconds for OTLP metrics pipeline: +# OTel SDK export interval (10s) + Agent metric flush (10s) + buffer +METRICS_PIPELINE_WAIT = 25 + + +def make_ufc_fixture(flag_key, variant_key="on", variation_type="STRING", enabled=True): + """Create a UFC fixture with the given flag configuration.""" + values = { + "STRING": {"on": "on-value", "off": "off-value"}, + "BOOLEAN": {"on": True, "off": False}, + } + var_values = values.get(variation_type, values["STRING"]) + + return { + 
"createdAt": "2024-04-17T19:40:53.716Z", + "format": "SERVER", + "environment": {"name": "Test"}, + "flags": { + flag_key: { + "key": flag_key, + "enabled": enabled, + "variationType": variation_type, + "variations": { + "on": {"key": "on", "value": var_values["on"]}, + "off": {"key": "off", "value": var_values["off"]}, + }, + "allocations": [ + { + "key": "default-allocation", + "rules": [], + "splits": [{"variationKey": variant_key, "shards": []}], + "doLog": True, + } + ], + } + }, + } + + +def find_eval_metrics(flag_key=None): + """Find feature_flag.evaluations metrics in agent data. + + Returns a list of metric points matching the metric name, optionally filtered by flag key tag. + """ + results = [] + for _, point in interfaces.agent.get_metrics(): + if point.get("metric") != "feature_flag.evaluations": + continue + + tags = point.get("tags", []) + if flag_key is not None: + tag_match = any(t == f"feature_flag.key:{flag_key}" for t in tags) + if not tag_match: + continue + + results.append(point) + return results + + +def get_tag_value(tags, key): + """Extract a tag value from a list of 'key:value' strings.""" + prefix = f"{key}:" + for tag in tags: + if tag.startswith(prefix): + return tag[len(prefix) :] + return None + + +@scenarios.feature_flagging_and_experimentation +@features.feature_flags_exposures +class Test_FFE_Eval_Metric_Basic: + """Test that a flag evaluation produces a feature_flag.evaluations metric.""" + + def setup_ffe_eval_metric_basic(self): + rc.tracer_rc_state.reset().apply() + + config_id = "ffe-eval-metric-basic" + self.flag_key = "eval-metric-basic-flag" + rc.tracer_rc_state.set_config(f"{RC_PATH}/{config_id}/config", make_ufc_fixture(self.flag_key)).apply() + + self.r = weblog.post( + "/ffe", + json={ + "flag": self.flag_key, + "variationType": "STRING", + "defaultValue": "default", + "targetingKey": "user-1", + "attributes": {}, + }, + ) + + # Wait for OTLP metrics pipeline + time.sleep(METRICS_PIPELINE_WAIT) + + def 
test_ffe_eval_metric_basic(self): + """Test that flag evaluation produces a metric with correct tags.""" + assert self.r.status_code == 200, f"Flag evaluation failed: {self.r.text}" + + metrics = find_eval_metrics(self.flag_key) + assert len(metrics) > 0, ( + f"Expected at least one feature_flag.evaluations metric for flag '{self.flag_key}', " + f"but found none. All eval metrics: {find_eval_metrics()}" + ) + + # Verify tags on the first matching metric point + point = metrics[0] + tags = point.get("tags", []) + + assert get_tag_value(tags, "feature_flag.key") == self.flag_key, ( + f"Expected tag feature_flag.key:{self.flag_key}, got tags: {tags}" + ) + assert get_tag_value(tags, "feature_flag.provider.name") == "Datadog", ( + f"Expected tag feature_flag.provider.name:Datadog, got tags: {tags}" + ) + assert get_tag_value(tags, "feature_flag.result.variant") == "on", ( + f"Expected tag feature_flag.result.variant:on, got tags: {tags}" + ) + assert get_tag_value(tags, "feature_flag.result.reason") == "targeting_match", ( + f"Expected tag feature_flag.result.reason:targeting_match, got tags: {tags}" + ) + + +@scenarios.feature_flagging_and_experimentation +@features.feature_flags_exposures +class Test_FFE_Eval_Metric_Count: + """Test that multiple evaluations of the same flag produce correct metric count.""" + + def setup_ffe_eval_metric_count(self): + rc.tracer_rc_state.reset().apply() + + config_id = "ffe-eval-metric-count" + self.flag_key = "eval-metric-count-flag" + rc.tracer_rc_state.set_config(f"{RC_PATH}/{config_id}/config", make_ufc_fixture(self.flag_key)).apply() + + self.eval_count = 5 + self.responses = [] + for _ in range(self.eval_count): + r = weblog.post( + "/ffe", + json={ + "flag": self.flag_key, + "variationType": "STRING", + "defaultValue": "default", + "targetingKey": "user-1", + "attributes": {}, + }, + ) + self.responses.append(r) + + time.sleep(METRICS_PIPELINE_WAIT) + + def test_ffe_eval_metric_count(self): + """Test that N evaluations produce 
metric count = N.""" + for i, r in enumerate(self.responses): + assert r.status_code == 200, f"Request {i + 1} failed: {r.text}" + + metrics = find_eval_metrics(self.flag_key) + assert len(metrics) > 0, ( + f"Expected at least one feature_flag.evaluations metric for flag '{self.flag_key}', " + f"but found none." + ) + + # Sum all data points for this flag (agent may split across multiple series entries) + total_count = 0 + for point in metrics: + points = point.get("points", []) + for p in points: + # points format: {"value": N, "timestamp": "..."} (v2 series API) + if isinstance(p, dict): + total_count += p.get("value", 0) + elif isinstance(p, list) and len(p) >= 2: + total_count += p[1] + + assert total_count >= self.eval_count, ( + f"Expected metric count >= {self.eval_count}, got {total_count}" + ) + + +@scenarios.feature_flagging_and_experimentation +@features.feature_flags_exposures +class Test_FFE_Eval_Metric_Different_Flags: + """Test that different flags produce separate metric series.""" + + def setup_ffe_eval_metric_different_flags(self): + rc.tracer_rc_state.reset().apply() + + config_id = "ffe-eval-metric-diff" + self.flag_a = "eval-metric-flag-a" + self.flag_b = "eval-metric-flag-b" + + # Create config with both flags + fixture = { + "createdAt": "2024-04-17T19:40:53.716Z", + "format": "SERVER", + "environment": {"name": "Test"}, + "flags": { + self.flag_a: { + "key": self.flag_a, + "enabled": True, + "variationType": "STRING", + "variations": { + "on": {"key": "on", "value": "on-value"}, + "off": {"key": "off", "value": "off-value"}, + }, + "allocations": [ + { + "key": "default-allocation", + "rules": [], + "splits": [{"variationKey": "on", "shards": []}], + "doLog": True, + } + ], + }, + self.flag_b: { + "key": self.flag_b, + "enabled": True, + "variationType": "STRING", + "variations": { + "on": {"key": "on", "value": "on-value"}, + "off": {"key": "off", "value": "off-value"}, + }, + "allocations": [ + { + "key": "default-allocation", + "rules": 
[], + "splits": [{"variationKey": "on", "shards": []}], + "doLog": True, + } + ], + }, + }, + } + rc.tracer_rc_state.set_config(f"{RC_PATH}/{config_id}/config", fixture).apply() + + self.r_a = weblog.post( + "/ffe", + json={ + "flag": self.flag_a, + "variationType": "STRING", + "defaultValue": "default", + "targetingKey": "user-1", + "attributes": {}, + }, + ) + self.r_b = weblog.post( + "/ffe", + json={ + "flag": self.flag_b, + "variationType": "STRING", + "defaultValue": "default", + "targetingKey": "user-1", + "attributes": {}, + }, + ) + + time.sleep(METRICS_PIPELINE_WAIT) + + def test_ffe_eval_metric_different_flags(self): + """Test that each flag key gets its own metric series.""" + assert self.r_a.status_code == 200, f"Flag A evaluation failed: {self.r_a.text}" + assert self.r_b.status_code == 200, f"Flag B evaluation failed: {self.r_b.text}" + + metrics_a = find_eval_metrics(self.flag_a) + metrics_b = find_eval_metrics(self.flag_b) + + assert len(metrics_a) > 0, ( + f"Expected metric for flag '{self.flag_a}', found none. All: {find_eval_metrics()}" + ) + assert len(metrics_b) > 0, ( + f"Expected metric for flag '{self.flag_b}', found none. 
All: {find_eval_metrics()}" + ) + + +@scenarios.feature_flagging_and_experimentation +@features.feature_flags_exposures +class Test_FFE_Eval_Metric_Error: + """Test that evaluating a non-existent flag produces metric with error tags.""" + + def setup_ffe_eval_metric_error(self): + rc.tracer_rc_state.reset().apply() + + # Set up config with a different flag than what we'll request + config_id = "ffe-eval-metric-error" + rc.tracer_rc_state.set_config( + f"{RC_PATH}/{config_id}/config", make_ufc_fixture("some-other-flag") + ).apply() + + self.flag_key = "non-existent-eval-metric-flag" + self.r = weblog.post( + "/ffe", + json={ + "flag": self.flag_key, + "variationType": "STRING", + "defaultValue": "default", + "targetingKey": "user-1", + "attributes": {}, + }, + ) + + time.sleep(METRICS_PIPELINE_WAIT) + + def test_ffe_eval_metric_error(self): + """Test that error evaluations produce metric with error.type tag.""" + assert self.r.status_code == 200, f"Flag evaluation request failed: {self.r.text}" + + metrics = find_eval_metrics(self.flag_key) + assert len(metrics) > 0, ( + f"Expected metric for non-existent flag '{self.flag_key}', found none. 
All: {find_eval_metrics()}" + ) + + point = metrics[0] + tags = point.get("tags", []) + + assert get_tag_value(tags, "feature_flag.result.reason") == "error", ( + f"Expected reason 'error', got tags: {tags}" + ) + assert get_tag_value(tags, "error.type") == "flag_not_found", ( + f"Expected error.type 'flag_not_found', got tags: {tags}" + ) diff --git a/utils/_context/_scenarios/__init__.py b/utils/_context/_scenarios/__init__.py index 048384bf5d1..26a12870d63 100644 --- a/utils/_context/_scenarios/__init__.py +++ b/utils/_context/_scenarios/__init__.py @@ -542,6 +542,8 @@ class _Scenarios: weblog_env={ "DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED": "true", "DD_REMOTE_CONFIG_POLL_INTERVAL_SECONDS": "0.2", + "DD_METRICS_OTEL_ENABLED": "true", + "OTEL_EXPORTER_OTLP_METRICS_ENDPOINT": "http://agent:4318/v1/metrics", }, doc="", scenario_groups=[scenario_groups.ffe], From 9c741850d488485981e2a1d8b27a1231e1528d65 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Mon, 2 Mar 2026 16:18:04 -0500 Subject: [PATCH 02/12] fix(ffe): remove feature_flag.provider.name assertion MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Attribute dropped from dd-trace-go — always "Datadog", adds no value. 
--- tests/ffe/test_flag_eval_metrics.py | 3 --- 1 file changed, 3 deletions(-) diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py index 1f01a3e2e06..a5699927bb0 100644 --- a/tests/ffe/test_flag_eval_metrics.py +++ b/tests/ffe/test_flag_eval_metrics.py @@ -125,9 +125,6 @@ def test_ffe_eval_metric_basic(self): assert get_tag_value(tags, "feature_flag.key") == self.flag_key, ( f"Expected tag feature_flag.key:{self.flag_key}, got tags: {tags}" ) - assert get_tag_value(tags, "feature_flag.provider.name") == "Datadog", ( - f"Expected tag feature_flag.provider.name:Datadog, got tags: {tags}" - ) assert get_tag_value(tags, "feature_flag.result.variant") == "on", ( f"Expected tag feature_flag.result.variant:on, got tags: {tags}" ) From 1af8a6de5f6467116b86655ecab669c54d4be816 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Mon, 2 Mar 2026 16:32:29 -0500 Subject: [PATCH 03/12] fix(ffe): add type annotation to fix mypy index error --- tests/ffe/test_flag_eval_metrics.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py index a5699927bb0..6aa113001f8 100644 --- a/tests/ffe/test_flag_eval_metrics.py +++ b/tests/ffe/test_flag_eval_metrics.py @@ -21,11 +21,11 @@ def make_ufc_fixture(flag_key, variant_key="on", variation_type="STRING", enabled=True): """Create a UFC fixture with the given flag configuration.""" - values = { + values: dict[str, dict[str, str | bool]] = { "STRING": {"on": "on-value", "off": "off-value"}, "BOOLEAN": {"on": True, "off": False}, } - var_values = values.get(variation_type, values["STRING"]) + var_values = values[variation_type] return { "createdAt": "2024-04-17T19:40:53.716Z", From 6c11cd482949c64487ffd0cdcd76875a13a4c724 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Tue, 3 Mar 2026 12:02:04 -0500 Subject: [PATCH 04/12] [golang] Dispatch /ffe by variationType and add type_mismatch metric test The Go weblog was 
calling ofClient.Object() for all evaluations, ignoring the variationType field. This meant type conversion errors could never occur, unlike Python/Node.js which dispatch to the type-specific methods (BooleanValue, StringValue, etc.). Fix the Go weblog to dispatch based on variationType, matching the behavior of other language weblogs. Add Test_FFE_Eval_Metric_Type_Mismatch: configures a STRING flag but evaluates it as BOOLEAN, triggering a type conversion error that happens after the core evaluate() returns. This test would fail with the old evaluate()-level metric recording (which would see targeting_match / no error) and only passes when metrics are recorded via a Finally hook (which sees error / type_mismatch). --- tests/ffe/test_flag_eval_metrics.py | 56 +++++++++++++++++++ .../docker/golang/app/_shared/common/ffe.go | 22 +++++++- 2 files changed, 77 insertions(+), 1 deletion(-) diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py index 6aa113001f8..5f07030e69b 100644 --- a/tests/ffe/test_flag_eval_metrics.py +++ b/tests/ffe/test_flag_eval_metrics.py @@ -330,3 +330,59 @@ def test_ffe_eval_metric_error(self): assert get_tag_value(tags, "error.type") == "flag_not_found", ( f"Expected error.type 'flag_not_found', got tags: {tags}" ) + + +@scenarios.feature_flagging_and_experimentation +@features.feature_flags_exposures +class Test_FFE_Eval_Metric_Type_Mismatch: + """Test that requesting the wrong type produces a metric with type_mismatch error. + + This configures a STRING flag but evaluates it as BOOLEAN. The type + conversion error happens *after* the core evaluate() returns, inside the + type-specific method (BooleanEvaluation). Recording metrics via a + Finally hook catches this; the old evaluate()-level defer would have + recorded a success (targeting_match) instead. 
+ """ + + def setup_ffe_eval_metric_type_mismatch(self): + rc.tracer_rc_state.reset().apply() + + config_id = "ffe-eval-metric-type-mismatch" + self.flag_key = "eval-metric-type-mismatch-flag" + # Flag is configured as STRING + rc.tracer_rc_state.set_config( + f"{RC_PATH}/{config_id}/config", make_ufc_fixture(self.flag_key, variation_type="STRING") + ).apply() + + # But we evaluate it as BOOLEAN → type mismatch + self.r = weblog.post( + "/ffe", + json={ + "flag": self.flag_key, + "variationType": "BOOLEAN", + "defaultValue": False, + "targetingKey": "user-1", + "attributes": {}, + }, + ) + + time.sleep(METRICS_PIPELINE_WAIT) + + def test_ffe_eval_metric_type_mismatch(self): + """Test that type conversion errors produce metric with error.type:type_mismatch.""" + assert self.r.status_code == 200, f"Flag evaluation request failed: {self.r.text}" + + metrics = find_eval_metrics(self.flag_key) + assert len(metrics) > 0, ( + f"Expected metric for flag '{self.flag_key}', found none. All: {find_eval_metrics()}" + ) + + point = metrics[0] + tags = point.get("tags", []) + + assert get_tag_value(tags, "feature_flag.result.reason") == "error", ( + f"Expected reason 'error' for type mismatch, got tags: {tags}" + ) + assert get_tag_value(tags, "error.type") == "type_mismatch", ( + f"Expected error.type 'type_mismatch', got tags: {tags}" + ) diff --git a/utils/build/docker/golang/app/_shared/common/ffe.go b/utils/build/docker/golang/app/_shared/common/ffe.go index 07f08a11994..97d87770622 100644 --- a/utils/build/docker/golang/app/_shared/common/ffe.go +++ b/utils/build/docker/golang/app/_shared/common/ffe.go @@ -33,7 +33,27 @@ func FFeEval() func(writer http.ResponseWriter, request *http.Request) { return } - val := ofClient.Object(request.Context(), body.Flag, body.DefaultValue, of.NewEvaluationContext(body.TargetingKey, body.Attributes)) + ctx := request.Context() + evalCtx := of.NewEvaluationContext(body.TargetingKey, body.Attributes) + + var val any + switch 
body.VariationType { + case "BOOLEAN": + defBool, _ := body.DefaultValue.(bool) + val, _ = ofClient.BooleanValue(ctx, body.Flag, defBool, evalCtx) + case "STRING": + defStr, _ := body.DefaultValue.(string) + val, _ = ofClient.StringValue(ctx, body.Flag, defStr, evalCtx) + case "INTEGER": + // JSON numbers decode as float64 when target is any + defFloat, _ := body.DefaultValue.(float64) + val, _ = ofClient.IntValue(ctx, body.Flag, int64(defFloat), evalCtx) + case "NUMERIC": + defFloat, _ := body.DefaultValue.(float64) + val, _ = ofClient.FloatValue(ctx, body.Flag, defFloat, evalCtx) + default: + val = ofClient.Object(ctx, body.Flag, body.DefaultValue, evalCtx) + } writer.WriteHeader(http.StatusOK) From c0624e37562c567c093f9a9903ad5b0339538dce Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Tue, 3 Mar 2026 16:19:44 -0500 Subject: [PATCH 05/12] Remove obvious comment in ffe.go --- utils/build/docker/golang/app/_shared/common/ffe.go | 1 - 1 file changed, 1 deletion(-) diff --git a/utils/build/docker/golang/app/_shared/common/ffe.go b/utils/build/docker/golang/app/_shared/common/ffe.go index 97d87770622..929d0bd7eaa 100644 --- a/utils/build/docker/golang/app/_shared/common/ffe.go +++ b/utils/build/docker/golang/app/_shared/common/ffe.go @@ -45,7 +45,6 @@ func FFeEval() func(writer http.ResponseWriter, request *http.Request) { defStr, _ := body.DefaultValue.(string) val, _ = ofClient.StringValue(ctx, body.Flag, defStr, evalCtx) case "INTEGER": - // JSON numbers decode as float64 when target is any defFloat, _ := body.DefaultValue.(float64) val, _ = ofClient.IntValue(ctx, body.Flag, int64(defFloat), evalCtx) case "NUMERIC": From c7586ba2280bef736f117920d6fb4fac950cdc76 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Tue, 3 Mar 2026 17:00:15 -0500 Subject: [PATCH 06/12] Mark test_flag_eval_metrics.py as missing_feature for non-Go languages Only Go supports flag evaluation metrics via OTel so far. 
Without this, the test file runs for all FFE-enabled languages and fails. --- manifests/cpp_httpd.yml | 1 + manifests/cpp_kong.yml | 1 + manifests/cpp_nginx.yml | 1 + manifests/dotnet.yml | 1 + manifests/java.yml | 1 + manifests/nodejs.yml | 1 + manifests/php.yml | 1 + manifests/python.yml | 1 + manifests/ruby.yml | 1 + manifests/rust.yml | 1 + 10 files changed, 10 insertions(+) diff --git a/manifests/cpp_httpd.yml b/manifests/cpp_httpd.yml index de031f5373d..733e3f26e0c 100644 --- a/manifests/cpp_httpd.yml +++ b/manifests/cpp_httpd.yml @@ -34,6 +34,7 @@ manifest: tests/ffe/test_dynamic_evaluation.py: missing_feature tests/ffe/test_exposures.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/: missing_feature (Endpoint not implemented) tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS: irrelevant (Localstack SQS does not support AWS Xray Header parsing) tests/integrations/test_cassandra.py::Test_Cassandra: missing_feature (Endpoint is not implemented on weblog) diff --git a/manifests/cpp_kong.yml b/manifests/cpp_kong.yml index d3accd80733..07c1d7bdfef 100644 --- a/manifests/cpp_kong.yml +++ b/manifests/cpp_kong.yml @@ -8,6 +8,7 @@ manifest: tests/appsec/: irrelevant (ASM is not implemented in Kong plugin) tests/debugger/: irrelevant tests/ffe/: missing_feature + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/: missing_feature (Endpoints not implemented) tests/otel/: irrelevant (library does not implement OpenTelemetry) tests/parametric/: irrelevant (Parametric scenario is not applied on Kong) diff --git a/manifests/cpp_nginx.yml b/manifests/cpp_nginx.yml index cfa9595418d..0122ca01325 100644 --- a/manifests/cpp_nginx.yml +++ b/manifests/cpp_nginx.yml @@ -235,6 +235,7 @@ manifest: 
tests/ffe/test_dynamic_evaluation.py: missing_feature tests/ffe/test_exposures.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: missing_feature tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: missing_feature diff --git a/manifests/dotnet.yml b/manifests/dotnet.yml index b2ffb0082cf..dd47c4c0236 100644 --- a/manifests/dotnet.yml +++ b/manifests/dotnet.yml @@ -691,6 +691,7 @@ manifest: tests/ffe/test_dynamic_evaluation.py: v3.36.0 tests/ffe/test_exposures.py: v3.36.0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: v2.0.0-prerelease tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: v2.0.0-prerelease diff --git a/manifests/java.yml b/manifests/java.yml index 53cded6ed04..33b1c788803 100644 --- a/manifests/java.yml +++ b/manifests/java.yml @@ -3058,6 +3058,7 @@ manifest: spring-boot: v1.56.0 tests/ffe/test_exposures.py::Test_FFE_EXP_5_Missing_Targeting_Key: bug (FFL-1729) tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: - weblog_declaration: "*": irrelevant diff --git a/manifests/nodejs.yml 
b/manifests/nodejs.yml index 57447672199..ce0583a3422 100644 --- a/manifests/nodejs.yml +++ b/manifests/nodejs.yml @@ -1579,6 +1579,7 @@ manifest: "*": incomplete_test_app express4: *ref_5_77_0 tests/ffe/test_exposures.py::Test_FFE_EXP_5_Missing_Targeting_Key: bug (FFL-1730) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_apm.py::TestAnthropicApmMessages: *ref_5_71_0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages: *ref_5_71_0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) diff --git a/manifests/php.yml b/manifests/php.yml index 88158b954eb..c67f68d770c 100644 --- a/manifests/php.yml +++ b/manifests/php.yml @@ -551,6 +551,7 @@ manifest: tests/ffe/test_dynamic_evaluation.py: missing_feature tests/ffe/test_exposures.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: missing_feature tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: missing_feature diff --git a/manifests/python.yml b/manifests/python.yml index 2c8e5f57c31..cbfa7f9c9d5 100644 --- a/manifests/python.yml +++ b/manifests/python.yml @@ -1080,6 +1080,7 @@ manifest: tests/ffe/test_dynamic_evaluation.py::Test_FFE_RC_Down_From_Start: v4.0.0 tests/ffe/test_dynamic_evaluation.py::Test_FFE_RC_Unavailable: flaky (FFL-1622) tests/ffe/test_exposures.py: v4.2.0-dev + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_apm.py::TestAnthropicApmMessages: v3.16.0 
tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages: v3.16.0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) diff --git a/manifests/ruby.yml b/manifests/ruby.yml index caa869e8143..67cc66dae89 100644 --- a/manifests/ruby.yml +++ b/manifests/ruby.yml @@ -1090,6 +1090,7 @@ manifest: "*": irrelevant rails72: v2.23.0-dev tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: - weblog_declaration: "*": irrelevant diff --git a/manifests/rust.yml b/manifests/rust.yml index d64f69ad8b9..161482bb344 100644 --- a/manifests/rust.yml +++ b/manifests/rust.yml @@ -23,6 +23,7 @@ manifest: tests/ffe/test_exposures.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS: irrelevant (Localstack SQS does not support AWS Xray Header parsing) + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/test_cassandra.py::Test_Cassandra: missing_feature (Endpoint is not implemented on weblog) tests/integrations/test_db_integrations_sql.py::Test_MsSql::test_db_jdbc_drive_classname: missing_feature (Apply only java) tests/integrations/test_db_integrations_sql.py::Test_MySql::test_db_jdbc_drive_classname: missing_feature (Apply only java) From acc07540f8ad6551c221a30111062b2733d93f00 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Mon, 9 Mar 2026 06:27:35 -0400 Subject: [PATCH 07/12] Remove per-test sleeps, use scenario-level agent_interface_timeout Replace hardcoded time.sleep(25) in each test setup with agent_interface_timeout=30 on the FFE scenario. 
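For context, the 25-second constant this commit deletes was budgeted as OTel SDK export interval plus Agent metric flush plus a buffer (per the comment removed in the diff below); a sketch of that arithmetic, showing the scenario-level timeout covers it:

```python
# Latency budget documented by the removed METRICS_PIPELINE_WAIT comment:
OTEL_EXPORT_INTERVAL_S = 10  # OTel SDK metric export interval
AGENT_FLUSH_INTERVAL_S = 10  # Agent metric flush interval
BUFFER_S = 5                 # slack on top of the two intervals

old_wait = OTEL_EXPORT_INTERVAL_S + AGENT_FLUSH_INTERVAL_S + BUFFER_S
agent_interface_timeout = 30  # new scenario-level setting

# The single scenario-level timeout is at least as long as each removed per-test sleep.
assert agent_interface_timeout >= old_wait
```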
The container shutdown flushes metrics; the timeout gives the agent time to receive and process them. --- tests/ffe/test_flag_eval_metrics.py | 17 +++++------------ utils/_context/_scenarios/__init__.py | 1 + 2 files changed, 6 insertions(+), 12 deletions(-) diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py index 5f07030e69b..2cfb04d844f 100644 --- a/tests/ffe/test_flag_eval_metrics.py +++ b/tests/ffe/test_flag_eval_metrics.py @@ -1,7 +1,5 @@ """Test feature flag evaluation metrics via OTel Metrics API.""" -import time - from utils import ( weblog, interfaces, @@ -14,10 +12,6 @@ RC_PRODUCT = "FFE_FLAGS" RC_PATH = f"datadog/2/{RC_PRODUCT}" -# Wait time in seconds for OTLP metrics pipeline: -# OTel SDK export interval (10s) + Agent metric flush (10s) + buffer -METRICS_PIPELINE_WAIT = 25 - def make_ufc_fixture(flag_key, variant_key="on", variation_type="STRING", enabled=True): """Create a UFC fixture with the given flag configuration.""" @@ -105,8 +99,7 @@ def setup_ffe_eval_metric_basic(self): }, ) - # Wait for OTLP metrics pipeline - time.sleep(METRICS_PIPELINE_WAIT) + def test_ffe_eval_metric_basic(self): """Test that flag evaluation produces a metric with correct tags.""" @@ -160,7 +153,7 @@ def setup_ffe_eval_metric_count(self): ) self.responses.append(r) - time.sleep(METRICS_PIPELINE_WAIT) + def test_ffe_eval_metric_count(self): """Test that N evaluations produce metric count = N.""" @@ -266,7 +259,7 @@ def setup_ffe_eval_metric_different_flags(self): }, ) - time.sleep(METRICS_PIPELINE_WAIT) + def test_ffe_eval_metric_different_flags(self): """Test that each flag key gets its own metric series.""" @@ -310,7 +303,7 @@ def setup_ffe_eval_metric_error(self): }, ) - time.sleep(METRICS_PIPELINE_WAIT) + def test_ffe_eval_metric_error(self): """Test that error evaluations produce metric with error.type tag.""" @@ -366,7 +359,7 @@ def setup_ffe_eval_metric_type_mismatch(self): }, ) - time.sleep(METRICS_PIPELINE_WAIT) + def 
test_ffe_eval_metric_type_mismatch(self): """Test that type conversion errors produce metric with error.type:type_mismatch.""" diff --git a/utils/_context/_scenarios/__init__.py b/utils/_context/_scenarios/__init__.py index 26a12870d63..d53cd5fb2dd 100644 --- a/utils/_context/_scenarios/__init__.py +++ b/utils/_context/_scenarios/__init__.py @@ -545,6 +545,7 @@ class _Scenarios: "DD_METRICS_OTEL_ENABLED": "true", "OTEL_EXPORTER_OTLP_METRICS_ENDPOINT": "http://agent:4318/v1/metrics", }, + agent_interface_timeout=30, doc="", scenario_groups=[scenario_groups.ffe], ) From a2fd84f2a9ec74a592b391fd53b28242069b927f Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Mon, 9 Mar 2026 10:12:02 -0400 Subject: [PATCH 08/12] Add allocation_key assertion to flag eval metrics test Assert that feature_flag.result.allocation_key tag is present with value "default-allocation" on successful flag evaluations. --- tests/ffe/test_flag_eval_metrics.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py index 2cfb04d844f..7b88e97dea4 100644 --- a/tests/ffe/test_flag_eval_metrics.py +++ b/tests/ffe/test_flag_eval_metrics.py @@ -124,6 +124,9 @@ def test_ffe_eval_metric_basic(self): assert get_tag_value(tags, "feature_flag.result.reason") == "targeting_match", ( f"Expected tag feature_flag.result.reason:targeting_match, got tags: {tags}" ) + assert get_tag_value(tags, "feature_flag.result.allocation_key") == "default-allocation", ( + f"Expected tag feature_flag.result.allocation_key:default-allocation, got tags: {tags}" + ) @scenarios.feature_flagging_and_experimentation From a33486a03d8d650d5c35a7113ffc8df1817afebf Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Wed, 11 Mar 2026 15:13:19 -0400 Subject: [PATCH 09/12] feat(php+node): Enable flag evaluation metrics tests; fix reason=static - Enable tests/ffe/test_flag_eval_metrics.py for PHP (>=1.16.0) and Node.js (express4 v6.0.0-pre) - Fix reason assertion: UFC 
engine returns AssignmentReason::Static for a 100% catch-all allocation (rules:[], splits:[{shards:[]}]), not TargetingMatch - Add type annotations to test helpers (mypy compliance) --- manifests/nodejs.yml | 6 ++-- manifests/php.yml | 2 +- tests/ffe/test_flag_eval_metrics.py | 43 ++++++++--------------------- 3 files changed, 16 insertions(+), 35 deletions(-) diff --git a/manifests/nodejs.yml b/manifests/nodejs.yml index ce0583a3422..7efb5c7e5f9 100644 --- a/manifests/nodejs.yml +++ b/manifests/nodejs.yml @@ -1579,7 +1579,10 @@ manifest: "*": incomplete_test_app express4: *ref_5_77_0 tests/ffe/test_exposures.py::Test_FFE_EXP_5_Missing_Targeting_Key: bug (FFL-1730) - tests/ffe/test_flag_eval_metrics.py: missing_feature + tests/ffe/test_flag_eval_metrics.py: + - weblog_declaration: + "*": incomplete_test_app + express4: v6.0.0-pre tests/integration_frameworks/llm/anthropic/test_anthropic_apm.py::TestAnthropicApmMessages: *ref_5_71_0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages: *ref_5_71_0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) @@ -2260,7 +2263,6 @@ manifest: - weblog_declaration: nextjs: missing_feature tests/test_rum_injection.py: irrelevant (RUM injection only supported for Java) - tests/test_sampling_rate_capping.py::Test_SamplingRateCappedIncrease: missing_feature tests/test_sampling_rates.py::Test_SampleRateFunction: *ref_5_54_0 tests/test_sampling_rates.py::Test_SamplingDecisionAdded: *ref_5_17_0 tests/test_sampling_rates.py::Test_SamplingDecisions: *ref_5_54_0 diff --git a/manifests/php.yml b/manifests/php.yml index c67f68d770c..3b885f94a6e 100644 --- a/manifests/php.yml +++ b/manifests/php.yml @@ -551,7 +551,7 @@ manifest: tests/ffe/test_dynamic_evaluation.py: missing_feature tests/ffe/test_exposures.py: missing_feature 
tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) - tests/ffe/test_flag_eval_metrics.py: missing_feature + tests/ffe/test_flag_eval_metrics.py: '>=1.16.0' tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: missing_feature tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: missing_feature diff --git a/tests/ffe/test_flag_eval_metrics.py b/tests/ffe/test_flag_eval_metrics.py index 7b88e97dea4..c0116d8564a 100644 --- a/tests/ffe/test_flag_eval_metrics.py +++ b/tests/ffe/test_flag_eval_metrics.py @@ -13,7 +13,7 @@ RC_PATH = f"datadog/2/{RC_PRODUCT}" -def make_ufc_fixture(flag_key, variant_key="on", variation_type="STRING", enabled=True): +def make_ufc_fixture(flag_key: str, variant_key: str = "on", variation_type: str = "STRING", *, enabled: bool = True): """Create a UFC fixture with the given flag configuration.""" values: dict[str, dict[str, str | bool]] = { "STRING": {"on": "on-value", "off": "off-value"}, @@ -47,7 +47,7 @@ def make_ufc_fixture(flag_key, variant_key="on", variation_type="STRING", enable } -def find_eval_metrics(flag_key=None): +def find_eval_metrics(flag_key: str | None = None): """Find feature_flag.evaluations metrics in agent data. Returns a list of metric points matching the metric name, optionally filtered by flag key tag. 
@@ -67,7 +67,7 @@ def find_eval_metrics(flag_key=None): return results -def get_tag_value(tags, key): +def get_tag_value(tags: list[str], key: str): """Extract a tag value from a list of 'key:value' strings.""" prefix = f"{key}:" for tag in tags: @@ -99,8 +99,6 @@ def setup_ffe_eval_metric_basic(self): }, ) - - def test_ffe_eval_metric_basic(self): """Test that flag evaluation produces a metric with correct tags.""" assert self.r.status_code == 200, f"Flag evaluation failed: {self.r.text}" @@ -121,8 +119,8 @@ def test_ffe_eval_metric_basic(self): assert get_tag_value(tags, "feature_flag.result.variant") == "on", ( f"Expected tag feature_flag.result.variant:on, got tags: {tags}" ) - assert get_tag_value(tags, "feature_flag.result.reason") == "targeting_match", ( - f"Expected tag feature_flag.result.reason:targeting_match, got tags: {tags}" + assert get_tag_value(tags, "feature_flag.result.reason") == "static", ( + f"Expected tag feature_flag.result.reason:static, got tags: {tags}" ) assert get_tag_value(tags, "feature_flag.result.allocation_key") == "default-allocation", ( f"Expected tag feature_flag.result.allocation_key:default-allocation, got tags: {tags}" @@ -156,8 +154,6 @@ def setup_ffe_eval_metric_count(self): ) self.responses.append(r) - - def test_ffe_eval_metric_count(self): """Test that N evaluations produce metric count = N.""" for i, r in enumerate(self.responses): @@ -165,8 +161,7 @@ def test_ffe_eval_metric_count(self): metrics = find_eval_metrics(self.flag_key) assert len(metrics) > 0, ( - f"Expected at least one feature_flag.evaluations metric for flag '{self.flag_key}', " - f"but found none." + f"Expected at least one feature_flag.evaluations metric for flag '{self.flag_key}', but found none." 
) # Sum all data points for this flag (agent may split across multiple series entries) @@ -180,9 +175,7 @@ def test_ffe_eval_metric_count(self): elif isinstance(p, list) and len(p) >= 2: total_count += p[1] - assert total_count >= self.eval_count, ( - f"Expected metric count >= {self.eval_count}, got {total_count}" - ) + assert total_count >= self.eval_count, f"Expected metric count >= {self.eval_count}, got {total_count}" @scenarios.feature_flagging_and_experimentation @@ -262,8 +255,6 @@ def setup_ffe_eval_metric_different_flags(self): }, ) - - def test_ffe_eval_metric_different_flags(self): """Test that each flag key gets its own metric series.""" assert self.r_a.status_code == 200, f"Flag A evaluation failed: {self.r_a.text}" @@ -272,12 +263,8 @@ def test_ffe_eval_metric_different_flags(self): metrics_a = find_eval_metrics(self.flag_a) metrics_b = find_eval_metrics(self.flag_b) - assert len(metrics_a) > 0, ( - f"Expected metric for flag '{self.flag_a}', found none. All: {find_eval_metrics()}" - ) - assert len(metrics_b) > 0, ( - f"Expected metric for flag '{self.flag_b}', found none. All: {find_eval_metrics()}" - ) + assert len(metrics_a) > 0, f"Expected metric for flag '{self.flag_a}', found none. All: {find_eval_metrics()}" + assert len(metrics_b) > 0, f"Expected metric for flag '{self.flag_b}', found none. 
All: {find_eval_metrics()}" @scenarios.feature_flagging_and_experimentation @@ -290,9 +277,7 @@ def setup_ffe_eval_metric_error(self): # Set up config with a different flag than what we'll request config_id = "ffe-eval-metric-error" - rc.tracer_rc_state.set_config( - f"{RC_PATH}/{config_id}/config", make_ufc_fixture("some-other-flag") - ).apply() + rc.tracer_rc_state.set_config(f"{RC_PATH}/{config_id}/config", make_ufc_fixture("some-other-flag")).apply() self.flag_key = "non-existent-eval-metric-flag" self.r = weblog.post( @@ -306,8 +291,6 @@ def setup_ffe_eval_metric_error(self): }, ) - - def test_ffe_eval_metric_error(self): """Test that error evaluations produce metric with error.type tag.""" assert self.r.status_code == 200, f"Flag evaluation request failed: {self.r.text}" @@ -362,16 +345,12 @@ def setup_ffe_eval_metric_type_mismatch(self): }, ) - - def test_ffe_eval_metric_type_mismatch(self): """Test that type conversion errors produce metric with error.type:type_mismatch.""" assert self.r.status_code == 200, f"Flag evaluation request failed: {self.r.text}" metrics = find_eval_metrics(self.flag_key) - assert len(metrics) > 0, ( - f"Expected metric for flag '{self.flag_key}', found none. All: {find_eval_metrics()}" - ) + assert len(metrics) > 0, f"Expected metric for flag '{self.flag_key}', found none. 
All: {find_eval_metrics()}" point = metrics[0] tags = point.get("tags", []) From 3b2a3adb980f9279f0e5821a94a713475076f681 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Wed, 11 Mar 2026 16:20:21 -0400 Subject: [PATCH 10/12] feat(go): Enable flag evaluation metrics E2E tests for Go; fix reason=static - Enable tests/ffe/test_flag_eval_metrics.py for Go only (PHP and Node.js remain missing_feature) - Fix reason assertion: UFC engine returns AssignmentReason::Static for a 100% catch-all allocation (rules:[], splits:[{shards:[]}]), not TargetingMatch - Add type annotations to test helpers (mypy compliance) --- manifests/nodejs.yml | 6 ++---- manifests/php.yml | 2 +- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/manifests/nodejs.yml b/manifests/nodejs.yml index 7efb5c7e5f9..ce0583a3422 100644 --- a/manifests/nodejs.yml +++ b/manifests/nodejs.yml @@ -1579,10 +1579,7 @@ manifest: "*": incomplete_test_app express4: *ref_5_77_0 tests/ffe/test_exposures.py::Test_FFE_EXP_5_Missing_Targeting_Key: bug (FFL-1730) - tests/ffe/test_flag_eval_metrics.py: - - weblog_declaration: - "*": incomplete_test_app - express4: v6.0.0-pre + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_apm.py::TestAnthropicApmMessages: *ref_5_71_0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages: *ref_5_71_0 tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) @@ -2263,6 +2260,7 @@ manifest: - weblog_declaration: nextjs: missing_feature tests/test_rum_injection.py: irrelevant (RUM injection only supported for Java) + tests/test_sampling_rate_capping.py::Test_SamplingRateCappedIncrease: missing_feature tests/test_sampling_rates.py::Test_SampleRateFunction: *ref_5_54_0 tests/test_sampling_rates.py::Test_SamplingDecisionAdded: *ref_5_17_0 tests/test_sampling_rates.py::Test_SamplingDecisions: *ref_5_54_0 
diff --git a/manifests/php.yml b/manifests/php.yml index 3b885f94a6e..c67f68d770c 100644 --- a/manifests/php.yml +++ b/manifests/php.yml @@ -551,7 +551,7 @@ manifest: tests/ffe/test_dynamic_evaluation.py: missing_feature tests/ffe/test_exposures.py: missing_feature tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) - tests/ffe/test_flag_eval_metrics.py: '>=1.16.0' + tests/ffe/test_flag_eval_metrics.py: missing_feature tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: missing_feature tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: missing_feature From 81b3a6dea997974b0d7095e41e4d3686e207f0b0 Mon Sep 17 00:00:00 2001 From: Leo Romanovsky Date: Wed, 11 Mar 2026 16:53:45 -0400 Subject: [PATCH 11/12] fix(manifests): restore alphabetical order for test_flag_eval_metrics entries --- manifests/cpp_httpd.yml | 2 +- manifests/cpp_nginx.yml | 2 +- manifests/dotnet.yml | 2 +- manifests/java.yml | 2 +- manifests/php.yml | 2 +- manifests/ruby.yml | 2 +- manifests/rust.yml | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/manifests/cpp_httpd.yml b/manifests/cpp_httpd.yml index 733e3f26e0c..6a76f408d13 100644 --- a/manifests/cpp_httpd.yml +++ b/manifests/cpp_httpd.yml @@ -33,8 +33,8 @@ manifest: tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_process_tags_snapshot_svc: missing_feature (Not yet implemented) tests/ffe/test_dynamic_evaluation.py: missing_feature tests/ffe/test_exposures.py: missing_feature - tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234) tests/ffe/test_flag_eval_metrics.py: missing_feature + 
   tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/: missing_feature (Endpoint not implemented)
   tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS: irrelevant (Localstack SQS does not support AWS Xray Header parsing)
   tests/integrations/test_cassandra.py::Test_Cassandra: missing_feature (Endpoint is not implemented on weblog)
diff --git a/manifests/cpp_nginx.yml b/manifests/cpp_nginx.yml
index 0122ca01325..5e7df1d66f7 100644
--- a/manifests/cpp_nginx.yml
+++ b/manifests/cpp_nginx.yml
@@ -234,8 +234,8 @@ manifest:
   tests/docker_ssi/test_docker_ssi_appsec.py::TestDockerSSIAppsecFeatures::test_telemetry_source_ssi: missing_feature
   tests/ffe/test_dynamic_evaluation.py: missing_feature
   tests/ffe/test_exposures.py: missing_feature
-  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/ffe/test_flag_eval_metrics.py: missing_feature
+  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: missing_feature
   tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature
   tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: missing_feature
diff --git a/manifests/dotnet.yml b/manifests/dotnet.yml
index dd47c4c0236..950a196186e 100644
--- a/manifests/dotnet.yml
+++ b/manifests/dotnet.yml
@@ -690,8 +690,8 @@ manifest:
   tests/docker_ssi/test_docker_ssi_appsec.py::TestDockerSSIAppsecFeatures::test_telemetry_source_ssi: v3.36.0
   tests/ffe/test_dynamic_evaluation.py: v3.36.0
   tests/ffe/test_exposures.py: v3.36.0
-  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/ffe/test_flag_eval_metrics.py: missing_feature
+  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: v2.0.0-prerelease
   tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature
   tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: v2.0.0-prerelease
diff --git a/manifests/java.yml b/manifests/java.yml
index 33b1c788803..2d13b9ba259 100644
--- a/manifests/java.yml
+++ b/manifests/java.yml
@@ -3057,8 +3057,8 @@ manifest:
         "*": irrelevant
         spring-boot: v1.56.0
   tests/ffe/test_exposures.py::Test_FFE_EXP_5_Missing_Targeting_Key: bug (FFL-1729)
-  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/ffe/test_flag_eval_metrics.py: missing_feature
+  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka:
     - weblog_declaration:
         "*": irrelevant
diff --git a/manifests/php.yml b/manifests/php.yml
index c67f68d770c..0ab061c603a 100644
--- a/manifests/php.yml
+++ b/manifests/php.yml
@@ -550,8 +550,8 @@ manifest:
   tests/docker_ssi/test_docker_ssi_crash.py::TestDockerSSICrash::test_crash: missing_feature (No implemented the endpoint /crashme)
   tests/ffe/test_dynamic_evaluation.py: missing_feature
   tests/ffe/test_exposures.py: missing_feature
-  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/ffe/test_flag_eval_metrics.py: missing_feature
+  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka: missing_feature
   tests/integrations/crossed_integrations/test_kinesis.py::Test_Kinesis_PROPAGATION_VIA_MESSAGE_ATTRIBUTES: missing_feature
   tests/integrations/crossed_integrations/test_rabbitmq.py::Test_RabbitMQ_Trace_Context_Propagation: missing_feature
diff --git a/manifests/ruby.yml b/manifests/ruby.yml
index 67cc66dae89..9ea45f69bec 100644
--- a/manifests/ruby.yml
+++ b/manifests/ruby.yml
@@ -1089,8 +1089,8 @@ manifest:
     - weblog_declaration:
         "*": irrelevant
         rails72: v2.23.0-dev
-  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/ffe/test_flag_eval_metrics.py: missing_feature
+  tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/test_kafka.py::Test_Kafka:
     - weblog_declaration:
         "*": irrelevant
diff --git a/manifests/rust.yml b/manifests/rust.yml
index 161482bb344..cdb320e3558 100644
--- a/manifests/rust.yml
+++ b/manifests/rust.yml
@@ -21,9 +21,9 @@ manifest:
   tests/docker_ssi/test_docker_ssi_appsec.py::TestDockerSSIAppsecFeatures::test_telemetry_source_ssi: missing_feature
   tests/ffe/test_dynamic_evaluation.py: missing_feature
   tests/ffe/test_exposures.py: missing_feature
+  tests/ffe/test_flag_eval_metrics.py: missing_feature
   tests/integration_frameworks/llm/anthropic/test_anthropic_llmobs.py::TestAnthropicLlmObsMessages::test_create_error: bug (MLOB-1234)
   tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS: irrelevant (Localstack SQS does not support AWS Xray Header parsing)
-  tests/ffe/test_flag_eval_metrics.py: missing_feature
   tests/integrations/test_cassandra.py::Test_Cassandra: missing_feature (Endpoint is not implemented on weblog)
   tests/integrations/test_db_integrations_sql.py::Test_MsSql::test_db_jdbc_drive_classname: missing_feature (Apply only java)
   tests/integrations/test_db_integrations_sql.py::Test_MySql::test_db_jdbc_drive_classname: missing_feature (Apply only java)

From 584d91e4f827b99abaa90f3b7b19d241703b3bee Mon Sep 17 00:00:00 2001
From: Leo Romanovsky
Date: Wed, 11 Mar 2026 17:05:41 -0400
Subject: [PATCH 12/12] fix(golang): restore golang.yml to main, re-apply only
 FFE test_flag_eval_metrics entry

---
 manifests/golang.yml | 52 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 48 insertions(+), 4 deletions(-)

diff --git a/manifests/golang.yml b/manifests/golang.yml
index 0dd3823bbfd..f1073d59d57 100644
--- a/manifests/golang.yml
+++ b/manifests/golang.yml
@@ -6,6 +6,7 @@ manifest:
   tests/ai_guard/test_ai_guard_sdk.py::Test_Full_Response_And_Tags: missing_feature
   tests/ai_guard/test_ai_guard_sdk.py::Test_RootSpanUserKeep: missing_feature
   tests/ai_guard/test_ai_guard_sdk.py::Test_SDK_Disabled: missing_feature
+  tests/ai_guard/test_ai_guard_sdk.py::Test_SensitiveDataScanning: missing_feature
   tests/apm_tracing_e2e/test_otel.py::Test_Otel_Span:
     - weblog_declaration:
         "*": missing_feature (missing /e2e_otel_span endpoint on weblog)
@@ -36,6 +37,7 @@ manifest:
   tests/appsec/api_security/test_custom_data_classification.py::Test_API_Security_Custom_Data_Classification_Scanner: v2.4.0
   tests/appsec/api_security/test_endpoint_discovery.py::Test_Endpoint_Discovery: missing_feature
   tests/appsec/api_security/test_endpoint_fallback.py: missing_feature
+  tests/appsec/api_security/test_endpoints.py: irrelevant (language not implementing this feature)
   tests/appsec/api_security/test_schemas.py::Test_Scanners: v2.0.0
   tests/appsec/api_security/test_schemas.py::Test_Schema_Request_Cookies: v1.60.0
   tests/appsec/api_security/test_schemas.py::Test_Schema_Request_FormUrlEncoded_Body: v1.60.0
@@ -176,6 +178,7 @@ manifest:
   tests/appsec/iast/source/test_multipart.py::TestMultipart: missing_feature
   tests/appsec/iast/source/test_parameter_name.py::TestParameterName: missing_feature
   tests/appsec/iast/source/test_parameter_value.py::TestParameterValue: missing_feature
+  tests/appsec/iast/source/test_parameter_value.py::TestParameterValue::test_source_reported: irrelevant
   tests/appsec/iast/source/test_path.py::TestPath: missing_feature
   tests/appsec/iast/source/test_path_parameter.py::TestPathParameter: missing_feature
   tests/appsec/iast/source/test_sql_row.py::TestSqlRow: missing_feature
@@ -183,7 +186,7 @@ manifest:
   tests/appsec/iast/test_sampling_by_route_method_count.py::TestSamplingByRouteMethodCount: missing_feature
   tests/appsec/iast/test_security_controls.py::TestSecurityControls: missing_feature
   tests/appsec/iast/test_vulnerability_schema.py::TestIastVulnerabilitySchema: v0.0.0 # Compliant because no IAST support
-  tests/appsec/rasp/test_api10.py::Test_API10_all: bug (APPSEC-61152)
+  tests/appsec/rasp/test_api10.py::Test_API10_all: v2.7.0-dev
   tests/appsec/rasp/test_api10.py::Test_API10_downstream_request_tag: v2.5.0-dev
   tests/appsec/rasp/test_api10.py::Test_API10_downstream_ssrf_telemetry: v2.4.0
   tests/appsec/rasp/test_api10.py::Test_API10_redirect: v2.7.0-dev
@@ -591,7 +594,7 @@ manifest:
   tests/appsec/test_service_activation_metric.py::TestServiceActivationRemoteConfigurationConfigMetric: v2.4.0
   tests/appsec/test_shell_execution.py::Test_ShellExecution: missing_feature
   tests/appsec/test_span_tags_headers.py: v2.4.0
-  tests/appsec/test_span_tags_headers.py::Test_Headers_Event_Blocking: bug (APPSEC-61286)
+  tests/appsec/test_span_tags_headers.py::Test_Headers_Event_Blocking: v2.7.0-dev
   tests/appsec/test_suspicious_attacker_blocking.py::Test_Suspicious_Attacker_Blocking: v1.69.0
   tests/appsec/test_trace_tagging.py::Test_TraceTaggingRules: v2.1.0-dev
   tests/appsec/test_trace_tagging.py::Test_TraceTaggingRulesRcCapability: v2.1.0-dev
@@ -664,13 +667,15 @@ manifest:
         gin: v1.37.0
   tests/appsec/waf/test_addresses.py::Test_gRPC: v1.36.0
   tests/appsec/waf/test_blocking.py::Test_Blocking: v1.50.0-rc.1
-  tests/appsec/waf/test_blocking.py::Test_Blocking::test_accept_full_html: bug (APPSEC-61196)
+  tests/appsec/waf/test_blocking.py::Test_Blocking::test_accept_full_html:
+    - declaration: bug (APPSEC-61196)
+      component_version: '<2.7.0'
   tests/appsec/waf/test_blocking.py::Test_Blocking::test_accept_partial_html: missing_feature (Support for partial html not implemented)
   tests/appsec/waf/test_blocking.py::Test_Blocking::test_html_template_v2:
     - declaration: missing_feature
       component_version: <1.52.0
     - declaration: bug (APPSEC-61196)
-      component_version: '>=1.52.0'
+      component_version: '>=1.52.0 <2.7.0'
   tests/appsec/waf/test_blocking.py::Test_Blocking::test_json_template_v1:
     - declaration: missing_feature
       component_version: <1.52.0
@@ -769,8 +774,14 @@ manifest:
   tests/auto_inject/test_auto_inject_install.py::TestContainerAutoInjectInstallScriptAppsec: v2.0.0
   tests/auto_inject/test_auto_inject_install.py::TestHostAutoInjectInstallScriptAppsec: v2.0.0
   tests/auto_inject/test_auto_inject_install.py::TestSimpleInstallerAutoInjectManualAppsec: v2.0.0
+  tests/debugger/test_debugger_capture_expressions.py::Test_Debugger_Line_Capture_Expressions: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_capture_expressions.py::Test_Debugger_Method_Capture_Expressions: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_code_origins.py::Test_Debugger_Code_Origins: missing_feature (feature not implemented)
   tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay: missing_feature (feature not implemented)
+  tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_firsthit: missing_feature (Implemented only for dotnet)
+  tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_outofmemory: missing_feature (Implemented only for dotnet)
+  tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_recursion_inlined: irrelevant (Test for specific bug in dotnet)
+  tests/debugger/test_debugger_exception_replay.py::Test_Debugger_Exception_Replay::test_exception_replay_stackoverflow: missing_feature (Implemented only for dotnet)
   tests/debugger/test_debugger_expression_language.py::Test_Debugger_Expression_Language: missing_feature (feature not implemented)
   tests/debugger/test_debugger_inproduct_enablement.py::Test_Debugger_InProduct_Enablement_Code_Origin: missing_feature
   tests/debugger/test_debugger_inproduct_enablement.py::Test_Debugger_InProduct_Enablement_Dynamic_Instrumentation: missing_feature
@@ -778,13 +789,30 @@ manifest:
   tests/debugger/test_debugger_pii.py::Test_Debugger_PII_Redaction: missing_feature (feature not implemented)
   tests/debugger/test_debugger_pii.py::Test_Debugger_PII_Redaction_Excluded_Identifiers: missing_feature (feature not implemented)
   tests/debugger/test_debugger_probe_budgets.py::Test_Debugger_Probe_Budgets: missing_feature (feature not implemented)
+  tests/debugger/test_debugger_probe_budgets.py::Test_Debugger_Probe_Budgets::test_log_line_budgets: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots: missing_feature (feature not implemented)
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_process_tags_snapshot: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_process_tags_snapshot_svc: missing_feature
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots::test_span_decoration_line_snapshot: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots_With_SCM: missing_feature (feature not implemented)
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Line_Probe_Snaphots_With_SCM::test_span_decoration_line_snapshot: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots: v2.2.3
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots::test_mix_snapshot: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots::test_span_decoration_method_snapshot: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots::test_span_method_snapshot: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM: v2.2.3
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM::test_mix_snapshot: missing_feature (Not yet implemented)
+  ? tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM::test_span_decoration_method_snapshot
+  : missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_snapshot.py::Test_Debugger_Method_Probe_Snaphots_With_SCM::test_span_method_snapshot: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses: missing_feature (feature not implemented)
+  tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses::test_log_line_status: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses::test_metric_line_status: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_status.py::Test_Debugger_Line_Probe_Statuses::test_span_decoration_line_status: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses: v2.2.3
+  tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses::test_metric_status: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses::test_span_decoration_method_status: missing_feature (Not yet implemented)
+  tests/debugger/test_debugger_probe_status.py::Test_Debugger_Method_Probe_Statuses::test_span_method_status: missing_feature (Not yet implemented)
   tests/debugger/test_debugger_symdb.py::Test_Debugger_SymDb: v2.2.3
   tests/debugger/test_debugger_telemetry.py::Test_Debugger_Telemetry: missing_feature
   tests/docker_ssi/test_docker_ssi_appsec.py::TestDockerSSIAppsecFeatures::test_telemetry_source_ssi: v2.0.0
@@ -831,6 +859,7 @@ manifest:
         "*": irrelevant
         net-http: missing_feature (Endpoint not implemented)
         net-http-orchestrion: missing_feature (Endpoint not implemented)
+    - declaration: irrelevant (Localstack SQS does not support AWS Xray Header parsing)
   tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS::test_consume_trace_equality: missing_feature (Expected to fail, Golang does not propagate context)
   tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_AWS_XRAY_HEADERS::test_produce_trace_equality: missing_feature (Expected to fail, Golang does not propagate context)
   tests/integrations/crossed_integrations/test_sqs.py::Test_SQS_PROPAGATION_VIA_MESSAGE_ATTRIBUTES:
@@ -844,8 +873,11 @@ manifest:
   tests/integrations/crossed_integrations/test_sqs.py::_BaseSQS::test_produce_trace_equality: missing_feature (Expected to fail, Golang does not propagate context)
   tests/integrations/test_cassandra.py::Test_Cassandra: missing_feature (Endpoint is not implemented on weblog)
   tests/integrations/test_db_integrations_sql.py::Test_MsSql: missing_feature
+  tests/integrations/test_db_integrations_sql.py::Test_MsSql::test_db_jdbc_drive_classname: missing_feature (Apply only java)
   tests/integrations/test_db_integrations_sql.py::Test_MySql: missing_feature
+  tests/integrations/test_db_integrations_sql.py::Test_MySql::test_db_jdbc_drive_classname: missing_feature (Apply only java)
   tests/integrations/test_db_integrations_sql.py::Test_Postgres: missing_feature
+  tests/integrations/test_db_integrations_sql.py::Test_Postgres::test_db_jdbc_drive_classname: missing_feature (Apply only java)
   tests/integrations/test_db_integrations_sql.py::_BaseDatadogDbIntegrationTestClass::test_db_jdbc_drive_classname: missing_feature (Apply only java)
   tests/integrations/test_dbm.py::Test_Dbm: missing_feature
   tests/integrations/test_dbm.py::Test_Dbm_Comment_Batch_Python_Aiomysql: irrelevant (These are python only tests.)
@@ -885,6 +917,7 @@ manifest:
         "*": irrelevant
         net-http: missing_feature (Endpoint not implemented)
         net-http-orchestrion: missing_feature (Endpoint not implemented)
+  tests/integrations/test_dsm.py::Test_DsmRabbitmq::test_dsm_rabbitmq_dotnet_legacy: irrelevant (legacy dotnet behavior)
   tests/integrations/test_dsm.py::Test_DsmRabbitmq_FanoutExchange:
     - weblog_declaration:
         "*": irrelevant
@@ -942,6 +975,7 @@ manifest:
   tests/integrations/test_inferred_proxy.py::Test_AWS_API_Gateway_Inferred_Span_Creation_v2: missing_feature
   tests/integrations/test_mongo.py::Test_Mongo: missing_feature (Endpoint is not implemented on weblog)
   tests/integrations/test_otel_drop_in.py::Test_Otel_Drop_In: missing_feature
+  tests/integrations/test_service_overrides.py::Test_SqlServiceNameSource: irrelevant (Only implemented for Java)
   tests/integrations/test_sql.py::Test_Sql: missing_feature (Endpoint is not implemented on weblog)
   tests/otel/test_context_propagation.py::Test_Otel_Context_Propagation_Default_Propagator_Api:
     - weblog_declaration:
@@ -952,6 +986,7 @@ manifest:
   tests/otel_tracing_e2e/test_e2e.py::Test_OTelTracingE2E: irrelevant
   tests/parametric/test_128_bit_traceids.py::Test_128_Bit_Traceids: v1.50.0
   tests/parametric/test_config_consistency.py::Test_Config_Dogstatsd: v1.72.0-dev
+  tests/parametric/test_config_consistency.py::Test_Config_Dogstatsd::test_dogstatsd_default: incomplete_test_app (PHP parameteric app can not access the dogstatsd default values, this logic is internal to the tracer)
   tests/parametric/test_config_consistency.py::Test_Config_RateLimit: v1.67.0
   tests/parametric/test_config_consistency.py::Test_Config_RateLimit::test_setting_trace_rate_limit_strict: bug (APMAPI-1030)
   tests/parametric/test_config_consistency.py::Test_Config_Tags: v1.70.1
@@ -1037,10 +1072,13 @@ manifest:
     - declaration: missing_feature (Not implemented)
      component_version: <1.64.0
   tests/parametric/test_headers_tracestate_dd.py::Test_Headers_Tracestate_DD::test_headers_tracestate_dd_propagate_propagatedtags: "missing_feature (\"False Bug: header[3,6]: can't guarantee the order of strings in the tracestate since they came from the map. BUG: header[4,5]: w3cTraceID shouldn't be present\")"
+  tests/parametric/test_library_tracestats.py::Test_Library_Tracestats::test_relative_error_TS008: missing_feature (relative error test is broken)
   tests/parametric/test_llm_observability/: incomplete_test_app
   tests/parametric/test_otel_api_interoperability.py: missing_feature
   tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars: v1.66.0
   tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_dd_trace_otel_enabled_takes_precedence: irrelevant (does not support enabling opentelemetry via DD_TRACE_OTEL_ENABLED)
+  tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_dd_trace_sample_ignore_parent_false: missing_feature (dd_trace_sample_ignore_parent requires an RFC, this feature is not implemented in any language)
+  tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_dd_trace_sample_ignore_parent_true: missing_feature (dd_trace_sample_ignore_parent requires an RFC, this feature is not implemented in any language)
   tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_otel_log_level_env: missing_feature (DD_LOG_LEVEL is not supported in go)
   tests/parametric/test_otel_env_vars.py::Test_Otel_Env_Vars::test_otel_sdk_disabled_set: irrelevant (does not support enabling opentelemetry via DD_TRACE_OTEL_ENABLED)
   tests/parametric/test_otel_logs.py: '>=2.5.0' # Modified by easy win activation script
@@ -1155,6 +1193,9 @@ manifest:
   tests/parametric/test_trace_sampling.py::Test_Trace_Sampling_With_W3C::test_distributed_headers_synthetics_sampling_decision: bug (APMAPI-1563)
   tests/parametric/test_tracer.py::Test_ProcessTags_ServiceName: missing_feature
   tests/parametric/test_tracer.py::Test_TracerSCITagging: v1.48.0
+  tests/parametric/test_tracer.py::Test_TracerServiceNameSource::test_tracer_manual_service_name_sets_srv_src: irrelevant (Only implemented for Java)
+  tests/parametric/test_tracer.py::Test_TracerServiceNameSource::test_tracer_no_srv_src_when_service_not_manually_set: irrelevant (Only implemented for Java)
+  tests/parametric/test_tracer.py::Test_TracerUniversalServiceTagging::test_tracer_service_name_environment_variable: "missing_feature (FIXME: library test client sets empty string as the service name)"
   tests/parametric/test_tracer_flare.py::TestTracerFlareV1: '>=2.5.0' # Modified by easy win activation script
   tests/parametric/test_tracer_flare.py::TestTracerFlareV1::test_flare_log_level_order: missing_feature # Created by easy win activation script
   tests/parametric/test_tracer_flare.py::TestTracerFlareV1::test_no_tracer_flare_for_other_task_types: missing_feature # Created by easy win activation script
@@ -1299,6 +1340,7 @@ manifest:
   tests/test_library_conf.py::Test_HeaderTags_Colon_Leading: v1.53.0
   tests/test_library_conf.py::Test_HeaderTags_Colon_Trailing: v1.70.0
   tests/test_library_conf.py::Test_HeaderTags_DynamicConfig: v1.70.0
+  tests/test_library_conf.py::Test_HeaderTags_DynamicConfig::test_tracing_client_http_header_tags_apm_multiconfig: missing_feature (APM_TRACING_MULTICONFIG is not supported in any language yet)
   tests/test_library_conf.py::Test_HeaderTags_Long: v1.53.0
   tests/test_library_conf.py::Test_HeaderTags_Short: v1.53.0
   tests/test_library_conf.py::Test_HeaderTags_Whitespace_Header: v1.53.0
@@ -1318,6 +1360,7 @@ manifest:
         net-http: v2.4.0
         net-http-orchestrion: v2.4.0
   tests/test_rum_injection.py: irrelevant (RUM injection only supported for Java)
+  tests/test_sampling_rate_capping.py::Test_SamplingRateCappedIncrease: v2.7.0-dev
   tests/test_sampling_rates.py::Test_SampleRateFunction: v1.72.1 # real version unknown
   tests/test_sampling_rates.py::Test_SamplingDecisionAdded: v1.72.1 # real version unknown
   tests/test_sampling_rates.py::Test_SamplingDecisions: v1.72.1 # real version unknown
@@ -1451,6 +1494,7 @@ manifest:
   tests/test_telemetry.py::Test_Telemetry::test_api_still_v1: irrelevant
   tests/test_telemetry.py::Test_Telemetry::test_app_dependencies_loaded: irrelevant
   tests/test_telemetry.py::Test_Telemetry::test_app_product_change: missing_feature (Weblog GET/enable_product and app-product-change event is not implemented yet.)
+  tests/test_telemetry.py::Test_Telemetry::test_telemetry_message_has_datadog_container_id: "irrelevant (cgroup in weblog is 0::/, so this test can't work)"
   tests/test_telemetry.py::Test_TelemetryEnhancedConfigReporting: missing_feature
   tests/test_telemetry.py::Test_TelemetrySCAEnvVar: missing_feature
   tests/test_telemetry.py::Test_TelemetryV2: v1.49.1