Increase red team unit test coverage #45876

Draft

slister1001 wants to merge 3 commits into Azure:main from slister1001:redteam-coverage-90
Conversation

@slister1001
Member

Description

Please add an informative description that covers that changes made by the pull request and link all relevant issues.

If an SDK is being regenerated based on a new API spec, a link to the pull request containing these API spec changes should be included above.

All SDK Contribution checklist:

  • The pull request does not introduce breaking changes.
  • CHANGELOG is updated for new features, bug fixes or other significant changes.
  • I have read the contribution guidelines.

General Guidelines and Best Practices

  • Title of the pull request is clear and informative.
  • There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.

Testing Guidelines

  • Pull request includes test coverage for the included changes.

slister1001 and others added 3 commits March 23, 2026 20:25
- Extract token usage (prompt_tokens, completion_tokens, total_tokens) from
  RAI service eval_result via sample.usage or result properties.metrics
- Add token_usage to score_metadata dict in RAIServiceScorer
- Save scores to PyRIT CentralMemory after creation (fail-safe)
- Propagate scorer token_usage through ResultProcessor to output item
  properties.metrics for downstream aggregation
- Add 5 unit tests covering token extraction, memory save, and error handling

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Match against canonical and legacy metric name aliases when extracting
token usage from result-level properties.metrics, consistent with how
score extraction already handles aliases via _SYNC_TO_LEGACY_METRIC_NAMES
and _LEGACY_TO_SYNC_METRIC_NAMES.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add 15 new test files and expand 2 existing test files for the
red_team module in azure-ai-evaluation. This raises test coverage
from ~65% toward the 90% target by covering all previously untested
source files and expanding coverage for large, partially tested files.

New test files (953 new test methods):
- test_objective_utils.py (20 tests)
- test_metric_mapping.py (13 tests)
- test_logging_utils.py (21 tests)
- test_default_converter.py (12 tests)
- test_file_utils.py (68 tests)
- test_exception_utils.py (109 tests)
- test_agent_utils.py (19 tests)
- test_retry_utils.py (50 tests)
- test_progress_utils.py (58 tests)
- test_semantic_kernel_plugin.py (52 tests)
- test_agent_functions.py (30 tests)
- test_agent_tools.py (73 tests)
- test_evaluation_processor.py (53 tests)
- test_mlflow_integration.py (43 tests)
- test_orchestrator_manager.py (61 tests)

Expanded test files (+191 tests):
- test_red_team.py: 47 -> 126 tests
- test_result_processor.py: 29 -> 155 tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@github-actions github-actions bot added the Evaluation Issues related to the client library for Azure AI Evaluation label Mar 24, 2026