Fix `AttributeError` in `_get_finish_reason_for_choice` when `finish_reason` is a string #45873

happywhaler wants to merge 1 commit into Azure:main

## Conversation
## Reproduction

1. Use the Azure AI Inference SDK with tracing enabled
2. Call a Mistral model via an Azure AI Foundry MaaS endpoint
3. Tracing code crashes on response processing

## Proposed Fix

Apply the same pattern from `_get_finish_reasons()` to `_get_finish_reason_for_choice()`.
Thank you for your contribution @happywhaler! We will review the pull request and get back to you soon.
Pull request overview
This PR fixes tracing robustness in azure-ai-inference by handling finish_reason values that are returned as raw strings (instead of enums), preventing AttributeError during span/event creation for certain models.
Changes:
- Update `_get_finish_reason_for_choice()` to return `"none"`, `finish_reason.value`, or the raw `finish_reason` string depending on the runtime type.
- Align `_get_finish_reason_for_choice()` behavior with the already-defensive `_get_finish_reasons()` implementation.
```python
def _get_finish_reason_for_choice(self, choice):
    finish_reason = getattr(choice, "finish_reason", None)
    if finish_reason is None:
        return "none"
    elif hasattr(finish_reason, "value"):
        return finish_reason.value
    elif isinstance(finish_reason, str):
        return finish_reason
    else:
        return "none"
```
`_get_finish_reason_for_choice` now duplicates the same enum/string normalization logic already present in `_get_finish_reasons`. To avoid the two implementations drifting over time, consider extracting a small helper (e.g., `_normalize_finish_reason(finish_reason)`) and using it from both places (or have `_get_finish_reasons` call `_get_finish_reason_for_choice` per choice).
This fix changes behavior for choices where `finish_reason` is already a `str`, but the existing tracing tests appear to only validate the serialized output (and likely exercise enum-backed `finish_reason`). Please add a focused test that constructs a minimal choice object with `finish_reason="stop"` (a string) and asserts tracing emits `"stop"` without raising (covering the reported `AttributeError` regression).
## Description
This PR fixes a bug in `_get_finish_reason_for_choice()` that causes an `AttributeError` when the model returns `finish_reason` as a string instead of an enum.

Some models (e.g., Mistral Large 3 on Azure AI Foundry MaaS) return `finish_reason` as a raw string (`"stop"`) rather than a `CompletionsFinishReason` enum. The current implementation unconditionally calls `.value` on `finish_reason`, which fails with `AttributeError: 'str' object has no attribute 'value'`.

## Root Cause
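The failure is ordinary Python behavior and can be reproduced without the SDK:

```python
finish_reason = "stop"  # raw string, as some MaaS models return it
try:
    finish_reason.value  # unconditional enum-style access
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'value'
```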
In `tracing.py`, `_get_finish_reason_for_choice()` assumes `finish_reason` is always an enum and calls `.value` unconditionally. However, the similar method `_get_finish_reasons()` already handles both the enum and string cases correctly.

## Fix
Apply the same defensive pattern to `_get_finish_reason_for_choice()`.

## Impact
Without this fix, tracing fails for any model that returns `finish_reason` as a string, causing the entire API call to fail. Users must disable tracing (`AZURE_SDK_TRACING_IMPLEMENTATION=none` or `tracing_enabled=False`) as a workaround.