UN-3136 [FIX] Skip thinking config for Vertex AI pro models when disabled #1775
base: main
Conversation
Summary by CodeRabbit — Release Notes

Walkthrough: Bumps tool image tags and `toolVersion` values for classifier, structure, and text_extractor across configs and the registry; updates the backend `sample.env` structure image tag; adds `VertexAILLMParameters._get_thinking_config` and changes the thinking-emission logic in the Vertex AI adapter.
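The walkthrough mentions a new `_get_thinking_config` helper. A minimal sketch of what such a helper might look like, based only on the behavior described in this PR (the function name is from the walkthrough, but the signature, parameter names, and return shapes here are assumptions for illustration):

```python
def get_thinking_config(model: str, thinking_enabled: bool, budget_tokens: int = 0):
    """Hypothetical sketch: return a thinking config dict, or None when the
    config should be omitted entirely (Vertex AI pro models reject
    budget_tokens=0, so disabling thinking there means sending nothing)."""
    if not thinking_enabled:
        if "pro" in model.lower():
            # Pro models cannot turn thinking off via budget_tokens=0;
            # skip the config so no thinking parameters are sent at all.
            return None
        return {"type": "disabled", "budget_tokens": 0}
    return {"type": "enabled", "budget_tokens": budget_tokens}
```

The caller would then only attach a `thinking` parameter when the helper returns a non-`None` value.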
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Test Results — Summary
Runner Tests - Full Report
SDK1 Tests - Full Report
Actionable comments posted: 1
In `@unstract/sdk1/src/unstract/sdk1/adapters/base1.py`:
- Around lines 443-445: The current logic copies metadata into `result_metadata` and then checks for `"thinking" in result_metadata`, which can reintroduce a stale value. Instead, populate `validated_data` from the computed `thinking_config` variable: set `validated_data["thinking"] = thinking_config` only when `thinking_config` is not `None` (or, if falsy values should be allowed, test explicitly for the enabled/disabled condition). Remove the reliance on the presence of `"thinking"` in `result_metadata`, so stale keys from `metadata_copy` cannot leak through.
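The suggested fix above can be sketched as follows. This is an illustrative reconstruction, not the repository's actual code: the helper name and the surrounding structure are assumptions, but it shows the core idea of sourcing `"thinking"` from the computed config rather than from copied metadata:

```python
def build_validated_data(metadata: dict, thinking_config) -> dict:
    """Hypothetical sketch of the fix: never carry a "thinking" key over
    from the metadata copy; emit it only from the computed config."""
    # Copy metadata but drop any stale "thinking" entry up front.
    validated_data = {k: v for k, v in metadata.items() if k != "thinking"}
    # Set "thinking" only when an actual config was computed.
    if thinking_config is not None:
        validated_data["thinking"] = thinking_config
    return validated_data
```

With this shape, a stale `"thinking"` value left in the incoming metadata can no longer leak into the request parameters.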
harini-venkataraman left a comment:
Added a minor comment, LGTM otherwise
```python
# Pro models don't allow disabling thinking with budget_tokens=0
model_name = metadata.get("model", "").lower()
if "pro" in model_name:
```
@pk-zipstack Is this way of detection reliable?
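To make the reviewer's concern concrete, here is an illustrative sketch (not the PR's actual code) of the substring check in isolation. It matches any model name containing "pro", which is both what makes it simple and what makes its reliability worth questioning:

```python
def is_pro_model(model_name: str) -> bool:
    """Hypothetical stand-in for the diff's check: case-insensitive
    substring match on "pro" anywhere in the model name."""
    return "pro" in model_name.lower()
```

A substring match accepts names like `gemini-2.5-pro`, but it would also match any hypothetical model whose name merely contains "pro" (e.g. a name containing "prototype"), which is the kind of false positive the question is raising.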



What
Skip the thinking config for Vertex AI `pro` models when thinking is disabled.
Why
Disabling thinking fails for `pro` models because the thinking feature cannot be turned off for `pro` models.
How
`thinking_budget` was sent to `litellm.completion()` by default even when a thinking config was not present.
Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)
Database Migrations
Env Config
Relevant Docs
Related Issues or PRs
Dependencies Versions
Notes on Testing
Screenshots
Checklist
I have read and understood the Contribution Guidelines.