fix: strip reasoning parts from messages when switching to non-thinking models (#11571) #11590
Closed
01luyicheng wants to merge 2 commits into anomalyco:dev from
Conversation
added 2 commits
February 1, 2026 17:03
…ng models (anomalyco#11571)

When switching from a model with extended thinking (e.g., Claude Opus) to a model without thinking support (e.g., GPT 5.2, Claude Sonnet without thinking) mid-session, reasoning parts from previous messages were still being sent to the API, causing validation errors.

This fix ensures that reasoning parts are only included in providerOptions when the target model supports interleaved reasoning (capabilities.interleaved is an object with a field). For models that don't support interleaved reasoning, the reasoning parts are filtered out of the content array and not added to providerOptions, preventing the API error.

Changes:
- Modified normalizeMessages() in transform.ts to check whether the model supports interleaved reasoning before including reasoning in providerOptions
- Reasoning parts are now correctly filtered for both thinking and non-thinking models
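The commit message above describes the filtering in two steps: reasoning parts are always removed from the content array, and they are only re-attached via providerOptions when the target model declares interleaved support. A minimal sketch of that logic, assuming hypothetical Part/Message/Capabilities shapes (the names normalizeMessages, capabilities.interleaved, and openaiCompatible come from the PR text, but the surrounding types are illustrative, not the actual opencode implementation):

```typescript
type Part = { type: "text" | "reasoning"; content: string };

type Message = {
  role: "user" | "assistant";
  parts: Part[];
  providerOptions?: Record<string, Record<string, unknown>>;
};

// `interleaved` is either false (no support) or an object naming the
// provider-specific field that reasoning should be sent under.
type Capabilities = { interleaved: false | { field: string } };

function normalizeMessages(messages: Message[], capabilities: Capabilities): Message[] {
  return messages.map((msg) => {
    const reasoning = msg.parts.filter((p) => p.type === "reasoning");
    // Reasoning parts are always stripped from the content array.
    const parts = msg.parts.filter((p) => p.type !== "reasoning");
    const interleaved = capabilities.interleaved;
    // Re-attach reasoning via providerOptions only when the target model
    // declares interleaved support (an object with a `field` property).
    if (typeof interleaved === "object" && reasoning.length > 0) {
      return {
        ...msg,
        parts,
        providerOptions: {
          openaiCompatible: {
            [interleaved.field]: reasoning.map((r) => r.content).join("\n"),
          },
        },
      };
    }
    return { ...msg, parts };
  });
}
```

With this shape, switching to a model with interleaved: false simply drops the stored reasoning parts instead of forwarding them, which is what prevents the provider-side validation error.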
Contributor
The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate Found

PR #11572 - "fix: strip reasoning parts when switching to non-interleaved models"

This appears to be a near-duplicate or very closely related PR that addresses the same issue. Both PRs solve the problem of stripping reasoning parts when switching between models with different reasoning/interleaving capabilities.

Other Related PRs (different scope but similar theme):

Recommendation: Check whether PR #11572 is already merged or covers the same solution to #11571. If both are open, one should likely be closed as a duplicate.
opencode-agent bot added a commit that referenced this pull request on Feb 1, 2026:
…ing to non-thinking models (#11571)
Summary
Fixed a bug where switching from a thinking model to a non-thinking model mid-session caused API errors, because reasoning parts were being sent to models that don't support them.
Problem
When switching from a model with extended thinking (e.g., Claude Opus with thinking enabled) to a model without thinking support (e.g., GPT 5.2, Claude Sonnet without thinking) mid-session, an error occurred:
The root cause was that reasoning parts from the previous model's responses were stored in message history. When switching to a model that doesn't support interleaved reasoning (capabilities.interleaved === false), these parts were still sent to the API, causing validation errors.

Solution
Modified normalizeMessages() in packages/opencode/src/provider/transform.ts to check whether the target model supports interleaved reasoning before including reasoning in providerOptions.

Key Changes:
- Reasoning is only included when model.capabilities.interleaved is an object with a field property
- Reasoning is sent via providerOptions.openaiCompatible[field] when the model supports interleaved reasoning

Technical Details
The fix ensures:
- Reasoning is only forwarded under the provider-specific field the model declares (e.g. reasoning_content or reasoning_details)
- For models without interleaved reasoning support, reasoning parts are filtered out of the content array and not added to providerOptions

Benefits
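The capability check described in the Solution section hinges on distinguishing interleaved: false from an object carrying the provider field name (e.g. reasoning_content). A minimal type-guard sketch, where the Interleaved type is an assumption for illustration rather than the actual opencode definition:

```typescript
// Hypothetical capability shape: false means no interleaved reasoning
// support; the object form names the provider field reasoning goes under.
type Interleaved = false | { field: string };

// Narrow to the object form before reading `field`; `interleaved: false`
// and a missing capability both mean "no interleaved reasoning support".
function supportsInterleaved(
  interleaved: Interleaved | undefined,
): interleaved is { field: string } {
  return typeof interleaved === "object";
}

// Only the object form yields a provider-options key to send reasoning under.
function fieldFor(interleaved: Interleaved | undefined): string | undefined {
  return supportsInterleaved(interleaved) ? interleaved.field : undefined;
}
```

Keeping the check as "is an object with a field property" (rather than just truthiness) means future capability values stay easy to distinguish from the boolean false case.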
Testing
Issue
Resolves #11571 - Error switching from thinking model to non-thinking model mid-session