
fix: strip reasoning parts from messages when switching to non-thinking models (#11571)#11590

Closed
01luyicheng wants to merge 2 commits into anomalyco:dev from 01luyicheng:fix/strip-reasoning-non-thinking-models

Conversation

@01luyicheng
Contributor

Summary

Fixes a bug where switching from a thinking model to a non-thinking model mid-session caused API errors, because reasoning parts were sent to models that don't support them.

Problem

When switching from a model with extended thinking (e.g., Claude Opus with thinking enabled) to a model without thinking support (e.g., GPT 5.2, Claude Sonnet without thinking) mid-session, an error occurred:

messages.1.content.0.thinking: each thinking block must contain thinking

The root cause was that reasoning parts from the previous model's responses were stored in message history. When switching to a model that doesn't support interleaved reasoning (capabilities.interleaved === false), these parts were still sent to the API, causing validation errors.

Solution

Modified normalizeMessages() in packages/opencode/src/provider/transform.ts to check if the target model supports interleaved reasoning before including reasoning in providerOptions.

Key Changes:

  1. Added support check: Only include reasoning in providerOptions when model.capabilities.interleaved is an object with a field property
  2. Always filter reasoning parts: Reasoning parts are now filtered out from the content array for all models
  3. Conditional reasoning inclusion: Reasoning text is only added to providerOptions.openaiCompatible[field] when the model supports interleaved reasoning

Technical Details

The fix ensures:

  • For models that support interleaved reasoning (e.g., Claude with thinking, Gemini 3): reasoning parts are extracted and placed in the appropriate providerOptions field (reasoning_content or reasoning_details)
  • For models that don't support interleaved reasoning (e.g., GPT 5.2, Claude Sonnet without thinking): reasoning parts are filtered from the content but NOT added to providerOptions, preventing API validation errors
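The two cases above can be sketched in TypeScript. This is an illustrative model of the logic described in the PR, not the actual `normalizeMessages()` code from `transform.ts`; the type names, the `stripReasoning` helper, and the exact shape of `providerOptions` are assumptions made for the example:

```typescript
// Hypothetical types mirroring the PR description (not the real transform.ts types).
type MessagePart =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string }

interface ModelCapabilities {
  // Per the PR: `interleaved` is either false or an object whose `field`
  // names the providerOptions key (e.g. "reasoning_content").
  interleaved: false | { field: string }
}

interface NormalizedMessage {
  content: MessagePart[]
  providerOptions?: { openaiCompatible?: Record<string, string> }
}

// Sketch of the fix: reasoning parts are always filtered out of `content`;
// their text is only moved into providerOptions when the target model
// supports interleaved reasoning.
function stripReasoning(
  parts: MessagePart[],
  capabilities: ModelCapabilities,
): NormalizedMessage {
  const reasoning = parts
    .filter((p): p is { type: "reasoning"; text: string } => p.type === "reasoning")
    .map((p) => p.text)
    .join("\n")
  const content = parts.filter((p) => p.type !== "reasoning")
  const message: NormalizedMessage = { content }
  // Only attach reasoning when `interleaved` is an object with a `field` property.
  if (typeof capabilities.interleaved === "object" && reasoning) {
    message.providerOptions = {
      openaiCompatible: { [capabilities.interleaved.field]: reasoning },
    }
  }
  return message
}
```

With `interleaved: { field: "reasoning_content" }` the reasoning text lands in `providerOptions.openaiCompatible.reasoning_content`; with `interleaved: false` it is dropped entirely, so nothing reasoning-shaped ever reaches the non-thinking model's API.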

Benefits

  • ✅ Seamless model switching between thinking and non-thinking models
  • ✅ No breaking changes to existing functionality
  • ✅ Proper handling of model capabilities
  • ✅ Minimal code change (restructured existing logic)

Testing

  • Code compiles successfully
  • Logic follows existing code patterns
  • No breaking changes to existing message handling

Issue

Resolves #11571 - Error switching from thinking model to non-thinking model mid-session

01luyicheng added 2 commits February 1, 2026 17:03
…ng models (anomalyco#11571)

When switching from a model with extended thinking (e.g., Claude Opus) to a model without thinking support (e.g., GPT 5.2, Claude Sonnet without thinking) mid-session, reasoning parts from previous messages were still being sent to the API, causing validation errors.

This fix ensures that reasoning parts are only included in providerOptions when the target model supports interleaved reasoning (capabilities.interleaved is an object with a field). For models that don't support interleaved reasoning, the reasoning parts are filtered out from the content array but not added to providerOptions, preventing the API error.

Changes:
- Modified normalizeMessages() in transform.ts to check if model supports interleaved reasoning before including reasoning in providerOptions
- Reasoning parts are now correctly filtered for both thinking and non-thinking models
@github-actions
Contributor

github-actions bot commented Feb 1, 2026

The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate Found

PR #11572 - "fix: strip reasoning parts when switching to non-interleaved models"
#11572

This appears to be a near-duplicate or very closely related PR that addresses the same issue. Both PRs are solving the problem of stripping reasoning parts when switching between models with different reasoning/interleaving capabilities.

Other Related PRs (different scope but similar theme):

Recommendation: Check if PR #11572 is already merged or if it covers the same solution to #11571. If both are open, one should likely be closed as a duplicate.

opencode-agent bot added a commit that referenced this pull request Feb 1, 2026
@01luyicheng 01luyicheng closed this Feb 2, 2026