
Commit dca8645

Authored by: mini2scte, hannesrudolph, roomote-v0[bot], roomote
Roo to main (#961)
* Move more types to @roo-code/types (for the cli) (#10583)
* Add some functionality to @roo-code/core for the cli (#10584)
* Add some slash commands that are useful for cli development (#10586)
* feat: add debug setting to settings page (#10580)
* feat: add debug mode toggle to settings
* Update About component with debug mode description
* i18n: add debug mode strings to settings locales
* Update debug mode description in all locales
* fix: post state to webview after debugSetting update This addresses the review feedback that the debugSetting handler was not posting updated state back to the webview, which could cause the UI to stay stale until another state refresh occurred.
* fix: clarify debug mode description to specify task header location Updated debugMode.description across all 18 locales to clarify that debug buttons appear in the task header, per review feedback.
* fix: remove redundant postStateToWebview call after debug setting update
* Update src/core/webview/webviewMessageHandler.ts Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
---------
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* fix: round-trip Gemini thought signatures for tool calls (#10590)
* fix: make edit_file matching more resilient (#10585)
* chore(gemini): stop overriding tool allow/deny lists (#10592)
* fix(cerebras): ensure all tools have consistent strict mode values (#10589) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: disable edit_file tool for Gemini/Vertex (#10594)
* Release v3.39.2 (#10595)
* Changeset version bump (#10596) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Add a TUI (#10480) Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
Co-authored-by: Daniel <57051444+daniel-lxs@users.noreply.github.com>
* Allow the cli release script to install locally for testing (#10597)
* feat(ipc): add retry mechanism and error handling
* More file organization for the cli (#10599)
* refactor: rename roo-cli to cos-cli and update related references
* Some cleanup in ExtensionHost (#10600) Co-authored-by: Roo Code <roomote@roocode.com>
* refactor: optimize response rendering and tool call handling
* Rename Roo Code Cloud Provider to Roo Code Router (#10560) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: bump version to v1.102.0 (#10604)
* Update router name in types (#10605)
* Update Roo Code Router service name (#10607)
* chore: add changeset for v3.39.3 (#10608)
* Update router name in types (#10610)
* Changeset version bump (#10609) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* feat: add debug mode configuration and state management
* fix(settings): add debug condition for ZgsmAI custom config
* Basic settings search (#10619)
* Prototype of a simpler searchable settings
* Fix tests
* UI improvements
* Input tweaks
* Update webview-ui/src/components/settings/SettingsSearch.tsx Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* fix: remove duplicate Escape key handler dead code
* Cleanup
* Fix tests
---------
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com>
* ux: UI improvements to search settings (#10633)
* UI changs
* Update webview-ui/src/components/marketplace/MarketplaceView.tsx Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* i18n
---------
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* feat: display edit_file errors in UI after consecutive failures (#10581)
* feat(chat): enhance reasoning block with animated
thinking indicator and varied messages
* perf: optimize message block cloning in presentAssistantMessage (#10616)
* feat(chat): add random loading messages with localization
* fix: correct Gemini 3 thought signature injection format via OpenRouter (#10640)
* fix: encode hyphens in MCP tool names before sanitization (#10644)
* fix: sanitize tool_use IDs to match API validation pattern (#10649)
* fix(path): return empty string from getReadablePath when path is empty - ROO-437 (#10638)
* ux: Standard stop button 🟥 (#10639) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* fix: omit parallel_tool_calls when not explicitly enabled (COM-406) (#10671)
* fix: use placeholder for empty tool result content to fix Gemini API validation (#10672)
* ux: Further improve error display (#10692)
* Ensures error details are shown for all errors (except diff, which has its own case)
* More details
* litellm is a proxy
* ux: improve stop button visibility and streamline error handling (#10696)
* Restores the send button in the message edit mode
* Makes the stop button more prominent
* fix: clear approval buttons when API request starts (ROO-526) (#10702)
* chore: add changeset for v3.40.0 (#10705)
* Changeset version bump (#10706) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* feat(gemini): add allowedFunctionNames support to prevent mode switch errors (#10708) Co-authored-by: Roo Code <roomote@roocode.com>
* Release v3.40.1 (#10713)
* Changeset version bump (#10714) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Release: v1.105.0 (#10722)
* feat(providers): add gpt-5.2-codex model to openai-native provider (#10731)
* feat(e2e): Enable E2E tests - 39 passing tests (#10720) Co-authored-by: roomote[bot]
<219738659+roomote[bot]@users.noreply.github.com>
* Clear terminal output buffers to prevent memory leaks (#7666)
* feat: add OpenAI Codex provider with OAuth subscription authentication (#10736) Co-authored-by: Roo Code <roomote@roocode.com>
* fix(litellm): inject dummy thought signatures on ALL tool calls for Gemini (#10743)
* fix(e2e): add alwaysAllow config for MCP time server tools (#10733)
* Release v3.41.0 (#10746)
* Changeset version bump (#10747) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* feat: clarify Slack and Linear are Cloud Team only features (#10748) Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Release: v1.106.0 (#10749)
* refactor(chat): replace current task display with last user feedback
* style(chat): adjust feedback text width calculation
* fix: handle missing tool identity in OpenAI Native streams (#10719)
* Feat/issue 5376 aggregate subtask costs (#10757)
* feat(chat): add streaming state to task header interaction
* feat: add settings tab titles to search index (#10761) Co-authored-by: Roo Code <roomote@roocode.com>
* fix: filter Ollama models without native tool support (#10735)
* fix: filter out empty text blocks from user messages for Gemini compatibility (#10728)
* fix: flatten top-level anyOf/oneOf/allOf in MCP tool schemas (#10726)
* fix: prevent duplicate tool_use IDs causing API 400 errors (#10760)
* fix: truncate call_id to 64 chars for OpenAI Responses API (#10763)
* fix: Gemini thought signature validation errors (#10694) Co-authored-by: Roo Code <roomote@roocode.com>
* Release v3.41.1 (#10767)
* Changeset version bump (#10768) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* feat: add button to open markdown in VSCode preview (#10773)
Co-authored-by: Roo Code <roomote@roocode.com>
* fix(openai-codex): reset invalid model selection (#10777)
* fix: add openai-codex to providers that don't require API key (#10786) Co-authored-by: Roo Code <roomote@roocode.com>
* fix(litellm): detect Gemini models with space-separated names for thought signature injection (#10787)
* Release v3.41.2 (#10788)
* Changeset version bump (#10790) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Roo Code Router fixes for the cli (#10789) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* Revert "feat(e2e): Enable E2E tests - 39 passing tests" (#10794) Co-authored-by: Hannes Rudolph <hrudolph@gmail.com>
* Claude-like cli flags, auth fixes (#10797) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com>
* Release cli v0.0.47 (#10798)
* Use a redirect instead of a fetch for cli auth (#10799)
* chore(cli): prepare release v0.0.48 (#10800)
* Fix thinking block word-breaking to prevent horizontal scroll (#10806) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: add changeset for v3.41.3 (#10822)
* Removal of glm4 6 (#10815) Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Changeset version bump (#10823) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* feat: warn users when too many MCP tools are enabled (#10772)
* feat: warn users when too many MCP tools are enabled
  - Add WarningRow component for displaying generic warnings with icon, title, message, and optional docs link
  - Add TooManyToolsWarning component that shows when users have more than 40 MCP tools enabled
  - Add MAX_MCP_TOOLS_THRESHOLD constant (40)
  - Add i18n translations for the warning message
  - Integrate warning into ChatView to
display after task header
  - Add comprehensive tests for both components Closes ROO-542
* Moves constant to the right place
* Move it to the backend
* i18n
* Add actionlink that takes you to MCP settings in this case
* Add to MCP settings too
* Bump max tools up to 60 since github itself has 50+
* DRY
* Fix test
---------
Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Bruno Bergher <bruno@roocode.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Support different cli output formats: text, json, streaming json (#10812) Co-authored-by: Roo Code <roomote@roocode.com>
* chore(cli): prepare release v0.0.49 (#10825)
* fix(cli): set integrationTest to true in ExtensionHost constructor (#10826)
* fix(cli): fix quiet mode tests by capturing console before host creation (#10827)
* refactor: unify user content tags to <user_message> (#10723) Co-authored-by: Roo Code <roomote@roocode.com>
* feat(openai-codex): add ChatGPT subscription usage limits dashboard (#10813)
* perf(webview): avoid resending taskHistory in state updates (#10842) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* Fix broken link on pricing page (#10847)
* fix: update broken pricing link to /models page
* Update apps/web-roo-code/src/app/pricing/page.tsx
---------
Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Bruno Bergher <bruno@roocode.com>
* Git worktree management (#10458) Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Hannes Rudolph <hrudolph@gmail.com> Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
* feat: enable prompt caching for Cerebras zai-glm-4.7 model (#10670) Co-authored-by: Roo Code <roomote@roocode.com>
* feat: add Kimi K2 thinking model to VertexAI provider (#9269) Co-authored-by: Roo Code <roomote@roocode.com>
* feat: standardize model selectors across all providers (#10294) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code
<roomote@roocode.com>
* chore: remove XML tool calling support (#10841) Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Fix broken link on pricing page (#10847)
* fix: update broken pricing link to /models page
* Update apps/web-roo-code/src/app/pricing/page.tsx
---------
Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Bruno Bergher <bruno@roocode.com>
* Pr 10853 (#10854) Co-authored-by: Roo Code <roomote@roocode.com>
* Git worktree management (#10458) Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Hannes Rudolph <hrudolph@gmail.com> Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
* feat: enable prompt caching for Cerebras zai-glm-4.7 model (#10670) Co-authored-by: Roo Code <roomote@roocode.com> (cherry picked from commit c7ce8aa)
* feat: add Kimi K2 thinking model to VertexAI provider (#9269) Co-authored-by: Roo Code <roomote@roocode.com> (cherry picked from commit a060915)
* feat: standardize model selectors across all providers (#10294) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com> (cherry picked from commit e356d05)
* fix: resolve race condition in context condensing prompt input (#10876)
* Copy: update /slack page messaging (#10869) copy: update /slack page messaging
  - Update trial CTA to 'Start a free 14 day Team trial'
  - Replace 'humans' with 'your team' in value props subtitle
  - Shorten value prop titles for consistent one-line display
  - Improve Thread-aware and Open to all descriptions
* fix: Handle mode selector empty state on workspace switch (#9674)
* fix: handle mode selector empty state on workspace switch When switching between VS Code workspaces, if the current mode from workspace A is not available in workspace B, the mode selector would show an empty string.
This fix adds fallback logic to automatically switch to the default "code" mode when the current mode is not found in the available modes list. Changes:
  - Import defaultModeSlug from @roo/modes
  - Add fallback logic in selectedMode useMemo to detect when current mode is not available and automatically switch to default mode
  - Add tests to verify the fallback behavior works correctly
  - Export defaultModeSlug in test mock for consistent behavior
* fix: prevent infinite loop by moving fallback notification to useEffect
* fix: prevent infinite loop by using ref to track notified invalid mode
* refactor: clean up comments in ModeSelector fallback logic
---------
Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
* Roo to main remove xml (#936)
* Fix broken link on pricing page (#10847)
* fix: update broken pricing link to /models page
* Update apps/web-roo-code/src/app/pricing/page.tsx
---------
Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Bruno Bergher <bruno@roocode.com>
* Git worktree management (#10458) Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Hannes Rudolph <hrudolph@gmail.com> Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com>
* feat: enable prompt caching for Cerebras zai-glm-4.7 model (#10670) Co-authored-by: Roo Code <roomote@roocode.com>
* feat: add Kimi K2 thinking model to VertexAI provider (#9269) Co-authored-by: Roo Code <roomote@roocode.com>
* feat: standardize model selectors across all providers (#10294) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com>
* chore: remove XML tool calling support (#10841) Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Pr 10853 (#10854) Co-authored-by: Roo Code <roomote@roocode.com>
* feat(commit): enhance git diff handling for new repositories
* feat(task): support fake_tool_call for
Qwen model with XML tool call format
* feat(prompts): add snapshots for custom instructions and system prompt variations
---------
Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Bruno Bergher <bruno@roocode.com> Co-authored-by: Chris Estreich <cestreich@gmail.com> Co-authored-by: Hannes Rudolph <hrudolph@gmail.com> Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com> Co-authored-by: MP <michael@roocode.com>
* feat: remove Claude Code provider (#10883)
* refactor: migrate context condensing prompt to customSupportPrompts and cleanup legacy code (#10881)
* refactor: unify export path logic and default to Downloads (#10882)
* Fix marketing site preview logic (#10886)
* feat(web): redesign Slack page Featured Workflow section with YouTube… (#10880) Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com>
* feat: add size-based progress tracking for worktree file copying (#10871)
* feat: add HubSpot tracking with consent-based loading (#10885) Co-authored-by: Roo Code <roomote@roocode.com>
* fix(openai): prevent double emission of text/reasoning in native and codex handlers (#10888)
* Fix padding on Roo Code Cloud upsell (#10889) Co-authored-by: Roo Code <roomote@roocode.com>
* Open the worktreeinclude file after creating it (#10891) Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com>
* fix: prevent task abortion when resuming via IPC/bridge (#10892)
* fix: rename bot to 'Roomote' and fix spacing in Slack demo (#10898)
* Fix: Enforce file restrictions for all editing tools (#10896) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: clean up XML legacy code and native-only comments (#10900)
* feat: hide worktree feature from menus (#10899)
* fix(condense): remove custom condensing model option (#10901)
* fix(condense): remove
custom condensing model option Remove the ability to specify a different model/API configuration for condensing conversations. Modern conversations include provider-specific data (tool calls, reasoning blocks, thought signatures) that only the originating model can properly understand and summarize. Changes:
  - Remove condensingApiHandler parameter from summarizeConversation()
  - Remove condensingApiConfigId from context management and Task
  - Remove API config dropdown for CONDENSE in settings UI
  - Update telemetry to remove usedCustomApiHandler parameter
  - Update related tests
Users can still customize the CONDENSE prompt text; only model selection is removed.
* fix: remove condensingApiConfigId from types and test fixtures
---------
Co-authored-by: Roo Code <roomote@roocode.com>
* test(prompts): update snapshots to fix indentation
* Fix EXT-553: Remove percentage-based progress tracking for worktree file copying (#10905)
* Fix EXT-553: Remove percentage-based progress tracking for worktree file copying
  - Removed totalBytes from CopyProgress interface
  - Removed Math.min() clamping that caused stuck-at-100% issue
  - Changed UI from progress bar to spinner with activity indicator
  - Shows 'item — X MB copied' instead of percentage
  - Updated all 18 locale files
  - Uses native cp with polling (no new dependencies)
* fix: translate copyingProgress text in all 17 non-English locale files
---------
Co-authored-by: Roo Code <roomote@roocode.com>
* chore(prompts): clarify linked SKILL.md file handling (#10907)
* Release v3.42.0 (#10910)
* Changeset version bump (#10911) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* feat: Move condense prompt editor to Context Management tab (#10909)
* feat: Update Z.AI models with new variants and pricing (#10860) Co-authored-by: erdemgoksel <erdemgoksel@MAU-BILISIM42>
* fix: correct Gemini 3 pricing for Flash and Pro models (#10487)
Co-authored-by: Roo Code <roomote@roocode.com>
* feat: add pnpm install:vsix:nightly command (#10912)
* Intelligent Context Condensation v2 (#10873)
* feat(gemini): add tool call support for Gemini CLI
* feat(condense): improve condensation with environment details, accurate token counts, and lazy evaluation (#10920)
* docs: fix CLI README to use correct command syntax (#10923) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: remove diffEnabled and fuzzyMatchThreshold settings (#10298)
* Remove Merge button from worktrees (#10924) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: remove POWER_STEERING experimental feature (#10926)
  - Remove powerSteering from experimentIds array and schema in packages/types
  - Remove POWER_STEERING from EXPERIMENT_IDS and experimentConfigsMap
  - Remove power steering conditional block from getEnvironmentDetails
  - Remove POWER_STEERING entry from all 18 locale settings.json files
  - Update related test files to remove power steering references
* chore: remove MULTI_FILE_APPLY_DIFF experiment (#10925)
* chore: remove MULTI_FILE_APPLY_DIFF experiment Remove the 'Enable concurrent file edits' experimental feature that allowed editing multiple files in a single apply_diff call.
  - Remove multiFileApplyDiff from experiment types and config
  - Delete MultiFileSearchReplaceDiffStrategy class and tests
  - Delete MultiApplyDiffTool wrapper and tests
  - Remove experiment-specific code paths in Task.ts, generateSystemPrompt.ts, and presentAssistantMessage.ts
  - Remove special handling in ExperimentalSettings.tsx
  - Remove translations from all 18 locale files
The existing MultiSearchReplaceDiffStrategy continues to handle multiple SEARCH/REPLACE blocks within a single file.
* fix: remove unused EXPERIMENT_IDS/experiments import from Task.ts Addresses review feedback: removes the unused imports from src/core/task/Task.ts that were left over after removing the MULTI_FILE_APPLY_DIFF experiment routing code.
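The MULTI_FILE_APPLY_DIFF removal above notes that MultiSearchReplaceDiffStrategy still handles multiple SEARCH/REPLACE blocks within a single file. As a rough illustration of that retained behavior (a simplified sketch, not the actual strategy code; the names and error handling here are assumptions):

```typescript
// Hypothetical sketch of applying SEARCH/REPLACE blocks to one file's content.
// The real strategy also does fuzzy matching and line hints; this shows only
// the core substitution loop.
interface SearchReplaceBlock {
	search: string
	replace: string
}

function applySearchReplace(content: string, blocks: SearchReplaceBlock[]): string {
	let result = content
	for (const { search, replace } of blocks) {
		const idx = result.indexOf(search)
		if (idx === -1) {
			// Surface which block failed so the caller can retry with better context.
			throw new Error(`SEARCH block not found: ${search.slice(0, 40)}`)
		}
		// Replace only the first occurrence to keep the edit deterministic.
		result = result.slice(0, idx) + replace + result.slice(idx + search.length)
	}
	return result
}
```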
* fix: convert orphaned tool_results to text blocks after condensing (#10927)
* fix: convert orphaned tool_results to text blocks after condensing When condensing occurs after assistant sends tool_uses but before user responds, the tool_use blocks get condensed away. User messages containing tool_results that reference condensed tool_use_ids become orphaned and get filtered out by getEffectiveApiHistory, causing user feedback to be lost. This fix enhances the existing check in addToApiConversationHistory to detect when the previous effective message is not an assistant and converts any tool_result blocks to text blocks, preventing them from being filtered as orphans. The conversion happens at the latest possible moment (message insertion) because:
  - Tool results are created before we know if condensing will occur
  - We need actual effective history state to make the decision
  - This is the last checkpoint before orphan filtering happens
* Only include environment details in summary for automatic condensing For automatic condensing (during attemptApiRequest), environment details are included in the summary because the API request is already in progress and the next user message won't have fresh environment details injected. For manual condensing (via condenseContext button), environment details are NOT included because fresh details will be injected on the very next turn via getEnvironmentDetails() in recursivelyMakeClineRequests(). This uses the existing isAutomaticTrigger flag to differentiate behavior.
---------
Co-authored-by: Hannes Rudolph <hrudolph@gmail.com>
* refactor: remove legacy XML tool calling code (getToolDescription) (#10929)
  - Remove getToolDescription() method from MultiSearchReplaceDiffStrategy
  - Remove getToolDescription() from DiffStrategy interface
  - Remove unused ToolDescription type from shared/tools.ts
  - Remove unused eslint-disable directive
  - Update test mocks to remove getToolDescription references
  - Remove getToolDescription tests from multi-search-replace.spec.ts
Native tools are now defined in src/core/prompts/tools/native-tools/ using the OpenAI function format. The removed code was dead since XML-style tool calling was replaced with native tool calling.
* Fix duplicate model display for OpenAI Codex provider (#10930) Co-authored-by: Roo Code <roomote@roocode.com>
* Skip thoughtSignature blocks during markdown export #10199 (#10932)
* fix: use json-stream-stringify for pretty-printing MCP config files (#9864) Co-authored-by: Roo Code <roomote@roocode.com>
* fix: auto-migrate v1 condensing prompt and handle invalid providers on import (#10931)
* chore: add changeset for v3.43.0 (#10933)
* Changeset version bump (#10934) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Replace hyphen encoding with fuzzy matching for MCP tool names (#10775)
* fix: truncate AWS Bedrock toolUseId to 64 characters (#10902)
* feat: remove MCP SERVERS section from system prompt (#10895) Co-authored-by: Roo Code <roomote@roocode.com>
* feat(tools): enhance Gemini CLI with new tool descriptions and formatting
* feat(task): improve smart mistake detector with error source tracking and refined auto-switching
* feat: update Fireworks provider with new models (#10679) Fixes #10674 Added new models:
  - MiniMax M2.1 (minimax-m2p1)
  - DeepSeek V3.2 (deepseek-v3p2)
  - GLM-4.7 (glm-4p7)
  - Llama 3.3 70B Instruct (llama-v3p3-70b-instruct)
  - Llama 4 Maverick Instruct
(llama4-maverick-instruct-basic)
  - Llama 4 Scout Instruct (llama4-scout-instruct-basic)
All models include correct pricing, context windows, and capabilities.
* ux: Improve subtask visibility and navigation in history and chat views (#10864)
* Taskheader
* Subtask messages
* View subtask
* subtasks in history items
* i18n
* Table
* Lighter visuals
* bug
* fix: Align tests with implementation behavior
* refactor: extract CircularProgress component from TaskHeader
  - Created reusable CircularProgress component for displaying percentage as a ring
  - Moved inline SVG calculation from TaskHeader.tsx to dedicated component
  - Added comprehensive tests for CircularProgress component (14 tests)
  - Component supports customizable size, strokeWidth, and className
  - Includes proper accessibility attributes (progressbar role, aria-valuenow)
* chore: update StandardTooltip default delay to 600ms As mentioned in the PR description, increased the tooltip delay to 600ms for less intrusive tooltips. The delay is still configurable via the delay prop for components that need a different value.
---------
Co-authored-by: Roo Code <roomote@roocode.com>
* fix(types): remove unsupported Fireworks model tool fields (#10937) fix(types): remove unsupported tool capability fields from Fireworks model metadata Co-authored-by: Roo Code <roomote@roocode.com>
* ux: improve worktree selector and creation UX (#10940)
* Delete modal
* Restructured
* Much more prominent
* UI
* i18n
* Fixes i18n
* Remove mergeresultmodel
* i18n
* tests
* knip
* code review
* feat: add wildcard support for MCP alwaysAllow configuration (#10948) Co-authored-by: Roo Code <roomote@roocode.com>
* fix: restore opaque background to settings section headers (#10951) Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Update and improve zh-TW Traditional Chinese locale and docs (#10953)
* chore: remove POWER_STEERING experiment remnants (#10980)
* fix: record truncation event when condensation fails but truncation succeeds (#10984)
* feat: new_task tool creates checkpoint the same way write_to_file does (#10982)
* fix: VS Code LM token counting returns 0 outside requests, breaking context condensing (EXT-620) (#10983)
  - Modified VsCodeLmHandler.internalCountTokens() to create temporary cancellation tokens when needed
  - Token counting now works both during and outside of active requests
  - Added 4 new tests to verify the fix and prevent regression
  - Resolves issue where VS Code LM API users experienced context overflow errors
* fix: prevent nested condensing from including previously-condensed content (#10985)
* fix: use --force by default when deleting worktrees (#10986) Co-authored-by: Roo Code <roomote@roocode.com>
* chore: add changeset for v3.44.0 (#10987)
* Changeset version bump (#10989) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com>
* Fix LiteLLM tool ID validation errors for Bedrock proxy (#10990)
* Enable parallel tool
calling with new_task isolation safeguards (#10979) Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com> Co-authored-by: Hannes Rudolph <hrudolph@gmail.com>
* Add quality checks to marketing site deployment workflows (#10959) Co-authored-by: cte <cestreich@gmail.com>
---------
Co-authored-by: Chris Estreich <cestreich@gmail.com> Co-authored-by: Hannes Rudolph <hrudolph@gmail.com> Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <roomote@roocode.com> Co-authored-by: Matt Rubens <mrubens@users.noreply.github.com> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Daniel <57051444+daniel-lxs@users.noreply.github.com> Co-authored-by: Bruno Bergher <bruno@roocode.com> Co-authored-by: Archimedes <84040360+ArchimedesCrypto@users.noreply.github.com> Co-authored-by: Patrick Decat <pdecat@gmail.com> Co-authored-by: T <taltas@users.noreply.github.com> Co-authored-by: Seb Duerr <sebastian.duerr@cerebras.net> Co-authored-by: daniel-lxs <ricciodaniel98@gmail.com> Co-authored-by: MP <michael@roocode.com> Co-authored-by: Erdem <74366943+ErdemGKSL@users.noreply.github.com> Co-authored-by: erdemgoksel <erdemgoksel@MAU-BILISIM42> Co-authored-by: rossdonald <49425722+rossdonald@users.noreply.github.com> Co-authored-by: Michaelzag <michael@michaelzag.com> Co-authored-by: Thanh Nguyen <74597207+ThanhNguyxn@users.noreply.github.com> Co-authored-by: Peter Dave Hello <3691490+PeterDaveHello@users.noreply.github.com>
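The orphaned tool_result handling described in #10927 above can be sketched roughly as follows (assumed, simplified message shapes; this is not the actual addToApiConversationHistory implementation):

```typescript
// If the previous effective message is not an assistant turn (e.g. its
// tool_use blocks were condensed away), tool_result blocks in the incoming
// user message would be orphaned and later filtered out, so we downgrade
// them to plain text blocks that survive orphan filtering.
type ContentBlock =
	| { type: "text"; text: string }
	| { type: "tool_result"; tool_use_id: string; content: string }

function convertOrphanedToolResults(
	prevEffectiveRole: "user" | "assistant" | undefined,
	blocks: ContentBlock[],
): ContentBlock[] {
	if (prevEffectiveRole === "assistant") {
		return blocks // matching tool_use still in history; keep the pairing
	}
	return blocks.map((block) =>
		block.type === "tool_result" ? { type: "text", text: `[Tool result]\n${block.content}` } : block,
	)
}
```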
1 parent aac7982 commit dca8645

23 files changed

Lines changed: 1157 additions & 258 deletions

File tree

.github/workflows/website-deploy.yml.disabled

Lines changed: 14 additions & 1 deletion
@@ -8,6 +8,10 @@ on:
       - 'apps/web-roo-code/**'
   workflow_dispatch:
 
+concurrency:
+  group: deploy-roocode-com
+  cancel-in-progress: true
+
 env:
   VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
   VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
@@ -36,8 +40,17 @@ jobs:
       uses: actions/checkout@v4
     - name: Setup Node.js and pnpm
       uses: ./.github/actions/setup-node-pnpm
+    - name: Run lint
+      run: pnpm lint
+      working-directory: apps/web-roo-code
+    - name: Run type check
+      run: pnpm check-types
+      working-directory: apps/web-roo-code
+    - name: Run build
+      run: pnpm build
+      working-directory: apps/web-roo-code
     - name: Install Vercel CLI
-      run: npm install --global vercel@canary
+      run: npm install --global vercel@latest
     - name: Pull Vercel Environment Information
       run: npx vercel pull --yes --environment=production --token=${{ secrets.VERCEL_TOKEN }}
     - name: Build Project Artifacts

.github/workflows/website-preview.yml.disabled

Lines changed: 14 additions & 1 deletion
@@ -11,6 +11,10 @@ on:
       - "apps/web-roo-code/**"
   workflow_dispatch:
 
+concurrency:
+  group: preview-roocode-com-${{ github.ref }}
+  cancel-in-progress: true
+
 env:
   VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
   VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
@@ -39,8 +43,17 @@ jobs:
       uses: actions/checkout@v4
     - name: Setup Node.js and pnpm
      uses: ./.github/actions/setup-node-pnpm
+    - name: Run lint
+      run: pnpm lint
+      working-directory: apps/web-roo-code
+    - name: Run type check
+      run: pnpm check-types
+      working-directory: apps/web-roo-code
+    - name: Run build
+      run: pnpm build
+      working-directory: apps/web-roo-code
     - name: Install Vercel CLI
-      run: npm install --global vercel@canary
+      run: npm install --global vercel@latest
     - name: Pull Vercel Environment Information
       run: npx vercel pull --yes --environment=preview --token=${{ secrets.VERCEL_TOKEN }}
     - name: Build Project Artifacts

src/api/providers/__tests__/lite-llm.spec.ts

Lines changed: 202 additions & 0 deletions
@@ -718,4 +718,206 @@ describe("LiteLLMHandler", () => {
 			})
 		})
 	})
+
+	describe("tool ID normalization", () => {
+		it("should truncate tool IDs longer than 64 characters", async () => {
+			const optionsWithBedrock: ApiHandlerOptions = {
+				...mockOptions,
+				litellmModelId: "bedrock/anthropic.claude-3-sonnet",
+			}
+			handler = new LiteLLMHandler(optionsWithBedrock)
+
+			vi.spyOn(handler as any, "fetchModel").mockResolvedValue({
+				id: "bedrock/anthropic.claude-3-sonnet",
+				info: { ...litellmDefaultModelInfo, maxTokens: 8192 },
+			})
+
+			// Create a tool ID longer than 64 characters
+			const longToolId = "toolu_" + "a".repeat(70) // 76 characters total
+
+			const systemPrompt = "You are a helpful assistant"
+			const messages: Anthropic.Messages.MessageParam[] = [
+				{ role: "user", content: "Hello" },
+				{
+					role: "assistant",
+					content: [
+						{ type: "text", text: "I'll help you with that." },
+						{ type: "tool_use", id: longToolId, name: "read_file", input: { path: "test.txt" } },
+					],
+				},
+				{
+					role: "user",
+					content: [{ type: "tool_result", tool_use_id: longToolId, content: "file contents" }],
+				},
+			]
+
+			const mockStream = {
+				async *[Symbol.asyncIterator]() {
+					yield {
+						choices: [{ delta: { content: "Response" } }],
+						usage: { prompt_tokens: 100, completion_tokens: 20 },
+					}
+				},
+			}
+
+			mockCreate.mockReturnValue({
+				withResponse: vi.fn().mockResolvedValue({ data: mockStream }),
+			})
+
+			const generator = handler.createMessage(systemPrompt, messages)
+			for await (const _chunk of generator) {
+				// Consume
+			}
+
+			// Verify that tool IDs are truncated to 64 characters or less
+			const createCall = mockCreate.mock.calls[0][0]
+			const assistantMessage = createCall.messages.find(
+				(msg: any) => msg.role === "assistant" && msg.tool_calls && msg.tool_calls.length > 0,
+			)
+			const toolMessage = createCall.messages.find((msg: any) => msg.role === "tool")
+
+			expect(assistantMessage).toBeDefined()
+			expect(assistantMessage.tool_calls[0].id.length).toBeLessThanOrEqual(64)
+
+			expect(toolMessage).toBeDefined()
+			expect(toolMessage.tool_call_id.length).toBeLessThanOrEqual(64)
+		})
+
+		it("should not modify tool IDs that are already within 64 characters", async () => {
+			const optionsWithBedrock: ApiHandlerOptions = {
+				...mockOptions,
+				litellmModelId: "bedrock/anthropic.claude-3-sonnet",
+			}
+			handler = new LiteLLMHandler(optionsWithBedrock)
+
+			vi.spyOn(handler as any, "fetchModel").mockResolvedValue({
+				id: "bedrock/anthropic.claude-3-sonnet",
+				info: { ...litellmDefaultModelInfo, maxTokens: 8192 },
+			})
+
+			// Create a tool ID within 64 characters
+			const shortToolId = "toolu_01ABC123" // Well under 64 characters
+
+			const systemPrompt = "You are a helpful assistant"
+			const messages: Anthropic.Messages.MessageParam[] = [
+				{ role: "user", content: "Hello" },
+				{
+					role: "assistant",
+					content: [
+						{ type: "text", text: "I'll help you with that." },
+						{ type: "tool_use", id: shortToolId, name: "read_file", input: { path: "test.txt" } },
+					],
+				},
+				{
+					role: "user",
+					content: [{ type: "tool_result", tool_use_id: shortToolId, content: "file contents" }],
+				},
+			]
+
+			const mockStream = {
+				async *[Symbol.asyncIterator]() {
+					yield {
+						choices: [{ delta: { content: "Response" } }],
+						usage: { prompt_tokens: 100, completion_tokens: 20 },
+					}
+				},
+			}
+
+			mockCreate.mockReturnValue({
+				withResponse: vi.fn().mockResolvedValue({ data: mockStream }),
+			})
+
+			const generator = handler.createMessage(systemPrompt, messages)
+			for await (const _chunk of generator) {
+				// Consume
+			}
+
+			// Verify that tool IDs are unchanged
+			const createCall = mockCreate.mock.calls[0][0]
+			const assistantMessage = createCall.messages.find(
+				(msg: any) => msg.role === "assistant" && msg.tool_calls && msg.tool_calls.length > 0,
+			)
+			const toolMessage = createCall.messages.find((msg: any) => msg.role === "tool")
+
+			expect(assistantMessage).toBeDefined()
+			expect(assistantMessage.tool_calls[0].id).toBe(shortToolId)
+
+			expect(toolMessage).toBeDefined()
+			expect(toolMessage.tool_call_id).toBe(shortToolId)
+		})
+
+		it("should maintain uniqueness with hash suffix when truncating", async () => {
+			const optionsWithBedrock: ApiHandlerOptions = {
+				...mockOptions,
+				litellmModelId: "bedrock/anthropic.claude-3-sonnet",
+			}
+			handler = new LiteLLMHandler(optionsWithBedrock)
+
+			vi.spyOn(handler as any, "fetchModel").mockResolvedValue({
+				id: "bedrock/anthropic.claude-3-sonnet",
+				info: { ...litellmDefaultModelInfo, maxTokens: 8192 },
+			})
+
+			// Create two tool IDs that differ only near the end
+			const longToolId1 = "toolu_" + "a".repeat(60) + "_suffix1"
+			const longToolId2 = "toolu_" + "a".repeat(60) + "_suffix2"
+
+			const systemPrompt = "You are a helpful assistant"
+			const messages: Anthropic.Messages.MessageParam[] = [
+				{ role: "user", content: "Hello" },
+				{
+					role: "assistant",
+					content: [
+						{ type: "text", text: "I'll help." },
+						{ type: "tool_use", id: longToolId1, name: "read_file", input: { path: "test1.txt" } },
+						{ type: "tool_use", id: longToolId2, name: "read_file", input: { path: "test2.txt" } },
+					],
+				},
+				{
+					role: "user",
+					content: [
+						{ type: "tool_result", tool_use_id: longToolId1, content: "file1 contents" },
+						{ type: "tool_result", tool_use_id: longToolId2, content: "file2 contents" },
+					],
+				},
+			]
+
+			const mockStream = {
+				async *[Symbol.asyncIterator]() {
+					yield {
+						choices: [{ delta: { content: "Response" } }],
+						usage: { prompt_tokens: 100, completion_tokens: 20 },
+					}
+				},
+			}
+
+			mockCreate.mockReturnValue({
+				withResponse: vi.fn().mockResolvedValue({ data: mockStream }),
+			})
+
+			const generator = handler.createMessage(systemPrompt, messages)
+			for await (const _chunk of generator) {
+				// Consume
+			}
+
+			// Verify that truncated tool IDs are unique (hash suffix ensures this)
+			const createCall = mockCreate.mock.calls[0][0]
+			const assistantMessage = createCall.messages.find(
+				(msg: any) => msg.role === "assistant" && msg.tool_calls && msg.tool_calls.length > 0,
+			)
+
+			expect(assistantMessage).toBeDefined()
+			expect(assistantMessage.tool_calls).toHaveLength(2)
+
+			const id1 = assistantMessage.tool_calls[0].id
+			const id2 = assistantMessage.tool_calls[1].id
+
+			// Both should be truncated to 64 characters
+			expect(id1.length).toBeLessThanOrEqual(64)
+			expect(id2.length).toBeLessThanOrEqual(64)
+
+			// They should be different (hash suffix ensures uniqueness)
+			expect(id1).not.toBe(id2)
+		})
+	})
 })

src/api/providers/lite-llm.ts

Lines changed: 4 additions & 1 deletion
@@ -9,6 +9,7 @@ import { ApiHandlerOptions } from "../../shared/api"
 
 import { ApiStream, ApiStreamUsageChunk } from "../transform/stream"
 import { convertToOpenAiMessages } from "../transform/openai-format"
+import { sanitizeOpenAiCallId } from "../../utils/tool-id"
 
 import type { SingleCompletionHandler, ApiHandlerCreateMessageMetadata } from "../index"
 import { RouterProvider } from "./router-provider"
@@ -115,7 +116,9 @@ export class LiteLLMHandler extends RouterProvider implements SingleCompletionHa
 	): ApiStream {
 		const { id: modelId, info } = await this.fetchModel()
 
-		const openAiMessages = convertToOpenAiMessages(messages)
+		const openAiMessages = convertToOpenAiMessages(messages, {
+			normalizeToolCallId: sanitizeOpenAiCallId,
+		})
 
 		// Prepare messages with cache control if enabled and supported
 		let systemMessage: OpenAI.Chat.ChatCompletionMessageParam
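The test suite above pins down the contract for `sanitizeOpenAiCallId` without showing its body: IDs of 64 characters or fewer pass through unchanged, and longer IDs are truncated to at most 64 characters with a hash suffix that keeps near-duplicate IDs distinct. A minimal sketch satisfying that contract might look like the following — the SHA-256 choice, the 8-character hash length, and the `_` separator are assumptions for illustration, not the actual implementation in `src/utils/tool-id`:

```typescript
import { createHash } from "node:crypto"

const MAX_OPENAI_CALL_ID_LENGTH = 64

// Sketch: IDs already within the limit pass through untouched; longer IDs
// are truncated and given a short hash suffix derived from the full
// original ID, so two long IDs that differ only near the end remain
// distinct after truncation. Hash algorithm and suffix length are assumed.
export function sanitizeOpenAiCallId(id: string): string {
	if (id.length <= MAX_OPENAI_CALL_ID_LENGTH) {
		return id
	}
	// First 8 hex chars of SHA-256 over the full original ID
	const hash = createHash("sha256").update(id).digest("hex").slice(0, 8)
	const prefix = id.slice(0, MAX_OPENAI_CALL_ID_LENGTH - hash.length - 1)
	return `${prefix}_${hash}`
}
```

Because the hash is computed over the full original ID before truncation, the "differ only near the end" case from the third test still yields different suffixes.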

src/core/assistant-message/presentAssistantMessage.ts

Lines changed: 19 additions & 8 deletions
@@ -131,7 +131,11 @@ export async function presentAssistantMessage(cline: Task) {
 					break
 				}
 
-				if (cline.didAlreadyUseTool) {
+				// Get parallel tool calling state from experiments
+				const mcpState = await cline.providerRef.deref()?.getState()
+				const mcpParallelToolCallsEnabled = mcpState?.experiments?.multipleNativeToolCalls ?? false
+
+				if (!mcpParallelToolCallsEnabled && cline.didAlreadyUseTool) {
 					const toolCallId = mcpBlock.id
 					const errorMessage = `MCP tool [${mcpBlock.name}] was not executed because a tool has already been used in this message. Only one tool may be used per message.`
 
@@ -206,9 +210,11 @@
 				}
 
 				hasToolResult = true
-				cline.didAlreadyUseTool = true
-				if (editTools.includes(block.name) && block.partial === false) {
-					updateCospecMetadata(cline, block?.params?.path)
+				// Only set didAlreadyUseTool when parallel tool calling is disabled
+				if (!mcpParallelToolCallsEnabled) {
+					cline.didAlreadyUseTool = true
+					if (editTools.includes(block.name) && block.partial === false)
+						updateCospecMetadata(cline, block?.params?.path)
 				}
 			}
 
@@ -449,7 +455,10 @@
 					break
 				}
 
-				if (cline.didAlreadyUseTool) {
+				// Get parallel tool calling state from experiments (stateExperiments already fetched above)
+				const parallelToolCallsEnabled = stateExperiments?.multipleNativeToolCalls ?? false
+
+				if (!parallelToolCallsEnabled && cline.didAlreadyUseTool) {
 					// Ignore any content after a tool has already been used.
 					// For native tool calling, we must send a tool_result for every tool_use to avoid API errors
 					const errorMessage = `Tool [${block.name}] was not executed because a tool has already been used in this message. Only one tool may be used per message. You must assess the first tool's result before proceeding to use the next tool.`
@@ -559,9 +568,11 @@
 				}
 
 				hasToolResult = true
-				cline.didAlreadyUseTool = true
-				if (editTools.includes(block.name) && block.partial === false) {
-					updateCospecMetadata(cline, block?.params?.path)
+				// Only set didAlreadyUseTool when parallel tool calling is disabled
+				if (!parallelToolCallsEnabled) {
+					cline.didAlreadyUseTool = true
+					if (editTools.includes(block.name) && block.partial === false)
+						updateCospecMetadata(cline, block?.params?.path)
 				}
 			}
src/core/condense/__tests__/condense.spec.ts

Lines changed: 1 addition & 16 deletions
@@ -250,7 +250,7 @@ Line 2
 		it("should not summarize messages that already contain a recent summary with no new messages", async () => {
 			const messages: ApiMessage[] = [
 				{ role: "user", content: "First message with /command" },
-				{ role: "assistant", content: "Previous summary", isSummary: true },
+				{ role: "user", content: "Previous summary", isSummary: true },
 			]
 
 			const result = await summarizeConversation(messages, mockApiHandler, "System prompt", taskId, false)
@@ -413,20 +413,5 @@ Line 2
 			expect(result[1]).toEqual(messages[4])
 			expect(result[2]).toEqual(messages[5])
 		})
-
-		it("should prepend first user message when summary starts with assistant", () => {
-			const messages: ApiMessage[] = [
-				{ role: "user", content: "Original first message" },
-				{ role: "assistant", content: "Summary content", isSummary: true },
-				{ role: "user", content: "After summary" },
-			]
-
-			const result = getMessagesSinceLastSummary(messages)
-
-			// Should prepend original first message for Bedrock compatibility
-			expect(result[0]).toEqual(messages[0]) // Original first user message
-			expect(result[1]).toEqual(messages[1]) // The summary
-			expect(result[2]).toEqual(messages[2])
-		})
 	})
 })
