Conversation

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds a new ResponsesApiFormat ChatFormat that bridges OpenAI Responses API requests/responses to the internal chat completions hub, implements streaming/tool-call accumulation and native delegation, and updates native streaming state to carry usage.

Changes: Responses API Format Bridge
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Responses API<br/>Client
    participant Format as ResponsesApiFormat
    participant Hub as Chat Completions<br/>Hub
    participant Native as Native OpenAI<br/>Handlers
    Client->>Format: ResponsesApiRequest (instructions, items, tools)
    Format->>Format: validate_request()
    Format->>Format: to_hub() -> ChatCompletionRequest
    alt Native Delegation
        Format->>Native: call_native(request)
        Native->>Native: transform_request()
        Native->>Hub: send native request
        Hub-->>Native: ChatCompletionResponse / stream
        Native->>Native: transform_response()
        Native-->>Format: transformed response / stream events
    else Direct Hub Call
        Format->>Hub: send ChatCompletionRequest
        Hub-->>Format: ChatCompletionResponse / stream
    end
    Format->>Format: from_hub() -> ResponsesApiResponse
    Format-->>Client: ResponsesApiResponse (output_text, function_calls, usage)
    alt Streaming
        Hub->>Format: ChatCompletionChunk stream
        Format->>Format: merge_streaming_tool_call() / accumulate text deltas
        Format-->>Client: ResponsesApiStreamEvent (output_item_added / content_part_added / content_block_delta)
        Format-->>Client: ResponsesApiStreamEvent (output_item_done / content_block_done)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 5 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (5 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/gateway/formats/openai/responses.rs (1)
27-30: ⚡ Quick win: Add `///` docs to the new public types.

`ResponsesApiFormat` is publicly re-exported, and `ResponsesBridgeState` is public as well, but neither has a doc comment.

Minimal doc-comment patch:

```diff
+/// Bridges OpenAI Responses API requests and streams through the hub chat format.
 pub struct ResponsesApiFormat;

 #[derive(Debug, Clone, Default)]
+/// Stateful data used while converting hub streaming chunks into Responses API events.
 pub struct ResponsesBridgeState {
```

As per coding guidelines: "Use /// for doc comments on public items in Rust".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/formats/openai/responses.rs` around lines 27-30, add /// doc comments to the new public types: place a short descriptive triple-slash comment above the ResponsesApiFormat declaration explaining its role as the API response format adapter, and above the ResponsesBridgeState struct explaining what state it holds and when it is used; ensure the comments are concise, use proper Rust doc style (///), describe public behavior or purpose, and mention any important invariants or usage notes for ResponsesApiFormat and ResponsesBridgeState.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/gateway/formats/openai/responses.rs`:
- Around line 204-258: The stream assigns different numeric indices to the same
logical output as items (tools/text) are added, because tool_output_index(state,
...) recomputes index from current presence; fix by making the output index
stable and stored in the per-item state: when handling a tool call (functions:
merge_streaming_tool_call, streaming_function_call_output_item, and where you
currently call tool_output_index), check the tool_state for a stored
output_index Option, and if None compute the index once (using the same
logic/tool_output_index helper) and persist it to tool_state.output_index before
emitting any
OutputItemAdded/OutputTextDelta/FunctionCallArgumentsDelta/Output... events;
replace subsequent calls to tool_output_index with reading
tool_state.output_index so all later delta/done/completed events use the same
stable index. Apply the same pattern in the other referenced blocks (around
lines ~292-305, ~749-786, ~788-823).
- Around line 71-98: The bridge currently accepts requests with
previous_response_id but to_hub() only forwards the current input, so follow-ups
with FunctionCallOutput become orphaned tool messages; fix this by updating
ensure_request_is_bridgeable() to reject any request that has
previous_response_id (and/or any ResponsesInput item that represents a
FunctionCallOutput continuation) by returning an Err when
req.previous_response_id.is_some() (and the analogous predicate for input
items), so to_hub(), responses_input_item_to_hub_message, and BridgeContext are
never invoked for these unresolved continuations; reference
ensure_request_is_bridgeable, to_hub, BridgeContext, previous_response_id, and
responses_input_item_to_hub_message when making the change.
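The compute-once-then-cache pattern that the first inline comment above asks for can be sketched standalone. This is a hedged sketch: `ToolCallState`, `BridgeState`, and `stable_tool_output_index` are hypothetical simplifications of the real per-tool-call streaming state in responses.rs, kept only to show how persisting `output_index` keeps later events stable:

```rust
use std::collections::HashMap;

// Hypothetical per-tool-call streaming state; the real state in
// responses.rs also accumulates names and argument deltas.
#[derive(Default)]
struct ToolCallState {
    output_index: Option<usize>,
}

#[derive(Default)]
struct BridgeState {
    text_item_emitted: bool,
    tool_calls: HashMap<u32, ToolCallState>,
}

impl BridgeState {
    /// Compute the output index from current presence only once,
    /// then persist it so every later delta/done event for this
    /// tool call reuses the same value.
    fn stable_tool_output_index(&mut self, id: u32) -> usize {
        let next = self.text_item_emitted as usize
            + self
                .tool_calls
                .values()
                .filter(|s| s.output_index.is_some())
                .count();
        let state = self.tool_calls.entry(id).or_default();
        *state.output_index.get_or_insert(next)
    }
}

fn main() {
    let mut state = BridgeState { text_item_emitted: true, ..Default::default() };
    assert_eq!(state.stable_tool_output_index(7), 1);
    assert_eq!(state.stable_tool_output_index(9), 2); // a new tool call gets the next slot
    assert_eq!(state.stable_tool_output_index(7), 1); // the first call's index is unchanged
    println!("indices stay stable across chunks");
}
```

Without the cached `output_index`, recomputing from presence on every chunk would shift the first call's index as soon as a second call appears, which is exactly the bug the comment describes.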
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 157487af-ca4d-4937-bd88-a5868193fa8e
📒 Files selected for processing (4)
- src/gateway/formats/mod.rs
- src/gateway/formats/openai/mod.rs
- src/gateway/formats/openai/responses.rs
- src/gateway/traits/native.rs
```rust
fn to_hub(req: &Self::Request) -> Result<(ChatCompletionRequest, BridgeContext)> {
    ensure_request_is_bridgeable(req)?;

    let mut messages = Vec::new();
    if let Some(instructions) = req.instructions.as_ref().filter(|text| !text.is_empty()) {
        messages.push(ChatMessage {
            role: "system".into(),
            content: Some(MessageContent::Text(instructions.clone())),
            name: None,
            tool_calls: None,
            tool_call_id: None,
        });
    }

    match &req.input {
        ResponsesInput::Text(text) => messages.push(ChatMessage {
            role: "user".into(),
            content: Some(MessageContent::Text(text.clone())),
            name: None,
            tool_calls: None,
            tool_call_id: None,
        }),
        ResponsesInput::Items(items) => {
            for item in items {
                messages.push(responses_input_item_to_hub_message(item)?);
            }
        }
    }
```
Reject `previous_response_id` tool-result continuations until the bridge can reconstruct history.

`to_hub()` only forwards the current input, while `previous_response_id` is just echoed into `BridgeContext`. A follow-up request containing `FunctionCallOutput` therefore turns into a standalone `role="tool"` message with no preceding assistant `tool_calls` message in the Chat Completions payload, which will break multi-turn tool-calling flows.

Either resolve `previous_response_id` into prior messages before building the hub request, or mark these requests as non-bridgeable in `ensure_request_is_bridgeable()` for now.
Temporary guard if history reconstruction is not ready yet:

```diff
 fn ensure_request_is_bridgeable(request: &ResponsesApiRequest) -> Result<()> {
+    let has_function_call_output = matches!(
+        &request.input,
+        ResponsesInput::Items(items)
+            if items.iter().any(|item| matches!(item, ResponsesInputItem::FunctionCallOutput { .. }))
+    );
+
+    if request.previous_response_id.is_some() || has_function_call_output {
+        return Err(GatewayError::Bridge(
+            "Responses API continuation/tool-result turns cannot be bridged through Chat Completions yet"
+                .into(),
+        ));
+    }
+
     if request.background.unwrap_or(false) {
         return Err(GatewayError::Bridge(
             "Responses API background mode cannot be bridged through Chat Completions".into(),
```

Also applies to: 393-417, 420-437, 564-610
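The suggested guard can be exercised in isolation. In this hedged sketch, `ResponsesApiRequest`, `ResponsesInput`, and `ResponsesInputItem` are simplified stand-ins for the real request types in responses.rs, modeling only the fields the guard inspects:

```rust
// Simplified, hypothetical mirrors of the Responses API request types.
#[allow(dead_code)]
enum ResponsesInputItem {
    Message { text: String },
    FunctionCallOutput { call_id: String, output: String },
}

enum ResponsesInput {
    Text(String),
    Items(Vec<ResponsesInputItem>),
}

struct ResponsesApiRequest {
    previous_response_id: Option<String>,
    input: ResponsesInput,
}

// Mirrors the suggested guard: reject continuation turns the bridge
// cannot yet reconstruct into Chat Completions history.
fn ensure_request_is_bridgeable(request: &ResponsesApiRequest) -> Result<(), String> {
    let has_function_call_output = matches!(
        &request.input,
        ResponsesInput::Items(items)
            if items.iter().any(|item| matches!(item, ResponsesInputItem::FunctionCallOutput { .. }))
    );
    if request.previous_response_id.is_some() || has_function_call_output {
        return Err("continuation/tool-result turns cannot be bridged yet".into());
    }
    Ok(())
}

fn main() {
    let plain = ResponsesApiRequest {
        previous_response_id: None,
        input: ResponsesInput::Text("hello".into()),
    };
    assert!(ensure_request_is_bridgeable(&plain).is_ok());

    let continuation = ResponsesApiRequest {
        previous_response_id: Some("resp_123".into()), // illustrative id
        input: ResponsesInput::Text("hello".into()),
    };
    assert!(ensure_request_is_bridgeable(&continuation).is_err());

    let tool_result = ResponsesApiRequest {
        previous_response_id: None,
        input: ResponsesInput::Items(vec![ResponsesInputItem::FunctionCallOutput {
            call_id: "call_1".into(),
            output: "{}".into(),
        }]),
    };
    assert!(ensure_request_is_bridgeable(&tool_result).is_err());
    println!("guard ok");
}
```

The `matches!` guard keeps the predicate in one expression, so the rejection happens before `to_hub()` or `responses_input_item_to_hub_message` ever see an unresolvable continuation.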
Summary by CodeRabbit
New Features
Tests