Clarify where the API key files should live. #9759

Closed

dpage wants to merge 9 commits into pgadmin-org:master from
Conversation
…y in the AI Assistant. Fixes pgadmin-org#9734
- Anthropic: preserve separators between text blocks in streaming to match _parse_response() behavior.
- Docker: validate that the API URL points to a loopback address to constrain the request surface.
- Docker/OpenAI: raise LLMClientError on empty streams instead of yielding blank LLMResponse objects, matching non-streaming behavior.
- SQL extraction: strip trailing semicolons before joining blocks to avoid double semicolons in output.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
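The semicolon fix in the last bullet can be sketched as follows. This is a hypothetical helper, not pgAdmin's actual extraction code: each fenced SQL block is stripped of its trailing semicolon before the blocks are joined, so the join separator never produces `;;`.

```python
import re

# Match fenced ```sql blocks; backticks are written as `{3} so the
# pattern reads clearly. DOTALL lets the body span multiple lines.
_SQL_FENCE = re.compile(r'`{3}sql\n(.*?)`{3}', re.DOTALL | re.IGNORECASE)

def extract_sql(markdown):
    # Strip each block's trailing semicolon before joining, so the
    # ';\n' separator never yields ';;' in the combined statement.
    blocks = [b.strip().rstrip(';') for b in _SQL_FENCE.findall(markdown)]
    return ';\n'.join(blocks)
```

Note that, as a side effect, a single block (or the last block of a join) carries no trailing semicolon, which is the behavior the later test-update commit accounts for.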
…ing.
- Use distinct 3-tuple ('complete', text, messages) for completion events
to avoid ambiguity with ('tool_use', [...]) 2-tuples in chat streaming.
- Pass conversation history from request into chat_with_database_stream()
so follow-up NLQ turns retain context.
- Add re.IGNORECASE to SQL fence regex for case-insensitive matching.
- Render MarkdownContent as block element instead of span to avoid
invalid DOM when response contains paragraphs, lists, or tables.
- Keep stop notice as a separate message instead of appending to partial
markdown, preventing it from being swallowed by open code fences.
- Snapshot streamingIdRef before setMessages in error handler to avoid
race condition where ref is cleared before React executes the updater.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
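The tuple-shape change in the first bullet can be illustrated with a hypothetical dispatcher (the function name is an assumption, not pgAdmin code): completion is a distinct 3-tuple ('complete', text, messages), tool use stays a 2-tuple ('tool_use', [...]), and anything else is a plain text delta, so unpacking can never confuse the two.

```python
# Classify one event from the chat stream by shape and tag.
def classify_event(event):
    if isinstance(event, tuple) and event[0] == 'complete':
        _, text, messages = event   # ('complete', text, messages)
        return 'complete', text, messages
    if isinstance(event, tuple) and event[0] == 'tool_use':
        return 'tool_use', event[1]  # ('tool_use', [...])
    return 'delta', event            # bare text chunk
```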
- Fix critical NameError: use self._api_url instead of undefined API_URL in anthropic and openai streaming _process_stream() methods.
- Match sync path auth handling: conditionally set API key headers in streaming paths for both anthropic and openai providers.
- Remove unconditional temperature from openai streaming payload to match sync path compatibility approach.
- Add URL scheme validation to OllamaClient.__init__ to prevent unsafe local/resource access via non-http schemes.
- Guard ollama streaming finalizer: raise error when stream drops without a done frame and no content was received.
- Update chat.py type hint and docstring for 3-tuple completion event.
- Serialize and return filtered conversation history in the complete SSE event so the client can round-trip it on follow-up turns.
- Store and send conversation history from NLQChatPanel, clear on conversation reset.
- Fix JSON-fallback SQL render path: clear content when SQL was extracted without fenced blocks so ChatMessage uses sql-only renderer.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
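The scheme-validation bullet might look roughly like the sketch below (this is not pgAdmin's actual constructor): rejecting non-http(s) schemes in OllamaClient.__init__ blocks URLs such as file:// that could otherwise reach local files or other resources.

```python
from urllib.parse import urlparse

class OllamaClient:
    def __init__(self, api_url):
        # Only plain HTTP(S) endpoints are acceptable for the Ollama API;
        # anything else (file://, ftp://, ...) is rejected up front.
        scheme = urlparse(api_url).scheme.lower()
        if scheme not in ('http', 'https'):
            raise ValueError('Unsupported Ollama API URL scheme: %r' % scheme)
        self._api_url = api_url
```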
Adding block scoping to the error case introduced an unmatched brace that prevented the switch statement from closing properly, causing an eslint parse error. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Replace compaction module imports with inline history deserialization
and filtering since compaction.py is on a different branch.
- Add rstrip(';') to SQL extraction test to match production code,
fixing double-semicolon assertion failure.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The rstrip(';') applied to each block before joining means single
blocks and the last block in multi-block joins no longer have
trailing semicolons. Update expected values to match.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Truncated content from a dropped connection should not be treated as a complete response, even if partial text was streamed. Always raise when final_data is None, matching CodeRabbit's recommendation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
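As a minimal sketch of that rule (function and exception names here are illustrative): when the stream ends without a final frame, the finalizer always raises, even if partial text arrived, so a dropped connection is never mistaken for a complete response.

```python
class LLMClientError(Exception):
    pass

def finalize_stream(final_data, streamed_text):
    # final_data is the terminal frame of the stream; None means the
    # connection dropped before completion, so the partial text is
    # discarded rather than returned as if it were a full response.
    if final_data is None:
        raise LLMClientError('stream ended without a completion frame')
    return streamed_text
```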
Walkthrough

This PR adds streaming support to the LLM chat infrastructure across multiple providers (Anthropic, OpenAI, Docker, Ollama), refactors the NLQ chat flow to consume streamed responses with incremental markdown rendering, clarifies API key file path documentation, and updates frontend components and themes accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Browser
    participant pgAdminServer as pgAdmin<br/>Server
    participant LLMChat as LLM Chat<br/>Module
    participant Provider as LLM<br/>Provider
    participant LLMBackend as LLM<br/>Backend API
    Browser->>pgAdminServer: POST /nlq_chat_stream<br/>(user_message, sid, did)
    pgAdminServer->>LLMChat: chat_with_database_stream()<br/>stream generator
    LLMChat->>LLMChat: Build message history<br/>& system prompt
    LLMChat->>Provider: chat_stream(messages, tools)
    loop Streaming Response
        Provider->>LLMBackend: Streaming HTTP POST
        LLMBackend-->>Provider: SSE chunks<br/>(text deltas, tool calls)
        Provider->>Provider: Parse & aggregate<br/>content/tool_calls
        Provider-->>LLMChat: Yield text chunks
        LLMChat->>LLMChat: Accumulate streamed text
        LLMChat-->>pgAdminServer: Yield text delta
        pgAdminServer-->>Browser: SSE event<br/>(text content)
    end
    loop Tool Use (if needed)
        LLMChat->>LLMChat: Detect tool_use event
        LLMChat-->>pgAdminServer: Yield tool_use tuple
        pgAdminServer->>pgAdminServer: Execute tool<br/>(database query)
        LLMChat->>LLMChat: Append tool result<br/>to history
        LLMChat->>Provider: Resume chat_stream
    end
    Provider-->>LLMChat: Yield final LLMResponse<br/>(complete content, usage)
    LLMChat-->>pgAdminServer: Yield ('complete',<br/>final_text, history)
    pgAdminServer->>pgAdminServer: Extract SQL<br/>from markdown
    pgAdminServer-->>Browser: SSE complete event<br/>(SQL, explanation)
    Browser->>Browser: Render streamed<br/>markdown incrementally<br/>with code blocks
```
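The server-side leg of the diagram can be sketched as a plain generator that turns stream output into SSE frames. This assumes the chat generator yields text deltas plus a final ('complete', text, history) tuple; the event names and JSON fields here are illustrative, not pgAdmin's actual wire format.

```python
import json

def sse_events(stream):
    # Convert chat-stream output into Server-Sent Events frames:
    # text deltas become plain data frames, and the terminal
    # ('complete', text, history) tuple becomes a named "complete"
    # event the browser can handle separately.
    for chunk in stream:
        if isinstance(chunk, tuple) and chunk[0] == 'complete':
            _, text, history = chunk
            payload = json.dumps({'text': text, 'history': history})
            yield 'event: complete\ndata: %s\n\n' % payload
        else:
            yield 'data: %s\n\n' % json.dumps({'delta': chunk})
```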
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Contributor
Author
Urgh - based on the wrong branch.
#9758
Summary by CodeRabbit
Bug Fixes
New Features
Documentation