Added support for custom LLM provider URLs for OpenAI and Anthropic, allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM. #9703
- Add configurable API URL fields for OpenAI and Anthropic providers
- Make API keys optional when using custom URLs (for local providers)
- Auto-clear model dropdown when provider settings change
- Refresh button uses current unsaved form values
- Update documentation and release notes
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
docs/en_US/ai_tools.rst (+11 −4 lines)
@@ -48,15 +48,18 @@ button and select *AI*).
 Select your preferred LLM provider from the dropdown:

 **Anthropic**
-Use Claude models from Anthropic. Requires an Anthropic API key.
+Use Claude models from Anthropic, or any Anthropic-compatible API provider.

-* **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/).
+* **API URL**: Custom API endpoint URL (leave empty for default: https://api.anthropic.com/v1).
+* **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/). Optional when using a custom URL with a provider that does not require authentication.

 * **Model**: Select from available Claude models (e.g., claude-sonnet-4-20250514).

 **OpenAI**
-Use GPT models from OpenAI. Requires an OpenAI API key.
+Use GPT models from OpenAI, or any OpenAI-compatible API provider (e.g.,
+LiteLLM, LM Studio, EXO, or other local inference servers).

-* **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/).
+* **API URL**: Custom API endpoint URL (leave empty for default: https://api.openai.com/v1). Include the ``/v1`` path prefix if required by your provider.
+* **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/). Optional when using a custom URL with a provider that does not require authentication.

 * **Model**: Select from available GPT models (e.g., gpt-4).

 **Ollama**
@@ -72,6 +75,10 @@ Select your preferred LLM provider from the dropdown:
 * **API URL**: The URL of the Docker Model Runner API (default: http://localhost:12434).

 * **Model**: Select from available models or enter a custom model name.

+.. note:: You can also use the *OpenAI* provider with a custom API URL for any
+   OpenAI-compatible endpoint, including Docker Model Runner and other local
+   inference servers.
+
 After configuring your provider, click *Save* to apply the changes.
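The documented behaviour of the **API URL** field (empty means the official endpoint; a custom base URL should already include ``/v1`` when the provider expects it) can be sketched as follows. This is a hypothetical helper for illustration only, not pgAdmin's actual code; `DEFAULT_OPENAI_URL` and `resolve_endpoint` are invented names:

```python
# Hypothetical helper illustrating the documented behaviour: an empty API URL
# falls back to the official endpoint, and the /v1 prefix must already be part
# of the configured base URL when the provider requires it.
DEFAULT_OPENAI_URL = "https://api.openai.com/v1"


def resolve_endpoint(api_url: str, path: str = "/chat/completions") -> str:
    """Return the full request URL for a configured (possibly empty) base URL."""
    base = api_url.strip() or DEFAULT_OPENAI_URL
    return base.rstrip("/") + path


print(resolve_endpoint(""))  # https://api.openai.com/v1/chat/completions
print(resolve_endpoint("http://localhost:1234/v1"))  # LM Studio-style local server
```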
docs/en_US/release_notes_9_14.rst (+1 −0 lines)
@@ -21,6 +21,7 @@ New features
 ************

 |`Issue #4011 <https://github.com/pgadmin-org/pgadmin4/issues/4011>`_ - Added support to download binary data from result grid.
+|`Issue #9703 <https://github.com/pgadmin-org/pgadmin4/issues/9703>`_ - Added support for custom LLM provider URLs for OpenAI and Anthropic, allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM.