Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Top-level ACME/TLS and server options were consolidated into new global/options blocks in the Caddyfiles; a multi-service Docker Compose stack and environment files were added; and an interactive installer (setup.sh) was introduced.
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Operator
participant Installer as setup.sh
participant DockerCompose as "docker compose"
participant Migrator
participant API
participant Proxy
participant Postgres as plane-db
participant Redis as plane-redis
participant MinIO as plane-minio
participant MQ as plane-mq
Operator->>Installer: run installer / choose action
Installer->>DockerCompose: download docker-compose + plane.env
Installer->>DockerCompose: docker compose up (all services)
DockerCompose->>Postgres: start
DockerCompose->>Redis: start
DockerCompose->>MinIO: start
DockerCompose->>MQ: start
DockerCompose->>Migrator: start (one-off)
Migrator-->>DockerCompose: exit (success/fail)
DockerCompose->>API: start
API->>Postgres: migrate/connect
API->>Redis: connect
API->>MinIO: connect
Installer->>API: poll readiness endpoint
Installer->>Proxy: ensure proxy started / routes configured
Operator-->>Installer: view logs / backup / upgrade
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 3 failed checks (2 warnings, 1 inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/api/plane/settings/common.py (1)
293-311: ⚠️ Potential issue | 🟠 Major: Use a separate, lower default for `DATA_UPLOAD_MAX_MEMORY_SIZE`.

Django uses `DATA_UPLOAD_MAX_MEMORY_SIZE` to limit request body parsing before accessing `request.body` or `request.POST`. Coupling it with `FILE_SIZE_LIMIT` (10GB) exposes authentication, password management, and other lightweight endpoints to unnecessarily high memory demands, widening the DoS surface. These endpoints should not admit 10GB payloads.

Proposed fix

```diff
 FILE_SIZE_LIMIT = int(os.environ.get("FILE_SIZE_LIMIT", 10737418240))  # 10GB default
-DATA_UPLOAD_MAX_MEMORY_SIZE = int(os.environ.get("FILE_SIZE_LIMIT", 10737418240))  # 10GB default
+DATA_UPLOAD_MAX_MEMORY_SIZE = int(
+    os.environ.get("DATA_UPLOAD_MAX_MEMORY_SIZE", 5242880)
+)  # 5MB default
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/plane/settings/common.py` around lines 293 - 311, The current code reuses FILE_SIZE_LIMIT for DATA_UPLOAD_MAX_MEMORY_SIZE, exposing lightweight endpoints to huge in-memory uploads; change DATA_UPLOAD_MAX_MEMORY_SIZE to read its own environment variable (e.g., os.environ.get("DATA_UPLOAD_MAX_MEMORY_SIZE")) and parse it to int with a safe, much smaller default (for example 2_621_440 for 2.5MB or 10_485_760 for 10MB) instead of the 10GB FILE_SIZE_LIMIT; update the symbol DATA_UPLOAD_MAX_MEMORY_SIZE accordingly and ensure FILE_SIZE_LIMIT remains unchanged.
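As a sketch of the proposed separation (env var names as suggested above; the 5MB default is illustrative, not a project constant):

```python
import os

# Keep the large limit for actual file uploads...
FILE_SIZE_LIMIT = int(os.environ.get("FILE_SIZE_LIMIT", 10737418240))  # 10GB

# ...but give in-memory request-body parsing its own, much smaller knob
# so lightweight endpoints never admit multi-gigabyte payloads.
DATA_UPLOAD_MAX_MEMORY_SIZE = int(
    os.environ.get("DATA_UPLOAD_MAX_MEMORY_SIZE", 5242880)  # 5MB default
)
```

With both env vars unset, the settings keep a 10GB file limit while request parsing stays capped at 5MB.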
🧹 Nitpick comments (7)
packages/shared-state/package.json (1)
36-36: Minor: Consider using `catalog:` for zod.

The `zod` dependency uses `^3.22.2` instead of `catalog:`. If managed in the workspace catalog, consider updating for consistency. As per coding guidelines: "Use `catalog:` for external dependencies in imports."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared-state/package.json` at line 36: the dependency entry for "zod" currently uses a caret version ("zod": "^3.22.2"); update this to use the workspace catalog specifier instead (e.g., replace the version string with "catalog:zod" or the appropriate catalog pin) so the dependency is resolved from the workspace catalog; locate the "zod" entry in packages/shared-state/package.json to make this change.

packages/decorators/package.json (1)
25-32: Consider using `catalog:` for external dependencies.

Several dependencies use pinned or caret versions instead of `catalog:`:

- `@types/express`: 4.17.23
- `@types/ws`: ^8.5.10
- `reflect-metadata`: ^0.2.2

If these versions are intentionally pinned for compatibility reasons (e.g., decorator metadata requirements), this is acceptable. Otherwise, consider aligning with the monorepo convention. As per coding guidelines: "Use `catalog:` for external dependencies in imports."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/decorators/package.json` around lines 25 - 32: the devDependencies currently pin external packages (`@types/express`, `@types/ws`, `reflect-metadata`) to specific versions; change those entries to use "catalog:" to align with the monorepo convention unless the pinning is intentional for compatibility (e.g., decorator metadata). Edit the devDependencies block to replace the version strings for "@types/express", "@types/ws", and "reflect-metadata" with "catalog:" (or add a short comment in the PR explaining why a specific version must remain pinned).

packages/logger/package.json (1)
25-34: Consider using `catalog:` for runtime dependencies.

The `express-winston` and `winston` dependencies use caret versions instead of `catalog:`. If these are managed centrally in the workspace catalog, consider updating for consistency. As per coding guidelines: "Use `catalog:` for external dependencies in imports."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/logger/package.json` around lines 25 - 34: the dependencies block pins "express-winston" and "winston" with caret versions instead of using the workspace catalog; replace the caret versions for "express-winston" and "winston" with "catalog:" so they are resolved from the central workspace catalog (edit the "dependencies" section, targeting the "express-winston" and "winston" entries).

apps/web/core/components/inbox/inbox-status-icon.tsx (1)
13-38: Hardcoded hex colors reduce theme consistency and maintainability.

The styling changes replace semantic design tokens (e.g., `text-warning-primary`, `bg-success-subtle`) with hardcoded hex values. This creates several concerns:

- Inconsistent approach: some statuses still use semantic classes (`text-danger-primary`, `text-placeholder`) while others use hex values
- Theme compatibility: hardcoded colors won't adapt if the app supports dark mode or theming
- Maintainability: color changes now require editing multiple components rather than updating design tokens

If specific colors are required, consider defining them as design tokens in your Tailwind config and using semantic class names.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/core/components/inbox/inbox-status-icon.tsx` around lines 13 - 38, ICON_PROPERTIES currently uses hardcoded hex colors for several EInboxIssueStatus entries which breaks theming and consistency; update each status entry (e.g., EInboxIssueStatus.PENDING, DECLINED, SNOOZED, ACCEPTED, DUPLICATE) so their textColor and bgColor callbacks return semantic Tailwind token class names (like text-warning-primary / bg-warning-subtle, text-danger-primary / bg-danger-subtle, text-success-primary / bg-success-subtle, text-secondary, text-placeholder, etc.) instead of hex strings, and ensure the SNOOZED branch still uses the existing semantic tokens; if needed add the missing design tokens to the Tailwind config and replace any remaining hex literals in ICON_PROPERTIES so all statuses consistently use token class names.

apps/web/core/components/pages/navigation-pane/root.tsx (1)
77-88: Consider adding bounds validation for tab index.

The `handleTabChange` callback directly accesses `PAGE_NAVIGATION_PANE_TAB_KEYS[index]` without bounds checking. If an invalid index is passed, `updatedTab` would be `undefined`, potentially causing issues with query parameter updates.

Similarly, if `activeTab` isn't found in `PAGE_NAVIGATION_PANE_TAB_KEYS`, `selectedIndex` would be `-1`, which may cause unexpected behavior in `Tab.Group`.

🛡️ Proposed defensive check

```diff
 const handleTabChange = useCallback(
   (index: number) => {
+    if (index < 0 || index >= PAGE_NAVIGATION_PANE_TAB_KEYS.length) return;
     const updatedTab = PAGE_NAVIGATION_PANE_TAB_KEYS[index];
     const isUpdatedTabInfo = updatedTab === "info";
```

🤖 Prompt for AI Agents
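The guard is language-agnostic; here is a minimal Python sketch of the same clamp-or-bail pattern (the tab keys are an illustrative stand-in for the actual PAGE_NAVIGATION_PANE_TAB_KEYS values):

```python
TAB_KEYS = ["info", "outline"]  # illustrative stand-in for PAGE_NAVIGATION_PANE_TAB_KEYS

def resolve_tab(index):
    """Return the tab key for a valid index, or None to signal
    'ignore this change' instead of yielding an undefined tab."""
    if index < 0 or index >= len(TAB_KEYS):
        return None
    return TAB_KEYS[index]

def selected_index(active_tab):
    """Map an unknown active tab to the first tab instead of -1."""
    return TAB_KEYS.index(active_tab) if active_tab in TAB_KEYS else 0
```

Returning `None`/the default index keeps both failure modes (invalid index, missing active tab) from propagating into query-parameter updates.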
Verify each finding against the current code and only fix it if needed. In `@apps/web/core/components/pages/navigation-pane/root.tsx` around lines 77 - 88, handleTabChange currently indexes PAGE_NAVIGATION_PANE_TAB_KEYS without validation and elsewhere selectedIndex can be -1 when activeTab is missing; add bounds checks to prevent undefined tabs: in handleTabChange (and any caller) validate that index is within 0..PAGE_NAVIGATION_PANE_TAB_KEYS.length-1 and either clamp or return early, derive updatedTab only when valid (fallback to a safe default like PAGE_NAVIGATION_PANE_TAB_KEYS[0] if needed) before calling updateQueryParams, and ensure selectedIndex used by Tab.Group is never -1 by mapping missing activeTab to the default index; reference symbols: handleTabChange, PAGE_NAVIGATION_PANE_TAB_KEYS, PAGE_NAVIGATION_PANE_TABS_QUERY_PARAM, PAGE_NAVIGATION_PANE_VERSION_QUERY_PARAM, selectedIndex, activeTab, Tab.Group.

apps/api/plane/db/models/workspace.py.backup (1)
1-455: Please drop the committed backup copy of the model.

This file is already drifting from `workspace.py` in the same PR (`navigation_project_limit` still defaults to `10` here), so keeping a second schema copy under `db/models` makes it too easy for later changes to land in the wrong place.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/plane/db/models/workspace.py.backup` around lines 1 - 455, Remove the committed backup model file; delete the duplicate db models file containing Workspace/WorkspaceUserProperties (the file with the outdated navigation_project_limit default of 10 and duplicate classes like Workspace, WorkspaceUserProperties, WorkspaceMember, etc.), ensure only the canonical workspace.py remains in the repo, run git rm on the backup file and commit the removal, and confirm there are no imports or references pointing to the removed backup copy.

apps/api/plane/settings/common.py.backup (1)
1-471: Please drop the committed `.backup` settings module.

This file already diverges from `apps/api/plane/settings/common.py` in the same PR, so it becomes a stale second source of truth for deployment-sensitive settings. The same cleanup should apply to the other new `.backup` modules here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/plane/settings/common.py.backup` around lines 1 - 471, There is a committed backup settings module (.backup) that diverges from the canonical settings and acts as a stale second source of truth; remove the backup settings module file (the one containing variables like SECRET_KEY, DEBUG, IS_SELF_MANAGED, and CORS settings) from the PR and repository, and ensure only the main settings module (common.py) remains. Delete the .backup file from git, remove any other similarly named `.backup` settings modules introduced in this PR, update the branch so only the canonical settings are present, and re-run checks to confirm no references remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/Dockerfile.api`:
- Around line 35-39: Remove the insecure "--trusted-host" flags from the pip
invocation (the "pip install -r requirements.txt" command) and instead ensure
the image has proper CA certificates installed; update the Dockerfile to delete
the three --trusted-host entries and add installation of the system CA bundle
(e.g., install the distribution package that provides ca-certificates) before
running pip so TLS verification succeeds without bypassing certificate checks.
In `@apps/api/plane/api/rate_limit.py`:
- Around line 12-14: The rate defaults are insecure and inconsistent: change
ApiKeyRateThrottle.rate from "10000/minute" to use
os.environ.get("API_KEY_RATE_LIMIT", "60/minute") and update
ServiceTokenRateThrottle to stop hardcoding its rate and instead read from an
environment variable (e.g., os.environ.get("SERVICE_TOKEN_RATE_LIMIT",
"60/minute")), keeping their scope names (ApiKeyRateThrottle,
ServiceTokenRateThrottle) unchanged so both throttle classes honor the
documented 60/minute default and allow env overrides.
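A small sketch of the env-override pattern described above (variable names as in the comment; the regex validation is an extra safety net I am assuming, not part of DRF itself):

```python
import os
import re

def env_rate(var_name, default="60/minute"):
    """Read a DRF-style 'N/period' rate string from the environment,
    falling back to the documented default when unset or malformed."""
    rate = os.environ.get(var_name, default)
    if not re.fullmatch(r"\d+/(second|minute|hour|day)", rate):
        return default
    return rate

# Both throttles honor the 60/minute default and allow env overrides.
API_KEY_RATE = env_rate("API_KEY_RATE_LIMIT")
SERVICE_TOKEN_RATE = env_rate("SERVICE_TOKEN_RATE_LIMIT")
```

The throttle classes would then assign these strings to their `rate` attributes while keeping their `scope` names unchanged.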
In `@apps/api/plane/authentication/rate_limit.py`:
- Line 6: The auth throttles are far too permissive and the
EmailVerificationThrottle docstring is inconsistent; remove the unused import os
and tighten the rate limits by updating the throttle classes: change
EmailVerificationThrottle to enforce 3/hour (and update its docstring to match
"3 requests per hour per user"), and reduce the AuthenticationThrottle from
10000/minute to a sensible auth rate (e.g., 5–10/minute) to restore brute-force
protection; locate and edit the classes EmailVerificationThrottle and
AuthenticationThrottle (and any rate attribute or RATE constant they reference)
to set the new numeric rate strings and remove the unused import.
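For intuition, the rate strings mentioned here ("3/hour", "5/minute") decompose into a request allowance and a time window, in the spirit of DRF's SimpleRateThrottle.parse_rate; a simplified sketch:

```python
def parse_rate(rate):
    """Split 'N/period' into (allowed requests, window in seconds),
    e.g. '3/hour' -> (3, 3600). Simplified: full period words only."""
    num, period = rate.split("/")
    seconds = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}[period]
    return int(num), seconds
```

This makes the scale of the change concrete: 10000/minute permits 10000 requests per 60-second window, while 5/minute permits 5.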
In `@apps/api/plane/db/models/workspace.py`:
- Line 329: The model field navigation_project_limit on WorkspaceUserProperties
was changed from default=10 to default=0 but no migration was added; create a
Django migration that uses migrations.AlterField to set navigation_project_limit
to models.IntegerField(default=0, help_text="0 means unlimited") and, if you
need existing rows updated, include a migrations.RunPython forward function to
backfill any rows where navigation_project_limit is the old value (set them to
0) and a no-op reverse. Name the migration following your app’s sequence and
import migrations.AlterField and migrations.RunPython so the DB default and
existing data are updated.
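The RunPython backfill described above boils down to a mapping like this (pure-Python sketch of the forward/reverse logic, not the migration file itself; it assumes only rows still at the old default of 10 should flip to the new sentinel):

```python
OLD_DEFAULT = 10   # previous navigation_project_limit default
UNLIMITED = 0      # new sentinel: 0 means unlimited

def forward(value):
    """Forward backfill: rows at the old default become unlimited;
    customized values are left untouched."""
    return UNLIMITED if value == OLD_DEFAULT else value

def reverse(value):
    """No-op reverse, as suggested in the comment."""
    return value
```

Inside the migration, `forward` would be applied per row via migrations.RunPython alongside the AlterField that changes the column default.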
In `@apps/api/requirements/test.txt`:
- Line 12: The PR changed the test dependency in apps/api/requirements/test.txt
from requests==2.33.0 to requests==2.32.4 without explanation; update the PR
description or add a short note in the changelog/commit message explaining the
rationale (e.g., test-suite incompatibility, transitive dependency conflict, or
a failing integration) and include evidence such as failing test names or error
messages, the affected test environment, and any steps taken to verify 2.32.4
resolves the issue so reviewers can validate the downgrade of requests==2.32.4.
In `@apps/proxy/Caddyfile.aio.ce`:
- Line 8: The Caddy config line setting trusted_proxies uses an overly
permissive default ("trusted_proxies static {$TRUSTED_PROXIES:0.0.0.0/0}") which
allows IP header spoofing; change the default to use private_ranges instead so
only RFC1918/local ranges are trusted. Update the trusted_proxies declaration
(the line referencing trusted_proxies and the env var $TRUSTED_PROXIES) to
default to private_ranges and ensure any documentation or env examples reflect
the new default.
In `@apps/proxy/Caddyfile.ce`:
- Line 8: The Caddyfile sets trusted_proxies with an overly permissive default
("trusted_proxies static {$TRUSTED_PROXIES:0.0.0.0/0}"); change this to a
restrictive default or require explicit configuration by replacing the default
value with a safer option such as "private_ranges" or removing the permissive
fallback so the variable must be set; update the trusted_proxies line
(referencing trusted_proxies and the {$TRUSTED_PROXIES} variable) accordingly
and add a comment or validation to ensure deploys do not use a 0.0.0.0/0
default.
In `@apps/web/core/components/analytics/overview/active-project-item.tsx`:
- Line 26: The completion badge currently applies identical semantic tokens for
background and text (the conditional that uses percentage to pick
"bg-success-primary text-success-primary" or "bg-danger-primary
text-danger-primary"), causing poor contrast; update the class selection in
active-project-item.tsx so the background uses the primary semantic token but
the text uses a contrasting foreground token (e.g., "bg-success-primary
text-success-foreground" and "bg-danger-primary text-danger-foreground" or a
similar readable text token from your design system) so the conditional that
references percentage uses mixed bg/text tokens rather than identical ones.
In `@apps/web/core/components/inbox/content/inbox-issue-header.tsx`:
- Around line 299-312: The "previous" and "next" native buttons rendering
ChevronUpIcon/ChevronDownIcon lack accessible names; update the two button
elements that call handleInboxIssueNavigation("prev") and
handleInboxIssueNavigation("next") to include clear accessible labels (e.g.,
aria-label="Previous issue" and aria-label="Next issue" or aria-labelledby
references) so screen readers announce the action; ensure the labels are concise
and match the buttons' behavior.
- Around line 319-323: The accept/decline Button components use
focus:text-success-primary and focus:text-danger-primary which cause the
label/icon to blend into the solid background on focus; update the Button
className for both the accept (uses CircleCheck) and decline buttons to remove
the focus:text-success-primary / focus:text-danger-primary rules and instead use
a visible focus foreground (e.g., focus:text-on-color or no focus text color
change) so the label and prependIcon remain visible on focus; keep the existing
background/focus background classes (bg-success-primary/bg-danger-primary and
their focus variants) and ensure the prependIcon stays unchanged so icons remain
visible and accessible.
In `@apps/web/core/components/inbox/content/inbox-issue-mobile-header.tsx`:
- Around line 105-118: The previous/next arrow buttons in InboxIssueMobileHeader
are icon-only and lack accessible names; update the two button elements that
call handleInboxIssueNavigation("prev") and handleInboxIssueNavigation("next")
to include descriptive accessible labels (e.g., aria-label="Previous issue" and
aria-label="Next issue" or use visually hidden text) so screen readers can
announce their purpose, ensuring the labels match the ChevronUpIcon and
ChevronDownIcon actions and remain localized if applicable.
In `@apps/web/core/components/profile/overview/activity.tsx`:
- Around line 53-62: The img and fallback avatar can render empty alt text and
an empty initial when activity.actor_detail.display_name is missing; update the
avatar rendering (references: activity.actor_detail, getFileURL, the <img> and
the fallback div) to always provide a meaningful accessible label and visible
fallback: set the img alt to activity.actor_detail.display_name || "User avatar"
(or similar descriptive text) and for the fallback initial render the first
character of display_name if present or a default glyph like "?" when not, and
ensure the fallback has an appropriate aria-label/role to announce it to
assistive tech.
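The fallback logic can be summarized as follows (Python sketch of the rendering decision; "User avatar" and "?" are the suggested defaults, not existing constants):

```python
def avatar_props(display_name):
    """Always produce a non-empty alt text and a visible fallback
    initial, even when display_name is missing or blank."""
    name = (display_name or "").strip()
    return {
        "alt": name or "User avatar",
        "initial": name[0].upper() if name else "?",
    }
```

The component would use `alt` on the img and `initial` in the fallback div, with an aria-label announcing the same text.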
In `@apps/web/core/constants/plans.tsx`:
- Around line 1303-1309: The planHighlights entries use inconsistent casing for
time tracking ("Full time tracking" vs "Full Time Tracking"); update the strings
in the planHighlights object (keys free, pro, one, etc.) to use a consistent
casing—pick one style (e.g., "Full Time Tracking") and replace the mismatched
occurrences so all highlight strings are uniform across free, one, pro,
business, enterprise where applicable.
In `@apps/web/core/constants/plans.tsx.backup`:
- Around line 1-1311: This file is a development backup and should be removed
from the PR—delete the backup file containing symbols like PLANE_PLANS,
PLANS_COMPARISON_LIST, PLANS_LIST, ComingSoonBadge and ForumIcon, remove it from
git (unstage/remove and commit the deletion) so it no longer appears in the
commit set, and ensure no other .backup artifacts are included (or add a rule to
ignore such backup files if needed).
In `@apps/web/core/hooks/use-navigation-preferences.ts`:
- Around line 121-132: The memo incorrectly rewrites the sentinel 0 (meaning
“unlimited”) into Number.MAX_SAFE_INTEGER and sets showLimitedProjects true;
change the logic in the branch using storePreferences (and the same logic around
151-163) to preserve the raw stored navigation_project_limit (including 0) for
limitedProjectsCount, compute showLimitedProjects as true only when the stored
limit is a positive number, and keep navigationMode from
navigation_control_preference (falling back to
DEFAULT_PROJECT_PREFERENCES.navigationMode) — in short: do not convert 0 to
MAX_SAFE_INTEGER, return the stored value
(storePreferences.navigation_project_limit) and derive showLimitedProjects as
storePreferences.navigation_project_limit > 0.
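In short, the memo should pass the stored value through unchanged; a Python sketch of the corrected derivation (field names mirror the comment):

```python
def derive(navigation_project_limit):
    """Preserve the stored limit as-is, including the 0 'unlimited'
    sentinel, and flag limiting only for positive values."""
    return {
        "limitedProjectsCount": navigation_project_limit,
        "showLimitedProjects": navigation_project_limit > 0,
    }
```

Note that 0 no longer becomes Number.MAX_SAFE_INTEGER and no longer enables limiting.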
In `@packages/services/package.json`:
- Around line 9-14: The package.json has inconsistent type entry points: the
top-level "types" field points to "./dist/index.d.ts" while exports["."].types
points to "./dist/index-BliaS-AT.d.ts"; update exports["."].types to
"./dist/index.d.ts" so the "types" field and exports["."].types match (ensure
you modify the exports block entry for "." to use "./dist/index.d.ts" rather
than the hashed filename).
---
Outside diff comments:
In `@apps/api/plane/settings/common.py`:
- Around line 293-311: The current code reuses FILE_SIZE_LIMIT for
DATA_UPLOAD_MAX_MEMORY_SIZE, exposing lightweight endpoints to huge in-memory
uploads; change DATA_UPLOAD_MAX_MEMORY_SIZE to read its own environment variable
(e.g., os.environ.get("DATA_UPLOAD_MAX_MEMORY_SIZE")) and parse it to int with a
safe, much smaller default (for example 2_621_440 for 2.5MB or 10_485_760 for
10MB) instead of the 10GB FILE_SIZE_LIMIT; update the symbol
DATA_UPLOAD_MAX_MEMORY_SIZE accordingly and ensure FILE_SIZE_LIMIT remains
unchanged.
---
Nitpick comments:
In `@apps/api/plane/db/models/workspace.py.backup`:
- Around line 1-455: Remove the committed backup model file; delete the
duplicate db models file containing Workspace/WorkspaceUserProperties (the file
with the outdated navigation_project_limit default of 10 and duplicate classes
like Workspace, WorkspaceUserProperties, WorkspaceMember, etc.), ensure only the
canonical workspace.py remains in the repo, run git rm on the backup file and
commit the removal, and confirm there are no imports or references pointing to
the removed backup copy.
In `@apps/api/plane/settings/common.py.backup`:
- Around line 1-471: There is a committed backup settings module (.backup) that
diverges from the canonical settings and acts as a stale second source of truth;
remove the backup settings module file (the one containing variables like
SECRET_KEY, DEBUG, IS_SELF_MANAGED, and CORS settings) from the PR and
repository, and ensure only the main settings module (common.py) remains. Delete
the .backup file from git, remove any other similarly named `.backup` settings
modules introduced in this PR, update the branch so only the canonical settings
are present, and re-run checks to confirm no references remain.
In `@apps/web/core/components/inbox/inbox-status-icon.tsx`:
- Around line 13-38: ICON_PROPERTIES currently uses hardcoded hex colors for
several EInboxIssueStatus entries which breaks theming and consistency; update
each status entry (e.g., EInboxIssueStatus.PENDING, DECLINED, SNOOZED, ACCEPTED,
DUPLICATE) so their textColor and bgColor callbacks return semantic Tailwind
token class names (like text-warning-primary / bg-warning-subtle,
text-danger-primary / bg-danger-subtle, text-success-primary /
bg-success-subtle, text-secondary, text-placeholder, etc.) instead of hex
strings, and ensure the SNOOZED branch still uses the existing semantic tokens;
if needed add the missing design tokens to the Tailwind config and replace any
remaining hex literals in ICON_PROPERTIES so all statuses consistently use token
class names.
In `@apps/web/core/components/pages/navigation-pane/root.tsx`:
- Around line 77-88: handleTabChange currently indexes
PAGE_NAVIGATION_PANE_TAB_KEYS without validation and elsewhere selectedIndex can
be -1 when activeTab is missing; add bounds checks to prevent undefined tabs: in
handleTabChange (and any caller) validate that index is within
0..PAGE_NAVIGATION_PANE_TAB_KEYS.length-1 and either clamp or return early,
derive updatedTab only when valid (fallback to a safe default like
PAGE_NAVIGATION_PANE_TAB_KEYS[0] if needed) before calling updateQueryParams,
and ensure selectedIndex used by Tab.Group is never -1 by mapping missing
activeTab to the default index; reference symbols: handleTabChange,
PAGE_NAVIGATION_PANE_TAB_KEYS, PAGE_NAVIGATION_PANE_TABS_QUERY_PARAM,
PAGE_NAVIGATION_PANE_VERSION_QUERY_PARAM, selectedIndex, activeTab, Tab.Group.
In `@packages/decorators/package.json`:
- Around line 25-32: The devDependencies in package.json currently pin external
packages (`@types/express`, `@types/ws`, reflect-metadata) to specific versions;
change those entries to use "catalog:" to align with the monorepo convention
unless the pinning is intentional for compatibility (e.g., decorator metadata).
Edit the devDependencies block to replace the version strings for
"@types/express", "@types/ws", and "reflect-metadata" with "catalog:" (or add a
short comment in the PR explaining why a specific version must remain pinned if
so).
In `@packages/logger/package.json`:
- Around line 25-34: The dependencies block in package.json pins
"express-winston" and "winston" with caret versions instead of using the
workspace catalog; update the runtime dependencies to use the catalog alias by
replacing the caret versions for "express-winston" and "winston" with "catalog:"
so they are resolved from the central workspace catalog (edit the "dependencies"
section in package.json, targeting the "express-winston" and "winston" entries).
In `@packages/shared-state/package.json`:
- Line 36: The package.json dependency entry for "zod" currently uses a caret
version ("zod": "^3.22.2"); update this to use the workspace catalog specifier
instead (e.g., replace the version string with "catalog:zod" or the appropriate
catalog pin) so the dependency is resolved from the workspace catalog; locate
the "zod" entry in packages/shared-state/package.json to make this change.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7b26f2bc-99c4-4117-b36b-711b2137578b
📒 Files selected for processing (45)
- apps/api/Dockerfile.api
- apps/api/plane/api/rate_limit.py
- apps/api/plane/api/rate_limit.py.backup
- apps/api/plane/authentication/rate_limit.py
- apps/api/plane/authentication/rate_limit.py.backup
- apps/api/plane/db/models/workspace.py
- apps/api/plane/db/models/workspace.py.backup
- apps/api/plane/settings/common.py
- apps/api/plane/settings/common.py.backup
- apps/api/requirements/test.txt
- apps/live/package.json
- apps/proxy/Caddyfile.aio.ce
- apps/proxy/Caddyfile.ce
- apps/web/core/components/analytics/overview/active-project-item.tsx
- apps/web/core/components/inbox/content/inbox-issue-header.tsx
- apps/web/core/components/inbox/content/inbox-issue-mobile-header.tsx
- apps/web/core/components/inbox/inbox-status-icon.tsx
- apps/web/core/components/issues/issue-detail-widgets/sub-issues/issues-list/properties.tsx
- apps/web/core/components/pages/navigation-pane/root.tsx
- apps/web/core/components/pages/navigation-pane/tabs-list.tsx
- apps/web/core/components/profile/overview/activity.tsx
- apps/web/core/constants/plans.tsx
- apps/web/core/constants/plans.tsx.backup
- apps/web/core/hooks/use-navigation-preferences.ts
- packages/constants/package.json
- packages/decorators/package.json
- packages/editor/package.json
- packages/hooks/package.json
- packages/hooks/tsdown.config.ts
- packages/i18n/package.json
- packages/i18n/tsdown.config.ts
- packages/logger/package.json
- packages/propel/package.json
- packages/propel/src/icons/misc/index.ts
- packages/services/package.json
- packages/services/tsdown.config.ts
- packages/shared-state/package.json
- packages/shared-state/tsdown.config.ts
- packages/tailwind-config/package.json
- packages/types/package.json
- packages/ui/package.json
- packages/ui/src/progress/circular-progress-indicator.tsx
- packages/ui/tsdown.config.ts
- packages/utils/package.json
- packages/utils/tsdown.config.ts
💤 Files with no reviewable changes (8)
- packages/hooks/tsdown.config.ts
- packages/services/tsdown.config.ts
- packages/ui/tsdown.config.ts
- packages/i18n/tsdown.config.ts
- packages/shared-state/tsdown.config.ts
- packages/propel/src/icons/misc/index.ts
- packages/utils/tsdown.config.ts
- apps/web/core/components/issues/issue-detail-widgets/sub-issues/issues-list/properties.tsx
apps/api/Dockerfile.api (outdated)

```
pip install -r requirements.txt \
    --trusted-host pypi.org \
    --trusted-host pypi.python.org \
    --trusted-host files.pythonhosted.org \
    --compile --no-cache-dir --default-timeout=100 --retries 5 \
```
🧩 Analysis chain
🌐 Web query:
What does pip's --trusted-host option do, and does it disable normal HTTPS certificate validation for that host?
💡 Result:
The pip --trusted-host option marks a specified host (or host:port) as trusted, allowing pip to connect to it even if it lacks valid HTTPS or has certificate issues. Yes: it disables normal HTTPS certificate validation for that host by treating it as secure, bypassing SSL verification errors like CERTIFICATE_VERIFY_FAILED. This is intended for insecure HTTP connections or self-signed/invalid certs on HTTPS, but it reduces security by exposing connections to man-in-the-middle attacks. Usage example: pip install --trusted-host example.com package. It can be set via the command line, the PIP_TRUSTED_HOST env var, or pip.conf/ini.
Citations:
- 1: https://pip.pypa.io/en/stable/cli/pip/?highlight=trusted-host
- 2: https://pip.pypa.io/en/stable/topics/https-certificates.html
- 3: pip ignores --trusted-host pypa/pip#5363
- 4: Move PackageFinder's --trusted-host logic to PipSession pypa/pip#6903
- 5: https://codegive.com/blog/python_pip_trusted_host.php
Remove the --trusted-host flags from pip install.
These flags disable HTTPS certificate validation, which weakens build-time supply-chain security by allowing pip to bypass SSL verification for PyPI hosts. Since pypi.org, pypi.python.org, and files.pythonhosted.org have valid certificates, these flags are unnecessary. If certificate issues exist, install the required CA certificates in the image instead of trusting hosts blindly.
Proposed fix

```diff
-pip install -r requirements.txt \
-    --trusted-host pypi.org \
-    --trusted-host pypi.python.org \
-    --trusted-host files.pythonhosted.org \
-    --compile --no-cache-dir --default-timeout=100 --retries 5 \
+pip install -r requirements.txt \
+    --compile --no-cache-dir --default-timeout=100 --retries 5 \
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```
pip install -r requirements.txt \
    --compile --no-cache-dir --default-timeout=100 --retries 5 \
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/Dockerfile.api` around lines 35 - 39, Remove the insecure
"--trusted-host" flags from the pip invocation (the "pip install -r
requirements.txt" command) and instead ensure the image has proper CA
certificates installed; update the Dockerfile to delete the three --trusted-host
entries and add installation of the system CA bundle (e.g., install the
distribution package that provides ca-certificates) before running pip so TLS
verification succeeds without bypassing certificate checks.
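The recommended alternative can be sketched as a Dockerfile fragment; the base image, package manager, and requirements path below are illustrative assumptions, not taken from the repository's actual `Dockerfile.api`:

```dockerfile
# Hypothetical sketch: install the system CA bundle so pip can verify TLS,
# then install dependencies without any --trusted-host bypass.
FROM python:3.12-slim

# Debian-based images provide ca-certificates via apt; keeping the trust
# store current avoids disabling verification per host.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install -r requirements.txt \
    --compile --no-cache-dir --default-timeout=100 --retries 5
```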
apps/api/plane/api/rate_limit.py
Outdated
```diff
 class ApiKeyRateThrottle(SimpleRateThrottle):
     scope = "api_key"
-    rate = os.environ.get("API_KEY_RATE_LIMIT", "60/minute")
+    rate = os.environ.get("API_KEY_RATE_LIMIT", "10000/minute")
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect which request handlers use these throttles.
rg -n -C2 --glob '!**/*.backup' '\b(ApiKeyRateThrottle|ServiceTokenRateThrottle)\b'
Repository: makeplane/plane
Length of output: 1659
🏁 Script executed:
cat -n apps/api/plane/api/rate_limit.py
Repository: makeplane/plane
Length of output: 3587
🏁 Script executed:
# Check if there are any comments or documentation about rate limits
rg -n "rate|limit|RATE|LIMIT" apps/api/plane/api/rate_limit.py -i
Repository: makeplane/plane
Length of output: 845
🏁 Script executed:
# Look for environment variable documentation
fd -i readme -o -i config -o -i env | head -20
Repository: makeplane/plane
Length of output: 175
🏁 Script executed:
# Check for any GitHub issues or comments about rate limiting
rg -i "rate.*limit|throttl" --type md --type txt . | grep -i "api.*key\|service.*token" | head -20
Repository: makeplane/plane
Length of output: 162
🏁 Script executed:
# Search all documentation for rate limit mentions
rg -i "rate.*limit|throttle" --type md . | head -40
Repository: makeplane/plane
Length of output: 162
🏁 Script executed:
# Check if there are any other env var defaults documented
rg -i "API_KEY_RATE_LIMIT|SERVICE.*TOKEN.*RATE" . --type md
Repository: makeplane/plane
Length of output: 162
🏁 Script executed:
# Look for any comments in code about why these specific values were chosen
rg -B5 -A5 "10000" apps/api/plane/api/rate_limit.py
Repository: makeplane/plane
Length of output: 733
🏁 Script executed:
# Check for environment configuration examples or documentation
fd -i ".env" -o -i "example" -o -i "config" | grep -E "env|config|example" | head -20
Repository: makeplane/plane
Length of output: 175
🏁 Script executed:
# Look for any changelog or migration notes about rate limit changes
rg -i "rate.limit" --type md . | grep -E "change|update|migration|default"
Repository: makeplane/plane
Length of output: 162
🏁 Script executed:
# Check if ServiceTokenRateThrottle has any environment variable option elsewhere
rg -i "SERVICE_TOKEN_RATE" .
Repository: makeplane/plane
Length of output: 41
Code defaults contradict documentation and undermine rate limit security.
The defaults in rate_limit.py are set to 10000/minute but the deployment documentation (deployments/aio/community/README.md) specifies API_KEY_RATE_LIMIT should default to 60/minute. At 10000/minute (~166 req/sec per token), these limits provide minimal protection against leaked or compromised credentials. Additionally, ServiceTokenRateThrottle hardcodes the rate with no environment override, unlike ApiKeyRateThrottle.
Revert both to documented and secure defaults:
Proposed fix
class ApiKeyRateThrottle(SimpleRateThrottle):
scope = "api_key"
- rate = os.environ.get("API_KEY_RATE_LIMIT", "10000/minute")
+ rate = os.environ.get("API_KEY_RATE_LIMIT", "60/minute")
@@
class ServiceTokenRateThrottle(SimpleRateThrottle):
scope = "service_token"
- rate = "10000/minute"
+ rate = os.environ.get("SERVICE_TOKEN_RATE_LIMIT", "300/minute")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/plane/api/rate_limit.py` around lines 12 - 14, The rate defaults are
insecure and inconsistent: change ApiKeyRateThrottle.rate from "10000/minute" to
use os.environ.get("API_KEY_RATE_LIMIT", "60/minute") and update
ServiceTokenRateThrottle to stop hardcoding its rate and instead read from an
environment variable (e.g., os.environ.get("SERVICE_TOKEN_RATE_LIMIT",
"60/minute")), keeping their scope names (ApiKeyRateThrottle,
ServiceTokenRateThrottle) unchanged so both throttle classes honor the
documented 60/minute default and allow env overrides.
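The suggested pattern — one env-overridable default per throttle — can be sketched without the DRF dependency; the helper name and the 300/minute service-token default are illustrative assumptions, and in the real code these would be class attributes on `SimpleRateThrottle` subclasses:

```python
import os


def throttle_rate(env_var: str, default: str) -> str:
    # Resolve a DRF-style "<count>/<period>" rate string, preferring the
    # environment override and falling back to a conservative default.
    return os.environ.get(env_var, default)


# Hypothetical module-level rates mirroring ApiKeyRateThrottle and
# ServiceTokenRateThrottle from the review.
API_KEY_RATE = throttle_rate("API_KEY_RATE_LIMIT", "60/minute")
SERVICE_TOKEN_RATE = throttle_rate("SERVICE_TOKEN_RATE_LIMIT", "300/minute")
```

This keeps both throttles overridable per deployment while the shipped defaults stay conservative.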
```diff
     display_properties = models.JSONField(default=get_default_display_properties)
     rich_filters = models.JSONField(default=dict)
-    navigation_project_limit = models.IntegerField(default=10)
+    navigation_project_limit = models.IntegerField(default=0, help_text="0 means unlimited")
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether this PR includes both:
# 1) a migration altering WorkspaceUserProperties.navigation_project_limit, and
# 2) a data backfill for existing rows if all current users should become unlimited.
# Expected: at least one migration touching the field, and a RunPython/SQL update if existing data must change.
rg -n -C3 --glob 'apps/api/**/migrations/*.py' \
  'navigation_project_limit|WorkspaceUserProperties|AlterField|RunPython|UPDATE'
Repository: makeplane/plane
Length of output: 50372
🏁 Script executed:
#!/bin/bash
# More targeted search for navigation_project_limit in migrations
rg -n 'navigation_project_limit' apps/api/plane/db/migrations/
# Also check the current model definition
echo "=== Model definition ==="
fd 'workspace.py' apps/api/plane/db/models/ --exec cat -n {} \; | grep -A2 -B2 'navigation_project_limit'
Repository: makeplane/plane
Length of output: 878
🏁 Script executed:
cat -n apps/api/plane/db/migrations/0110_workspaceuserproperties_navigation_control_preference_and_more.py
Repository: makeplane/plane
Length of output: 926
🏁 Script executed:
# Find migrations numbered 0111 and higher that touch WorkspaceUserProperties or navigation_project_limit
rg -n 'navigation_project_limit|workspaceuserproperties' apps/api/plane/db/migrations/ | grep -E '011[1-9]|012[0-9]|013[0-9]|014[0-9]'
Repository: makeplane/plane
Length of output: 623
🏁 Script executed:
echo "=== Migration 0113 ==="
sed -n '45,65p' apps/api/plane/db/migrations/0113_webhook_version.py
echo ""
echo "=== Migration 0118 ==="
cat -n apps/api/plane/db/migrations/0118_remove_workspaceuserproperties_product_tour_and_more.py | head -100
Repository: makeplane/plane
Length of output: 3682
🏁 Script executed:
# Search more thoroughly for the AlterField on navigation_project_limit
find apps/api/plane/db/migrations -name '*.py' -exec grep -l 'navigation_project_limit' {} \;
# Also list all migration files to see what numbers exist
ls -1 apps/api/plane/db/migrations/*.py | tail -20
Repository: makeplane/plane
Length of output: 1605
🏁 Script executed:
# Check earlier migrations that mention navigation_project_limit
echo "=== Migration 0102 ==="
grep -n -A2 -B2 'navigation_project_limit' apps/api/plane/db/migrations/0102_page_sort_order_pagelog_entity_type_and_more.py
echo ""
echo "=== Migration 0103 ==="
grep -n -A2 -B2 'navigation_project_limit' apps/api/plane/db/migrations/0103_fileasset_asset_entity_type_idx_and_more.py
Repository: makeplane/plane
Length of output: 105
🏁 Script executed:
# Check if there are any uncommitted/untracked migration files
ls -la apps/api/plane/db/migrations/ | grep -E '012[1-9]|013'
# Also check git status for the migrations directory
git status apps/api/plane/db/migrations/ | head -30
Repository: makeplane/plane
Length of output: 592
🏁 Script executed:
# Verify the exact model definition at line 329
sed -n '325,335p' apps/api/plane/db/models/workspace.py
Repository: makeplane/plane
Length of output: 611
Missing migration for changing the field default—existing rows won't be updated.
The model default changed from 10 to 0, but there's no migration file to apply this change to the database schema. Existing WorkspaceUserProperties rows will retain their current navigation_project_limit values (the old default or whatever was previously set). Create an AlterField migration to change the database default. If you also need to backfill existing rows to the new default, add a RunPython operation in the same migration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/plane/db/models/workspace.py` at line 329, The model field
navigation_project_limit on WorkspaceUserProperties was changed from default=10
to default=0 but no migration was added; create a Django migration that uses
migrations.AlterField to set navigation_project_limit to
models.IntegerField(default=0, help_text="0 means unlimited") and, if you need
existing rows updated, include a migrations.RunPython forward function to
backfill any rows where navigation_project_limit is the old value (set them to
0) and a no-op reverse. Name the migration following your app’s sequence and
import migrations.AlterField and migrations.RunPython so the DB default and
existing data are updated.
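A sketch of the migration the prompt describes; this is a non-runnable scaffold, and the app label, dependency name, and the choice to treat every remaining value of 10 as "was never customized" are assumptions the team would need to confirm against the real migration sequence:

```python
# Hypothetical migration scaffold — requires the actual Django project;
# migration numbers and app label are placeholders.
from django.db import migrations, models


def backfill_unlimited(apps, schema_editor):
    # Move rows still holding the old default of 10 to the new
    # "0 means unlimited" sentinel; other explicit values are left alone.
    Props = apps.get_model("db", "WorkspaceUserProperties")
    Props.objects.filter(navigation_project_limit=10).update(navigation_project_limit=0)


class Migration(migrations.Migration):
    dependencies = [("db", "0120_previous_migration")]  # placeholder

    operations = [
        migrations.AlterField(
            model_name="workspaceuserproperties",
            name="navigation_project_limit",
            field=models.IntegerField(default=0, help_text="0 means unlimited"),
        ),
        migrations.RunPython(backfill_unlimited, migrations.RunPython.noop),
    ]
```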
apps/api/requirements/test.txt
Outdated
```diff
 coverage==7.2.7
 httpx==0.24.1
-requests==2.33.0
\ No newline at end of file
+requests==2.32.4
\ No newline at end of file
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check if requests 2.33.0 exists, verify 2.32.4 security, and check base.txt consistency
echo "=== Checking available requests versions on PyPI ==="
curl -s https://pypi.org/pypi/requests/json | jq -r '.releases | keys[]' | grep -E '^2\.(32|33)' | sort -V | tail -10
echo -e "\n=== Checking latest stable version ==="
curl -s https://pypi.org/pypi/requests/json | jq -r '.info.version'
echo -e "\n=== Checking for security advisories for requests package ==="
gh api graphql -f query='
{
securityVulnerabilities(first: 10, ecosystem: PIP, package: "requests") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
echo -e "\n=== Checking base.txt for requests version ==="
rg -n "^requests==" apps/api/requirements/base.txt
Repository: makeplane/plane
Length of output: 478
🏁 Script executed:
# Check for CVEs and security issues for requests 2.32.4
echo "=== Checking security advisories for requests package ==="
echo "Attempting to fetch from known vulnerability databases..."
# Try checking via curl to common sources
echo -e "\n--- Attempting safety-db check ---"
curl -s 'https://api.github.com/repos/pyupio/safety-db/contents/insecure_full.json' 2>/dev/null | head -20 || echo "Safety DB unavailable"
# Check git log for this specific change
echo -e "\n=== Checking git history for requests version changes ==="
git log -p --all -- apps/api/requirements/test.txt | head -100
# Check if requests is imported elsewhere to understand why it might be downgraded
echo -e "\n=== Checking usage of requests in test code ==="
rg "import requests|from requests" apps/api/ --type py | head -20
Repository: makeplane/plane
Length of output: 1099
🏁 Script executed:
# Let me check the actual contents of base.txt to understand the dependency chain
echo "=== Content of base.txt ==="
cat -n apps/api/requirements/base.txt | grep -i requests
echo -e "\n=== Full base.txt for context ==="
cat apps/api/requirements/base.txt
echo -e "\n=== Check test.txt dependencies ==="
head -n 20 apps/api/requirements/test.txt
echo -e "\n=== Check if there are any comments or explanations in test.txt ==="
cat -n apps/api/requirements/test.txt
Repository: makeplane/plane
Length of output: 1907
Provide justification for the requests version downgrade.
The downgrade from requests==2.33.0 to 2.32.4 is intentional and both versions are valid. However, the PR description is empty with no explanation for this change. Since requests==2.33.0 is the current latest stable version on PyPI, downgrading to an older patch version requires documented rationale.
Please clarify why version 2.32.4 is necessary for the test environment—whether this is due to test suite compatibility issues, dependency conflicts, or other concerns.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/requirements/test.txt` at line 12, The PR changed the test
dependency in apps/api/requirements/test.txt from requests==2.33.0 to
requests==2.32.4 without explanation; update the PR description or add a short
note in the changelog/commit message explaining the rationale (e.g., test-suite
incompatibility, transitive dependency conflict, or a failing integration) and
include evidence such as failing test names or error messages, the affected test
environment, and any steps taken to verify 2.32.4 resolves the issue so
reviewers can validate the downgrade of requests==2.32.4.
apps/web/core/constants/plans.tsx
Outdated
```tsx
planHighlights: {
  free: ["Unlimited users", "Pages", "Unlimited projects", "Unlimited cycles and modules"],
  one: ["Unlimited users", "OIDC and SAML", "Active cycles", "Full time tracking"],
  pro: ["Unlimited users", "Custom work items + Properties", "Work item templates", "Full Time Tracking"],
  business: ["Unlimited users", "RBAC", "Project Templates", "Baselines And Deviations", "Custom Reports"],
  enterprise: ["Unlimited users", "Private + managed deployments", "GAC", "LDAP support", "Databases + Formulas"],
},
```
Inconsistent casing in time tracking highlight text.
Line 1305 uses "Full time tracking" while Line 1306 uses "Full Time Tracking". Consider using consistent casing across all highlight strings for UI consistency.
📝 Suggested fix
```diff
 planHighlights: {
   free: ["Unlimited users", "Pages", "Unlimited projects", "Unlimited cycles and modules"],
-  one: ["Unlimited users", "OIDC and SAML", "Active cycles", "Full time tracking"],
+  one: ["Unlimited users", "OIDC and SAML", "Active cycles", "Full Time Tracking"],
   pro: ["Unlimited users", "Custom work items + Properties", "Work item templates", "Full Time Tracking"],
   business: ["Unlimited users", "RBAC", "Project Templates", "Baselines And Deviations", "Custom Reports"],
   enterprise: ["Unlimited users", "Private + managed deployments", "GAC", "LDAP support", "Databases + Formulas"],
 },
```

📝 Committable suggestion
```tsx
planHighlights: {
  free: ["Unlimited users", "Pages", "Unlimited projects", "Unlimited cycles and modules"],
  one: ["Unlimited users", "OIDC and SAML", "Active cycles", "Full Time Tracking"],
  pro: ["Unlimited users", "Custom work items + Properties", "Work item templates", "Full Time Tracking"],
  business: ["Unlimited users", "RBAC", "Project Templates", "Baselines And Deviations", "Custom Reports"],
  enterprise: ["Unlimited users", "Private + managed deployments", "GAC", "LDAP support", "Databases + Formulas"],
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/core/constants/plans.tsx` around lines 1303 - 1309, The
planHighlights entries use inconsistent casing for time tracking ("Full time
tracking" vs "Full Time Tracking"); update the strings in the planHighlights
object (keys free, pro, one, etc.) to use a consistent casing—pick one style
(e.g., "Full Time Tracking") and replace the mismatched occurrences so all
highlight strings are uniform across free, one, pro, business, enterprise where
applicable.
This reverts commit e745bd2.
Actionable comments posted: 5
🧹 Nitpick comments (5)
setup.sh (3)
219-222: Separate declaration from assignment for `APP_RELEASE`.

Same SC2155 pattern: if `checkLatestRelease` fails but still emits output, the error is masked.

Proposed fix

```diff
 if [ "$APP_RELEASE" == "stable" ]; then
-    export APP_RELEASE=$(checkLatestRelease)
+    local latest
+    latest=$(checkLatestRelease)
+    export APP_RELEASE="$latest"
 fi
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `setup.sh` around lines 219-222, the current inline export/assignment of APP_RELEASE via command substitution masks failures from checkLatestRelease; instead, capture checkLatestRelease into a local variable (e.g., latest=$(checkLatestRelease)), verify its exit status or non-empty output, then export APP_RELEASE from that variable. Update the block that references APP_RELEASE and checkLatestRelease to perform the assignment and error check separately before exporting so failures are not hidden.
58-67: Declare and assign separately to capture the return value.

Per ShellCheck SC2155, combining a `local` declaration with command substitution masks the command's return value. If `curl` or the parsing fails, the error is silently ignored.

Proposed fix

```diff
 function checkLatestRelease(){
     echo "Checking for the latest release..." >&2
-    local latest_release=$(curl -sSL https://api.github.com/repos/$GH_REPO/releases/latest | grep -o '"tag_name": "[^"]*"' | sed 's/"tag_name": "//;s/"//g')
+    local latest_release
+    latest_release=$(curl -sSL https://api.github.com/repos/$GH_REPO/releases/latest | grep -o '"tag_name": "[^"]*"' | sed 's/"tag_name": "//;s/"//g')
     if [ -z "$latest_release" ]; then
         echo "Failed to check for the latest release. Exiting..." >&2
         exit 1
     fi
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `setup.sh` around lines 58-67, in checkLatestRelease, avoid combining declaration and command substitution for latest_release because that hides the command's exit status; instead declare `local latest_release` first, then run the curl/parse command into latest_release in a separate step (or use the pattern `if ! latest_release=$(...); then` to handle the failure), and check the command's exit code or the conditional result before proceeding so failures from curl/grep/sed are detected and trigger the existing error handling. Reference the function name checkLatestRelease and the variable latest_release when making the change.
140-146: Inconsistent character escaping between macOS and Linux.

The macOS branch escapes `|` but uses `|` as the sed delimiter, while the Linux branch escapes `/` and uses `/` as the delimiter. Values containing both `|` and `/` may not be handled correctly on both platforms.

Consider using a consistent delimiter (like `#` or `@`) that's unlikely to appear in env values:

Proposed fix for consistent escaping

```diff
 else
     if [ "$OS_NAME" == "Darwin" ]; then
-        value=$(echo "$value" | sed 's/|/\\|/g')
-        sed -i '' "s|^$key=.*|$key=$value|g" "$file"
+        value=$(echo "$value" | sed 's/#/\\#/g')
+        sed -i '' "s#^$key=.*#$key=$value#g" "$file"
     else
-        value=$(echo "$value" | sed 's/\//\\\//g')
-        sed -i "s/^$key=.*/$key=$value/g" "$file"
+        value=$(echo "$value" | sed 's/#/\\#/g')
+        sed -i "s#^$key=.*#$key=$value#g" "$file"
     fi
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `setup.sh` around lines 140-146, the macOS and Linux branches use different sed delimiters and escape rules, causing inconsistent behavior for values containing both `|` and `/`; update both branches to use a single safe delimiter (e.g., `#`) and escape that delimiter in the value before running sed. Specifically, in the block referencing OS_NAME and the variables value, key, and file, replace the differing sed invocations with a consistent `s#^$key=.*#$key=$value#g` form on both platforms, pre-escape any `#` in value, and keep the macOS-specific `sed -i ''` versus Linux `sed -i` difference intact.

setup_lib.sh (1)
62-78: SECRET_KEY generation has reduced entropy and a potential duplicate risk.

Two concerns with the SECRET_KEY handling:

1. The character set `a-z0-9` (36 chars) provides less entropy than Django's recommended character set, which includes uppercase letters and special characters. A 50-char key from this set gives ~259 bits, while including uppercase would give ~297 bits.
2. The key is appended unconditionally. If `apps/api/.env.example` ever adds a `SECRET_KEY` entry, this will create duplicate keys in the generated `.env` file. Consider checking whether `SECRET_KEY` already exists before appending.

Proposed fix to improve entropy and prevent duplicates

```diff
 # Generate SECRET_KEY for Django
 if [ -f "./apps/api/.env" ]; then
     echo -e "\n${YELLOW}Generating Django SECRET_KEY...${NC}"
-    SECRET_KEY=$(tr -dc 'a-z0-9' < /dev/urandom | head -c50)
+    SECRET_KEY=$(tr -dc 'a-zA-Z0-9!@#$%^&*(-_=+)' < /dev/urandom | head -c50)
     if [ -z "$SECRET_KEY" ]; then
         echo -e "${RED}Error: Failed to generate SECRET_KEY.${NC}"
         echo -e "${RED}Ensure 'tr' and 'head' commands are available on your system.${NC}"
         success=false
     else
-        echo -e "SECRET_KEY=\"$SECRET_KEY\"" >> ./apps/api/.env
-        echo -e "${GREEN}✓${NC} Added SECRET_KEY to apps/api/.env"
+        if ! grep -q "^SECRET_KEY=" ./apps/api/.env; then
+            echo -e "SECRET_KEY=\"$SECRET_KEY\"" >> ./apps/api/.env
+            echo -e "${GREEN}✓${NC} Added SECRET_KEY to apps/api/.env"
+        else
+            echo -e "${YELLOW}⚠${NC} SECRET_KEY already exists in apps/api/.env, skipping"
+        fi
     fi
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `setup_lib.sh` around lines 62-78, the SECRET_KEY generation uses a limited charset and always appends a new key, risking reduced entropy and duplicate entries; update the generation to produce a higher-entropy key (e.g., use `openssl rand -base64` or a Python one-liner that leverages Django's get_random_secret_key to include uppercase/special chars) and fall back to the current tr/head method only if openssl/Python are unavailable; then, before writing, check apps/api/.env for an existing SECRET_KEY entry (a line starting with `SECRET_KEY=`) and only append the new SECRET_KEY if none exists.

plane-app/docker-compose.yaml (1)
204-216: Pin MinIO image version instead of using `:latest`.

Using `minio/minio:latest` is inconsistent with the other services in this file, which all use pinned versions (e.g., `postgres:15.7-alpine`, `valkey/valkey:7.2.11-alpine`, `rabbitmq:3.13.6-management-alpine`). Pinning the version prevents unexpected updates that could introduce breaking changes or security issues.

Proposed fix

```diff
 # Comment this if you using any external s3 compatible storage
 plane-minio:
-    image: minio/minio:latest
+    image: minio/minio:RELEASE.2025-09-07T16-13-09Z
     command: server /export --console-address ":9090"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `plane-app/docker-compose.yaml` around lines 204-216, the plane-minio service currently uses an unpinned image tag (`minio/minio:latest`); update the image line in the plane-minio service to a specific pinned MinIO tag instead of `:latest` (consistent with the project's other pinned images) so future pulls are deterministic and avoid unexpected upgrades.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plane-app/docker-compose.yaml`:
- Around line 45-60: The YAML anchors x-live-env and x-app-env contain hardcoded
secret defaults (LIVE_SERVER_SECRET_KEY and SECRET_KEY) which must be removed;
update the anchors so they do not provide secret default values (e.g., remove
the default values for LIVE_SERVER_SECRET_KEY and SECRET_KEY or set them to an
empty/placeholder that forces consumers to supply env vars) and eliminate the
redundant LIVE_SERVER_SECRET_KEY entry by keeping it in only one anchor if
needed; ensure the services that rely on these values (look for references to
x-live-env, x-app-env, LIVE_SERVER_SECRET_KEY, SECRET_KEY) will fail loudly or
validate presence at startup if the env vars are absent.
In `@plane-app/plane.env`:
- Line 45: TRUSTED_PROXIES is set to 0.0.0.0/0 which trusts all IPs; restrict it
to actual proxy IP ranges in production (replace TRUSTED_PROXIES=0.0.0.0/0 with
a comma-separated list of your proxy CIDRs or leave unset) and add a clear
comment in plane.env (and the deployment README) warning that this must be
limited to known proxy IPs to avoid X-Forwarded-For spoofing; reference the
TRUSTED_PROXIES variable when making the change.
- Around line 56-57: Remove the hardcoded SECRET_KEY value from plane.env and
replace it with a non-secret placeholder (e.g.
SECRET_KEY=please-set-a-unique-secret-in-env) or leave it empty, and update
documentation to require users to set a secure random SECRET_KEY in their
environment; ensure the application reads SECRET_KEY from the environment
variable (the SECRET_KEY entry in plane.env) rather than relying on a committed
value so no production secret is checked into the repo.
In `@setup.sh`:
- Around line 565-577: The cleanup uses rm -rf with variables that may be empty
(BACKUP_FOLDER, SERVICE_FOLDER); change all unsafe uses to parameter-expansion
that fails when empty (e.g., use ${BACKUP_FOLDER:?} and ${SERVICE_FOLDER:?}) and
quote them, and apply the same protection to cd and tar invocations; update the
three places: rm -rf $BACKUP_FOLDER/$SERVICE_FOLDER, cd "$BACKUP_FOLDER", and rm
-rf "$SERVICE_FOLDER/" (as well as tar -czf and any path concatenation) to use
the ${VAR:?} form and quoted expansions so the script will abort rather than
operate on unintended paths when BACKUP_FOLDER or SERVICE_FOLDER is unset.
- Around line 242-244: The cd calls can fail and leave the script in the wrong
directory; update the download function (download) and the backup function
(backup_container_dir) to check each cd return value (e.g., after cd
"$SCRIPT_DIR" and other cd uses) and abort or return on failure; implement this
by replacing bare cd calls with cd "... " || { echo "Failed to cd to ..."; exit
1 } (or return non-zero in functions) so the docker-compose.yaml archive
operations and subsequent commands run from the expected directory.
---
Nitpick comments:
In `@plane-app/docker-compose.yaml`:
- Around line 204-216: The plane-minio service currently uses an unpinned image
tag ("minio/minio:latest"); update the image line in the plane-minio service to
a specific pinned MinIO tag instead of :latest (replace the "image:
minio/minio:latest" entry for the plane-minio service with a fixed version tag
consistent with the project’s other pinned images) so future pulls are
deterministic and avoid unexpected upgrades.
In `@setup_lib.sh`:
- Around line 62-78: The SECRET_KEY generation uses a limited charset and always
appends a new key, risking reduced entropy and duplicate entries; update the
generation to produce a higher-entropy key (e.g., use openssl rand -base64 or a
Python one-liner that leverages Django's get_random_secret_key to include
uppercase/special chars) and fall back to the current tr/head method only if
openssl/Python are unavailable, then before writing check apps/api/.env for an
existing SECRET_KEY entry (inspect the file for a line starting with
SECRET_KEY=) and only append the new SECRET_KEY if none exists; reference the
SECRET_KEY variable and the apps/api/.env handling in the current block so the
change replaces the unconditional tr/head generation and unconditional echo
append.
In `@setup.sh`:
- Around line 219-222: The current inline export/assignment of APP_RELEASE via
command substitution masks failures from checkLatestRelease; instead, call
checkLatestRelease into a local variable (e.g., latest_release =
$(checkLatestRelease)), verify its exit status or non-empty output, then export
APP_RELEASE from that variable; update the block that references APP_RELEASE and
checkLatestRelease to perform the assignment and error check separately before
exporting so failures are not hidden.
- Around line 58-67: In checkLatestRelease, avoid combining declaration and
command substitution for latest_release because that hides the command's exit
status; instead declare local latest_release first, then run the curl/parse
command into latest_release in a separate step (or use the pattern if !
latest_release=$(...); then handle the failure), and check the command's exit
code or the conditional result before proceeding so failures from curl/grep/sed
are detected and trigger the existing error handling; reference the function
name checkLatestRelease and the variable latest_release when making the change.
- Around line 140-146: The macOS and Linux branches use different sed delimiters
and escape rules causing inconsistent behavior for values containing both '|'
and '/'; update both branches to use a single safe delimiter (e.g., '#') and
escape that delimiter in the value before running sed. Specifically, in the
block referencing OS_NAME and variables value, key, file, replace the differing
sed invocations with a consistent "s#^$key=.*#$key=$value#g" form for both
platforms, and ensure you pre-escape any '#' in value (e.g., transform value to
escape the chosen delimiter) while keeping the macOS-specific sed -i '' versus
Linux sed -i difference intact. Ensure references to OS_NAME, value, key, and
file are used so the change applies to the exact replacement logic shown.
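The single-delimiter sed fix described in the setup.sh nitpick above can be sketched as one portable helper; the function name is illustrative, and it assumes values never contain newlines:

```shell
update_env_var() {
  # Replace key=... in an env file using '#' as the sed delimiter on both
  # platforms; escape '#' (the delimiter) and '&' (sed's whole-match
  # backreference) in the value before substituting.
  local key=$1 value=$2 file=$3
  value=$(printf '%s' "$value" | sed -e 's/[#&]/\\&/g')
  if [ "$(uname)" = "Darwin" ]; then
    sed -i '' "s#^$key=.*#$key=$value#g" "$file"
  else
    sed -i "s#^$key=.*#$key=$value#g" "$file"
  fi
}
```

Values containing `|`, `/`, `&`, and `#` all pass through unchanged, which is exactly the case the original branches mishandled.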
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 04892ef4-087e-44d4-b8aa-ea8db895daca
📒 Files selected for processing (4)
- plane-app/docker-compose.yaml
- plane-app/plane.env
- setup.sh
- setup_lib.sh
```diff
 # If SSL Cert to be generated, set CERT_EMAIl="email <EMAIL_ADDRESS>"
 CERT_ACME_CA=https://acme-v02.api.letsencrypt.org/directory
+TRUSTED_PROXIES=0.0.0.0/0
```
TRUSTED_PROXIES=0.0.0.0/0 trusts all IPs.
This setting trusts every IP address as a valid proxy, which can allow IP spoofing via X-Forwarded-For headers in production. Consider documenting that this should be restricted to actual proxy IPs in production deployments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app/plane.env` at line 45, TRUSTED_PROXIES is set to 0.0.0.0/0 which
trusts all IPs; restrict it to actual proxy IP ranges in production (replace
TRUSTED_PROXIES=0.0.0.0/0 with a comma-separated list of your proxy CIDRs or
leave unset) and add a clear comment in plane.env (and the deployment README)
warning that this must be limited to known proxy IPs to avoid X-Forwarded-For
spoofing; reference the TRUSTED_PROXIES variable when making the change.
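A hedged example of the restricted setting; the CIDRs below are placeholders for an actual deployment's proxy network, not values from this repository:

```
# plane.env — trust only the reverse proxy's own network, never 0.0.0.0/0.
# Example for a default Docker bridge network plus a private LAN range:
TRUSTED_PROXIES=172.18.0.0/16,10.0.0.0/8
```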
```shell
function download() {
    local LOCAL_BUILD=$1
    cd $SCRIPT_DIR
```
Add error handling for cd commands.
If cd $SCRIPT_DIR fails, the script continues in an unexpected directory, which could cause the docker-compose.yaml archive operation to target wrong paths. Similar issues exist at lines 572 and 578.
Proposed fix

```diff
 function download() {
     local LOCAL_BUILD=$1
-    cd $SCRIPT_DIR
+    cd "$SCRIPT_DIR" || { echo "Failed to change to script directory"; exit 1; }
     TS=$(date +%s)
```

Also fix in backup_container_dir:

```diff
-    cd "$BACKUP_FOLDER"
+    cd "$BACKUP_FOLDER" || return 1
     tar -czf "${SERVICE_FOLDER}.tar.gz" "$SERVICE_FOLDER/"
     ...
-    cd - > /dev/null
+    cd - > /dev/null || return 1
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
function download() {
    local LOCAL_BUILD=$1
    cd "$SCRIPT_DIR" || { echo "Failed to change to script directory"; exit 1; }
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 244-244: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@setup.sh` around lines 242 - 244, The cd calls can fail and leave the script
in the wrong directory; update the download function (download) and the backup
function (backup_container_dir) to check each cd return value (e.g., after cd
"$SCRIPT_DIR" and other cd uses) and abort or return on failure; implement this
by replacing bare cd calls with cd "..." || { echo "Failed to cd to ..."; exit
1; } (or return non-zero in functions) so the docker-compose.yaml archive
operations and subsequent commands run from the expected directory.
```shell
if [ $cp_status -ne 0 ]; then
    echo "Error: Failed to copy $SERVICE_FOLDER data"
    rm -rf $BACKUP_FOLDER/$SERVICE_FOLDER
    return 1
fi

# Create tar.gz of the data
cd "$BACKUP_FOLDER"
tar -czf "${SERVICE_FOLDER}.tar.gz" "$SERVICE_FOLDER/"
local tar_status=$?
if [ $tar_status -eq 0 ]; then
    rm -rf "$SERVICE_FOLDER/"
fi
```
Protect rm -rf against empty variables.
If $BACKUP_FOLDER or $SERVICE_FOLDER is empty or unset, rm -rf could delete unintended paths. Use ${var:?} to fail safely if variables are empty.
Proposed fix

```diff
     if [ $cp_status -ne 0 ]; then
         echo "Error: Failed to copy $SERVICE_FOLDER data"
-        rm -rf $BACKUP_FOLDER/$SERVICE_FOLDER
+        rm -rf "${BACKUP_FOLDER:?}/${SERVICE_FOLDER:?}"
         return 1
     fi

     # Create tar.gz of the data
-    cd "$BACKUP_FOLDER"
+    cd "$BACKUP_FOLDER" || return 1
     tar -czf "${SERVICE_FOLDER}.tar.gz" "$SERVICE_FOLDER/"
     local tar_status=$?
     if [ $tar_status -eq 0 ]; then
-        rm -rf "$SERVICE_FOLDER/"
+        rm -rf "${SERVICE_FOLDER:?}/"
     fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
if [ $cp_status -ne 0 ]; then
    echo "Error: Failed to copy $SERVICE_FOLDER data"
    rm -rf "${BACKUP_FOLDER:?}/${SERVICE_FOLDER:?}"
    return 1
fi

# Create tar.gz of the data
cd "$BACKUP_FOLDER" || return 1
tar -czf "${SERVICE_FOLDER}.tar.gz" "$SERVICE_FOLDER/"
local tar_status=$?
if [ $tar_status -eq 0 ]; then
    rm -rf "${SERVICE_FOLDER:?}/"
fi
```
🧰 Tools
🪛 Shellcheck (0.11.0)
[warning] 567-567: Use "${var:?}" to ensure this never expands to / .
(SC2115)
[warning] 572-572: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
[warning] 576-576: Use "${var:?}" to ensure this never expands to / .
(SC2115)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@setup.sh` around lines 565 - 577, The cleanup uses rm -rf with variables that
may be empty (BACKUP_FOLDER, SERVICE_FOLDER); change all unsafe uses to
parameter-expansion that fails when empty (e.g., use ${BACKUP_FOLDER:?} and
${SERVICE_FOLDER:?}) and quote them, and apply the same protection to cd and tar
invocations; update the three places: rm -rf $BACKUP_FOLDER/$SERVICE_FOLDER, cd
"$BACKUP_FOLDER", and rm -rf "$SERVICE_FOLDER/" (as well as tar -czf and any
path concatenation) to use the ${VAR:?} form and quoted expansions so the script
will abort rather than operate on unintended paths when BACKUP_FOLDER or
SERVICE_FOLDER is unset.
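As a quick illustration of why the `${var:?}` guard matters, this minimal sketch (not from setup.sh; the `guard` helper is hypothetical) shows the expansion aborting before `rm` ever runs when the argument is empty:

```shell
# Sketch: ${1:?} aborts the (sub)shell before rm executes on an empty path.
guard() { rm -rf "${1:?refusing to delete: empty path}"; }

# Run in a subshell so the :? failure doesn't kill the outer script.
if (guard "") 2>/dev/null; then
  result="deleted"
else
  result="refused"
fi
echo "$result"  # prints "refused"
```

Without the `:?` guard, an empty variable would expand to an empty string and `rm -rf` could operate on an unintended path such as `/`.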
Actionable comments posted: 7
🧹 Nitpick comments (1)
plane-app/docker-compose.yaml (1)
37-98: Consider adding health check conditions to `depends_on`.

The backend services use `depends_on` to establish startup order, but without `condition: service_healthy`, containers start as soon as dependencies are running, not when they're ready to accept connections. This can cause transient startup failures if `api` attempts to connect before `plane-db` or `plane-redis` are accepting queries.

♻️ Example: Add health checks and conditions

```diff
   plane-db:
     container_name: plane-db
     image: postgres:15.7-alpine
     restart: always
     command: postgres -c 'max_connections=1000'
     volumes:
       - pgdata:/var/lib/postgresql/data
     env_file:
       - .env
     environment:
       POSTGRES_USER: ${POSTGRES_USER}
       POSTGRES_DB: ${POSTGRES_DB}
       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
       PGDATA: /var/lib/postgresql/data
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
+      interval: 10s
+      timeout: 5s
+      retries: 5

   plane-redis:
     container_name: plane-redis
     image: valkey/valkey:7.2.11-alpine
     restart: always
     volumes:
       - redisdata:/data
+    healthcheck:
+      test: ["CMD", "valkey-cli", "ping"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
```

Then update `depends_on` in services like `api`:

```diff
   api:
     ...
     depends_on:
-      - plane-db
-      - plane-redis
+      plane-db:
+        condition: service_healthy
+      plane-redis:
+        condition: service_healthy
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plane-app/docker-compose.yaml` around lines 37 - 98, The docker-compose services (api, worker, beat-worker, migrator) rely on depends_on but currently start as soon as plane-db and plane-redis containers are running, not when they are ready; add healthcheck blocks for plane-db and plane-redis that probe readiness (e.g., pg_isready or a TCP/curl probe for Postgres and redis-cli PING for Redis) and then change the depends_on entries in the api, worker, beat-worker and migrator service definitions to use condition: service_healthy for the respective plane-db and plane-redis dependencies so those services wait until the DB/Redis report healthy before starting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plane-app.backup_20260330_144144/docker-compose.yaml`:
- Around line 171-172: The comment warns that simply removing the plane-db or
plane-minio services breaks other services because api, worker, beat-worker, and
migrator still depend on plane-db and environment defaults (PGHOST,
DATABASE_URL, USE_MINIO, AWS_S3_ENDPOINT_URL) point to plane-minio; fix by
adding explicit docker-compose profiles or override files and adjust service
dependencies/envs: add a "local-db" and "local-minio" profile or create
docker-compose.override.yaml examples that remove plane-db/plane-minio while
updating api/worker/beat-worker/migrator to not depend_on plane-db and to set
PGHOST/DATABASE_URL/USE_MINIO/AWS_S3_ENDPOINT_URL from external env variables
when those profiles are disabled; alternatively change the top-level comments to
instruct users to set PGHOST/DATABASE_URL and USE_MINIO/AWS_S3_ENDPOINT_URL when
they remove plane-db/plane-minio and show a sample env or override file.
- Line 5: The compose file currently supplies safe defaults for secrets (e.g.,
POSTGRES_PASSWORD, RABBITMQ_DEFAULT_PASS, MINIO_ACCESS_KEY, MINIO_SECRET_KEY)
and other critical envs (DATABASE_URL, AMQP_URL, SECRET_KEY,
LIVE_SERVER_SECRET_KEY), allowing containers to start with predictable
credentials; change each of those variable interpolations to the "fail-fast"
form so Docker Compose will error if they are missing (use the shell-style
parameter expansion that throws when unset, e.g., ${VAR:?missing VAR}) for
POSTGRES_PASSWORD, RABBITMQ_DEFAULT_PASS, MINIO_ACCESS_KEY, MINIO_SECRET_KEY,
DATABASE_URL, AMQP_URL, SECRET_KEY and LIVE_SERVER_SECRET_KEY so the stack
refuses to boot without explicit secrets.
- Around line 1-255: This committed timestamped backup compose
(plane-app.backup_20260330_144144/docker-compose.yaml) must be removed — delete
that backup directory and file from the repository, add a rule to .gitignore to
prevent committing timestamped backup folders (e.g., plane-app.backup_*), and
ensure the installer references the canonical compose (the setup.sh usage of
plane-app/docker-compose.yaml remains unchanged); commit the deletion and
.gitignore update so only the single deployment definition is tracked.
- Around line 65-68: The compose file currently includes Docker Swarm-only
fields (the deploy: block with replicas and restart_policy) for services like
web, space, admin, and live; remove these deploy: sections from each service or
relocate them into your Swarm-specific manifest
(./deployments/swarm/community/swarm.sh) and instead use Compose-compatible
options (e.g., add a service-level restart: unless-stopped or restart: always)
if restart behavior is required; ensure all occurrences of
deploy/replicas/restart_policy are cleaned up across the file or mirrored only
in the separate Swarm compose file so the top-level docker-compose.yaml is
Compose-compatible for docker compose up -d.
In `@plane-app/docker-compose.yaml`:
- Around line 161-163: The port bindings under the "ports:" block use
environment variables LISTEN_HTTP_PORT and LISTEN_HTTPS_PORT without defaults,
which can cause docker-compose to fail if those vars are missing; update the
bindings to use shell-style defaults (e.g., ${LISTEN_HTTP_PORT:-8080} and
${LISTEN_HTTPS_PORT:-8443}) so the ports provide sensible fallbacks, ensuring
the "ports" entries remain valid when LISTEN_HTTP_PORT or LISTEN_HTTPS_PORT are
undefined.
- Around line 143-145: The docker-compose service plane-minio currently uses an
unpinned, archived image "minio/minio"; change the image entry for the
plane-minio service (look for the service name "plane-minio" and the "image:
minio/minio" line) to point to the recommended registry and a specific version
tag—e.g. use "quay.io/minio/minio:<version>" (pick a known release tag or
RELEASE-YYYYMMDD) or another pinned MinIO release—so the image is not pulled as
latest and comes from the supported registry.
- Around line 99-107: The docker-compose service "live" is missing environment
configuration required by apps/live/src/env.ts: add either env_file:
./apps/live/.env or an environment: block supplying API_BASE_URL and
LIVE_SERVER_SECRET_KEY for the "live" service (container_name: plane-live,
build: Dockerfile.live) so the validation in apps/live/src/env.ts passes at
startup; ensure the chosen file or variables match the names expected
(API_BASE_URL, LIVE_SERVER_SECRET_KEY).
---
Nitpick comments:
In `@plane-app/docker-compose.yaml`:
- Around line 37-98: The docker-compose services (api, worker, beat-worker,
migrator) rely on depends_on but currently start as soon as plane-db and
plane-redis containers are running, not when they are ready; add healthcheck
blocks for plane-db and plane-redis that probe readiness (e.g., pg_isready or a
TCP/curl probe for Postgres and redis-cli PING for Redis) and then change the
depends_on entries in the api, worker, beat-worker and migrator service
definitions to use condition: service_healthy for the respective plane-db and
plane-redis dependencies so those services wait until the DB/Redis report
healthy before starting.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 41eaa7ce-24d0-4858-a615-5f9adafaf691
📒 Files selected for processing (5)
- plane-app.backup_20260330_144144/docker-compose.yaml
- plane-app.backup_20260330_144144/plane.env
- plane-app/docker-compose.yaml
- plane-app/plane.env
- plane.env
✅ Files skipped from review due to trivial changes (3)
- plane.env
- plane-app.backup_20260330_144144/plane.env
- plane-app/plane.env
```yaml
x-db-env: &db-env
  PGHOST: ${PGHOST:-plane-db}
  PGDATABASE: ${PGDATABASE:-plane}
  POSTGRES_USER: ${POSTGRES_USER:-plane}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-plane}
  POSTGRES_DB: ${POSTGRES_DB:-plane}
  POSTGRES_PORT: ${POSTGRES_PORT:-5432}
  PGDATA: ${PGDATA:-/var/lib/postgresql/data}

x-redis-env: &redis-env
  REDIS_HOST: ${REDIS_HOST:-plane-redis}
  REDIS_PORT: ${REDIS_PORT:-6379}
  REDIS_URL: ${REDIS_URL:-redis://plane-redis:6379/}

x-minio-env: &minio-env
  MINIO_ROOT_USER: ${AWS_ACCESS_KEY_ID:-access-key}
  MINIO_ROOT_PASSWORD: ${AWS_SECRET_ACCESS_KEY:-secret-key}

x-aws-s3-env: &aws-s3-env
  AWS_REGION: ${AWS_REGION:-}
  AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID:-access-key}
  AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY:-secret-key}
  AWS_S3_ENDPOINT_URL: ${AWS_S3_ENDPOINT_URL:-http://plane-minio:9000}
  AWS_S3_BUCKET_NAME: ${AWS_S3_BUCKET_NAME:-uploads}

x-proxy-env: &proxy-env
  APP_DOMAIN: ${APP_DOMAIN:-localhost}
  FILE_SIZE_LIMIT: ${FILE_SIZE_LIMIT:-5242880}
  CERT_EMAIL: ${CERT_EMAIL}
  CERT_ACME_CA: ${CERT_ACME_CA}
  CERT_ACME_DNS: ${CERT_ACME_DNS}
  LISTEN_HTTP_PORT: ${LISTEN_HTTP_PORT:-80}
  LISTEN_HTTPS_PORT: ${LISTEN_HTTPS_PORT:-443}
  BUCKET_NAME: ${AWS_S3_BUCKET_NAME:-uploads}
  SITE_ADDRESS: ${SITE_ADDRESS:-:80}

x-mq-env: &mq-env # RabbitMQ Settings
  RABBITMQ_HOST: ${RABBITMQ_HOST:-plane-mq}
  RABBITMQ_PORT: ${RABBITMQ_PORT:-5672}
  RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER:-plane}
  RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASSWORD:-plane}
  RABBITMQ_DEFAULT_VHOST: ${RABBITMQ_VHOST:-plane}
  RABBITMQ_VHOST: ${RABBITMQ_VHOST:-plane}

x-live-env: &live-env
  API_BASE_URL: ${API_BASE_URL:-http://api:8000}
  LIVE_SERVER_SECRET_KEY: ${LIVE_SERVER_SECRET_KEY:-2FiJk1U2aiVPEQtzLehYGlTSnTnrs7LW}

x-app-env: &app-env
  WEB_URL: ${WEB_URL:-http://localhost}
  DEBUG: ${DEBUG:-0}
  CORS_ALLOWED_ORIGINS: ${CORS_ALLOWED_ORIGINS}
  GUNICORN_WORKERS: 1
  USE_MINIO: ${USE_MINIO:-1}
  DATABASE_URL: ${DATABASE_URL:-postgresql://plane:plane@plane-db/plane}
  SECRET_KEY: ${SECRET_KEY:-60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5}
  AMQP_URL: ${AMQP_URL:-amqp://plane:plane@plane-mq:5672/plane}
  API_KEY_RATE_LIMIT: ${API_KEY_RATE_LIMIT:-60/minute}
  MINIO_ENDPOINT_SSL: ${MINIO_ENDPOINT_SSL:-0}
  LIVE_SERVER_SECRET_KEY: ${LIVE_SERVER_SECRET_KEY:-2FiJk1U2aiVPEQtzLehYGlTSnTnrs7LW}

services:
  web:
    image: artifacts.plane.so/makeplane/plane-frontend:${APP_RELEASE:-v1.2.3}
    deploy:
      replicas: ${WEB_REPLICAS:-1}
      restart_policy:
        condition: any
    depends_on:
      - api
      - worker

  space:
    image: artifacts.plane.so/makeplane/plane-space:${APP_RELEASE:-v1.2.3}
    deploy:
      replicas: ${SPACE_REPLICAS:-1}
      restart_policy:
        condition: any
    depends_on:
      - api
      - worker
      - web

  admin:
    image: artifacts.plane.so/makeplane/plane-admin:${APP_RELEASE:-v1.2.3}
    deploy:
      replicas: ${ADMIN_REPLICAS:-1}
      restart_policy:
        condition: any
    depends_on:
      - api
      - web

  live:
    image: artifacts.plane.so/makeplane/plane-live:${APP_RELEASE:-v1.2.3}
    environment:
      <<: [*live-env, *redis-env]
    deploy:
      replicas: ${LIVE_REPLICAS:-1}
      restart_policy:
        condition: any
    depends_on:
      - api
      - web

  api:
    image: artifacts.plane.so/makeplane/plane-backend:${APP_RELEASE:-v1.2.3}
    command: ./bin/docker-entrypoint-api.sh
    deploy:
      replicas: ${API_REPLICAS:-1}
      restart_policy:
        condition: any
    volumes:
      - logs_api:/code/plane/logs
    environment:
      <<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
    depends_on:
      - plane-db
      - plane-redis
      - plane-mq

  worker:
    image: artifacts.plane.so/makeplane/plane-backend:${APP_RELEASE:-v1.2.3}
    command: ./bin/docker-entrypoint-worker.sh
    deploy:
      replicas: ${WORKER_REPLICAS:-1}
      restart_policy:
        condition: any
    volumes:
      - logs_worker:/code/plane/logs
    environment:
      <<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
    depends_on:
      - api
      - plane-db
      - plane-redis
      - plane-mq

  beat-worker:
    image: artifacts.plane.so/makeplane/plane-backend:${APP_RELEASE:-v1.2.3}
    command: ./bin/docker-entrypoint-beat.sh
    deploy:
      replicas: ${BEAT_WORKER_REPLICAS:-1}
      restart_policy:
        condition: any
    volumes:
      - logs_beat-worker:/code/plane/logs
    environment:
      <<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
    depends_on:
      - api
      - plane-db
      - plane-redis
      - plane-mq

  migrator:
    image: artifacts.plane.so/makeplane/plane-backend:${APP_RELEASE:-v1.2.3}
    command: ./bin/docker-entrypoint-migrator.sh
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    volumes:
      - logs_migrator:/code/plane/logs
    environment:
      <<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
    depends_on:
      - plane-db
      - plane-redis

  # Comment this if you already have a database running
  plane-db:
    image: postgres:15.7-alpine
    command: postgres -c 'max_connections=1000'
    deploy:
      replicas: 1
      restart_policy:
        condition: any
    environment:
      <<: *db-env
    volumes:
      - pgdata:/var/lib/postgresql/data

  plane-redis:
    image: valkey/valkey:7.2.11-alpine
    deploy:
      replicas: 1
      restart_policy:
        condition: any
    volumes:
      - redisdata:/data

  plane-mq:
    image: rabbitmq:3.13.6-management-alpine
    deploy:
      replicas: 1
      restart_policy:
        condition: any
    environment:
      <<: *mq-env
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

  # Comment this if you using any external s3 compatible storage
  plane-minio:
    image: minio/minio:latest
    command: server /export --console-address ":9090"
    deploy:
      replicas: 1
      restart_policy:
        condition: any
    environment:
      <<: *minio-env
    volumes:
      - uploads:/export

  # Comment this if you already have a reverse proxy running
  proxy:
    image: artifacts.plane.so/makeplane/plane-proxy:${APP_RELEASE:-v1.2.3}
    deploy:
      replicas: 1
      restart_policy:
        condition: any
    environment:
      <<: *proxy-env
    ports:
      - target: 80
        published: ${LISTEN_HTTP_PORT:-80}
        protocol: tcp
        mode: host
      - target: 443
        published: ${LISTEN_HTTPS_PORT:-443}
        protocol: tcp
        mode: host
    volumes:
      - proxy_config:/config
      - proxy_data:/data
    depends_on:
      - web
      - api
      - space
      - admin
      - live

volumes:
  pgdata:
  redisdata:
  uploads:
  logs_api:
  logs_worker:
  logs_beat-worker:
  logs_migrator:
  rabbitmq_data:
  proxy_config:
  proxy_data:
```
Remove the timestamped backup compose file from the repo.
setup.sh:5-20 points the installer at plane-app/docker-compose.yaml, and setup.sh:588-603 only backs up runtime data under plane-app/backup/.... This plane-app.backup_20260330_144144/ copy looks like a local snapshot rather than an installer-managed artifact, so keeping it checked in creates a second deployment definition that will drift.
🧰 Tools
🪛 Checkov (3.2.510)
[medium] 193-194: Basic Auth Credentials
(CKV_SECRET_4)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app.backup_20260330_144144/docker-compose.yaml` around lines 1 - 255,
This committed timestamped backup compose
(plane-app.backup_20260330_144144/docker-compose.yaml) must be removed — delete
that backup directory and file from the repository, add a rule to .gitignore to
prevent committing timestamped backup folders (e.g., plane-app.backup_*), and
ensure the installer references the canonical compose (the setup.sh usage of
plane-app/docker-compose.yaml remains unchanged); commit the deletion and
.gitignore update so only the single deployment definition is tracked.
```yaml
  PGHOST: ${PGHOST:-plane-db}
  PGDATABASE: ${PGDATABASE:-plane}
  POSTGRES_USER: ${POSTGRES_USER:-plane}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-plane}
```
Fail fast when required secrets are missing.
This stack currently boots with known defaults for the Postgres password, RabbitMQ password, MinIO access keys, DATABASE_URL, AMQP_URL, SECRET_KEY, and LIVE_SERVER_SECRET_KEY. Since the proxy later publishes 80/443, a missing or partial env file becomes a predictable-credential deployment instead of a hard failure.
Also applies to: 16-17, 21-22, 41-41, 47-47, 55-57, 60-60
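A sketch of the fail-fast form for one of these anchors (the error message text is illustrative):

```yaml
x-db-env: &db-env
  PGHOST: ${PGHOST:-plane-db}
  POSTGRES_USER: ${POSTGRES_USER:-plane}
  # No default value: compose refuses to start if the secret is missing
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD must be set in plane.env}
```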
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app.backup_20260330_144144/docker-compose.yaml` at line 5, The compose
file currently supplies safe defaults for secrets (e.g., POSTGRES_PASSWORD,
RABBITMQ_DEFAULT_PASS, MINIO_ACCESS_KEY, MINIO_SECRET_KEY) and other critical
envs (DATABASE_URL, AMQP_URL, SECRET_KEY, LIVE_SERVER_SECRET_KEY), allowing
containers to start with predictable credentials; change each of those variable
interpolations to the "fail-fast" form so Docker Compose will error if they are
missing (use the shell-style parameter expansion that throws when unset, e.g.,
${VAR:?missing VAR}) for POSTGRES_PASSWORD, RABBITMQ_DEFAULT_PASS,
MINIO_ACCESS_KEY, MINIO_SECRET_KEY, DATABASE_URL, AMQP_URL, SECRET_KEY and
LIVE_SERVER_SECRET_KEY so the stack refuses to boot without explicit secrets.
```yaml
    deploy:
      replicas: ${WEB_REPLICAS:-1}
      restart_policy:
        condition: any
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# Inspect how the installer launches the stack.
fd -a '^setup\.sh$' .
rg -n -C2 'docker\s+(compose|stack)\s+' setup.sh
```

Repository: makeplane/plane
Length of output: 86

🏁 Script executed:

```shell
cat -n /home/jailuser/git/setup.sh
```

Repository: makeplane/plane
Length of output: 28021

🏁 Script executed:

```shell
cat -n plane-app.backup_20260330_144144/docker-compose.yaml | head -100
```

Repository: makeplane/plane
Length of output: 3971

🏁 Script executed:

```shell
rg -i 'docker\s+stack\s+deploy' --type sh --type yaml
```

Repository: makeplane/plane
Length of output: 152

🏁 Script executed:

```shell
grep -r "docker stack" . --include="*.md" --include="*.sh" --include="*.yaml" --include="*.yml" 2>/dev/null | head -20
```

Repository: makeplane/plane
Length of output: 393
Remove or relocate the deploy: configuration for Docker Compose deployments.
The standard installation path uses docker compose up -d (setup.sh), where deploy.replicas and deploy.restart_policy are ignored. These settings only apply to Docker Swarm mode (docker stack deploy), which has a separate deployment script at ./deployments/swarm/community/swarm.sh.
If supporting both deployment methods, consolidate the docker-compose.yaml into Compose-compatible configuration or maintain separate files. Otherwise, remove the deploy: sections and use the Compose service-level restart: option if restart behavior is needed.
This pattern affects multiple services across the file (web, space, admin, live, and likely others).
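For the Compose path, a sketch of the equivalent per-service setting (using the web service from this file; the other services would follow the same pattern):

```yaml
  web:
    image: artifacts.plane.so/makeplane/plane-frontend:${APP_RELEASE:-v1.2.3}
    restart: unless-stopped  # Compose-level replacement for deploy.restart_policy
    depends_on:
      - api
      - worker
```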
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app.backup_20260330_144144/docker-compose.yaml` around lines 65 - 68,
The compose file currently includes Docker Swarm-only fields (the deploy: block
with replicas and restart_policy) for services like web, space, admin, and live;
remove these deploy: sections from each service or relocate them into your
Swarm-specific manifest (./deployments/swarm/community/swarm.sh) and instead use
Compose-compatible options (e.g., add a service-level restart: unless-stopped or
restart: always) if restart behavior is required; ensure all occurrences of
deploy/replicas/restart_policy are cleaned up across the file or mirrored only
in the separate Swarm compose file so the top-level docker-compose.yaml is
Compose-compatible for docker compose up -d.
```yaml
  # Comment this if you already have a database running
  plane-db:
```
The “comment this out” guidance yields a broken external-service setup.
Line 171 says plane-db can be removed, but api, worker, beat-worker, and migrator still depend on plane-db, and earlier defaults still point PGHOST / DATABASE_URL at that hostname. Line 204 has the same problem for MinIO: removing plane-minio still leaves the app defaulting to USE_MINIO=1 and AWS_S3_ENDPOINT_URL=http://plane-minio:9000. This needs profiles/overrides or more explicit setup steps, not a one-line toggle.
Also applies to: 204-205
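One possible shape for such an override (a hypothetical docker-compose.override.yaml; the hostname is a placeholder):

```yaml
# Illustrative only: point the app at an external Postgres instead of plane-db
services:
  api:
    environment:
      PGHOST: db.example.internal
      DATABASE_URL: postgresql://plane:${POSTGRES_PASSWORD}@db.example.internal/plane
```

The depends_on entries on plane-db would still need to be dropped (for example via Compose profiles) for this to be a complete solution.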
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app.backup_20260330_144144/docker-compose.yaml` around lines 171 - 172,
The comment warns that simply removing the plane-db or plane-minio services
breaks other services because api, worker, beat-worker, and migrator still
depend on plane-db and environment defaults (PGHOST, DATABASE_URL, USE_MINIO,
AWS_S3_ENDPOINT_URL) point to plane-minio; fix by adding explicit docker-compose
profiles or override files and adjust service dependencies/envs: add a
"local-db" and "local-minio" profile or create docker-compose.override.yaml
examples that remove plane-db/plane-minio while updating
api/worker/beat-worker/migrator to not depend_on plane-db and to set
PGHOST/DATABASE_URL/USE_MINIO/AWS_S3_ENDPOINT_URL from external env variables
when those profiles are disabled; alternatively change the top-level comments to
instruct users to set PGHOST/DATABASE_URL and USE_MINIO/AWS_S3_ENDPOINT_URL when
they remove plane-db/plane-minio and show a sample env or override file.
```yaml
  live:
    container_name: plane-live
    build:
      context: .
      dockerfile: ./apps/live/Dockerfile.live
      args:
        DOCKER_BUILDKIT: 1
    restart: always
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check what environment variables the live service expects

# Search for environment variable usage in the live app
echo "=== Searching for env var usage in live app ==="
fd -t f -e ts -e js -e tsx . apps/live --exec rg -n 'process\.env\.' {} \; 2>/dev/null || echo "No files found"

# Check if there's a .env.example for the live app
echo -e "\n=== Checking for live app env example ==="
fd -t f '.env' apps/live 2>/dev/null || echo "No .env files found in apps/live"

# Check backup compose for live service config
echo -e "\n=== Checking backup compose for live service ==="
cat plane-app.backup_20260330_144144/docker-compose.yaml 2>/dev/null | grep -A 20 'live:' || echo "Backup file not found"
```

Repository: makeplane/plane
Length of output: 765

🏁 Script executed:

```shell
# Check if apps/live directory exists and its contents
echo "=== Checking if apps/live exists ==="
ls -la apps/live 2>/dev/null || echo "apps/live directory does not exist"

# Check if Dockerfile.live exists
echo -e "\n=== Checking if Dockerfile.live exists ==="
find . -name "Dockerfile.live" -o -name "*live*" -path "*/apps/*" -type f 2>/dev/null | head -20

# Check the full docker-compose.yaml structure around live service
echo -e "\n=== Full live service definition in current docker-compose.yaml ==="
rg -A 15 '^\s+live:' plane-app/docker-compose.yaml
```

Repository: makeplane/plane
Length of output: 1336

🏁 Script executed:

```shell
# Read the .env.example file to see what environment variables are expected
echo "=== apps/live/.env.example ==="
cat apps/live/.env.example

# Read Dockerfile.live to see how it uses environment variables
echo -e "\n=== apps/live/Dockerfile.live (first 50 lines) ==="
head -50 apps/live/Dockerfile.live

# Search for environment variable usage in the src directory
echo -e "\n=== Searching for env var usage in apps/live/src ==="
rg 'process\.env\.|import.*(config|env)' apps/live/src -n | head -30
```

Repository: makeplane/plane
Length of output: 2997

🏁 Script executed:

```shell
# Read the env.ts file to see how environment validation works
echo "=== apps/live/src/env.ts ==="
cat apps/live/src/env.ts

# Check if there are any anchors or references to live-env in the current compose file
echo -e "\n=== Checking for live-env references in current docker-compose.yaml ==="
rg 'live-env|live:' plane-app/docker-compose.yaml -B 2 -A 10

# Check if there's any .env file or setup at the root
echo -e "\n=== Checking for root .env files ==="
ls -la | grep -E '\.env|docker-compose'
```

Repository: makeplane/plane
Length of output: 2010

🏁 Script executed:

```shell
# Check root .env.example
echo "=== Root .env.example ==="
cat .env.example

# Check plane.env
echo -e "\n=== plane.env ==="
cat plane.env

# Check if docker-compose.yml (not plane-app/docker-compose.yaml) has live service
echo -e "\n=== Checking docker-compose.yml for live service ==="
rg 'live:' docker-compose.yml -B 2 -A 10
```

Repository: makeplane/plane
Length of output: 3291

🏁 Script executed:

```shell
# Check the full docker-compose.yaml structure to see all services and their env configs
echo "=== Full docker-compose.yaml structure (showing service definitions) ==="
rg '^\s+[a-z-]+:$' plane-app/docker-compose.yaml -A 1

# Check api service to see how it's configured (for comparison)
echo -e "\n=== API service configuration ==="
rg 'api:' plane-app/docker-compose.yaml -A 15 | head -30
```

Repository: makeplane/plane
Length of output: 2204

🏁 Script executed:

```shell
# Confirm: Check if apps/live/.env exists in the repo
echo "=== Checking if apps/live/.env exists ==="
ls -la apps/live/.env 2>/dev/null || echo "apps/live/.env does not exist (expected)"

# Verify: Show that docker-compose automatically loads .env from root for services that don't specify env_file
echo -e "\n=== Confirming docker-compose environment inheritance ==="
rg 'env_file' plane-app/docker-compose.yaml
```

Repository: makeplane/plane
Length of output: 284
Missing environment configuration for live service.
The live service has no env_file or environment section. The apps/live/src/env.ts file validates that API_BASE_URL and LIVE_SERVER_SECRET_KEY are required (with no defaults), and the service will fail at startup without these variables. Compare to the api service, which includes env_file: ./apps/api/.env.
Proposed fix

```diff
   live:
     container_name: plane-live
     build:
       context: .
       dockerfile: ./apps/live/Dockerfile.live
       args:
         DOCKER_BUILDKIT: 1
     restart: always
+    env_file:
+      - ./apps/api/.env
+    environment:
+      API_BASE_URL: ${API_BASE_URL:-http://api:8000}
+    depends_on:
+      - api
```

🤖 Prompt for AI Agents
+ - api🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app/docker-compose.yaml` around lines 99 - 107, The docker-compose
service "live" is missing environment configuration required by
apps/live/src/env.ts: add either env_file: ./apps/live/.env or an environment:
block supplying API_BASE_URL and LIVE_SERVER_SECRET_KEY for the "live" service
(container_name: plane-live, build: Dockerfile.live) so the validation in
apps/live/src/env.ts passes at startup; ensure the chosen file or variables
match the names expected (API_BASE_URL, LIVE_SERVER_SECRET_KEY).
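As a minimal sketch of the fail-fast validation the review attributes to apps/live/src/env.ts, the shell analogue below aborts when either required variable is unset or empty (variable names are taken from the review; the check itself is illustrative, not the actual TypeScript code):

```shell
#!/bin/sh
# Fail fast when a required variable is missing, mirroring the
# required-with-no-defaults validation described for apps/live/src/env.ts.
check_required_env() {
  # ${VAR:?msg} aborts the shell with an error if VAR is unset or empty.
  : "${API_BASE_URL:?API_BASE_URL is required}"
  : "${LIVE_SERVER_SECRET_KEY:?LIVE_SERVER_SECRET_KEY is required}"
}

# Values as the proposed env_file/environment block would supply them:
API_BASE_URL="http://api:8000"
LIVE_SERVER_SECRET_KEY="change-me"
check_required_env && echo "live service env ok"
```

Running the check inside the container (or against `docker compose config` output) confirms the variables actually reach the service before it starts.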
  plane-minio:
    container_name: plane-minio
    image: minio/minio
🧩 Analysis chain
🌐 Web query:
What is the latest stable MinIO Docker image version tag?
💡 Result:
The latest stable MinIO Docker image version tag on Docker Hub (minio/minio) is RELEASE.2025-09-07T16-13-09Z. This is the most recent RELEASE.* tag listed, with 'latest' pointing to the same digest, last pushed 7 months ago. Official docs recommend quay.io/minio/minio:latest for stable deployments, but Docker Hub repo is archived and unchanged since then. Recent releases appear under MinIO's AIStor at dl.min.io, but no corresponding Docker tags found on Docker Hub.
Citations:
- 1: https://hub.docker.com/r/minio/minio/tags
- 2: https://hub.docker.com/r/minio/minio
- 3: https://hub.docker.com/r/minio/minio/
- 4: https://dl.min.io/aistor/minio/release/notes/
Pin MinIO image to a specific version or migrate to the recommended registry.
Using minio/minio without a version tag defaults to latest, which can introduce breaking changes during redeployment. Additionally, the Docker Hub minio/minio repository is archived; official documentation recommends using quay.io/minio/minio instead. Other services in this file (Postgres, Valkey, RabbitMQ) correctly pin their versions.
🐛 Recommended fix: Use official registry with specific version
  plane-minio:
    container_name: plane-minio
-   image: minio/minio
+   image: quay.io/minio/minio:RELEASE.2025-09-07T16-13-09Z
    restart: always

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  plane-minio:
    container_name: plane-minio
    image: quay.io/minio/minio:RELEASE.2025-09-07T16-13-09Z
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app/docker-compose.yaml` around lines 143 - 145, The docker-compose
service plane-minio currently uses an unpinned, archived image "minio/minio";
change the image entry for the plane-minio service (look for the service name
"plane-minio" and the "image: minio/minio" line) to point to the recommended
registry and a specific version tag—e.g. use "quay.io/minio/minio:<version>"
(pick a known release tag or RELEASE-YYYYMMDD) or another pinned MinIO
release—so the image is not pulled as latest and comes from the supported
registry.
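Beyond pinning a release tag, a compose file can also pin the exact image content by digest, which is immutable even if the tag is re-pushed. A minimal sketch (the digest below is a placeholder, not a real value):

```yaml
plane-minio:
  container_name: plane-minio
  # Pin a specific release tag (stable across redeploys of that tag):
  image: quay.io/minio/minio:RELEASE.2025-09-07T16-13-09Z
  # Or pin the exact content by digest (placeholder shown, resolve with
  # `docker inspect --format '{{index .RepoDigests 0}}' <image>`):
  # image: quay.io/minio/minio@sha256:<digest>
  restart: always
```

Digest pinning trades convenience for reproducibility: upgrades become explicit edits rather than implicit pulls.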
    ports:
      - ${LISTEN_HTTP_PORT}:80
      - ${LISTEN_HTTPS_PORT}:443
Add defaults to port bindings for robustness.
${LISTEN_HTTP_PORT} and ${LISTEN_HTTPS_PORT} lack default values. If these variables are not defined in the .env file, docker-compose will bind to empty strings and fail. Adding defaults ensures the compose file works out of the box.
🛡️ Proposed fix to add default ports
    ports:
-     - ${LISTEN_HTTP_PORT}:80
-     - ${LISTEN_HTTPS_PORT}:443
+     - ${LISTEN_HTTP_PORT:-80}:80
+     - ${LISTEN_HTTPS_PORT:-443}:443

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    ports:
      - ${LISTEN_HTTP_PORT:-80}:80
      - ${LISTEN_HTTPS_PORT:-443}:443
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plane-app/docker-compose.yaml` around lines 161 - 163, The port bindings
under the "ports:" block use environment variables LISTEN_HTTP_PORT and
LISTEN_HTTPS_PORT without defaults, which can cause docker-compose to fail if
those vars are missing; update the bindings to use shell-style defaults (e.g.,
${LISTEN_HTTP_PORT:-8080} and ${LISTEN_HTTPS_PORT:-8443}) so the ports provide
sensible fallbacks, ensuring the "ports" entries remain valid when
LISTEN_HTTP_PORT or LISTEN_HTTPS_PORT are undefined.
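The `${VAR:-default}` syntax compose uses for these fallbacks follows POSIX shell parameter expansion; a quick sketch of the behavior:

```shell
#!/bin/sh
# Demonstrates the ${VAR:-default} fallback semantics that docker compose
# borrows from POSIX shell parameter expansion.
unset LISTEN_HTTP_PORT
echo "http port: ${LISTEN_HTTP_PORT:-80}"    # unset -> default 80 is used

LISTEN_HTTP_PORT=8080
echo "http port: ${LISTEN_HTTP_PORT:-80}"    # set -> 8080 wins over the default
```

Running `docker compose config` renders the file with all substitutions applied, which is a quick way to verify the resolved port bindings before bringing the stack up.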