
[WPB-23644] refactor user type in wire-subsystems #5074

Merged
fisx merged 17 commits into develop from WPB-23644-fix-subsystems-dep-graph on Mar 12, 2026

Conversation

@fisx
Contributor

@fisx fisx commented Mar 2, 2026

Historically, we kept track of the user type "app" only in postgres. By now we have duplicated this information into cassandra, so a lot of code becomes simpler to write. This PR fixes a few spots that I missed when introducing user type storage in cassandra.
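A minimal sketch of the kind of simplification this enables (type and field names here are hypothetical illustrations, not the actual wire-subsystems definitions): once the user type lives in the cassandra user row itself, a function like `inferUserType` can collapse from a cross-database lookup into a plain field access.

```haskell
-- Hypothetical sketch; the real types live in wire-subsystems.
data UserType = RegularUser | AppUser
  deriving (Eq, Show)

data StoredUser = StoredUser
  { storedUserId :: String,
    -- Previously only available via a separate postgres query;
    -- now duplicated alongside the user row in cassandra.
    storedUserType :: UserType
  }

-- With the type stored on the row, inference is just a projection.
inferUserType :: StoredUser -> UserType
inferUserType = storedUserType

main :: IO ()
main = print (inferUserType (StoredUser "u1" AppUser))
```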

Checklist

  • Add a new entry in an appropriate subdirectory of changelog.d
  • Read and follow the PR guidelines

@zebot zebot added the ok-to-test label (Approved for running tests in CI; overrides not-ok-to-test if both labels exist) on Mar 2, 2026
@fisx fisx force-pushed the WPB-23644-fix-subsystems-dep-graph branch from 9ea4b9f to effb4d1 Compare March 3, 2026 15:49
Copilot AI added a commit that referenced this pull request Mar 3, 2026
…factor inferUserType

Co-authored-by: fisx <10210727+fisx@users.noreply.github.com>
@fisx fisx force-pushed the WPB-23644-fix-subsystems-dep-graph branch 2 times, most recently from e881280 to de3e000 Compare March 4, 2026 06:50
@fisx fisx marked this pull request as ready for review March 4, 2026 10:04
@fisx fisx requested review from a team as code owners March 4, 2026 10:04
-- | Returned by 'browseTeam' under @/teams/:tid/search@.
data TeamContact = TeamContact
{ teamContactUserId :: UserId,
teamContactUserType :: UserType,
Member

I see you've checked the item for having read and followed the PR guidelines, but I don't see an update to the v15 API. Perhaps this should also be part of the changelog?

Contributor Author

"id": "00000002-0000-0002-0000-000100000002",
"managed_by": null,
"name": "}O",
"name": "\ud4f5}O",
Member

Do you know why this is changing?

Contributor Author

I think it's equivalent. Copilot didn't like the previous one.

Contributor Author

Should I roll it back?

Member

Did copilot generate this JSON? Or is it generated by aeson?

Contributor Author

I asked copilot to fix the golden tests. I also read the changes; I just didn't bother to undo this one because I saw no harm.

I've changed it back: b77e7b8

In hindsight I should have just done something with find and perl; it would have been much faster, both in implementation and in code review...

Member

I think it would be nice if these JSON files could be automatically regenerated, and if our current machinery were maintained for this. It shouldn't require extra massaging. I'll dig into what happened here; for now I guess it's ok.

Comment thread libs/wire-subsystems/src/Wire/UserSearch/Types.hs Outdated
Comment thread libs/wire-subsystems/src/Wire/UserSubsystem/Interpreter.hs Outdated
Comment thread libs/wire-subsystems/src/Wire/StoredUser.hs
@fisx fisx requested a review from akshaymankar March 4, 2026 14:02
Member

@akshaymankar akshaymankar left a comment


Nits inline.

Comment thread changelog.d/5-internal/WPB-23644-refactor Outdated
Comment thread libs/wire-api/src/Wire/API/Routes/Public/Brig.hs Outdated
Comment thread libs/wire-api/src/Wire/API/App.hs Outdated

@fisx fisx force-pushed the WPB-23644-fix-subsystems-dep-graph branch from f560ec9 to 31b6e3a Compare March 9, 2026 09:36
@fisx
Contributor Author

fisx commented Mar 9, 2026

The CI errors consistently happen on this branch and nowhere else, but I don't understand:

STDERR:
  I0309 10:14:26.035245    2534 warnings.go:110] "Warning: would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (container \"brig-index\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container \"brig-index\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or container \"brig-index\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container \"brig-index\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")"
  Error: failed post-install: 1 error occurred:
  	* job brig-index-migrate-data failed: BackoffLimitExceeded

This PR doesn't change helm charts or values at all, especially not the security context stuff. How can it be related to this branch? Or is this a red herring?

@fisx
Contributor Author

fisx commented Mar 10, 2026


The logs contain network errors involving cassandra. But I'm only changing the migration logic in this PR, nothing in helm charts or values where PodSecurity would mean anything.

The riddle deepens...
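For reference only (this PR doesn't touch the charts): the `restricted:latest` profile from the warning would be satisfied by a container securityContext along these lines, with each field taken directly from the warning text:

```yaml
# Settings the PodSecurity "restricted" profile demands for the
# brig-index container, per the warning above; not part of this PR.
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```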

@fisx fisx force-pushed the WPB-23644-fix-subsystems-dep-graph branch from f02ae2f to eea2326 Compare March 11, 2026 08:41
fisx and others added 5 commits March 12, 2026 14:25
Co-authored-by: Akshay Mankar <akshay@wire.com>
Co-authored-by: Akshay Mankar <akshay@wire.com>
@fisx fisx force-pushed the WPB-23644-fix-subsystems-dep-graph branch from fda21d2 to b6654a3 Compare March 12, 2026 13:25
@fisx fisx merged commit cadd47c into develop Mar 12, 2026
10 checks passed
@fisx fisx deleted the WPB-23644-fix-subsystems-dep-graph branch March 12, 2026 14:18

Labels

ok-to-test: Approved for running tests in CI; overrides not-ok-to-test if both labels exist

4 participants