design-proposal: cross-cluster mesh for tenant access to host services #7

kvaps wants to merge 5 commits into cozystack:main
Conversation
Propose a controller-driven design that wires Cozystack tenant clusters into a node-to-node WireGuard mesh with the host cluster, using Kilo's mesh-granularity=cross topology. The motivating use case is exposing a Rook-managed Ceph cluster to tenant pods.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Code Review
This pull request introduces a design proposal for a cross-cluster mesh using Kilo to allow tenant clusters to access host-cluster services like Ceph. The design utilizes a bipartite node-to-node topology managed by a new operator. The review feedback provides several technical improvements, including addressing MTU overhead for WireGuard, analyzing scalability limits of the N x M mesh, implementing fallback logic for node endpoints, using finalizers for robust resource cleanup, and expanding IP disjointness checks to include Service CIDRs.
> ### Topology
>
> Both the host cluster and every participating tenant cluster run Kilo with `--mesh-granularity=cross`. In this mode every node is a topology segment of one. Within a single logical location (e.g. all nodes inside one cluster) traffic uses the underlying CNI without WireGuard. Across logical locations every node holds a direct WireGuard tunnel to every node in the other location.
The proposal should address MTU configuration for the cross-cluster mesh. Since WireGuard adds encapsulation overhead (typically 60-80 bytes), packets from pods using the default 1500 MTU will exceed the tunnel MTU, leading to fragmentation or packet loss. The design should specify how this will be handled, for example, by configuring the Kilo interface MTU and ensuring MSS clamping is active or by adjusting the CNI MTU in the tenant clusters.
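The overhead arithmetic behind this concern can be sketched as follows (a back-of-the-envelope calculation assuming an IPv4 underlay; the exact default Kilo picks for its interface MTU should be confirmed against the Kilo documentation):

```python
# WireGuard encapsulation overhead per packet over an IPv4 underlay (assumption):
OUTER_IPV4 = 20   # outer IPv4 header
OUTER_UDP = 8     # outer UDP header
WG_HEADER = 16    # message type/reserved (4) + receiver index (4) + nonce counter (8)
WG_AUTH_TAG = 16  # ChaCha20-Poly1305 authentication tag
WG_OVERHEAD = OUTER_IPV4 + OUTER_UDP + WG_HEADER + WG_AUTH_TAG  # 60 bytes

LINK_MTU = 1500                       # typical Ethernet MTU on the underlay
tunnel_mtu = LINK_MTU - WG_OVERHEAD   # largest inner packet that avoids fragmentation

print(tunnel_mtu)  # 1440; an IPv6 underlay costs another 20 bytes (40-byte outer header)
```

So pods sending full 1500-byte frames across the tunnel will exceed the usable MTU by at least 60 bytes unless the CNI MTU is lowered or MSS clamping is in place.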
> For the host ↔ tenant pair, the result is a full bipartite mesh: every tenant node has a tunnel to every host node, and vice versa. The number of tunnels is `N × M` where N is the tenant node count and M is the host node count; this is intentional and is what enables the throughput and HA properties described below.
The N x M bipartite mesh topology may face scalability challenges as the number of nodes increases. For instance, a 100-node host cluster and a 100-node tenant cluster would result in 10,000 WireGuard tunnels in total, with 100 peers configured on every node. The proposal should include an analysis of the practical limits for the number of peers the kg-agent and the Linux kernel can manage before performance or control-plane stability is impacted.
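For the scale discussion it helps to keep total tunnels and per-node peer counts separate, since the kernel and kg-agent costs are per node. A small sketch:

```python
def bipartite_tunnel_counts(tenant_nodes: int, host_nodes: int) -> dict:
    """Tunnel/peer counts for a full bipartite host <-> tenant mesh."""
    return {
        "total_tunnels": tenant_nodes * host_nodes,  # N x M across the whole pair
        "peers_per_tenant_node": host_nodes,         # one tunnel to each host node
        "peers_per_host_node": tenant_nodes,         # one tunnel to each tenant node
    }

counts = bipartite_tunnel_counts(100, 100)
# 10,000 tunnels in total, but each individual node only manages 100 peers
```

The per-node peer count, not the global tunnel count, is what bounds kernel and agent load on any single machine.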
> For each `TenantMeshLink`, the operator:
>
> 1. Validates `spec.podCIDR` against all other `TenantMeshLink` objects and the host cluster's pod-CIDR; any overlap sets `PodCIDRConflict=True` and aborts further reconciliation for that tenant.
> 2. Lists host cluster Nodes; for each node, ensures a `Peer` exists in the tenant cluster with: `publicKey` from the `kilo.squat.ai/wireguard-public-key` annotation, `endpoint` from `kilo.squat.ai/force-endpoint`, and `allowedIPs` containing the node's per-node pod-CIDR.
The operator should have a fallback strategy if the kilo.squat.ai/force-endpoint annotation is missing on a host node. Without a defined endpoint, tenant nodes will not be able to initiate the WireGuard handshake. Consider falling back to the Node's ExternalIP or InternalIP, or surfacing a specific error in the TenantMeshLink status.
```diff
-2. Lists host cluster Nodes; for each node, ensures a `Peer` exists in the tenant cluster with: `publicKey` from the `kilo.squat.ai/wireguard-public-key` annotation, `endpoint` from `kilo.squat.ai/force-endpoint`, and `allowedIPs` containing the node's per-node pod-CIDR.
+2. Lists host cluster Nodes; for each node, ensures a `Peer` exists in the tenant cluster with: `publicKey` from the `kilo.squat.ai/wireguard-public-key` annotation, `endpoint` from `kilo.squat.ai/force-endpoint` (falling back to Node IP if missing), and `allowedIPs` containing the node's per-node pod-CIDR.
```
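The suggested fallback order could look like the sketch below. Plain dicts stand in for Node objects, and the default WireGuard port `51820` is an assumption, not something the proposal specifies:

```python
def resolve_endpoint(node: dict, wg_port: int = 51820):
    """Pick a WireGuard endpoint for a host Node: prefer the explicit
    force-endpoint annotation, then ExternalIP, then InternalIP."""
    forced = node.get("annotations", {}).get("kilo.squat.ai/force-endpoint")
    if forced:
        return forced
    addrs = {a["type"]: a["address"] for a in node.get("addresses", [])}
    for kind in ("ExternalIP", "InternalIP"):
        if kind in addrs:
            return f"{addrs[kind]}:{wg_port}"
    return None  # no usable endpoint: surface a status condition, skip the Peer
```

Returning `None` rather than writing a Peer without an endpoint keeps the failure visible in `TenantMeshLink` status instead of producing a tunnel that can never hand-shake.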
> 3. Lists tenant cluster Nodes; for each node, ensures a `Peer` exists in the host cluster with: `publicKey` from the tenant Node's annotation, `allowedIPs` containing the tenant per-node pod-CIDR, no `endpoint` (the tenant initiates).
> 4. Removes orphaned Peer objects on either side using a label selector tied to the `TenantMeshLink` name.
To ensure that Peer objects created in the tenant cluster are reliably cleaned up when a TenantMeshLink is deleted, the operator should use Kubernetes finalizers. Without finalizers, if the operator is unavailable or the tenant API is unreachable during deletion, orphaned Peer resources will remain in the tenant cluster.
```diff
-4. Removes orphaned Peer objects on either side using a label selector tied to the `TenantMeshLink` name.
+4. Uses finalizers to ensure all remote Peer objects are removed from the tenant cluster before the TenantMeshLink is deleted.
+5. Removes orphaned Peer objects on either side using a label selector tied to the TenantMeshLink name.
```
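A minimal sketch of the finalizer flow. The finalizer name and the tenant-client interface are illustrative, not part of the proposal:

```python
FINALIZER = "meshlink.cozystack.io/peer-cleanup"  # hypothetical finalizer name

def reconcile_deletion(link: dict, tenant_peers) -> bool:
    """On TenantMeshLink deletion: delete the remote Peers first, then drop
    the finalizer so Kubernetes can garbage-collect the object.
    Returns True once the finalizer has been removed."""
    finalizers = link["metadata"].setdefault("finalizers", [])
    if FINALIZER not in finalizers:
        return True  # nothing left to clean up
    # If the tenant API is unreachable this call raises and the reconcile is
    # retried later; the finalizer keeps the TenantMeshLink pinned until then.
    tenant_peers.delete_by_label(f"meshlink={link['metadata']['name']}")
    finalizers.remove(FINALIZER)
    return True
```

The key property is ordering: the object cannot disappear from the host API until the tenant-side cleanup has actually succeeded.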
> The constraints on pod-CIDRs are:
>
> - The host pod-CIDR and every tenant pod-CIDR must be pairwise disjoint.
The disjointness requirement should be extended to include the Service CIDRs of both clusters. Overlaps between a tenant's pod-CIDR and the host's Service CIDR (or vice versa) can cause routing conflicts, making it impossible for pods to reach internal services or the advertised host services.
```diff
-- The host pod-CIDR and every tenant pod-CIDR must be pairwise disjoint.
+- The host pod-CIDR, host service-CIDR, and every tenant pod-CIDR must be pairwise disjoint.
```
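A pairwise-disjointness check that covers pod- and service-CIDRs together is a few lines with the standard library (the CIDR values below are examples only):

```python
import ipaddress

def first_overlap(cidrs):
    """Return the first overlapping pair of CIDRs, or None if all are disjoint."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                return str(a), str(b)
    return None

# Host pod-CIDR, host service-CIDR, tenant pod-CIDR: all must be disjoint.
print(first_overlap(["10.244.0.0/16", "10.96.0.0/12", "10.112.0.0/16"]))  # None
```

Feeding the service-CIDRs of both sides into the same list makes the extended requirement a one-line change to the validation step.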
Adjust the proposal to reflect that the controller will be developed as an independent project under the kilo-io organization, per confirmed interest from Kilo maintainer @squat. Generalize the CRD from a tenant-specific TenantMeshLink to a tenant-agnostic ClusterMesh that references peer clusters through a map of kubeconfig Secrets. Move all tenant semantics into a dedicated Cozystack integration section that also accounts for the kubernetes-nodes split (PR cozystack#8) so a single ClusterMesh covers multi-location, multi-backend tenants.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
…oller allowlist + RBAC monopoly

Drop the planned admission webhook. Instead, harden the design with two controls owned by the host-cluster operator:

- The controller is the only principal with write access to kilo.squat.ai/Peer in any participating cluster. Tenant-provisioning, the dashboard, and cluster admins can author ClusterMesh objects (intent) but never touch Peer directly.
- The controller is configured at deploy time with a subnet allowlist (--allowed-cidr). Any ClusterMesh whose allowedNetworks fall outside that list is rejected with a status condition before any Peer is written. The allowlist cannot be widened through the ClusterMesh API.

Collapse the per-cluster podCIDR + advertise fields into a single allowedNetworks list, since both are now validated against the same allowlist and can be expressed uniformly.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
…ontainment

Make the WG-IP threat model explicit. A tenant root that tampers with a Node's kilo.squat.ai/wireguard-ip annotation must not be able to inject a Peer with attacker-chosen allowedIPs onto the host side. Add:

- A second controller-level allowlist, --allowed-wireguard-cidr, that bounds where any kilo0 interface in the mesh may live. spec.clusters carries no WG-CIDR field; the WG address space is host-admin-owned infrastructure, not part of per-mesh data.
- Per-Node validation alongside the existing mesh-level checks: WG-IP must be /32 (or /128), in --allowed-wireguard-cidr, and unique within its cluster. PodCIDRs must be in allowedNetworks. Failures skip the offending Node only; the mesh stays Ready.
- A primary-boundary statement in Security: the host's exposure to a tenant peer is bounded exclusively by the host-side Peer.allowedIPs, so anything the tenant does to its own kilo0, routes, or kg-agent post-reconcile cannot widen that bound.
- Cozystack integration spelled out for both allowlists: pod-pool to --allowed-cidr, WG-pool to --allowed-wireguard-cidr; tenant provisioning allocates from each.

WG-IP is now restored to Peer.allowedIPs (standard Kilo Peer shape), since the new allowlist makes that safe and it brings cross-cluster diagnostics back.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
…Networks list

Drop the second --allowed-wireguard-cidr allowlist. WG-CIDR is just another entry in the same allowedNetworks list as pod-CIDR and service-CIDR; per-Node WG-IP containment is validated against the cluster's own allowedNetworks rather than against a separate global pool. A tenant root cannot widen its surface to host pod/WG/service-CIDR because those CIDRs live in the host's allowedNetworks (a different spec.clusters entry), and per-Node containment rejects out-of-range annotations.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
> - Pods in any peer cluster can reach selected services in another peer cluster as if they were on the local network. (Cozystack use case: tenant pods reach host Ceph monitors, OSDs and MDS daemons.)
> - Nodes added to or removed from any participating cluster are wired into / detached from the mesh automatically, without per-node manual configuration.
> - A compromise of a peer cluster (up to and including full root on a peer node) cannot affect routing in another peer cluster beyond the network surface that was explicitly granted, and cannot affect unrelated peers.
unless it is the cluster running the controller, in which case I guess they do get perms on peer clusters, but that's not new
```yaml
# The controller's own cluster — no kubeconfig needed.
local: true
allowedNetworks:
  - 10.4.0.0/16  # WG-CIDR
```
Should any of these be named fields in the struct rather than open fields in allowedNetworks? If the WireGuard mesh CIDR and the Pod CIDR are mandatory then maybe they get special treatment? Alternatively, using the open list can later be easily migrated into the stricter design
I guess these are all technically optional and it just determines which networks from the Peer resources we want to honor / validate
> **Mesh-level (halts reconciliation on failure):**
>
> 1. Every CIDR in every `spec.clusters[*].allowedNetworks` is a subset of the controller's `--allowed-cidr` allowlist; otherwise `NetworksNotAllowed=True`.
This is kind of annoying, it means that there is functionally no difference between the cluster admin and the mesh admin. If you want to create a new mesh, then you have to edit the mesh controller deployment to add the allow list. I need to think about this a bit. What are we hoping to defend against here?
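Whatever the policy question resolves to, the mechanical check itself is simple. A sketch (illustrative, with example CIDRs) that returns the offending networks that would set `NetworksNotAllowed=True`:

```python
import ipaddress

def disallowed_networks(allowed_cidrs, declared_cidrs):
    """Every declared network must be a subnet of some allowlist entry."""
    allowed = [ipaddress.ip_network(c) for c in allowed_cidrs]
    return [d for d in declared_cidrs
            if not any(ipaddress.ip_network(d).subnet_of(a)
                       for a in allowed
                       if a.version == ipaddress.ip_network(d).version)]

# --allowed-cidr=10.0.0.0/8: CIDRs inside it pass, anything else is rejected.
print(disallowed_networks(["10.0.0.0/8"], ["10.4.0.0/16", "192.168.0.0/24"]))
# ['192.168.0.0/24']
```

A non-empty result is reported in status before any Peer is written, matching the mesh-level ordering described above.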
```yaml
namespace: kilo
spec:
  clusters:
    cluster-a:
```
Maybe in keeping with Kubernetes convention this should become a list of named structs, like how a Pod contains a list of named containers.
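A hypothetical list-shaped variant of `spec.clusters` might look like the fragment below; all field names besides `allowedNetworks` are illustrative, not part of the current proposal:

```yaml
spec:
  clusters:
    - name: cluster-a              # the map key becomes an explicit field,
      local: true                  # mirroring how a Pod lists named containers
      allowedNetworks:
        - 10.4.0.0/16
    - name: cluster-b
      kubeconfigSecret: cluster-b-kubeconfig
      allowedNetworks:
        - 10.5.0.0/16
```

The list form also plays better with strategic-merge patching and CEL validation rules keyed on `name`.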
> - **`--allowed-cidr` allowlist** bounds what `spec.clusters[*].allowedNetworks` can ever declare. Pod-CIDRs, WG-CIDRs, and service-CIDRs all flow through the same allowlist. A user who can author `ClusterMesh` objects cannot widen the address surface beyond what the host admin pre-approved.
> - **Per-Node containment** validates that every observed annotation (`Node.Spec.PodCIDRs`, `kilo.squat.ai/wireguard-ip`) lies within the cluster's own `allowedNetworks`. A tenant root forging an annotation that points at the host pod-CIDR, host WG-CIDR, or any other CIDR the tenant did not declare itself is rejected — the offending Node is skipped and never appears as a Peer on the host side.
> - **Trust direction by kubeconfig placement.** Whichever cluster holds the controller and the kubeconfig Secrets is the side that drives writes; the side whose kubeconfig is held cannot write back. In Cozystack, only the host holds tenant kubeconfigs — trust flows host → tenant.
> - **Cross-mesh isolation.** Each `ClusterMesh`'s Peers are labelled with the mesh name; the controller never deletes or modifies Peers belonging to a different mesh, and `allowedNetworks` overlap between meshes (not just within a single mesh) is rejected.
We should probably also add labels for the source cluster name so if two controllers running on different hosts are managing meshes on the same tenant (some triangle where the two hosts don't know about each other) then the controllers are less likely to compete for ownership of Peers if the mesh object has the same name.
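The per-Node WG-IP containment check from the quoted bullets can be sketched as follows (illustrative, using the standard library):

```python
import ipaddress

def wg_ip_contained(wg_ip: str, allowed_networks) -> bool:
    """A Node's kilo.squat.ai/wireguard-ip must be a single host address
    (/32 or /128) that lies inside the cluster's own allowedNetworks."""
    net = ipaddress.ip_network(wg_ip)
    if net.prefixlen != net.max_prefixlen:
        return False  # reject ranges masquerading as a single host address
    return any(net.subnet_of(allowed)
               for allowed in map(ipaddress.ip_network, allowed_networks)
               if allowed.version == net.version)

# A forged annotation pointing outside the cluster's own networks is rejected:
print(wg_ip_contained("10.4.0.7/32", ["10.5.0.0/16"]))  # False
```

Failing this check skips the offending Node only, so the mesh stays Ready while the forged annotation never materializes as a host-side Peer.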
> 2. **Cluster identifier scope**: should `spec.clusters` keys be free-form strings or follow a stricter schema (e.g. DNS-1123 labels) so they can be reused as label values? Likely the latter; to confirm during implementation.
> 3. **Transitive routing**: with three or more clusters in the same `ClusterMesh`, the controller currently builds a full mesh. Should it support partial topologies (e.g. star)? Out of scope for v1; the CRD shape allows it later.
> 4. **Multi-controller scenarios**: in a deployment where two clusters each run their own controller, how should they coordinate? Likely via a "leader" cluster identified in the CRD; deferred.
> 5. **Per-peer opt-in for received CIDRs**: today `allowedNetworks` is a unilateral declaration on the source side, plus a global allowlist on the controller. Should there additionally be a per-peer `acceptedNetworks` field, so a peer can refuse to accept some of what another peer publishes? Likely unnecessary given the controller-level allowlist, but worth revisiting once there are multi-tenant deployments with heterogeneous policies.
The more I read about the controller allowlist, the more I actually started leaning in this direction. Maybe this needs to be a flag on Kilo, actually (or an entirely new PeerClass resource that declares what allowed IPs are permissible for every Peer in a cluster). This would allow individual clusters to guard against peers being created by a rogue cluster mesh controller. It's not blocking: this is orthogonal Kilo work that would be great to upstream to improve the administration of Kilo meshes.
Summary
Adds a design proposal for cross-cluster connectivity between Cozystack-managed tenant clusters and the host cluster.
The motivating use case: a host cluster running Ceph (managed by Rook) that should be reachable from inside tenant clusters as if it were local storage. Standard single-gateway approaches (Submariner, Kilo's default `mesh-granularity=location`) bottleneck Ceph traffic; this proposal uses Kilo's `mesh-granularity=cross` (squat/kilo#328) to build a node-to-node mesh that scales linearly with cluster size and handles Rook-driven failover without controller intervention on the data path.

The proposal covers:

- A host-side operator (`cozystack-meshlink-operator`) and `TenantMeshLink` CRD for managing Peer objects on both sides

Looking for feedback on the open questions, especially the upstream Kilo PR #328 strategy and whether tenant-side Kilo should be a hard requirement.
Test plan
This is a design proposal; no code yet. Implementation testing is scoped in the proposal and will follow in implementation PRs: