Released on 2025-11-07.
Release highlights
Overview of breaking changes
The following components of the SDP contain breaking changes for this release:
Traffic between Open Policy Agent (OPA) and clients can be encrypted using TLS by enabling it in the OPA custom resource. The authorizers for Trino and NiFi automatically integrate with these secured OPA deployments and verify the authenticity of the server certificates when TLS for OPA is enabled. Support for other operators will be rolled out in a future release. See the TLS encryption documentation page and opa-operator#581.
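As a rough sketch, enabling TLS on the OPA custom resource could look like the following. The exact field layout under `spec.clusterConfig` is an assumption; consult the TLS encryption documentation page and opa-operator#581 for the actual schema.

```yaml
# Hypothetical sketch; field names under clusterConfig are assumptions.
apiVersion: opa.stackable.tech/v1alpha1
kind: OpaCluster
metadata:
  name: opa
spec:
  image:
    productVersion: 1.8.0
  clusterConfig:
    tls:
      serverSecretClass: tls # SecretClass providing the server certificate (assumption)
```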
All operators now emit a warning message on startup and at a regular interval when they may have reached end-of-support. Most of our operators reach end-of-support one year after they have been released, which roughly translates to three SDP releases. This is in accordance with our support policy. The interval can be adjusted, or the check can be disabled completely, via Helm values.
maintenance:
  endOfSupportCheck:
    enabled: true
    mode: offline # only offline is currently supported
    interval: 24h # a human-readable duration

See issues#733.
- The performance of the Trino rules in the end-to-end-security stack was improved. Batch queries are now significantly faster. See demos#289.
- A new demo has been added, showcasing the interaction between the Stackable Data Platform and ArgoCD to deploy resources managed in Git. The argo-cd-git-ops demo deploys Stackable operators and Airflow via ArgoCD, uses Sealed Secrets to safely deploy secrets and credentials, and synchronizes Airflow DAGs via Git. See demos#205.
- The Airflow triggerer component is now supported. This can be used with DAGs utilizing deferrable operators to keep worker slots free and enhance high availability (HA). See airflow-operator#200.
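As a sketch, the triggerer could be enabled as its own role in the AirflowCluster resource. The role name and field layout below are assumptions; see airflow-operator#200 for the actual CRD shape.

```yaml
# Hypothetical sketch of an AirflowCluster with a triggerer role.
apiVersion: airflow.stackable.tech/v1alpha1
kind: AirflowCluster
metadata:
  name: airflow
spec:
  # ... image, clusterConfig, webservers, schedulers, workers ...
  triggerers: # role name is an assumption
    roleGroups:
      default:
        replicas: 1
```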
- The airflow-scheduled-job demo for Airflow has been extended to showcase some of the new Airflow 3.x features in the context of SDP, such as event scheduling (with Kafka), triggerer actions, and user authorization with OPA and the SDP OPA authorizer. See demos#223.
Warning: There are multiple known issues with the experimental KRaft support. See the known issues section on the KRaft Controller page for an up-to-date list.
This release adds experimental support for KRaft-managed Kafka clusters. KRaft Controllers can be deployed instead of Apache ZooKeeper to manage the state of Kafka. KRaft is supported by all Kafka versions provided by SDP, and starting with Kafka 4 it is the only cluster management option available. See kafka-operator#889.
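A minimal sketch of a KRaft-based KafkaCluster, assuming a dedicated controller role takes the place of the ZooKeeper reference. The role and field names are assumptions; see kafka-operator#889 and the KRaft Controller page for the real schema.

```yaml
# Hypothetical sketch; exact role/field names may differ.
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: kafka
spec:
  image:
    productVersion: 4.1.0
  brokers:
    roleGroups:
      default:
        replicas: 3
  controllers: # KRaft controllers instead of a ZooKeeper reference (assumption)
    roleGroups:
      default:
        replicas: 3
```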
A patch was added which allows disabling the SNI (Server Name Indication) checks for NiFi. The workaround is documented in the troubleshooting section. This can be useful in certain scenarios where the external name is not in the certificates used by NiFi. See nifi-operator#812.
- The ServiceAccount of Spark applications can now be overridden with podOverrides. Previously, the application ServiceAccount was passed as a command line argument to spark-submit, so it was not possible to overwrite it with podOverrides for the driver and executors. This CLI argument has now been moved to the Pod templates of the individual roles. See spark-k8s-operator#617.
- This release adds experimental support for Spark 4.0.1. The support is marked as experimental because Spark 4.0.1 has known compatibility issues with Apache HBase and Apache Iceberg. See spark-k8s-operator#586.
This release adds a dedicated per-rolegroup -metrics Service, which can be used to scrape Prometheus metrics.
Additionally, the operator exposes more Prometheus metrics, such as successful or failed bundle loads and information about the OPA environment.
The Stackable Data Platform now provides an operator for OpenSearch. We initially support version 3.1.0, which is also marked as the LTS line going forward.
OpenSearch is a powerful search and analytics engine built on Apache Lucene.
OpenSearch clusters can be defined via custom resources similar to other Stackable operators.
For instance, a cluster with OpenSearch nodes of different types and replication factors can be defined.
Logging, monitoring, and service exposition with ListenerClasses are supported as well.
As the operator is still in an early development phase, special care was taken to allow extensive overriding with configOverrides and podOverrides.
The operator only manages the OpenSearch back-end. The OpenSearch Dashboards front-end can be installed via the official Helm chart. Stackable provides a supported image for OpenSearch Dashboards which can be used with this Helm chart.
See the OpenSearch documentation page for more details.
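For illustration, an OpenSearch cluster with differently-sized node role groups might be declared like this. The CRD shape below is an assumption modeled on other Stackable operators; see the OpenSearch documentation page for the real schema.

```yaml
# Hypothetical sketch of an OpenSearchCluster custom resource.
apiVersion: opensearch.stackable.tech/v1alpha1
kind: OpenSearchCluster
metadata:
  name: opensearch
spec:
  image:
    productVersion: 3.1.0
  nodes:
    roleGroups:
      cluster-manager: # node type per role group is an assumption
        replicas: 3
      data:
        replicas: 5
```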
- The operator now supports configuring fault-tolerant execution via the TrinoCluster CRD. See the documentation page and trino-operator#779.
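For orientation, fault-tolerant execution in Trino itself is driven by the retry-policy property in config.properties. Whether the operator exposes a dedicated CRD field or relies on configOverrides is an assumption here; the linked documentation page has the actual schema.

```yaml
# Hypothetical sketch using the generic configOverrides mechanism.
apiVersion: trino.stackable.tech/v1alpha1
kind: TrinoCluster
metadata:
  name: trino
spec:
  coordinators:
    configOverrides:
      config.properties:
        retry-policy: TASK # Trino's fault-tolerant execution mode
```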
- The Trino client spooling protocol can now be configured using the spec.clusterConfig.clientProtocol.spooling property. Users can configure an S3Connection and the location of spooling segments. Additional properties can be added using the configOverrides mechanism for the spooling-manager.properties file. See the client spooling protocol documentation page and trino-operator#793.
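A sketch of the spooling configuration, built around the property path named above. The nested field names (the S3Connection reference and the segment location) are assumptions; see the client spooling protocol documentation page for the real shape.

```yaml
# Partly hypothetical sketch around spec.clusterConfig.clientProtocol.spooling.
apiVersion: trino.stackable.tech/v1alpha1
kind: TrinoCluster
metadata:
  name: trino
spec:
  clusterConfig:
    clientProtocol:
      spooling:
        s3Connection: # reference to an S3Connection resource (field name is an assumption)
          reference: spooling-s3
        location: s3://spooling-segments/ # assumption
```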
37 CVEs were fixed in the Stackable product images. This includes 2 critical and 18 high-severity CVEs.
Breaking: With the release of SDP 25.11, we now sign container images and Helm charts using cosign 3 and its new bundle format, benefiting from the OCI Referrers API. This means that to verify signatures of this and future releases, users need cosign 3. Verification with cosign 2 is also possible if you are using version 2.6.0 or above and provide the additional flag --new-bundle-format, but cosign 3 is recommended for full compatibility and functionality. For guidance on how to verify image signatures, consult the Stackable signature verification documentation.
This release includes various improvements to metrics collection and exposition. Previously, some operators did not expose Prometheus annotations containing the HTTP(S) scheme or the metrics path and port. These annotations are now available, which allows custom relabel configs in Prometheus to scrape the metrics endpoints:
- Apache Airflow: airflow-operator#698.
- Apache Druid: druid-operator#761.
- Apache Hive: hive-operator#641.
- Apache Kafka: kafka-operator#897.
- Apache NiFi: nifi-operator#855.
- Apache Spark: spark-k8s-operator#619.
- Apache Superset: superset-operator#671.
- Apache ZooKeeper: zookeeper-operator#978.
- Open Policy Agent: opa-operator#767.
- Trino: trino-operator#807.
In addition to the annotation changes listed above, the following breaking changes were made:
- Breaking: Apache HBase: The prometheus.io/scrape label is now only available on the metrics Service (instead of the headless Service), which uses metrics as the port name instead of the previous ui-http/ui-https port name. See hbase-operator#701.
- Breaking: Apache Hadoop: The metrics Service previously exposed the JMX metrics via the metrics port. In this release, the JMX metrics have been moved to the jmx-metrics port. The metrics port now instead exposes the native Prometheus metrics. Warning: Care needs to be taken, because the metrics format has changed. See hdfs-operator#721.
- Breaking: Apache Kafka: The <cluster>-<role>-<rolegroup> Service was replaced with <cluster>-<role>-<rolegroup>-headless and <cluster>-<role>-<rolegroup>-metrics Services. See kafka-operator#897.
- All operators now correctly handle multiple CA certificates. This can be the case if the Stackable secret-operator auto-rotated the CA certificate or if multiple CA certificates are present in a SecretClass. See issues#764 for more details.
- New Helm values have been added to the operators for setting priorityClassName on the resulting Pods, giving administrators greater control over scheduling. When left unconfigured, the field will not be present on the resulting Pods. See issues#765 for more details.

# Listener operator
csiProvisioner:
  priorityClassName: ...
csiNodeDriver:
  priorityClassName: ...

# Secret operator
controllerService:
  priorityClassName: ...
csiNodeDriver:
  priorityClassName: ...

# All other operators
priorityClassName: ...
- Previously, log entries for some supported products were occasionally corrupted. These issues have now been resolved by implementing multiple fixes in various affected (upstream) projects. See the tracking issue issues#778 for more details.
- Pull request vectordotdev/vector#24028 was raised to fix log entries with multi-character delimiters. As of the SDP 25.11.0 release, this PR has not been merged yet, but the fix is applied manually as a patch. See docker-images#1323.
- An XMLLayout multithreading issue in Logback has been fixed by raising qos-ch/logback#978. This fix has been rolled out in all affected products:
  - Apache Kafka: docker-images#1330
  - Apache NiFi: docker-images#1314
  - Apache ZooKeeper: docker-images#1320
- The JWT key is now created internally by the operator. The same applies to the key previously defined in the credentials secret under connections.secretKey: this change is non-breaking, as connections.secretKey will be ignored if supplied. See airflow-operator#686.
- Database initialization routines, which are idempotent and run by default, can be deactivated via the new databaseInitialization.enabled field, e.g. to help diagnose or troubleshoot start-up issues. Warning: Turning off these routines is an unsupported operation, as subsequent updates to a running Airflow cluster can result in broken behaviour due to inconsistent metadata. Only use this setting if you know what you are doing!
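Using the field named above, disabling the routines could look like this; the placement under spec.clusterConfig is an assumption.

```yaml
# databaseInitialization.enabled as named above; the surrounding path is an assumption.
spec:
  clusterConfig:
    databaseInitialization:
      enabled: false # unsupported; only for diagnosing start-up issues
```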
- The Airflow DAG-processor component now has an optional individual role in the CRD, allowing it to be separately configured (e.g. logging, resources) and run in a dedicated container. See airflow-operator#637.
- Previously, in setups where multiple web/API servers were used, only one instance was able to automatically access the connection passwords stored in the database. This could be solved by setting the Fernet key explicitly, but this detail is now taken care of internally by the operator. See airflow-operator#694.
The Apache NiFi monitoring documentation page has been updated to include guidance on how to scrape NiFi 2 metrics using mTLS. See nifi-operator#813.
- Breaking: The per-rolegroup Services now only expose the HTTP port and contain a -headless suffix to better indicate their purpose and to be consistent with other operators. See opa-operator#748.
- The User Info Fetcher (UIF) is no longer marked as experimental. See opa-operator#751.
The severity of Pod eviction error logs has been reduced. Previously, the operator produced many ERROR-level log entries containing "Cannot evict pod as it would violate the pod’s disruption budget". With this change, the log level is reduced to INFO.
See commons-operator#372.
- Breaking: ListenerClass .spec.externalTrafficPolicy now defaults to null (leaving this field unconfigured; previously it defaulted to Local). This improves LoadBalancer support across various Kubernetes environments. The Kubernetes API server will apply its default value of Cluster when this field is unset. See listener-operator#347.

Details: You are affected if all of the following conditions apply:
- You are using ListenerClasses.
- You did NOT explicitly set .spec.externalTrafficPolicy in your ListenerClass definitions so far.
- You experience performance degradation after upgrading or were relying on the old default behavior for other reasons.
What to do if you are affected: Explicitly set .spec.externalTrafficPolicy: Local in your ListenerClass to restore the previous behavior.

Why this change? The previous default of Local provides better performance but requires support by the LoadBalancer. Setting it to null allows the Kubernetes API server to apply its default (Cluster), which works across more environments.
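Restoring the previous behavior with the field named above could look like this; serviceType is shown only for context (keep your existing value).

```yaml
apiVersion: listeners.stackable.tech/v1alpha1
kind: ListenerClass
metadata:
  name: external-lb
spec:
  serviceType: LoadBalancer # example value; keep your existing setting
  externalTrafficPolicy: Local # restores the pre-25.11 default
```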
- Breaking: The listener-operator Helm chart default value for preset changed from stable-nodes to ephemeral-nodes. This change improves reliability by making failures visible immediately rather than appearing unexpectedly during node rotations. See the tracking issue issues#770.

Details: You are affected if all of the following conditions apply:
- You are using the external-stable ListenerClass.
- You are using the ephemeral-nodes preset (the new default).
- Your Kubernetes cluster does NOT support LoadBalancers.
What changes: Previously, with the stable-nodes preset, the external-stable ListenerClass would use NodePorts and pin Pods to specific nodes. This could break at any point in time when nodes were rotated and the pinned Pods could no longer be scheduled to their pinned nodes. With the new ephemeral-nodes default, external-stable requires LoadBalancer support and will fail immediately on deployment if LoadBalancers are not available. This fail-fast behavior is preferred over potential long-term breakage.

What to do if you are affected:
- Option 1: Add LoadBalancer support to your Kubernetes cluster (recommended for production).
- Option 2: Explicitly set the preset to stable-nodes to restore the old behavior (but be aware it might break during node rotations):

# Using Helm
helm install listener-operator ... --set preset=stable-nodes

# Using stackablectl
stackablectl release install --listener-class-preset stable-nodes

Note that stackablectl automatically detects k3s and kind clusters and uses the stable-nodes preset since version 1.2.0.
- Option 3: Use the new .spec.pinnedNodePorts field to control node pinning.
- Breaking: Helm values have changed to allow for separate configuration of affinity, resources, etc. between the CSI Provisioner Deployment Pods and the CSI driver DaemonSet Pods.
Container resources for the CSI Controller Service (sdp/listener-operator in the Deployment):

# Before
controller:
  resources: ...

# After
csiProvisioner:
  controllerService:
    resources: ...
Container image/resources for the external-provisioner (sig-storage/csi-provisioner in the Deployment):

# Before
csiProvisioner:
  image: ...
  resources: ...

# After
csiProvisioner:
  externalProvisioner:
    image: ...
    resources: ...
Container resources for the CSI Node Service (sdp/listener-operator in the DaemonSet):

# Before
node:
  driver:
    resources: ...

# After
csiNodeDriver:
  nodeService:
    resources: ...
Container image/resources for the node-driver-registrar (sig-storage/csi-node-driver-registrar in the DaemonSet):

# Before
csiNodeDriverRegistrar:
  image: ...
  resources: ...

# After
csiNodeDriver:
  nodeDriverRegistrar:
    image: ...
    resources: ...
Settings that are now split:
# Before
podAnnotations: ...
podSecurityContext: ...
securityContext: ...
nodeSelector: ...
tolerations: ...
affinity: ...

# After
csiProvisioner:
  podAnnotations: ...
  podSecurityContext: ...
  nodeSelector: ...
  tolerations: ...
  affinity: ...
  controllerService:
    securityContext: ...
csiNodeDriver:
  podAnnotations: ...
  podSecurityContext: ...
  nodeSelector: ...
  tolerations: ...
  affinity: ...
  nodeService:
    securityContext: ...
See the tracking issue issues#763 and listener-operator#334 for more details.
- As part of the Helm value changes listed above, some resource names have also been updated. Warning: Generally no action is required, but that depends on whether your deployment scripts (e.g. Kustomize) or your monitoring/alerting system depend on any of the names and values.
- Deployment testing-listener-operator-deployment has been renamed to testing-listener-operator-csi-provisioner.
  - The app.kubernetes.io/role label value has changed from controller to provisioner.
  - Container listener-operator has been renamed to csi-controller-service.
- DaemonSet listener-operator-node-daemonset has been renamed to listener-operator-csi-node-driver.
  - The app.kubernetes.io/role label value has changed from node to node-driver.
  - Container listener-operator has been renamed to csi-node-service.
See listener-operator#334 for more details.
- Breaking: The Helm chart now deploys the secret-operator as two parts. This separation is needed for CRD versioning and conversion by the operator.
- The controller (which reconciles resources, maintains CRDs, and provides the CRD conversion webhook) runs as a Deployment with a single replica.
- The CSI Provisioner and Driver run on every Kubernetes cluster node via a DaemonSet (this behaviour is unchanged).
- The Helm values are adjusted in accordance with the changes above.

Both the external provisioner and the node driver registrar have been moved under csiNodeDriver:

# Before
csiProvisioner:
  resources: ...
csiNodeDriverRegistrar:
  resources: ...

# After
csiNodeDriver:
  externalProvisioner:
    resources: ...
  nodeDriverRegistrar:
    resources: ...
The secret-operator is now deployed through a Deployment and a DaemonSet. As such, the resources of both secret-operator instances can be controlled separately:
# Before
node:
  driver:
    resources: ...

# After
csiNodeDriver:
  nodeService:
    resources: ...
controllerService:
  resources: ...
The securityContext has been split into two parts:

# Before
securityContext: ...

# After
csiNodeDriver:
  nodeService:
    securityContext: ...
controllerService:
  securityContext: ...
Settings that are now split:
# Before
podAnnotations: ...
podSecurityContext: ...
nodeSelector: ...
tolerations: ...
affinity: ...

# After
csiNodeDriver:
  podAnnotations: ...
  podSecurityContext: ...
  nodeSelector: ...
  tolerations: ...
  affinity: ...
controllerService:
  podAnnotations: ...
  podSecurityContext: ...
  nodeSelector: ...
  tolerations: ...
  affinity: ...
Settings that have moved:
# Before
kubeletDir: ...

# After
csiNodeDriver:
  kubeletDir: ...
- As part of the Helm value changes listed above, some resource names have also been updated. Warning: Generally no action is required, but that depends on whether your deployment scripts (e.g. Kustomize) or your monitoring/alerting system depend on any of the names and values.
- DaemonSet secret-operator-daemonset has been renamed to secret-operator-csi-node-driver.
  - Container secret-operator has been renamed to csi-node-service.
See secret-operator#645.
- Breaking: The Stackable secret-operator no longer publishes retired and expired CA certificates:
  - CA certificates are by default retired one hour before they expire. This duration can be configured via autoTls.ca.caCertificateRetirementDuration.
  - Expired and retired CA certificates are no longer published in Volumes and TrustStores.

See the SecretClass and TrustStore documentation as well as secret-operator#650.
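Using the configuration path named above, the retirement duration could be set on the autoTls backend of a SecretClass; treat the surrounding structure as a sketch and check the SecretClass documentation for the exact schema.

```yaml
apiVersion: secrets.stackable.tech/v1alpha2
kind: SecretClass
metadata:
  name: tls
spec:
  backend:
    autoTls:
      ca:
        secret:
          name: secret-provisioner-tls-ca
          namespace: default # namespace of the CA Secret (example value)
        caCertificateRetirementDuration: 1h # retire CA certs this long before expiry
```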
- The custom samAccountName generation is no longer marked as experimental. To make this possible, the secret-operator is the first Stackable operator to support CRD versioning.
- In version v1alpha2 of the SecretClass, the experimentalGenerateSamAccountName field was renamed to generateSamAccountName. See the SecretClass reference for more details.
- The stored version of SecretClass is v1alpha2. It is, however, still possible to apply and retrieve SecretClasses in v1alpha1. The resources are automatically converted by the operator.
- The operator now deploys the CRDs for SecretClass and TrustStore by itself instead of relying on the Helm chart. This enables the operator to automatically rotate and update the TLS certificate (caBundle) used for the conversion webhook. To enable this mechanism, the operator needs the following additional permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ...
rules:
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - create
      - patch
  - apiGroups:
      - secrets.stackable.tech
    resources:
      - secretclasses
      - truststores
    verbs:
      - create
      - patch
These permissions are automatically granted when using the Helm Chart, but need to be manually set if other deployment mechanisms are used.
The maintenance of CRDs (and default custom resources) can be disabled via Helm:

maintenance:
  customResourceDefinitions:
    maintain: false
Warning: When CRD maintenance is disabled, the operator will not deploy and manage the CRDs. The CRDs need to be deployed manually, and the conversion webhook is disabled. As a result, only v1alpha1 SecretClasses can be used. Only use this setting if you know what you are doing!

Note: Currently, the maintenance of CRDs and the deployment of default custom resources, such as the tls SecretClass, are tied together. This is slated to change in an upcoming SDP release.
See secret-operator#634.
- The certManager backend is no longer marked as experimental. In version v1alpha2 of the SecretClass, the experimentalCertManager field was renamed to certManager. See the SecretClass reference for more details.
- The operator now supports exporting the TrustStore CA certificate information to Secrets (in addition to ConfigMaps). See secret-operator#597.
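A sketch of a TrustStore targeting a Secret instead of a ConfigMap. The secretClassName and format fields follow the TrustStore documentation; the field used to select the target kind is an assumption, see secret-operator#597 for the actual name.

```yaml
apiVersion: secrets.stackable.tech/v1alpha1
kind: TrustStore
metadata:
  name: truststore
spec:
  secretClassName: tls
  format: tls-pem
  targetKind: Secret # assumption: selects a Secret instead of a ConfigMap
```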
Previously, when using custom images in combination with a SHA digest like oci.stackable.tech/sdp/spark-k8s@sha256:c8b7…, all operators created invalid app.kubernetes.io/version labels for their applied resources.
This was fixed by checking and replacing invalid characters in the created labels when a SHA digest is used to select the custom image.
See operator-rs#1076.
- Previously, a missing OPA ConfigMap would crash the operator. With this release, we don’t panic on an invalid authorization config. See airflow-operator#667.
- Previously, OPA authorization for Airflow 3 was not working. With this release, the operator now sets the required environment variables. See airflow-operator#668.
- Multiple Airflow instances in the same namespace can now use Kubernetes executors. Previously, the operator always used the same name for the executor Pod template ConfigMap, so the ConfigMaps of multiple Airflow instances in the same namespace would conflict. See airflow-operator#678.
Spark Connect: Previously, the spec.image.pullSecrets property was ignored by the operator when creating the executor templates. This has now been corrected in the operator code.
See spark-k8s-operator#600.
Previously, there was a chance that containers would not start because Superset started too slowly and was killed by a failing liveness probe. This has now been fixed by adding a proper startup probe, which allows Superset to start without being killed. See superset-operator#654.
Previously, the opa-operator ignored envOverrides set at role or rolegroup level. With this release, the envOverrides are properly propagated by the operator.
See opa-operator#754.
As with previous SDP releases, many product images have been updated to their latest versions. Refer to the supported versions documentation for a complete overview including LTS versions or deprecations.
The following product versions were already available before but are now marked as the LTS version:
- Apache Hive: 4.0.1 (LTS)
  Note: Trino and Iceberg don’t fully work with Hive 4.x yet. See the hive-operator supported versions documentation for compatible versions and Hive 4 details. Be careful when upgrading Hive (e.g. 4.0.0 to 4.0.1 or 4.0.1 to 4.1.0), as the upgrade is not easily reversible. Test the new version before upgrading your production workloads and take backups of your database.
- Apache Kafka: 3.9.1 (LTS)
The following new product versions are now supported:
- Apache Airflow: 3.0.6 (LTS)
- Apache Druid: 34.0.0
- Apache HBase: 2.6.3 (LTS)
- Apache Hadoop: 3.4.2 (LTS)
- Apache Hive: 4.1.0
  Note: Trino and Iceberg don’t fully work with Hive 4.x yet. See the hive-operator supported versions documentation for compatible versions and Hive 4 details. Be careful when upgrading Hive (e.g. 4.0.0 to 4.0.1 or 4.0.1 to 4.1.0), as the upgrade is not easily reversible. Test the new version before upgrading your production workloads and take backups of your database.
- Apache Kafka: 4.1.0 (experimental)
- Apache NiFi: 2.6.0 (LTS)
- Apache Spark: 3.5.7 (LTS), 4.0.1 (experimental)
- Apache Superset: 4.1.4 (LTS)
- Apache ZooKeeper: 3.9.4 (LTS)
- Open Policy Agent: 1.8.0
- OpenSearch: 3.1.0 (LTS)
- Trino: 477 (LTS)
- Vector: 0.49.0
The following product versions are deprecated and will be removed in a later release:
- Apache Druid: 33.0.0
- Apache HBase: 2.6.2
- Apache Hadoop: 3.4.1
- Apache Hive: 4.0.0
  Note: Trino and Iceberg don’t fully work with Hive 4.x yet. See the hive-operator supported versions documentation for compatible versions and Hive 4 details. Be careful when upgrading Hive (e.g. 4.0.0 to 4.0.1 or 4.0.1 to 4.1.0), as the upgrade is not easily reversible. Test the new version before upgrading your production workloads and take backups of your database.
- Apache Kafka: 3.7.2
- Apache Spark: 3.5.6
- Apache ZooKeeper: 3.9.3
- Open Policy Agent: 1.4.2
The following product versions are no longer supported. These images for released product versions remain available here. Information on how to browse the registry can be found here.
This release supports the following Kubernetes versions:
- 1.34
- 1.33
- 1.32
- 1.31
These Kubernetes versions are no longer supported:
- 1.30
This release is available in the RedHat Certified Operator Catalog for the following OpenShift versions:
- 4.20
- 4.19
- 4.18
These OpenShift versions are no longer supported:
- 4.17
- 4.16
Warning: There is a known issue when updating the certified Stackable Listener Operator on OpenShift clusters.
Starting with stackablectl 1.0.0, the multiple consecutive commands described below can be shortened to a single command, which executes exactly those steps on its own:

$ stackablectl release upgrade 25.11

Uninstall the 25.7 release
$ stackablectl release uninstall 25.7
Uninstalled release '25.7'
Use "stackablectl release list" to list available releases.
# ...

Afterwards, you will need to upgrade the CustomResourceDefinitions (CRDs) installed by the Stackable Data Platform.
The reason for this is that helm will uninstall the operators but not the CRDs.
This can be done using kubectl replace.
Note: The SecretClass and TrustStore CRDs don’t need to be replaced manually, because the Stackable secret-operator maintains them by default.
kubectl replace -f https://raw.githubusercontent.com/stackabletech/airflow-operator/25.11.0/deploy/helm/airflow-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/commons-operator/25.11.0/deploy/helm/commons-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/druid-operator/25.11.0/deploy/helm/druid-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/hbase-operator/25.11.0/deploy/helm/hbase-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/hdfs-operator/25.11.0/deploy/helm/hdfs-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/hive-operator/25.11.0/deploy/helm/hive-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/kafka-operator/25.11.0/deploy/helm/kafka-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/listener-operator/25.11.0/deploy/helm/listener-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/nifi-operator/25.11.0/deploy/helm/nifi-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/opa-operator/25.11.0/deploy/helm/opa-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/spark-k8s-operator/25.11.0/deploy/helm/spark-k8s-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/superset-operator/25.11.0/deploy/helm/superset-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/trino-operator/25.11.0/deploy/helm/trino-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/25.11.0/deploy/helm/zookeeper-operator/crds/crds.yaml

customresourcedefinition.apiextensions.k8s.io "airflowclusters.airflow.stackable.tech" replaced
customresourcedefinition.apiextensions.k8s.io "authenticationclasses.authentication.stackable.tech" replaced
customresourcedefinition.apiextensions.k8s.io "s3connections.s3.stackable.tech" replaced
...

Install the 25.11 release
$ stackablectl release install 25.11
Installed release '25.11'
Use "stackablectl operator installed" to list installed operators.

Use helm list to list the currently installed operators.
You can use the following command to uninstall all operators that are part of the 25.7 release:
$ helm uninstall airflow-operator commons-operator druid-operator hbase-operator hdfs-operator hive-operator kafka-operator listener-operator nifi-operator opa-operator secret-operator spark-k8s-operator superset-operator trino-operator zookeeper-operator
release "airflow-operator" uninstalled
release "commons-operator" uninstalled
...

Afterwards, you will need to upgrade the CustomResourceDefinitions (CRDs) installed by the Stackable Data Platform.
The reason for this is that helm will uninstall the operators but not the CRDs.
This can be done using kubectl replace.
Note: The SecretClass and TrustStore CRDs don’t need to be replaced manually, because the Stackable secret-operator maintains them by default.
kubectl replace -f https://raw.githubusercontent.com/stackabletech/airflow-operator/25.11.0/deploy/helm/airflow-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/commons-operator/25.11.0/deploy/helm/commons-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/druid-operator/25.11.0/deploy/helm/druid-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/hbase-operator/25.11.0/deploy/helm/hbase-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/hdfs-operator/25.11.0/deploy/helm/hdfs-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/hive-operator/25.11.0/deploy/helm/hive-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/kafka-operator/25.11.0/deploy/helm/kafka-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/listener-operator/25.11.0/deploy/helm/listener-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/nifi-operator/25.11.0/deploy/helm/nifi-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/opa-operator/25.11.0/deploy/helm/opa-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/spark-k8s-operator/25.11.0/deploy/helm/spark-k8s-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/superset-operator/25.11.0/deploy/helm/superset-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/trino-operator/25.11.0/deploy/helm/trino-operator/crds/crds.yaml
kubectl replace -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/25.11.0/deploy/helm/zookeeper-operator/crds/crds.yaml

customresourcedefinition.apiextensions.k8s.io "airflowclusters.airflow.stackable.tech" replaced
customresourcedefinition.apiextensions.k8s.io "authenticationclasses.authentication.stackable.tech" replaced
customresourcedefinition.apiextensions.k8s.io "s3connections.s3.stackable.tech" replaced
...

Install the 25.11 release
Note: helm repo subcommands are not supported for OCI registries. The operators are installed directly, without adding the Helm Chart repository first.
helm install --wait airflow-operator oci://oci.stackable.tech/sdp-charts/airflow-operator --version 25.11.0
helm install --wait commons-operator oci://oci.stackable.tech/sdp-charts/commons-operator --version 25.11.0
helm install --wait druid-operator oci://oci.stackable.tech/sdp-charts/druid-operator --version 25.11.0
helm install --wait hbase-operator oci://oci.stackable.tech/sdp-charts/hbase-operator --version 25.11.0
helm install --wait hdfs-operator oci://oci.stackable.tech/sdp-charts/hdfs-operator --version 25.11.0
helm install --wait hive-operator oci://oci.stackable.tech/sdp-charts/hive-operator --version 25.11.0
helm install --wait kafka-operator oci://oci.stackable.tech/sdp-charts/kafka-operator --version 25.11.0
helm install --wait listener-operator oci://oci.stackable.tech/sdp-charts/listener-operator --version 25.11.0
helm install --wait nifi-operator oci://oci.stackable.tech/sdp-charts/nifi-operator --version 25.11.0
helm install --wait opa-operator oci://oci.stackable.tech/sdp-charts/opa-operator --version 25.11.0
helm install --wait secret-operator oci://oci.stackable.tech/sdp-charts/secret-operator --version 25.11.0
helm install --wait spark-k8s-operator oci://oci.stackable.tech/sdp-charts/spark-k8s-operator --version 25.11.0
helm install --wait superset-operator oci://oci.stackable.tech/sdp-charts/superset-operator --version 25.11.0
helm install --wait trino-operator oci://oci.stackable.tech/sdp-charts/trino-operator --version 25.11.0
helm install --wait zookeeper-operator oci://oci.stackable.tech/sdp-charts/zookeeper-operator --version 25.11.0