Update self-hosted deployment instructions #1 #888
## High-level List of Deployment Tasks

<ol>
<li>Review the prerequisites for installing Layer5 Cloud on Kubernetes. (<a href="#prerequisites">docs</a>)</li>
<li>Prepare <code>INIT_CONFIG</code> parameters for the initial setup.</li>
<li>Install Layer5 Cloud on Kubernetes using Helm. Deploy its services in Kubernetes in-cluster. (<a href="#installation">docs</a>)</li>
<li>Meshery deployments are separate from <a href="https://docs.meshery.io/extensibility/providers">Remote Provider</a> deployments (Layer5 Cloud). Deploy Meshery in Kubernetes in-cluster (or out-of-cluster). (<a href="https://docs.meshery.io/installation/quick-start">docs</a>)</li>
<li>Configure Meshery Server to point to your Remote Provider. Learn more about the Meshery Server registration process with Remote Providers. (<a href="https://docs.meshery.io/extensibility/providers#meshery-server-registration">docs</a>)</li>
</ol>

### Kubernetes-based Installation with Helm

Layer5 offers on-premises installation of its [Meshery Remote Provider](https://docs.meshery.io/extensibility/providers): Layer5 Cloud. Contained in the [Layer5 Helm repository](https://docs.layer5.io/charts) is one chart with two subcharts (see the repo [index](https://docs.layer5.io/charts/index.yaml)).

#### Prerequisites

Before you begin, ensure the following are installed:

- Helm
- An ingress controller like `ingress-nginx`
- A certificate manager like `cert-manager`

##### 1. Create dedicated namespaces

This deployment uses two namespaces: `cnpg-postgres` for hosting the PostgreSQL database using the CloudNativePG operator, and `layer5-cloud` for Layer5 Cloud itself. You can also choose to keep all components in the same namespace.

```bash
kubectl create ns cnpg-postgres
kubectl create ns layer5-cloud
```

##### 2. Prepare for data persistence (Persistent Volume)

Layer5 uses a PostgreSQL database that requires persistent storage, which can be configured in many different ways in a Kubernetes cluster. Here we use the _local path provisioner from Rancher_, which automatically creates a PV using a set local path. Run the following command to deploy the local path provisioner:

```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.31/deploy/local-path-storage.yaml
```

This creates a default storage class called `local-path`, which stores data under `/opt/local-path-provisioner` by default and has its reclaim policy set to `Delete`.

> **_NOTE:_** It is recommended that you create a new storage class that uses a different path with ample storage and the `Retain` reclaim policy.
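Such a storage class might look like the following sketch; the name `local-path-retain` is illustrative, and note that the storage path itself is configured in the provisioner's `local-path-config` ConfigMap rather than in the StorageClass.

```yaml
# Hypothetical StorageClass reusing the Rancher local path provisioner,
# but with a Retain reclaim policy so data survives PVC deletion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain
provisioner: rancher.io/local-path
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```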
For this guide, we will use the defaults.

##### 3. Install an ingress controller

This example deployment uses `ingress-nginx`, but you may choose an ingress controller of your choice.

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.3/deploy/static/provider/cloud/deploy.yaml
```

#### INIT_CONFIG

The `INIT_CONFIG` environment variable allows you to configure the initial setup of your self-hosted Layer5 Cloud provider. It accepts a JSON string that defines the provider initialization configuration.

##### Purpose

`INIT_CONFIG` enables you to:

- Pre-configure provider settings during deployment
- Automate initial setup for consistent deployments
- Define custom provider configurations without manual intervention

##### Usage

Set the `INIT_CONFIG` environment variable with a JSON configuration string:
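As a minimal sketch, assuming the chart exposes an `env` map in its values, supplying the variable might look like the following; the provider name and settings are placeholders, and the field names follow the configuration schema described below.

```yaml
# Hypothetical env entry -- adapt to your deployment's actual values.yaml
env:
  INIT_CONFIG: '{"provider": {"name": "my-provider", "settings": {}}}'
```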
{{< alert >}}
The `INIT_CONFIG` variable is only processed during the initial startup. Subsequent restarts will not reprocess this configuration.
{{< /alert >}}

##### Configuration Schema

The `INIT_CONFIG` JSON structure supports the following fields:

- `provider.name`: The name of your provider instance
- `provider.settings`: Custom provider settings specific to your deployment

#### Installation

You will install the PostgreSQL database first, followed by Layer5 Cloud.

##### 1. Deploy PostgreSQL using CloudNativePG

In this example, we use CloudNativePG's operator-based approach to create a PostgreSQL cluster. You can choose a different approach if you prefer. Persistent storage for PostgreSQL was prepared in the prerequisites above using the local path provisioner.

Add and install the CloudNativePG operator using the following commands:

```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts

helm upgrade --install cnpg --namespace cnpg-system --create-namespace cnpg/cloudnative-pg
```

Deploying a PostgreSQL cluster requires the following prerequisite resources:

- A superuser secret
- A Meshery user secret

Run the following commands to create them, replacing the usernames and passwords as needed:

```bash
kubectl -n cnpg-postgres create secret generic meshery-user --from-literal=username=meshery --from-literal=password=meshery --type=kubernetes.io/basic-auth
```

```bash
kubectl -n cnpg-postgres create secret generic cnpg-superuser --from-literal=username=postgres --from-literal=password=postgres --type=kubernetes.io/basic-auth
```

For this documentation, we use the following manifests to deploy a PostgreSQL cluster:

```yaml
# cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-postgres
  namespace: cnpg-postgres
spec:
  instances: 2
  # Persistent storage configuration
  storage:
    storageClass: local-path
    size: 10Gi

  superuserSecret:
    name: cnpg-superuser
  bootstrap:
    initdb:
      database: meshery
      owner: meshery
      secret:
        name: meshery-user
      postInitSQL:
        - create database hydra owner meshery;
        - create database kratos owner meshery;
        - create extension "uuid-ossp";
        - ALTER ROLE meshery WITH SUPERUSER;
      postInitApplicationSQLRefs:
        configMapRefs:
          - name: extra-init
            key: init.sql
---
# extra-init.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: extra-init
  namespace: cnpg-postgres
data:
  init.sql: |
    GRANT ALL PRIVILEGES ON DATABASE meshery to meshery;
    GRANT ALL PRIVILEGES ON DATABASE hydra to meshery;
    GRANT ALL PRIVILEGES ON DATABASE kratos to meshery;
```

CloudNativePG provides a curated list of [samples](https://github.com/cloudnative-pg/cloudnative-pg/blob/main/docs/src/samples.md) showing configuration options that can be used as a reference.
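Assuming the two manifests above are saved as `cluster.yaml` and `extra-init.yaml`, they can be applied as follows (the ConfigMap should exist before the Cluster that references it):

```bash
kubectl apply -f extra-init.yaml
kubectl apply -f cluster.yaml
```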
Apply the YAML files. You should see two cnpg pods running shortly thereafter:

```bash
NAME              READY   STATUS    RESTARTS   AGE
cnpg-postgres-1   1/1     Running   0          3h5m
cnpg-postgres-2   1/1     Running   0          3h5m
```

Retrieve the _Service_ endpoints of cnpg. These must be updated in the Layer5 `values.yaml` file later.

##### 2. Deploy Layer5 Cloud

1. Start by adding the Layer5 Helm chart repo.

   ```bash
   helm repo add layer5 https://docs.layer5.io/charts
   ```

2. Next, to modify values such as the database connection or other parameters, use the `values.yaml` file. You can generate it with the following command:

   ```bash
   helm show values layer5/layer5-cloud > values.yaml
   ```

   Review and update the values as necessary. If you have followed this tutorial exactly, no changes are required to get started.

3. Deploy Layer5 Cloud using the `helm install` command.

   ```bash
   helm install layer5-cloud layer5/layer5-cloud -f values.yaml -n layer5-cloud
   ```

##### 3. Create an OAuth 2.0 client

1. Port forward the _Hydra Admin_ service.
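   As a sketch, the port forward might look like the following; the Service name `hydra-admin` and port `4445` (Hydra's default admin port) are assumptions to adapt to your deployment:

   ```bash
   kubectl -n layer5-cloud port-forward svc/hydra-admin 4445:4445
   ```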
2. Run the following command to create the Hydra client. Ensure that the `--id` you specify matches `env.oauthclientid`, and the `--secret` matches `env.oauthsecret`, in `values.yaml`.

   ```bash
   hydra clients create \
     --endpoint <port forwarded endpoint> \
     --id meshery-cloud \
     --secret some-secret \
     --grant-types authorization_code,refresh_token,client_credentials,implicit \
     --response-types token,code,id_token \
     --scope openid,offline,offline_access \
     --callbacks <Layer5 Cloud host>/callback
   ```