# Support other Helm storage backends besides Secrets #760
fiscafusca wants to merge 1 commit into `fluxcd:main`
## Conversation
Signed-off-by: Giorgia Fiscaletti <giorgiafiscaletti@gmail.com>
This is a welcome contribution, but with the changes on the horizon (see […])
@hiddeco I see, just took a quick look at the […]. Are you planning to release the rework soon?
@stefanprodan could you maybe take a look and tell me whether we can proceed with incorporating this in the new code? Sorry for the ping, but it would be really helpful to have this feature in the near future! This also closes #272. I see @hiddeco is on paternity leave - congratulations 🎈 enjoy your time as a new dad!
From https://twitter.com/stefanprodan/status/1716833055615443138
## Overview
This PR adds support for all Helm storage backends (see the official documentation).
## Usage
The storage driver can be set through the `--helm-storage-driver` flag, and its value is propagated to the `NewRunner` function. The set of allowed values is [`secret`, `configmap`, `sql`]. If the flag is unset, the value is retrieved from the environment variable `HELM_DRIVER`. If the environment variable is also unset, the value simply defaults to `secret` for backwards compatibility (the resolution order is sketched below).
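In Go terms, the fallback behaves roughly like this (an illustrative sketch, not the exact code from this PR; `resolveStorageDriver` is a hypothetical helper name):

```go
package controller

import "os"

// resolveStorageDriver mirrors the precedence described above:
// flag value > HELM_DRIVER env var > "secret" default.
func resolveStorageDriver(flagValue string) string {
	if flagValue != "" {
		// an explicit --helm-storage-driver always wins
		return flagValue
	}
	if env := os.Getenv("HELM_DRIVER"); env != "" {
		return env
	}
	// backwards-compatible default
	return "secret"
}
```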
## Use case

The possibility to switch between backends is essential for more flexibility in the cluster. Storing release information in Secrets may not always be the best option, mostly due to the constraints of in-cluster storage (for instance, the roughly 1 MiB per-object limit imposed by etcd, and the cost of retaining a long release history inside the cluster).
Moving the release information to a SQL backend, for example, would address these issues and allow keeping a longer history of deployments.
## Testing (cluster)
The changes were tested on a local K8s cluster. I personally used a single-node `kind` cluster with K8s `v1.27.3`.

### Steps to reproduce
Create the cluster:
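For example, pinning the node image to the version mentioned above (the image tag is the only assumption here):

```sh
kind create cluster --image kindest/node:v1.27.3
```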
Build a local image (I named it `test-helm-controller:latest`) and load it into the kind cluster:

```sh
export IMG=test-helm-controller:latest
make docker-build
# note: the kind subcommand is `load docker-image`
kind load docker-image test-helm-controller:latest
```

Deploy the default `helm-controller` and `source-controller`, and create the namespace for the Helm release (one way to do both is sketched below):
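For example, assuming the `flux` CLI is available (any other way of installing the two controllers works just as well), and using `hello` as the release namespace to match the commands further down:

```sh
# Install just the two controllers needed for this test
flux install --components=source-controller,helm-controller

# Namespace for the sample Helm release
kubectl create namespace hello
```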
Prepare the manifests for the `HelmRepository` and `HelmRelease` to use for testing. I personally used this chart for simplicity. Sketches of the two manifests, `helmrepo.yaml` and `helmrelease.yaml`, follow below.
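The chart name, repository URL, and version here are placeholders; the release name `hello` matches the storage keys shown in the test output below.

`helmrepo.yaml`:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: hello
  namespace: hello
spec:
  interval: 5m
  # Placeholder chart repository URL
  url: https://example.github.io/charts
```

`helmrelease.yaml`:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: hello
  namespace: hello
spec:
  interval: 5m
  releaseName: hello
  chart:
    spec:
      # Placeholder chart name and version
      chart: hello
      version: ">=0.1.0"
      sourceRef:
        kind: HelmRepository
        name: hello
```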
### Test case 1: Backwards compatibility

The Helm release information should still be stored in Secrets when both the flag and the env variable are unset.
Deployment patch in `config/manager/kustomization.yaml` (a sketch follows):
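For this case no driver configuration is needed; the patch only has to point the Deployment at the locally built image. The field names below are standard kustomize, and the original image name is an assumption:

```yaml
images:
  - name: fluxcd/helm-controller   # assumed image name used in config/manager
    newName: test-helm-controller
    newTag: latest
```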
Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```

Apply the sample `HelmRepository` and `HelmRelease`:
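Assuming the manifest filenames from the setup step:

```sh
kubectl apply -f helmrepo.yaml -f helmrelease.yaml
```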
Check for the Helm release secret:
```console
$ kubectl get secrets -n hello -l 'owner=helm'
NAME                          TYPE                 DATA   AGE
sh.helm.release.v1.hello.v1   helm.sh/release.v1   1      29s
```

### Test case 2: ConfigMaps
The Helm release information should now be stored in ConfigMaps.
Deployment patch in `config/manager/kustomization.yaml`, either via the flag or via the env var (both variants are sketched below):
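Both sketches assume the image override from test case 1 is still in place and that the container already defines `args`/`env` arrays; the Deployment name is an assumption. Flag variant:

```yaml
patches:
  - target:
      kind: Deployment
      name: helm-controller   # assumed Deployment name in config/manager
    patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-storage-driver=configmap
```

OR (env var):

```yaml
patches:
  - target:
      kind: Deployment
      name: helm-controller   # assumed Deployment name in config/manager
    patch: |
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: HELM_DRIVER
          value: configmap
```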
Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```

Apply the sample `HelmRepository` and `HelmRelease`:
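As before:

```sh
kubectl apply -f helmrepo.yaml -f helmrelease.yaml
```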
Check for the Helm release ConfigMap:
```console
$ kubectl get configmaps -n hello -l 'owner=helm'
NAME                          DATA   AGE
sh.helm.release.v1.hello.v1   1      25s
```

### Test case 3: SQL storage
The Helm release information should now be stored in the SQL database.
For this test, I used a PostgreSQL DB hosted on an Azure server.
Deployment patch in `config/manager/kustomization.yaml`, again via the flag or via the env var (sketched below):
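Same shape as in test case 2, plus the SQL connection string, which Helm reads from the `HELM_DRIVER_SQL_CONNECTION_STRING` environment variable; the DSN and the Deployment name below are placeholders. Flag variant:

```yaml
patches:
  - target:
      kind: Deployment
      name: helm-controller   # assumed Deployment name
    patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-storage-driver=sql
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: HELM_DRIVER_SQL_CONNECTION_STRING
          value: postgresql://user:pass@myserver.postgres.database.azure.com:5432/helm?sslmode=require
```

OR (env var):

```yaml
patches:
  - target:
      kind: Deployment
      name: helm-controller   # assumed Deployment name
    patch: |
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: HELM_DRIVER
          value: sql
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: HELM_DRIVER_SQL_CONNECTION_STRING
          value: postgresql://user:pass@myserver.postgres.database.azure.com:5432/helm?sslmode=require
```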
Deploy the patched `helm-controller`:

```sh
kustomize build config/manager | kubectl apply -f -
```

Apply the sample `HelmRepository` and `HelmRelease`:
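As before:

```sh
kubectl apply -f helmrepo.yaml -f helmrelease.yaml
```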
Connect to the DB (I used `psql`) and check the `release_v1` table. You will see that there is a new row. The content can be checked by simply running a SQL query along the lines of the one below:
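The column names here follow Helm's SQL driver schema, where `key` holds the release storage key:

```sql
-- Look up the stored record for the sample release
SELECT key, name, namespace, version, status
FROM release_v1
WHERE key = 'sh.helm.release.v1.hello.v1';
```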
## Unit and regression testing
The `runner.go` file has no test file, and the functions in `helmrelease_controller.go` that involve the new variable do not have any coverage, so it wasn't clear to me how to proceed. The changes I made are backwards compatible and should not cause any issues AFAIK, but I'm open to adding anything else if needed. Feel free to leave any feedback/suggestions :)