
feat: import snapshot-controller surface to enable migration#21

Merged
duckhawk merged 1 commit into main from feat/migrate-from-snapshot-controller, May 8, 2026

Conversation


duckhawk (Member) commented May 7, 2026

Description

Bring the surface previously owned by the snapshot-controller module into storage-foundation so the two modules can coexist while clusters are being migrated.

  • RBAC — templates/rbac-for-us.yaml, templates/user-authz-cluster-roles.yaml and templates/rbacv2/{manage,use}/{edit,view}.yaml gain the missing rules for snapshot.storage.k8s.io (volumesnapshots, volumesnapshotclasses, volumesnapshotcontents). Previously these roles only covered the new storage.deckhouse.io resources, so after a hand-off the admin-kubeconfig and user-authz roles would have lost access to snapshot objects.
  • Hooks — 030-remove-finalizers-on-module-delete is ported from snapshot-controller as-is (the existing storage-foundation hooks have no extra logic over snapshot-controller's, so we add it verbatim with only the import path adjusted). consts/consts.go regains AllowedProvisioners, WebhookConfigurationsToDelete, CRGVKsForFinalizerRemoval and the CRGVK type. hooks/go/go.mod is bumped to bring in module-sdk@v0.7.0, sds-common-lib, external-snapshotter/client/v8@v8.2.0, controller-runtime@v0.20.4 and the matching k8s.io/* packages so the hook compiles. go build ./... and go vet ./... pass.
  • CRDs — crds/snapshot.storage.k8s.io_volumesnapshot{,classes,contents}.yaml are replaced with the (newer) versions from snapshot-controller; only the module: label is rewritten to storage-foundation. CRD uniqueness across modules is intentionally not enforced at this stage.
  • Russian docs — the matching crds/doc-ru-snapshot.storage.k8s.io_*.yaml files are added (they don't carry the module label, so no rewriting was required).
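The RBAC additions described above amount to rules of roughly the following shape. This is an illustrative sketch, not a copy from the repository: the exact verbs differ per role (view roles get read verbs, edit roles additionally get write verbs), and the rule's placement within each template is an assumption.

```yaml
# Hypothetical fragment of a ClusterRole's rules list, e.g. in
# templates/user-authz-cluster-roles.yaml — added alongside the
# existing storage.deckhouse.io rules.
- apiGroups:
    - snapshot.storage.k8s.io
  resources:
    - volumesnapshots
    - volumesnapshotclasses
    - volumesnapshotcontents
  verbs:
    - get
    - list
    - watch
```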

This change does not restart any critical cluster components on its own; it only enriches what storage-foundation would render when actually enabled.

Why do we need it, and what problem does it solve?

We need to migrate users from the snapshot-controller module to storage-foundation without forcing a hard cutover. To do that:

  1. storage-foundation must be a strict superset of snapshot-controller in terms of RBAC, CRDs and finalizer cleanup, otherwise turning snapshot-controller off after the migration drops permissions and leaks finalizers.
  2. snapshot-controller must yield to storage-foundation while both are enabled — this is solved by the companion PR snapshot-controller#72 ("feat: gate all templates on storage-foundation not being enabled"), which gates every template on storage-foundation not being enabled.

Together the two PRs let an operator enable storage-foundation, observe the migration succeed, and then disable snapshot-controller without RBAC regressions or stuck finalizers on Secrets/ConfigMaps/StorageClasses/VolumeSnapshot*.
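The finalizer-cleanup half of this contract boils down to stripping the module's own finalizers from objects it no longer manages. A minimal sketch of that core step, as a pure function — the function name and prefix below are hypothetical, and the real 030-remove-finalizers-on-module-delete hook does this against live objects through the Kubernetes API rather than on plain slices:

```go
package main

import (
	"fmt"
	"strings"
)

// removeMatchingFinalizers returns the finalizer list with every entry
// under the given controller prefix dropped, leaving finalizers owned by
// other controllers (e.g. kubernetes.io/pvc-protection) untouched.
func removeMatchingFinalizers(finalizers []string, prefix string) []string {
	kept := make([]string, 0, len(finalizers))
	for _, f := range finalizers {
		if !strings.HasPrefix(f, prefix) {
			kept = append(kept, f)
		}
	}
	return kept
}

func main() {
	fins := []string{
		"storage.deckhouse.io/cleanup", // hypothetical module-owned finalizer
		"kubernetes.io/pvc-protection", // foreign finalizer, must survive
	}
	fmt.Println(removeMatchingFinalizers(fins, "storage.deckhouse.io/"))
}
```

In the real hook the resulting list is written back with an update/patch call; if the filtered list is empty the object's deletion can then proceed.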

What is the expected result?

With this PR alone (the snapshot-controller module untouched):

  • helm template of storage-foundation now emits RBAC objects whose rules cover both storage.deckhouse.io (existing) and snapshot.storage.k8s.io (new) — verified locally for rbac-for-us, user-authz-cluster-roles, rbacv2/manage/{edit,view}, rbacv2/use/{edit,view}.
  • crds/snapshot.storage.k8s.io_volumesnapshot{,classes,contents}.yaml carry module: storage-foundation.
  • crds/doc-ru-snapshot.storage.k8s.io_*.yaml are present.
  • Hooks binary built from hooks/go registers 030-remove-finalizers-on-module-delete and successfully runs OnAfterDeleteHelm on module uninstall (clearing finalizers from in-namespace Secrets/ConfigMaps, optional ValidatingWebhookConfigurations from WebhookConfigurationsToDelete, StorageClasses for AllowedProvisioners, and CRs listed in CRGVKsForFinalizerRemoval — by default the three snapshot.storage.k8s.io kinds).
  • No state changes for users that haven't enabled storage-foundation yet.
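The CRGVK list driving the last cleanup step can be sketched as follows. Field names and the v1 API version are assumptions based on this description, not copied from consts/consts.go:

```go
package main

import "fmt"

// CRGVK names a custom-resource group/version/kind whose instances get
// their finalizers cleared when the module is deleted.
type CRGVK struct {
	Group   string
	Version string
	Kind    string
}

// crGVKsForFinalizerRemoval sketches the default list: the three
// snapshot.storage.k8s.io kinds named in this PR.
var crGVKsForFinalizerRemoval = []CRGVK{
	{Group: "snapshot.storage.k8s.io", Version: "v1", Kind: "VolumeSnapshot"},
	{Group: "snapshot.storage.k8s.io", Version: "v1", Kind: "VolumeSnapshotClass"},
	{Group: "snapshot.storage.k8s.io", Version: "v1", Kind: "VolumeSnapshotContent"},
}

func main() {
	// On uninstall, the hook would list each GVK and clear finalizers
	// on every instance found.
	for _, gvk := range crGVKsForFinalizerRemoval {
		fmt.Printf("%s/%s %s\n", gvk.Group, gvk.Version, gvk.Kind)
	}
}
```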

Checklist

  • The code is covered by unit tests.
  • e2e tests passed.
  • Documentation updated according to the changes.
  • Changes were tested in the Kubernetes cluster manually.

Prepare storage-foundation for parallel coexistence with the
snapshot-controller module while migration is in progress:

* RBAC: extend rbac-for-us.yaml, user-authz-cluster-roles.yaml and
  rbacv2/{manage,use}/{edit,view}.yaml with snapshot.storage.k8s.io
  rules (volumesnapshots, volumesnapshotclasses, volumesnapshotcontents)
  so that admin-kubeconfig and user-authz roles keep covering snapshot
  resources after snapshot-controller is replaced.
* Hooks: port 030-remove-finalizers-on-module-delete from
  snapshot-controller as-is, add CRGVK type and the matching variables
  (AllowedProvisioners, WebhookConfigurationsToDelete,
  CRGVKsForFinalizerRemoval) to consts; bump hooks go.mod to bring in
  module-sdk v0.7.0, sds-common-lib, external-snapshotter client/v8 and
  controller-runtime so the hook compiles.
* CRDs: copy snapshot.storage.k8s.io_volumesnapshot{,classes,contents}.yaml
  from snapshot-controller (newer upstream content) and update the
  module label to storage-foundation.
* Docs: add Russian doc-ru-snapshot.storage.k8s.io_* counterparts.
duckhawk force-pushed the feat/migrate-from-snapshot-controller branch from 58fe6d7 to 3bc3a79 on May 8, 2026, 11:39.
duckhawk merged commit 85183ff into main on May 8, 2026.
11 of 12 checks passed.
duckhawk deleted the feat/migrate-from-snapshot-controller branch on May 8, 2026, 12:20.