Add disk cleaning feature for server discovery #888
Conversation
📝 Walkthrough

This PR introduces disk cleaning for managed servers, enabling operators to configure periodic or one-time disk wipes via Kubernetes Server CRDs. The feature spans API definitions (new types in spec and status), controller-driven configuration hierarchy and state reporting, Linux-specific disk-wiping implementations supporting quick and secure modes, probe agent integration, and comprehensive testing.

Changes

Disk Cleaning Feature
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 8
🧹 Nitpick comments (1)
api/v1alpha1/server_types.go (1)
118-118: ⚡ Quick win: Use a fixed-width integer type for DisksProcessed.

The `int` type at line 118 lacks platform independence in CRD schema generation. Use `int32` (or `int64`) for explicit, stable API contracts, per Kubernetes conventions.

♻️ Proposed fix

```diff
- DisksProcessed int   `json:"disksProcessed,omitempty"`
+ DisksProcessed int32 `json:"disksProcessed,omitempty"`
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@api/v1alpha1/server_types.go` at line 118, The DisksProcessed field currently uses the platform-dependent int type; change its type to a fixed-width integer (e.g., int32) in the struct that defines DisksProcessed in api/v1alpha1/server_types.go so the CRD schema is stable across platforms—update the declaration from "DisksProcessed int `json:\"disksProcessed,omitempty\"`" to "DisksProcessed int32 `json:\"disksProcessed,omitempty\"`" (or int64 if you prefer 64-bit) and run codegen/CRD generation as needed to propagate the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@cmd/metalprobe/main.go`:
- Around line 40-41: Validate the diskCleaningMode flag immediately after
flag.Parse() in main() by checking that the string value of diskCleaningMode is
one of the allowed values ("quick" or "secure"); if not, print a clear error to
stderr or use log.Fatalf and exit with a non-zero status so the process fails
fast. Locate the diskCleaningMode variable and its usage around flag.Parse() and
replace the current silent acceptance with this explicit validation (and apply
the same validation if diskCleaningMode is set or re-read elsewhere).
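A minimal sketch of the fail-fast validation described above, using the flag name from this PR; `validateDiskCleaningMode` is a hypothetical helper, not code from the PR:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// validateDiskCleaningMode rejects anything other than the two supported modes.
func validateDiskCleaningMode(mode string) error {
	switch mode {
	case "quick", "secure":
		return nil
	default:
		return fmt.Errorf("invalid --disk-cleaning-mode %q: must be \"quick\" or \"secure\"", mode)
	}
}

func main() {
	diskCleaningMode := flag.String("disk-cleaning-mode", "quick", "disk cleaning mode (quick|secure)")
	flag.Parse()

	// Fail fast right after flag.Parse() instead of silently accepting bad input.
	if err := validateDiskCleaningMode(*diskCleaningMode); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("mode:", *diskCleaningMode)
}
```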
In `@internal/controller/server_controller_test.go`:
- Around line 1313-1390: This test creates bmcSecret and server objects but
never deletes them; add explicit cleanup after creation (e.g., immediate defers
or explicit deletions before test exit) to remove the BMCSecret (bmcSecret) and
Server (server) resources via the k8sClient.Delete calls so they don't persist
across tests; locate the creation sites for bmcSecret and server in the It(...)
block and add cleanup logic (defer Expect(k8sClient.Delete(ctx,
bmcSecret)).To(Succeed()) and same for server) or call k8sClient.Delete with the
same objects/refs before returning, ensuring deletion is invoked even on test
failures.
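The cleanup requirement boils down to pairing each create with a deferred delete so teardown runs even when the test body fails. A self-contained sketch with a hypothetical `fakeClient` standing in for `k8sClient` (the real suite would call `k8sClient.Delete` in a `defer` or use Ginkgo's `DeferCleanup`):

```go
package main

import "fmt"

// fakeClient stands in for the controller-runtime client used in the suite.
type fakeClient struct{ live map[string]bool }

func (c *fakeClient) Create(name string) error { c.live[name] = true; return nil }
func (c *fakeClient) Delete(name string) error { delete(c.live, name); return nil }

// runTestCase mimics the It(...) block: each resource created at the top is
// deleted via defer, so cleanup happens even if an assertion below panics.
func runTestCase(c *fakeClient) {
	_ = c.Create("bmcSecret")
	defer func() { _ = c.Delete("bmcSecret") }()

	_ = c.Create("server")
	defer func() { _ = c.Delete("server") }()

	// ... test assertions that may fail would go here ...
}

func main() {
	c := &fakeClient{live: map[string]bool{}}
	runTestCase(c)
	fmt.Println("leaked resources:", len(c.live))
}
```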
In `@internal/controller/server_controller.go`:
- Around line 423-425: The call to r.handleDiskCleaningCompletion(ctx, server)
currently swallows errors (just logs them), so persistent updates like resetting
CleanOnce/status may never be retried; change the reconcile behavior to
propagate the error instead of ignoring it so the controller requeues the
request (or explicitly requeue with a backoff) when handleDiskCleaningCompletion
fails. Ensure handleDiskCleaningCompletion itself is idempotent (safe to run
multiple times) and performs the persistent update (reset CleanOnce / status
update) in a retryable way (e.g., update the Server via the client with
optimistic retry or patch), and then return any error up to the caller so
reconciler can retry.
- Around line 1303-1311: Change the hard-success wording to indicate an
attempt/completion that may have ignored non-fatal errors: update
server.Status.DiskCleaningStatus.Message from "Disk cleaning completed
successfully" to something like "Disk cleaning attempted during discovery" (or
"Disk cleaning completed; errors may have been ignored"), and when calling
r.Conditions.UpdateSlice for ConditionDiskCleaningCompleted replace the
UpdateReason("DiskCleaningSucceeded") and UpdateMessage("Disk cleaning completed
during discovery") with a non-absolute reason/message such as
UpdateReason("DiskCleaningAttempted") and UpdateMessage("Disk cleaning attempted
during discovery; non-fatal errors may have occurred") so the condition does not
assert an unequivocal success when failures can be ignored.
- Around line 1289-1302: After re-fetching the server object, guard against a
nil server.Spec.DiskCleaningPolicy before dereferencing its DiskCleaningMode:
check if server.Spec != nil and server.Spec.DiskCleaningPolicy != nil and only
then read server.Spec.DiskCleaningPolicy.DiskCleaningMode into
server.Status.DiskCleaningStatus.CleaningMode; otherwise set CleaningMode from
the reconciler default
(metalv1alpha1.DiskCleaningMode(r.DiskCleaningDefaultMode)) so you never
dereference a nil DiskCleaningPolicy when updating
server.Status.DiskCleaningStatus in the Server controller.
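A sketch of the nil guard with stand-in types, assuming the usual CRD layout where `Spec` is a struct value and only the policy is a pointer:

```go
package main

import "fmt"

// Minimal stand-ins for the metal v1alpha1 types referenced in the review.
type DiskCleaningMode string

type DiskCleaningPolicy struct{ DiskCleaningMode DiskCleaningMode }
type ServerSpec struct{ DiskCleaningPolicy *DiskCleaningPolicy }
type Server struct{ Spec ServerSpec }

// resolveCleaningMode never dereferences a nil policy: it falls back to the
// reconciler-wide default when the per-server policy is absent or empty.
func resolveCleaningMode(server *Server, defaultMode string) DiskCleaningMode {
	if server.Spec.DiskCleaningPolicy != nil && server.Spec.DiskCleaningPolicy.DiskCleaningMode != "" {
		return server.Spec.DiskCleaningPolicy.DiskCleaningMode
	}
	return DiskCleaningMode(defaultMode)
}

func main() {
	withPolicy := &Server{Spec: ServerSpec{DiskCleaningPolicy: &DiskCleaningPolicy{DiskCleaningMode: "secure"}}}
	withoutPolicy := &Server{}

	fmt.Println(resolveCleaningMode(withPolicy, "quick"))
	fmt.Println(resolveCleaningMode(withoutPolicy, "quick"))
}
```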
In `@internal/probe/diskcleaning_linux.go`:
- Around line 205-220: The code currently writes the completion marker even when
zero disks were actually cleaned; update the logic so
markDiskCleaningCompleted() is only called when at least one disk was
successfully cleaned and there are no failures. Concretely, after computing
successCount, results and skippedCount (symbols: successCount, results,
skippedCount, blockStorage.Disks, markDiskCleaningCompleted), first return an
error if successCount == 0 (e.g., "no disks cleaned" or treat as
skipped/failure), then keep the existing failure check (if successCount <
len(results) return fmt.Errorf(...)); only call markDiskCleaningCompleted() when
successCount > 0 and successCount == len(results).
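The marker rule above reduces to a small predicate; `shouldMarkCompleted` is a hypothetical helper the real code would consult before calling `markDiskCleaningCompleted()`:

```go
package main

import (
	"errors"
	"fmt"
)

// shouldMarkCompleted encodes the suggested rule: write the completion marker
// only when at least one disk was cleaned and every attempted disk succeeded.
func shouldMarkCompleted(successCount, attempted int) (bool, error) {
	if successCount == 0 {
		return false, errors.New("no disks cleaned")
	}
	if successCount < attempted {
		return false, fmt.Errorf("cleaned %d of %d disks", successCount, attempted)
	}
	return true, nil
}

func main() {
	for _, tc := range []struct{ ok, total int }{{0, 0}, {2, 3}, {3, 3}} {
		mark, err := shouldMarkCompleted(tc.ok, tc.total)
		fmt.Printf("success=%d attempted=%d mark=%v err=%v\n", tc.ok, tc.total, mark, err)
	}
}
```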
- Around line 281-303: The code currently uses
exec.CommandContext(...).CombinedOutput() for long-running wiping commands
("shred" and "dd") which can OOM; change both usages to start the command and
stream stdout/stderr instead of buffering: create the *exec.Cmd via
exec.CommandContext (same invocations around shred and dd), call
cmd.StdoutPipe()/cmd.StderrPipe(), start cmd with cmd.Start(), and concurrently
io.Copy from the pipes into a bounded logger sink or io.Discard (or log lines
incrementally) and then wait with cmd.Wait()/cmd.Run() to get the exit status;
keep the existing error-handling logic (fall back from shred to dd and call
rereadPartitionTable on success) but remove CombinedOutput usage so output is
not fully buffered in memory.
- Around line 69-71: The current device validation in validateDevicePath
incorrectly accepts character devices; update the check so it only allows block
devices by verifying fi.Mode() has os.ModeDevice set and does NOT have
os.ModeCharDevice set (i.e., reject when os.ModeCharDevice is present). Modify
the conditional around validateDevicePath's os.ModeDevice check to explicitly
test and return an error if fi.Mode()&os.ModeCharDevice != 0, keeping the
existing error message for non-block devices.
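The mode check the prompt describes, as a standalone helper, demonstrated on `/dev/null`, which is a character device and must therefore be rejected:

```go
package main

import (
	"fmt"
	"os"
)

// isBlockDevice accepts only block devices: the device bit must be set and the
// character-device bit must be clear.
func isBlockDevice(path string) (bool, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	mode := fi.Mode()
	return mode&os.ModeDevice != 0 && mode&os.ModeCharDevice == 0, nil
}

func main() {
	ok, err := isBlockDevice("/dev/null")
	fmt.Println(ok, err)
}
```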
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5d5b7bd8-9842-46e8-93f0-e1666bccf357
📒 Files selected for processing (19)
- api/v1alpha1/applyconfiguration/api/v1alpha1/diskcleaningpolicy.go
- api/v1alpha1/applyconfiguration/api/v1alpha1/diskcleaningstatus.go
- api/v1alpha1/applyconfiguration/api/v1alpha1/serverspec.go
- api/v1alpha1/applyconfiguration/api/v1alpha1/serverstatus.go
- api/v1alpha1/applyconfiguration/utils.go
- api/v1alpha1/server_types.go
- api/v1alpha1/zz_generated.deepcopy.go
- cmd/main.go
- cmd/metalprobe/main.go
- config/crd/bases/metal.ironcore.dev_servers.yaml
- config/manager/manager.yaml
- internal/controller/conditions.go
- internal/controller/server_controller.go
- internal/controller/server_controller_test.go
- internal/controller/suite_test.go
- internal/probe/diskcleaning_darwin.go
- internal/probe/diskcleaning_linux.go
- internal/probe/probe.go
- internal/probe/probe_suite_test.go
```go
server.Status.DiskCleaningStatus.Message = "Disk cleaning completed successfully"

if err := r.Conditions.UpdateSlice(
	&server.Status.Conditions,
	ConditionDiskCleaningCompleted,
	conditionutils.UpdateStatus(metav1.ConditionTrue),
	conditionutils.UpdateReason("DiskCleaningSucceeded"),
	conditionutils.UpdateMessage("Disk cleaning completed during discovery"),
	conditionutils.UpdateObserved(server),
```
Avoid recording hard success when cleaning is allowed to fail non-fatally.
DiskCleaningSucceeded / “completed successfully” can become a false positive if discovery proceeds despite cleaning errors. Prefer completion/attempted wording unless success is explicitly reported from probe results.
```go
	cmd := exec.CommandContext(ctx, "shred", "-v", "-n", "1", "-z", "--force", devicePath)
	output, err := cmd.CombinedOutput()
	if err != nil {
		log.Error(err, "shred failed, falling back to dd", "disk", diskName, "output", string(output))
		// Fall through to dd instead of returning error
	} else {
		// shred succeeded
		if err := rereadPartitionTable(ctx, devicePath); err != nil {
			log.V(1).Info("Warning: failed to re-read partition table", "error", err)
		}
		return nil
	}
}

// Use dd as fallback (either shred not found or shred failed)
log.V(1).Info("Using dd for secure wipe", "disk", diskName)
cmd := exec.CommandContext(ctx, "dd",
	"if=/dev/urandom",
	"of="+devicePath,
	"bs=1M",
	"status=progress")
output, err := cmd.CombinedOutput()
if err != nil {
```
Avoid CombinedOutput() for long-running wipe commands.
shred -v and dd status=progress can emit large continuous output; buffering all of it in memory risks OOM. Prefer Run() with bounded/streamed logging (or disable verbose progress).
This adds automated disk sanitization during server discovery to prepare bare-metal servers for reuse by removing old data, partition tables, and filesystem signatures.

API Changes:
- Add DiskCleaningPolicy to ServerSpec with Enabled, CleanOnce, and Mode fields
- Add DiskCleaningStatus to ServerStatus with completion tracking
- Add ConditionDiskCleaningCompleted condition type

Operator Configuration:
- Add --enable-disk-cleaning flag (default: false, opt-in)
- Add --disk-cleaning-mode flag (default: "quick")
- Three-level hierarchy: operator default → per-server → clean-once

Controller:
- Implement shouldEnableDiskCleaning() for three-level configuration resolution
- Implement getDiskCleaningMode() for mode selection
- Implement handleDiskCleaningCompletion() for CleanOnce auto-reset
- Pass disk cleaning flags to probe via ignition config

Probe Implementation:
- Move disk cleaning into Agent lifecycle (after registration)
- Concurrent disk cleaning with semaphore (max 4 disks in parallel)
- Two modes: quick (wipe headers/footers) and secure (full disk wipe)
- Safety checks: read-only detection, removable device filtering, block device validation
- Flash storage detection via sysfs rotational flag
- Secure erasure with blkdiscard for SSDs, shred/dd fallback for HDDs
- Per-disk timeouts: 10 minutes (quick), 24 hours (secure)
- Persistent marker file prevents re-cleaning on agent restart
- Security: regex validation, path sanitization, command injection prevention

Testing:
- Add comprehensive unit tests for configuration hierarchy
- Add parameterized tests for shouldEnableDiskCleaning (9 cases)
- Add parameterized tests for getDiskCleaningMode (5 cases)
- Add integration test for ignition flag generation
- All existing tests pass

Key Features:
- Cleans ALL disks including OS disk (probe runs from RAM/PXE)
- Non-fatal errors (discovery continues even if cleaning fails)
- Follows OpenStack Ironic pattern (clean all disks by default)
- Production-ready with security hardening and proper error handling

Signed-off-by: Stefan Hipfel <stefan.hipfel@sap.com>
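The three-level hierarchy named above can be sketched as follows; the types and the exact precedence here are assumptions based on the commit message, not the PR's actual code:

```go
package main

import "fmt"

// Stand-ins for the Server CRD fields described above.
type DiskCleaningPolicy struct {
	Enabled   *bool
	CleanOnce bool
}
type Server struct{ Policy *DiskCleaningPolicy }

// shouldEnableDiskCleaning resolves the three-level hierarchy:
// operator default -> per-server Enabled override -> clean-once.
func shouldEnableDiskCleaning(operatorDefault bool, s *Server) bool {
	if s.Policy != nil {
		if s.Policy.CleanOnce {
			return true // a requested one-time wipe wins
		}
		if s.Policy.Enabled != nil {
			return *s.Policy.Enabled // per-server override
		}
	}
	return operatorDefault // fall back to --enable-disk-cleaning
}

func main() {
	off := false
	fmt.Println(shouldEnableDiskCleaning(true, &Server{}))
	fmt.Println(shouldEnableDiskCleaning(true, &Server{Policy: &DiskCleaningPolicy{Enabled: &off}}))
	fmt.Println(shouldEnableDiskCleaning(false, &Server{Policy: &DiskCleaningPolicy{CleanOnce: true}}))
}
```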
6bf1e57 to df1de82
Thanks @stefanhipfel for the PR. I commented in the referenced issue how we could simplify the cleanup phase: #613 (comment). I would suggest taking the discussion there.
```go
// DiskCleaningMode defines the disk cleaning method.
// +kubebuilder:validation:Enum=quick;secure
type DiskCleaningMode string
```
I would hide this behind a global flag for now, e.g. `--enable-disk-sanitation` or `--disk-sanitation-policy` (`none` by default; `secure` or `fast`).
```go
	return false, err
}

if err := r.handleDiskCleaningCompletion(ctx, server); err != nil {
```
I would put this task into its own controller that creates claims with tolerations for those cleanup tasks.
Summary
Adds automated disk sanitization during server discovery to prepare bare-metal servers for reuse.
Features
- `quick` (wipe headers/footers) or `secure` (full disk wipe)

Configuration
Testing
Fixes #704