
Add disk cleaning feature for server discovery#888

Open
stefanhipfel wants to merge 2 commits into main from worktree-diskcleaning

Conversation

@stefanhipfel (Contributor) commented May 13, 2026

Summary

Adds automated disk sanitization during server discovery to prepare bare-metal servers for reuse.

Features

  • Three-level control: operator flag (default off) → per-server policy → clean-once override (resolution sketched below)
  • Two modes: quick (wipes disk headers and footers) or secure (full disk wipe); a quick-mode sketch follows the configuration example
  • Smart erasure: blkdiscard for SSDs, shred/dd for HDDs
  • Concurrent: up to 4 disks in parallel, with per-disk timeouts (10 min quick / 24 h secure)
  • Persistent state: a marker file prevents re-cleaning on restart
  • Security: path validation, command injection prevention
  • Runs before the agent sends discovery, so the server cannot become available while disk cleaning is still in progress
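
A self-contained sketch of that three-level resolution (the PR's actual helper is shouldEnableDiskCleaning per the walkthrough below; the trimmed-down types and parameter names here are stand-ins, not the real API):

```go
package main

import "fmt"

// Minimal stand-in for the PR's DiskCleaningPolicy, just enough to run the sketch.
type DiskCleaningPolicy struct {
	Enabled   bool
	CleanOnce bool
}

// shouldEnableDiskCleaning mirrors the precedence described above:
// operator flag (default off) → per-server policy → clean-once override.
func shouldEnableDiskCleaning(operatorDefault bool, policy *DiskCleaningPolicy) bool {
	if policy == nil {
		return operatorDefault // no per-server policy: fall back to the flag
	}
	if policy.CleanOnce {
		return true // one-shot override; the controller resets it afterwards
	}
	return policy.Enabled // explicit per-server setting wins
}

func main() {
	fmt.Println(shouldEnableDiskCleaning(false, nil))                                  // false: opt-in default
	fmt.Println(shouldEnableDiskCleaning(false, &DiskCleaningPolicy{CleanOnce: true})) // true: clean-once
	fmt.Println(shouldEnableDiskCleaning(true, &DiskCleaningPolicy{Enabled: false}))   // false: server opts out
}
```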

Configuration

```sh
# Operator level
--enable-disk-cleaning=false  # default, opt-in
--disk-cleaning-mode=quick
```

```yaml
# Per-server
spec:
  diskCleaningPolicy:
    enabled: true
    cleanOnce: true
    diskCleaningMode: secure
```
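
For illustration, quick mode's header/footer wipe might look like the following; the 10 MiB span comes from the walkthrough, while quickWipe, the zero-fill approach, and the device path are stand-ins rather than the PR's actual code:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

const wipeSpan = 10 << 20 // 10 MiB at each end, per the walkthrough

// quickWipe zeroes the first and last wipeSpan bytes of a device, destroying
// partition tables and filesystem signatures at both ends. Sketch only: the
// real implementation adds validation, timeouts, and a partition-table re-read.
func quickWipe(devicePath string) error {
	f, err := os.OpenFile(devicePath, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()

	size, err := f.Seek(0, io.SeekEnd) // device size in bytes
	if err != nil {
		return err
	}
	if size < 2*wipeSpan {
		return fmt.Errorf("device %s too small to wipe safely", devicePath)
	}

	zeros := make([]byte, wipeSpan)
	if _, err := f.WriteAt(zeros, 0); err != nil { // header
		return err
	}
	if _, err := f.WriteAt(zeros, size-wipeSpan); err != nil { // footer
		return err
	}
	return f.Sync()
}

func main() {
	if err := quickWipe("/dev/sdX"); err != nil { // placeholder device path
		fmt.Fprintln(os.Stderr, err)
	}
}
```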

Testing

  • 14 new unit tests
  • Tested on Linux VM with "real disks" (via Lima on Mac)

Fixes #704

@coderabbitai (bot) commented May 13, 2026

📝 Walkthrough

This PR introduces disk cleaning for managed servers, enabling operators to configure periodic or one-time disk wipes via Kubernetes Server CRDs. The feature spans API definitions (new types in spec and status), controller-driven configuration hierarchy and state reporting, Linux-specific disk-wiping implementations supporting quick and secure modes, probe agent integration, and comprehensive testing.

Changes

Disk Cleaning Feature

| Layer / File(s) | Summary |
| --- | --- |
| **API Contract and Type Definitions**<br>`api/v1alpha1/server_types.go`, `api/v1alpha1/applyconfiguration/api/v1alpha1/diskcleaningpolicy.go`, `api/v1alpha1/applyconfiguration/api/v1alpha1/diskcleaningstatus.go`, `api/v1alpha1/applyconfiguration/api/v1alpha1/serverspec.go`, `api/v1alpha1/applyconfiguration/api/v1alpha1/serverstatus.go`, `api/v1alpha1/applyconfiguration/utils.go`, `api/v1alpha1/zz_generated.deepcopy.go`, `config/crd/bases/metal.ironcore.dev_servers.yaml` | New types `DiskCleaningMode`, `DiskCleaningPolicy`, and `DiskCleaningStatus` with generated apply-configuration wrappers, deepcopy implementations, and CRD schema updates. `ServerSpec` gains an optional `diskCleaningPolicy` field; `ServerStatus` gains an optional `diskCleaningStatus` field. |
| **Controller Configuration and Completion Handling**<br>`internal/controller/conditions.go`, `internal/controller/server_controller.go` | `ServerReconciler` extended with disk-cleaning config fields. Helper methods determine enablement precedence (server override vs. operator default) and the effective mode. The completion handler resets `CleanOnce`, updates status with timestamp/mode/message, sets a condition, and patches the status subresource. Integrated into the discovery-completion path. |
| **CLI Flags and Manager Configuration**<br>`cmd/main.go`, `config/manager/manager.yaml` | Registers `--enable-disk-cleaning` and `--disk-cleaning-mode` flags in the controller main with validation (quick/secure) and wires the values to `ServerReconciler`. The manager Deployment sets the defaults (disabled, quick mode). |
| **Probe Agent Disk Cleaning Support**<br>`internal/probe/probe.go`, `cmd/metalprobe/main.go` | The `Agent` struct gains disk-cleaning fields and `NewAgent` accepts disk-cleaning parameters. One-time cleaning runs in `Start` after registration; failures are logged and leave the loop uninterrupted. Metalprobe registers the flags and wires them to the agent. |
| **Linux and Darwin Disk Cleaning Implementations**<br>`internal/probe/diskcleaning_darwin.go`, `internal/probe/diskcleaning_linux.go` | The Darwin stub returns an unsupported error. Linux enumerates devices, filters read-only/removable ones, and runs concurrent wipes via a semaphore (pattern sketched after this table): quick mode (10-minute timeout, wipes of the first and last 10 MiB) or secure mode (24-hour timeout, `blkdiscard --secure` or shred/dd). A marker file prevents re-runs. Helpers cover sysfs queries, partition-table rereading, and native zero-fill. |
| **Testing and Validation**<br>`internal/controller/suite_test.go`, `internal/controller/server_controller_test.go`, `internal/probe/probe_suite_test.go` | Test setup enables disk cleaning with a "secure" default. Controller tests updated for the new `NewAgent` parameter. A new `Describe("Disk Cleaning")` section validates the configuration hierarchy and mode resolution via tables, plus an end-to-end test for disabled-policy ignition generation. Probe suite updated with the new parameters. |
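
A self-contained sketch of the semaphore pattern referenced in the Linux row (the limit of four parallel wipes and the per-disk timeouts come from the PR description; wipeDisk is a placeholder, not the PR's function):

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// cleanDisks wipes disks with bounded concurrency: at most four run in
// parallel, each under its own timeout, and failures do not stop the rest.
func cleanDisks(ctx context.Context, disks []string, perDiskTimeout time.Duration) {
	sem := make(chan struct{}, 4) // semaphore: max 4 concurrent wipes
	var wg sync.WaitGroup
	for _, disk := range disks {
		wg.Add(1)
		go func(disk string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when finished

			wipeCtx, cancel := context.WithTimeout(ctx, perDiskTimeout)
			defer cancel()
			if err := wipeDisk(wipeCtx, disk); err != nil {
				fmt.Printf("wipe of %s failed: %v\n", disk, err)
			}
		}(disk)
	}
	wg.Wait()
}

// wipeDisk stands in for the real quick/secure wipe logic.
func wipeDisk(ctx context.Context, disk string) error {
	select {
	case <-time.After(10 * time.Millisecond): // simulated work
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	cleanDisks(context.Background(), []string{"/dev/sda", "/dev/sdb"}, 10*time.Minute)
}
```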

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

enhancement, api-change, size/XXL, area/metal-automation

Suggested reviewers

  • afritzler
  • nagadeesh-nagaraja
  • xkonni
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title "Add disk cleaning feature for server discovery" clearly summarizes the main change (introducing automated disk sanitization) and relates directly to the substantial modifications in the changeset. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Description check | ✅ Passed | The description is well structured and comprehensive, with a clear summary, features, configuration examples, and testing details. |


@coderabbitai (bot) left a comment
Actionable comments posted: 8

🧹 Nitpick comments (1)
api/v1alpha1/server_types.go (1)

118-118: ⚡ Quick win

Use a fixed-width integer type for DisksProcessed.

The `int` type at line 118 is not platform-independent in CRD schema generation. Use `int32` (or `int64`) for an explicit, stable API contract, per Kubernetes conventions.

♻️ Proposed fix

```diff
-	DisksProcessed int `json:"disksProcessed,omitempty"`
+	DisksProcessed int32 `json:"disksProcessed,omitempty"`
```
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@cmd/metalprobe/main.go`:
- Around line 40-41: Validate the diskCleaningMode flag immediately after
flag.Parse() in main() by checking that the string value of diskCleaningMode is
one of the allowed values ("quick" or "secure"); if not, print a clear error to
stderr or use log.Fatalf and exit with a non-zero status so the process fails
fast. Locate the diskCleaningMode variable and its usage around flag.Parse() and
replace the current silent acceptance with this explicit validation (and apply
the same validation if diskCleaningMode is set or re-read elsewhere).

In `@internal/controller/server_controller_test.go`:
- Around line 1313-1390: This test creates bmcSecret and server objects but
never deletes them; add explicit cleanup after creation (e.g., immediate defers
or explicit deletions before test exit) to remove the BMCSecret (bmcSecret) and
Server (server) resources via the k8sClient.Delete calls so they don't persist
across tests; locate the creation sites for bmcSecret and server in the It(...)
block and add cleanup logic (defer Expect(k8sClient.Delete(ctx,
bmcSecret)).To(Succeed()) and same for server) or call k8sClient.Delete with the
same objects/refs before returning, ensuring deletion is invoked even on test
failures.

In `@internal/controller/server_controller.go`:
- Around line 423-425: The call to r.handleDiskCleaningCompletion(ctx, server)
currently swallows errors (just logs them), so persistent updates like resetting
CleanOnce/status may never be retried; change the reconcile behavior to
propagate the error instead of ignoring it so the controller requeues the
request (or explicitly requeue with a backoff) when handleDiskCleaningCompletion
fails. Ensure handleDiskCleaningCompletion itself is idempotent (safe to run
multiple times) and performs the persistent update (reset CleanOnce / status
update) in a retryable way (e.g., update the Server via the client with
optimistic retry or patch), and then return any error up to the caller so
reconciler can retry.
- Around line 1303-1311: Change the hard-success wording to indicate an
attempt/completion that may have ignored non-fatal errors: update
server.Status.DiskCleaningStatus.Message from "Disk cleaning completed
successfully" to something like "Disk cleaning attempted during discovery" (or
"Disk cleaning completed; errors may have been ignored"), and when calling
r.Conditions.UpdateSlice for ConditionDiskCleaningCompleted replace the
UpdateReason("DiskCleaningSucceeded") and UpdateMessage("Disk cleaning completed
during discovery") with a non-absolute reason/message such as
UpdateReason("DiskCleaningAttempted") and UpdateMessage("Disk cleaning attempted
during discovery; non-fatal errors may have occurred") so the condition does not
assert an unequivocal success when failures can be ignored.
- Around line 1289-1302: After re-fetching the server object, guard against a
nil server.Spec.DiskCleaningPolicy before dereferencing its DiskCleaningMode:
check if server.Spec != nil and server.Spec.DiskCleaningPolicy != nil and only
then read server.Spec.DiskCleaningPolicy.DiskCleaningMode into
server.Status.DiskCleaningStatus.CleaningMode; otherwise set CleaningMode from
the reconciler default
(metalv1alpha1.DiskCleaningMode(r.DiskCleaningDefaultMode)) so you never
dereference a nil DiskCleaningPolicy when updating
server.Status.DiskCleaningStatus in the Server controller.

In `@internal/probe/diskcleaning_linux.go`:
- Around line 205-220: The code currently writes the completion marker even when
zero disks were actually cleaned; update the logic so
markDiskCleaningCompleted() is only called when at least one disk was
successfully cleaned and there are no failures. Concretely, after computing
successCount, results and skippedCount (symbols: successCount, results,
skippedCount, blockStorage.Disks, markDiskCleaningCompleted), first return an
error if successCount == 0 (e.g., "no disks cleaned" or treat as
skipped/failure), then keep the existing failure check (if successCount <
len(results) return fmt.Errorf(...)); only call markDiskCleaningCompleted() when
successCount > 0 and successCount == len(results).
- Around line 281-303: The code currently uses
exec.CommandContext(...).CombinedOutput() for long-running wiping commands
("shred" and "dd") which can OOM; change both usages to start the command and
stream stdout/stderr instead of buffering: create the *exec.Cmd via
exec.CommandContext (same invocations around shred and dd), call
cmd.StdoutPipe()/cmd.StderrPipe(), start cmd with cmd.Start(), and concurrently
io.Copy from the pipes into a bounded logger sink or io.Discard (or log lines
incrementally) and then wait with cmd.Wait()/cmd.Run() to get the exit status;
keep the existing error-handling logic (fall back from shred to dd and call
rereadPartitionTable on success) but remove CombinedOutput usage so output is
not fully buffered in memory.
- Around line 69-71: The current device validation in validateDevicePath
incorrectly accepts character devices; update the check so it only allows block
devices by verifying fi.Mode() has os.ModeDevice set and does NOT have
os.ModeCharDevice set (i.e., reject when os.ModeCharDevice is present). Modify
the conditional around validateDevicePath's os.ModeDevice check to explicitly
test and return an error if fi.Mode()&os.ModeCharDevice != 0, keeping the
existing error message for non-block devices.

---

Nitpick comments:
In `@api/v1alpha1/server_types.go`:
- Line 118: The DisksProcessed field currently uses the platform-dependent int
type; change its type to a fixed-width integer (e.g., int32) in the struct that
defines DisksProcessed in api/v1alpha1/server_types.go so the CRD schema is
stable across platforms—update the declaration from "DisksProcessed int
`json:\"disksProcessed,omitempty\"`" to "DisksProcessed int32
`json:\"disksProcessed,omitempty\"`" (or int64 if you prefer 64-bit) and run
codegen/CRD generation as needed to propagate the change.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5d5b7bd8-9842-46e8-93f0-e1666bccf357

📥 Commits

Reviewing files that changed from the base of the PR and between bf8e075 and 6bf1e57.

📒 Files selected for processing (19)
  • api/v1alpha1/applyconfiguration/api/v1alpha1/diskcleaningpolicy.go
  • api/v1alpha1/applyconfiguration/api/v1alpha1/diskcleaningstatus.go
  • api/v1alpha1/applyconfiguration/api/v1alpha1/serverspec.go
  • api/v1alpha1/applyconfiguration/api/v1alpha1/serverstatus.go
  • api/v1alpha1/applyconfiguration/utils.go
  • api/v1alpha1/server_types.go
  • api/v1alpha1/zz_generated.deepcopy.go
  • cmd/main.go
  • cmd/metalprobe/main.go
  • config/crd/bases/metal.ironcore.dev_servers.yaml
  • config/manager/manager.yaml
  • internal/controller/conditions.go
  • internal/controller/server_controller.go
  • internal/controller/server_controller_test.go
  • internal/controller/suite_test.go
  • internal/probe/diskcleaning_darwin.go
  • internal/probe/diskcleaning_linux.go
  • internal/probe/probe.go
  • internal/probe/probe_suite_test.go

Comment thread cmd/metalprobe/main.go
Comment thread internal/controller/server_controller_test.go
Comment thread internal/controller/server_controller.go
Comment thread internal/controller/server_controller.go
Comment on lines +1303 to +1311
```go
	server.Status.DiskCleaningStatus.Message = "Disk cleaning completed successfully"

	if err := r.Conditions.UpdateSlice(
		&server.Status.Conditions,
		ConditionDiskCleaningCompleted,
		conditionutils.UpdateStatus(metav1.ConditionTrue),
		conditionutils.UpdateReason("DiskCleaningSucceeded"),
		conditionutils.UpdateMessage("Disk cleaning completed during discovery"),
		conditionutils.UpdateObserved(server),
```
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Avoid recording hard success when cleaning is allowed to fail non-fatally.

DiskCleaningSucceeded / “completed successfully” can become a false positive if discovery proceeds despite cleaning errors. Prefer completion/attempted wording unless success is explicitly reported from probe results.

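A possible minimal change in the proposed-fix style used earlier (wording only, taken from the suggestion above; the surrounding code stays as in the excerpt):

```diff
-server.Status.DiskCleaningStatus.Message = "Disk cleaning completed successfully"
+server.Status.DiskCleaningStatus.Message = "Disk cleaning attempted during discovery"
```

```diff
-	conditionutils.UpdateReason("DiskCleaningSucceeded"),
-	conditionutils.UpdateMessage("Disk cleaning completed during discovery"),
+	conditionutils.UpdateReason("DiskCleaningAttempted"),
+	conditionutils.UpdateMessage("Disk cleaning attempted during discovery; non-fatal errors may have occurred"),
```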

Comment thread internal/probe/diskcleaning_linux.go
Comment thread internal/probe/diskcleaning_linux.go
Comment thread internal/probe/diskcleaning_linux.go Outdated
Comment on lines +281 to +303
```go
		cmd := exec.CommandContext(ctx, "shred", "-v", "-n", "1", "-z", "--force", devicePath)
		output, err := cmd.CombinedOutput()
		if err != nil {
			log.Error(err, "shred failed, falling back to dd", "disk", diskName, "output", string(output))
			// Fall through to dd instead of returning error
		} else {
			// shred succeeded
			if err := rereadPartitionTable(ctx, devicePath); err != nil {
				log.V(1).Info("Warning: failed to re-read partition table", "error", err)
			}
			return nil
		}
	}

	// Use dd as fallback (either shred not found or shred failed)
	log.V(1).Info("Using dd for secure wipe", "disk", diskName)
	cmd := exec.CommandContext(ctx, "dd",
		"if=/dev/urandom",
		"of="+devicePath,
		"bs=1M",
		"status=progress")
	output, err := cmd.CombinedOutput()
	if err != nil {
```
if err != nil {
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Avoid CombinedOutput() for long-running wipe commands.

shred -v and dd status=progress can emit large continuous output; buffering all of it in memory risks OOM. Prefer Run() with bounded/streamed logging (or disable verbose progress).

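A self-contained sketch of the streaming alternative (stdlib only; runWipeStreaming is a hypothetical helper, not the PR's code):

```go
package main

import (
	"context"
	"io"
	"os/exec"
)

// runWipeStreaming shows the suggested fix: start the command, drain
// stdout/stderr as streams, and only then Wait, so verbose progress
// output is never accumulated in memory.
func runWipeStreaming(ctx context.Context, name string, args ...string) error {
	cmd := exec.CommandContext(ctx, name, args...)

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}

	// Drain both pipes concurrently so the child never blocks on a full
	// pipe buffer; io.Discard keeps memory use constant. A bounded,
	// line-by-line logger could be substituted here.
	done := make(chan struct{}, 2)
	for _, pipe := range []io.Reader{stdout, stderr} {
		go func(r io.Reader) {
			_, _ = io.Copy(io.Discard, r)
			done <- struct{}{}
		}(pipe)
	}
	<-done
	<-done

	// Wait is called only after both pipes are fully read, as os/exec requires.
	return cmd.Wait()
}

func main() {
	// Harmless stand-in invocation; a real caller would pass shred/dd arguments.
	if err := runWipeStreaming(context.Background(), "echo", "wipe complete"); err != nil {
		panic(err)
	}
}
```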

This adds automated disk sanitization during server discovery to prepare
bare-metal servers for reuse by removing old data, partition tables, and
filesystem signatures.

API Changes:
- Add DiskCleaningPolicy to ServerSpec with Enabled, CleanOnce, and Mode fields
- Add DiskCleaningStatus to ServerStatus with completion tracking
- Add ConditionDiskCleaningCompleted condition type

Operator Configuration:
- Add --enable-disk-cleaning flag (default: false, opt-in)
- Add --disk-cleaning-mode flag (default: "quick")
- Three-level hierarchy: operator default → per-server → clean-once

Controller:
- Implement shouldEnableDiskCleaning() for three-level configuration resolution
- Implement getDiskCleaningMode() for mode selection
- Implement handleDiskCleaningCompletion() for CleanOnce auto-reset
- Pass disk cleaning flags to probe via ignition config

Probe Implementation:
- Move disk cleaning into Agent lifecycle (after registration)
- Concurrent disk cleaning with semaphore (max 4 disks in parallel)
- Two modes: quick (wipe headers/footers) and secure (full disk wipe)
- Safety checks: read-only detection, removable device filtering, block device validation
- Flash storage detection via sysfs rotational flag
- Secure erasure with blkdiscard for SSDs, shred/dd fallback for HDDs
- Per-disk timeouts: 10 minutes (quick), 24 hours (secure)
- Persistent marker file prevents re-cleaning on agent restart
- Security: regex validation, path sanitization, command injection prevention

Testing:
- Add comprehensive unit tests for configuration hierarchy
- Add parameterized tests for shouldEnableDiskCleaning (9 cases)
- Add parameterized tests for getDiskCleaningMode (5 cases)
- Add integration test for ignition flag generation
- All existing tests pass

Key Features:
- Cleans ALL disks including OS disk (probe runs from RAM/PXE)
- Non-fatal errors (discovery continues even if cleaning fails)
- Follows OpenStack Ironic pattern (clean all disks by default)
- Production-ready with security hardening and proper error handling

Signed-off-by: Stefan Hipfel <stefan.hipfel@sap.com>
@stefanhipfel force-pushed the worktree-diskcleaning branch from 6bf1e57 to df1de82 on May 13, 2026 at 12:34
@afritzler (Member) commented:

Thanks @stefanhipfel for the PR. I commented in the referenced issue on how we could simplify the cleanup phase: #613 (comment). I would suggest taking the discussion there.


```go
// DiskCleaningMode defines the disk cleaning method.
// +kubebuilder:validation:Enum=quick;secure
type DiskCleaningMode string
```
I would hide this behind a global flag for now, e.g. --enable-disk-sanitization or --disk-sanitization-policy (None by default, secure or fast).

```go
		return false, err
	}

	if err := r.handleDiskCleaningCompletion(ctx, server); err != nil {
```
I would put this task into its own controller that creates claims with tolerations for those cleanup tasks.
