test(dataload): migrate controller tests to Ginkgo v2 #5700
hxrshxz wants to merge 3 commits into fluid-cloudnative:master from
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly refactors the testing infrastructure for the

Highlights
Hi @hxrshxz. Thanks for your PR. I'm waiting for a fluid-cloudnative member to verify that this patch is reasonable to test. If it is, they should reply with
Once the patch is verified, the new status will be reflected by the
I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Code Review
This pull request successfully migrates the controller tests for dataload to Ginkgo v2/Gomega, significantly enhancing the test suite's structure and readability. The changes introduce comprehensive unit tests for dataLoadOperation and DataLoadReconciler methods, covering various policy handlers and error scenarios. Additionally, new unit tests for JuiceFSRuntimeReconciler have been added, ensuring its core functionalities like runtime retrieval, engine management, and controller setup are well-tested. The use of gomonkey for mocking external dependencies in unit tests is a good practice that improves test isolation and reliability. Overall, these updates contribute to better test coverage and maintainability of the codebase.
```go
opStatus, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status)
Expect(err).NotTo(HaveOccurred())
// When no current job is found, the status is returned unchanged
Expect(opStatus).NotTo(BeNil())
```

This test verifies that no error occurs and the status is not nil when no current job matches the schedule. To make this test more robust and explicit, it would be beneficial to assert that the opStatus returned is identical to the initial mockCronDataload.Status. This ensures that the status remains unchanged as expected in this scenario.

```go
Expect(opStatus).NotTo(BeNil())
Expect(opStatus).To(Equal(&mockCronDataload.Status))
```
/gemini review
Code Review
This pull request primarily migrates the dataload controller tests to use Ginkgo v2, which is a positive step for improving the test suite. The new tests are well-structured and increase coverage, particularly with the addition of hermetic tests for error paths in the status handler. However, the pull request also includes the addition and modification of tests for the juicefs controller. These changes are out of scope for a PR focused on dataload and should be submitted separately to maintain focus and simplify the review process.
```go
/*
Copyright 2021 The Fluid Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package juicefs

import (
	"context"
	"fmt"
	"sync"

	"github.com/agiledragon/gomonkey/v2"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/record"
	ctrl "sigs.k8s.io/controller-runtime"

	datav1alpha1 "github.com/fluid-cloudnative/fluid/api/v1alpha1"
	"github.com/fluid-cloudnative/fluid/pkg/controllers"
	"github.com/fluid-cloudnative/fluid/pkg/dataoperation"
	"github.com/fluid-cloudnative/fluid/pkg/ddc"
	"github.com/fluid-cloudnative/fluid/pkg/ddc/base"
	cruntime "github.com/fluid-cloudnative/fluid/pkg/runtime"
	"github.com/fluid-cloudnative/fluid/pkg/utils/fake"
)

// mockEngine is a minimal no-op implementation of base.Engine used in tests only.
type mockEngine struct{}

func (m *mockEngine) ID() string                                             { return "mock" }
func (m *mockEngine) Shutdown() error                                        { return nil }
func (m *mockEngine) Setup(_ cruntime.ReconcileRequestContext) (bool, error) { return true, nil }
func (m *mockEngine) CreateVolume() error                                    { return nil }
func (m *mockEngine) DeleteVolume() error                                    { return nil }
func (m *mockEngine) Sync(_ cruntime.ReconcileRequestContext) error          { return nil }
func (m *mockEngine) Validate(_ cruntime.ReconcileRequestContext) error      { return nil }
func (m *mockEngine) Operate(_ cruntime.ReconcileRequestContext, _ *datav1alpha1.OperationStatus, _ dataoperation.OperationInterface) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}

// newTestJuiceFSReconciler builds a JuiceFSRuntimeReconciler seeded with the
// given scheme and runtime objects. Pass nil scheme to get a default one.
func newTestJuiceFSReconciler(s *runtime.Scheme, objs ...runtime.Object) *JuiceFSRuntimeReconciler {
	if s == nil {
		s = runtime.NewScheme()
		_ = datav1alpha1.AddToScheme(s)
	}
	fakeClient := fake.NewFakeClientWithScheme(s, objs...)
	log := ctrl.Log.WithName("juicefs-test")
	recorder := record.NewFakeRecorder(10)
	r := &JuiceFSRuntimeReconciler{
		Scheme:  s,
		mutex:   &sync.Mutex{},
		engines: map[string]base.Engine{},
	}
	r.RuntimeReconciler = controllers.NewRuntimeReconciler(r, fakeClient, log, recorder)
	return r
}

var _ = Describe("JuiceFSRuntimeReconciler Implement", func() {

	Describe("getRuntime", func() {
		var r *JuiceFSRuntimeReconciler

		BeforeEach(func() {
			testRuntime := &datav1alpha1.JuiceFSRuntime{
				ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"},
			}
			s := runtime.NewScheme()
			_ = datav1alpha1.AddToScheme(s)
			r = newTestJuiceFSReconciler(s, testRuntime)
		})

		It("should return the runtime when it exists in the cluster", func() {
			ctx := cruntime.ReconcileRequestContext{
				Context:        context.TODO(),
				NamespacedName: types.NamespacedName{Name: "test", Namespace: "default"},
			}
			result, err := r.getRuntime(ctx)
			Expect(err).NotTo(HaveOccurred())
			Expect(result).NotTo(BeNil())
			Expect(result.Name).To(Equal("test"))
			Expect(result.Namespace).To(Equal("default"))
		})

		It("should return an error when the runtime does not exist", func() {
			ctx := cruntime.ReconcileRequestContext{
				Context:        context.TODO(),
				NamespacedName: types.NamespacedName{Name: "nonexistent", Namespace: "default"},
			}
			result, err := r.getRuntime(ctx)
			Expect(err).To(HaveOccurred())
			Expect(result).To(BeNil())
		})
	})

	Describe("GetOrCreateEngine", func() {
		var r *JuiceFSRuntimeReconciler

		BeforeEach(func() {
			r = newTestJuiceFSReconciler(nil)
		})

		It("should propagate engine creation errors", func() {
			patches := gomonkey.ApplyFunc(ddc.CreateEngine,
				func(_ string, _ cruntime.ReconcileRequestContext) (base.Engine, error) {
					return nil, fmt.Errorf("engine creation failed")
				})
			defer patches.Reset()

			ctx := cruntime.ReconcileRequestContext{
				Context:        context.TODO(),
				NamespacedName: types.NamespacedName{Name: "fail", Namespace: "default"},
			}
			engine, err := r.GetOrCreateEngine(ctx)
			Expect(err).To(HaveOccurred())
			Expect(err.Error()).To(ContainSubstring("engine creation failed"))
			Expect(engine).To(BeNil())
		})

		It("should create engine on first call and return cached engine on second call", func() {
			mock := &mockEngine{}
			callCount := 0
			patches := gomonkey.ApplyFunc(ddc.CreateEngine,
				func(_ string, _ cruntime.ReconcileRequestContext) (base.Engine, error) {
					callCount++
					return mock, nil
				})
			defer patches.Reset()

			ctx := cruntime.ReconcileRequestContext{
				Context:        context.TODO(),
				NamespacedName: types.NamespacedName{Name: "cached", Namespace: "default"},
			}

			// First call: engine is created and stored.
			engine1, err := r.GetOrCreateEngine(ctx)
			Expect(err).NotTo(HaveOccurred())
			Expect(engine1).To(Equal(base.Engine(mock)))
			Expect(callCount).To(Equal(1))

			// Second call: engine should be retrieved from the cache without re-creation.
			engine2, err := r.GetOrCreateEngine(ctx)
			Expect(err).NotTo(HaveOccurred())
			Expect(engine2).To(Equal(base.Engine(mock)))
			Expect(callCount).To(Equal(1), "CreateEngine must not be called a second time")
		})
	})

	Describe("RemoveEngine", func() {
		var r *JuiceFSRuntimeReconciler

		BeforeEach(func() {
			r = newTestJuiceFSReconciler(nil)
		})

		It("should remove a cached engine by namespaced name", func() {
			id := ddc.GenerateEngineID(types.NamespacedName{Name: "test", Namespace: "default"})
			r.engines[id] = &mockEngine{}

			ctx := cruntime.ReconcileRequestContext{
				Context:        context.TODO(),
				NamespacedName: types.NamespacedName{Name: "test", Namespace: "default"},
			}
			r.RemoveEngine(ctx)

			_, found := r.engines[id]
			Expect(found).To(BeFalse())
		})

		It("should not panic when removing a non-existent engine", func() {
			ctx := cruntime.ReconcileRequestContext{
				Context:        context.TODO(),
				NamespacedName: types.NamespacedName{Name: "ghost", Namespace: "default"},
			}
			Expect(func() { r.RemoveEngine(ctx) }).NotTo(Panic())
		})
	})

	Describe("Reconcile", func() {
		It("should return no error when the runtime is not found", func() {
			// The fake client has no JuiceFSRuntime objects, so getRuntime will
			// return a NotFound error, which Reconcile should swallow gracefully.
			s := runtime.NewScheme()
			_ = datav1alpha1.AddToScheme(s)
			r := newTestJuiceFSReconciler(s)

			req := ctrl.Request{
				NamespacedName: types.NamespacedName{Name: "missing", Namespace: "default"},
			}
			result, err := r.Reconcile(context.TODO(), req)
			Expect(err).NotTo(HaveOccurred())
			Expect(result).To(Equal(ctrl.Result{}))
		})
	})
})
```
While these new tests for the JuiceFS controller are valuable, they appear to be out of scope for this pull request, which is focused on migrating dataload controller tests. To adhere to the best practice of creating small, focused pull requests, please consider moving these juicefs related changes to a separate PR. This will make the changes easier to review and merge, improving overall maintainability.
Pull request overview
Migrates controller unit tests to Ginkgo v2/Gomega for the dataload package and adds/adjusts related controller test suites, aiming to improve coverage and make cleanup paths hermetic.
Changes:
- Port pkg/controllers/v1alpha1/dataload tests to Ginkgo v2, adding a package suite bootstrap.
- Expand/modernize dataload status handler tests, including the Cron path and the missing-job cleanup error path.
- Add/adjust juicefs controller tests and simplify its suite bootstrap.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| pkg/controllers/v1alpha1/juicefs/suite_test.go | Simplifies Ginkgo v2 suite bootstrap (drops envtest wiring). |
| pkg/controllers/v1alpha1/juicefs/juicefsruntime_controller_test.go | Adds Ginkgo v2 unit tests for JuiceFSRuntime reconciler helpers. |
| pkg/controllers/v1alpha1/juicefs/implement_test.go | Adds unit tests for JuiceFSRuntime reconciler behaviors using fakes/patches. |
| pkg/controllers/v1alpha1/dataload/suite_test.go | Adds Ginkgo v2 suite bootstrap for dataload tests. |
| pkg/controllers/v1alpha1/dataload/status_handler_test.go | Migrates and expands status handler test coverage (Once/Cron/OnEvent). |
| pkg/controllers/v1alpha1/dataload/implement_test.go | Migrates and expands implement/controller unit tests to Ginkgo v2. |
```go
func TestAPIs(t *testing.T) {
	RegisterFailHandler(Fail)

	RunSpecs(t, "JuiceFS Controller Suite")
}
```
```go
It("returns current status when job is still running (no finished condition)", func() {
	runningJob := batchv1.Job{
		ObjectMeta: v1.ObjectMeta{
			Name:      "test-dataload-loader-job",
			Namespace: "default",
		},
		Status: batchv1.JobStatus{
			// No conditions — job is still running
		},
	}
	fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockDataload, &runningJob)
	handler := &OnceStatusHandler{Client: fakeClient, dataLoad: &mockDataload}
	ctx := cruntime.ReconcileRequestContext{
		NamespacedName: types.NamespacedName{Namespace: "default"},
		Log:            fake.NullLogger(),
	}

	opStatus, err := handler.GetOperationStatus(ctx, &mockDataload.Status)
	Expect(err).NotTo(HaveOccurred())
	// Status should be returned unchanged when job is still running
	Expect(opStatus).NotTo(BeNil())
	Expect(opStatus.Phase).To(Equal(mockDataload.Status.Phase))
})
```
```go
Describe("UpdateOperationApiStatus", func() {
	It("updates the DataLoad status via the client", func() {
		newStatus := &datav1alpha1.OperationStatus{
			Phase: common.PhaseComplete,
		}
		err := op.UpdateOperationApiStatus(newStatus)
		Expect(err).NotTo(HaveOccurred())
	})
})
```
Code Review
The pull request successfully migrates the controller tests for dataload to Ginkgo v2 and Gomega, significantly expanding test coverage. The new tests for implement_test.go and status_handler_test.go cover various scenarios, including error handling and edge cases, which improves the robustness of the test suite. The suite_test.go files have been updated to reflect the new testing framework setup. Overall, this is a positive change that enhances the quality and maintainability of the test code.
Expand package-local coverage and keep dataload status tests hermetic so this package clears the Ginkgo migration gate. Signed-off-by: Harsh <harshmastic@gmail.com>
Force-pushed 7f74f23 to a78d761 (Compare)
/gemini review |
Code Review
This pull request successfully migrates the dataload controller tests to Ginkgo v2 and Gomega, improving test coverage and structure. The changes are well-implemented, converting traditional table-driven tests to Ginkgo's DescribeTable and adding new test cases for better coverage, including error paths. I have one suggestion to enhance a test case by adding a verification step to ensure the object status is correctly updated.
```go
newStatus := &datav1alpha1.OperationStatus{
	Phase: common.PhaseComplete,
}
err := op.UpdateOperationApiStatus(newStatus)
Expect(err).NotTo(HaveOccurred())
```

This test only checks that UpdateOperationApiStatus doesn't return an error. To make the test more robust, you should also verify that the DataLoad object's status was actually updated. This can be done by fetching the object from the fake client after the update and asserting its status. Note that you may need to import the context package.

```go
newStatus := &datav1alpha1.OperationStatus{
	Phase: common.PhaseComplete,
}
err := op.UpdateOperationApiStatus(newStatus)
Expect(err).NotTo(HaveOccurred())

key := types.NamespacedName{Name: mockDataload.Name, Namespace: mockDataload.Namespace}
updatedDataload := &datav1alpha1.DataLoad{}
Expect(op.Client.Get(context.Background(), key, updatedDataload)).To(Succeed())
Expect(updatedDataload.Status.Phase).To(Equal(common.PhaseComplete))
```
/gemini review |
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##           master    #5700   +/-  ##
======================================
  Coverage   61.22%   61.22%
======================================
  Files         444      444
  Lines       30557    30557
======================================
  Hits        18710    18710
  Misses      10307    10307
  Partials     1540     1540
```

☔ View full report in Codecov by Sentry.
Code Review
This pull request successfully migrates the controller tests for dataload to Ginkgo v2 and significantly expands test coverage. The changes are well-structured and improve the maintainability of the tests. I have one suggestion regarding test coverage for an edge case in the status handlers, which could uncover a potential bug in the error handling logic.
```go
It("returns error when job is missing and helm release cleanup fails", func() {
	sentinelErr := errors.New("cleanup failed")
	expectedReleaseName := utils.GetDataLoadReleaseName(mockDataload.Name)
	var cleanupReleaseName string
	var cleanupNamespace string
	patches := gomonkey.ApplyFunc(helm.DeleteReleaseIfExists, func(name, namespace string) error {
		cleanupReleaseName = name
		cleanupNamespace = namespace
		return sentinelErr
	})
	defer patches.Reset()

	fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockDataload)
	handler := &OnceStatusHandler{Client: fakeClient, dataLoad: &mockDataload}
	ctx := cruntime.ReconcileRequestContext{
		NamespacedName: types.NamespacedName{Namespace: "default"},
		Log:            fake.NullLogger(),
	}

	_, err := handler.GetOperationStatus(ctx, &mockDataload.Status)
	Expect(err).To(MatchError(sentinelErr))
	Expect(cleanupReleaseName).To(Equal(expectedReleaseName))
	Expect(cleanupNamespace).To(Equal(mockDataload.Namespace))
})
```
This is a great test case for the failure path of the cleanup logic. To make it more robust, I suggest adding a test case for the success path, i.e., when helm.DeleteReleaseIfExists succeeds.
The current implementation of OnceStatusHandler appears to return the original NotFound error from kubeclient.GetJob in this scenario. This might not be interpreted as a requeue-able error by the controller, potentially halting the reconciliation for this DataLoad instead of recreating the job.
Additionally, CronStatusHandler has a similar but more critical issue where it returns (nil, nil) if the cronjob is missing and the Helm release deletion succeeds. This would incorrectly signal a successful reconciliation. Adding tests for these scenarios in both handlers would ensure the cleanup and recovery logic is fully covered and behaves as expected.
Strengthen the review follow-up assertions without widening scope so the updated PR head reflects the latest bot-driven test improvements. Signed-off-by: Harsh <harshmastic@gmail.com>
/gemini review
Code Review
This pull request successfully migrates the controller tests for pkg/controllers/v1alpha1/dataload to Ginkgo v2 and Gomega. The changes include refactoring existing testing package-based tests into Ginkgo DescribeTable and It blocks, and adding a suite_test.go for bootstrapping the test suite. The new tests for dataLoadOperation and DataLoadReconciler cover various scenarios, including error handling and different policy types. The use of gomonkey for mocking in status_handler_test.go is a good practice for isolating unit tests. Overall, the migration is well-executed and enhances the test suite's structure and coverage.
Code Review
The pull request successfully migrates the controller tests for pkg/controllers/v1alpha1/dataload to Ginkgo v2 and Gomega. This migration improves the test structure, readability, and maintainability, especially with the use of DescribeTable for parameterized tests. The changes also expand test coverage, notably by adding a test case for cleanup failure in OnceStatusHandler and ensuring proper handling of Cron and OnEvent policies. The introduction of suite_test.go correctly bootstraps the Ginkgo test suite for the package. Overall, this is a well-executed migration that enhances the quality of the test suite.
Capture the remaining safe test-only review follow-ups so the PR head matches the latest bot-driven validation work. Signed-off-by: Harsh <harshmastic@gmail.com>
/gemini review |
Code Review
This pull request effectively migrates the dataload controller tests to Ginkgo v2, refactoring existing tests and adding new ones to expand coverage. The changes improve the test structure and cover more edge cases for dataLoadOperation methods and status handlers. The implementation is solid, and I have not identified any issues requiring changes.
/gemini review |
Code Review
This pull request effectively migrates the dataload controller tests to Ginkgo v2, significantly improving test structure and expanding coverage. The new tests are well-written and cover important code paths, including making the status-handler cleanup hermetic. I have one suggestion to further enhance test completeness by adding a test case for an error path in the CronStatusHandler.
```go
var _ = Describe("CronStatusHandler", func() {
	var (
		testScheme         *runtime.Scheme
		mockCronDataload   v1alpha1.DataLoad
		lastScheduleTime   v1.Time
		lastSuccessfulTime v1.Time
		mockCronJob        batchv1.CronJob
		patch              *gomonkey.Patches
	)

	BeforeEach(func() {
		testScheme = runtime.NewScheme()
		Expect(v1alpha1.AddToScheme(testScheme)).To(Succeed())
		Expect(batchv1.AddToScheme(testScheme)).To(Succeed())

		startTime := time.Date(2023, 8, 1, 12, 0, 0, 0, time.Local)
		lastScheduleTime = v1.NewTime(startTime)
		lastSuccessfulTime = v1.NewTime(startTime.Add(time.Second * 10))

		mockCronDataload = v1alpha1.DataLoad{
			ObjectMeta: v1.ObjectMeta{
				Name:      "test-dataload",
				Namespace: "default",
			},
			Spec: v1alpha1.DataLoadSpec{
				Dataset: v1alpha1.TargetDataset{
					Name:      "hadoop",
					Namespace: "default",
				},
				Policy:   "Cron",
				Schedule: "* * * * *",
			},
			Status: v1alpha1.OperationStatus{
				Phase: common.PhaseComplete,
			},
		}

		mockCronJob = batchv1.CronJob{
			ObjectMeta: v1.ObjectMeta{
				Name:      "test-dataload-loader-job",
				Namespace: "default",
			},
			Spec: batchv1.CronJobSpec{
				Schedule: "* * * * *",
			},
			Status: batchv1.CronJobStatus{
				LastScheduleTime:   &lastScheduleTime,
				LastSuccessfulTime: &lastSuccessfulTime,
			},
		}

		// Patch IsBatchV1CronJobSupported to avoid real cluster API discovery in unit tests.
		patch = gomonkey.ApplyFunc(compatibility.IsBatchV1CronJobSupported, func() bool {
			return true
		})
	})

	AfterEach(func() {
		if patch != nil {
			patch.Reset()
		}
	})

	It("returns PhaseComplete when job is complete", func() {
		mockJob := batchv1.Job{
			ObjectMeta: v1.ObjectMeta{
				Name:      "test-dataload-loader-job-1",
				Namespace: "default",
				Labels: map[string]string{
					"cronjob": "test-dataload-loader-job",
				},
				CreationTimestamp: lastScheduleTime,
			},
			Status: batchv1.JobStatus{
				Conditions: []batchv1.JobCondition{
					{
						Type:               batchv1.JobComplete,
						LastProbeTime:      lastSuccessfulTime,
						LastTransitionTime: lastSuccessfulTime,
					},
				},
			},
		}

		fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockCronDataload, &mockCronJob, &mockJob)
		handler := &CronStatusHandler{Client: fakeClient, dataLoad: &mockCronDataload}
		ctx := cruntime.ReconcileRequestContext{Log: fake.NullLogger()}

		opStatus, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status)
		Expect(err).NotTo(HaveOccurred())
		Expect(opStatus.LastScheduleTime).To(Equal(&lastScheduleTime))
		Expect(opStatus.LastSuccessfulTime).To(Equal(&lastSuccessfulTime))
		Expect(opStatus.Phase).To(Equal(common.PhaseComplete))
	})

	It("returns PhasePending when job is still running", func() {
		mockRunningJob := batchv1.Job{
			ObjectMeta: v1.ObjectMeta{
				Name:      "test-dataload-loader-job-1",
				Namespace: "default",
				Labels: map[string]string{
					"cronjob": "test-dataload-loader-job",
				},
				CreationTimestamp: lastScheduleTime,
			},
			Status: batchv1.JobStatus{},
		}

		fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockCronDataload, &mockCronJob, &mockRunningJob)
		handler := &CronStatusHandler{Client: fakeClient, dataLoad: &mockCronDataload}
		ctx := cruntime.ReconcileRequestContext{Log: fake.NullLogger()}

		opStatus, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status)
		Expect(err).NotTo(HaveOccurred())
		Expect(opStatus.LastScheduleTime).To(Equal(&lastScheduleTime))
		Expect(opStatus.LastSuccessfulTime).To(Equal(&lastSuccessfulTime))
		Expect(opStatus.Phase).To(Equal(common.PhasePending))
	})

	It("returns PhaseFailed when job failed", func() {
		mockFailedJob := batchv1.Job{
			ObjectMeta: v1.ObjectMeta{
				Name:      "test-dataload-loader-job-1",
				Namespace: "default",
				Labels: map[string]string{
					"cronjob": "test-dataload-loader-job",
				},
				CreationTimestamp: lastScheduleTime,
			},
			Status: batchv1.JobStatus{
				Conditions: []batchv1.JobCondition{
					{
						Type:               batchv1.JobFailed,
						LastProbeTime:      lastSuccessfulTime,
						LastTransitionTime: lastSuccessfulTime,
					},
				},
			},
		}

		// Start with pending phase so it can transition to failed
		mockCronDataload.Status.Phase = common.PhasePending
		fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockCronDataload, &mockCronJob, &mockFailedJob)
		handler := &CronStatusHandler{Client: fakeClient, dataLoad: &mockCronDataload}
		ctx := cruntime.ReconcileRequestContext{Log: fake.NullLogger()}

		opStatus, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status)
		Expect(err).NotTo(HaveOccurred())
		Expect(opStatus.Phase).To(Equal(common.PhaseFailed))
	})

	It("returns current status when no current job matches schedule", func() {
		// CronJob with LastScheduleTime but no jobs matching
		fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockCronDataload, &mockCronJob)
		handler := &CronStatusHandler{Client: fakeClient, dataLoad: &mockCronDataload}
		ctx := cruntime.ReconcileRequestContext{Log: fake.NullLogger()}

		opStatus, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status)
		Expect(err).NotTo(HaveOccurred())
		expectedStatus := mockCronDataload.Status.DeepCopy()
		expectedStatus.LastScheduleTime = &lastScheduleTime
		expectedStatus.LastSuccessfulTime = &lastSuccessfulTime
		Expect(opStatus).To(Equal(expectedStatus))
	})
```
|
|
||
| It("returns current status when cronjob is missing and helm release cleanup succeeds", func() { | ||
| expectedReleaseName := utils.GetDataLoadReleaseName(mockCronDataload.Name) | ||
| var cleanupReleaseName string | ||
| var cleanupNamespace string | ||
| cleanupPatch := gomonkey.ApplyFunc(helm.DeleteReleaseIfExists, func(name, namespace string) error { | ||
| cleanupReleaseName = name | ||
| cleanupNamespace = namespace | ||
| return nil | ||
| }) | ||
| defer cleanupPatch.Reset() | ||
|
|
||
| fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockCronDataload) | ||
| handler := &CronStatusHandler{Client: fakeClient, dataLoad: &mockCronDataload} | ||
| ctx := cruntime.ReconcileRequestContext{ | ||
| NamespacedName: types.NamespacedName{Namespace: mockCronDataload.Namespace}, | ||
| Log: fake.NullLogger(), | ||
| } | ||
| if opStatus.Phase != testcase.expectedPhase { | ||
| t.Error("Failed to GetOperationStatus", "expected phase", testcase.expectedPhase, "get", opStatus.Phase) | ||
|
|
||
| opStatus, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status) | ||
| Expect(err).NotTo(HaveOccurred()) | ||
| Expect(opStatus).To(Equal(&mockCronDataload.Status)) | ||
| Expect(cleanupReleaseName).To(Equal(expectedReleaseName)) | ||
| Expect(cleanupNamespace).To(Equal(mockCronDataload.Namespace)) | ||
| }) | ||
| }) |
For completeness and to align with the test coverage for OnceStatusHandler, consider adding a test case for CronStatusHandler that covers the scenario where the CronJob is missing and the subsequent Helm release cleanup fails. This will ensure the error handling path is also tested for cron-based dataloads.
Here is a suggested test case to add within the Describe("CronStatusHandler", ...) block:
It("returns error when cronjob is missing and helm release cleanup fails", func() {
	sentinelErr := errors.New("cleanup failed")
	expectedReleaseName := utils.GetDataLoadReleaseName(mockCronDataload.Name)
	var cleanupReleaseName string
	var cleanupNamespace string
	cleanupPatch := gomonkey.ApplyFunc(helm.DeleteReleaseIfExists, func(name, namespace string) error {
		cleanupReleaseName = name
		cleanupNamespace = namespace
		return sentinelErr
	})
	defer cleanupPatch.Reset()

	fakeClient := fake.NewFakeClientWithScheme(testScheme, &mockCronDataload)
	handler := &CronStatusHandler{Client: fakeClient, dataLoad: &mockCronDataload}
	ctx := cruntime.ReconcileRequestContext{
		NamespacedName: types.NamespacedName{Namespace: mockCronDataload.Namespace},
		Log:            fake.NullLogger(),
	}

	_, err := handler.GetOperationStatus(ctx, &mockCronDataload.Status)
	Expect(err).To(MatchError(sentinelErr))
	Expect(cleanupReleaseName).To(Equal(expectedReleaseName))
	Expect(cleanupNamespace).To(Equal(mockCronDataload.Namespace))
})


Ⅰ. Describe what this PR does

Migrate the `pkg/controllers/v1alpha1/dataload` tests to Ginkgo v2/Gomega, add the package suite bootstrap, expand package-local coverage to 77.2%, and make the status-handler cleanup path hermetic.

Ⅱ. Does this pull request fix one issue?

NONE

Ⅲ. List the added test cases (unit test/integration test) if any, please explain if no tests are needed.

Add Ginkgo coverage in `implement_test.go`, `status_handler_test.go`, and `suite_test.go`, including the Cron status path and the missing-job cleanup error path.

Ⅳ. Describe how to verify it
- `go test ./pkg/controllers/v1alpha1/dataload/... -count=1 -v`
- `go test -coverprofile=/tmp/cover.out ./pkg/controllers/v1alpha1/dataload/... -count=1`
- `go tool cover -func=/tmp/cover.out | tail -1`
- `make fmt`

Ⅴ. Special notes for reviews

N/A