Immersed boundaries integration with IGR #1095
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough: Integrates IBM with IGR.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant IGR as IGR Solver
    participant Jac as Jacobian (jac / jac_old)
    participant IBM as m_ibm (s_interpolate_sigma_igr)
    participant GPU as GPU-parallel
    Note over IGR: Complete IGR iterative solve
    alt IBM active
        IGR->>GPU: zero jac at IB-marked cells (parallel)
        IGR->>IBM: call s_interpolate_sigma_igr(jac)
        IBM->>GPU: launch GPU-parallel interpolation & assemble sigma / q_cons_vf
        GPU-->>Jac: update jac entries at image/ghost points
        alt Jacobi solver selected
            IGR->>GPU: copy jac -> jac_old (parallel)
        end
    else IBM inactive
        Note over IGR: no IBM interpolation invoked
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/simulation/m_ibm.fpp (1)
891-1083: Add `present(q_cons_vf)` guard to IGR branch in `s_interpolate_image_point`

The `if (igr)` branch at line 963 dereferences `q_cons_vf` without checking if it is present. While the call site in `s_ibm_correct_state` only passes `q_cons_vf=` in the explicit `else if (igr)` branch, the mutually-exclusive if-else-if structure means that when both `bubbles_euler = .true.` and `igr = .true.` are set, the call enters the `bubbles_euler` branch without passing `q_cons_vf`, but the subroutine still executes the `if (igr)` block internally with an absent optional argument. The Fortran standard prohibits accessing an omitted optional argument, creating undefined behavior on lines 972–981 and 995–996.

Change line 963 from `if (igr) then` to `if (igr .and. present(q_cons_vf)) then`.

Optionally add `@:ASSERT(.not. igr .or. present(q_cons_vf), 'IGR requires q_cons_vf in s_interpolate_image_point')` near the top of the subroutine for early detection of unsupported configurations.
🧹 Nitpick comments (2)
src/simulation/m_igr.fpp (1)
18-18: New dependency on `m_ibm` looks reasonable but widens coupling

Bringing `m_ibm` into `m_igr` to access `ib_markers` and `s_update_igr` is consistent with the new IBM–IGR workflow and doesn't introduce a circular dependency (`m_ibm` doesn't `use m_igr`). Just be aware this further couples the IGR solver to IBM internals; if you intend `s_update_igr` to stay IBM-internal, consider exposing only what's needed via an `only:` list on the `use` statement in a follow-up.

src/simulation/m_ibm.fpp (1)
315-328: Pressure setting skipped for IGR is fine but note the TODO for moving IGR

The pressure assignment block:

```fortran
! !TEMPORARY, NEED TO FIX FOR MOVING IGR
if (.not. igr) then
    if (patch_ib(patch_id)%moving_ibm <= 1) then
        q_prim_vf(E_idx)%sf(j, k, l) = pres_IP
    else
        ...
    end if
end if
```

correctly avoids reusing primitive-variable pressure logic for IGR cases, which now build ghost states from conservative variables. The in-code TODO acknowledges that moving IBM + IGR will eventually need a dedicated treatment; for static or currently tested configurations the guard is appropriate.
I’d just suggest turning this into a more explicit FIXME (or opening a tracked issue) so the moving‑IGR pressure model doesn’t get forgotten.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/simulation/m_ibm.fpp (11 hunks), src/simulation/m_igr.fpp (2 hunks), toolchain/mfc/case_validator.py (0 hunks)
💤 Files with no reviewable changes (1)
- toolchain/mfc/case_validator.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop
**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under `&`
Use lower-case keywords and intrinsics (`do`, `end subroutine`, etc.)
Name modules with `m_<feature>` prefix (e.g., `m_transport`)
Name public subroutines as `s_<verb>_<noun>` (e.g., `s_compute_flux`) and functions as `f_<verb>_<noun>`
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid `goto` statements (except unavoidable legacy); avoid global state (COMMON, `save`)
Every variable must have `intent(in|out|inout)` specification and appropriate `dimension`/`allocatable`/`pointer`
Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
Use `!>` style documentation for header comments; follow Doxygen Fortran format with `!! @param` and `!! @return` tags
Use `implicit none` statement in all modules
Use `private` declaration followed by explicit `public` exports in modules
Use derived types with pointers for encapsulation (e.g., `pointer, dimension(:,:,:) => null()`)
Use `pure` and `elemental` attributes for side-effect-free functions; combine them for array ...
Files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Do not place `stop` or `error stop` inside device code
Files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
src/**/*.fpp
📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)
src/**/*.fpp: Use `.fpp` file extension for Fypp preprocessed files; CMake transpiles them to `.f90`
Start module files with Fypp include for macros: `#:include 'macros.fpp'`
Use the fypp `ASSERT` macro for validating conditions: `@:ASSERT(predicate, message)`
Use fypp macro `@:ALLOCATE(var1, var2)` for device-aware allocation instead of standard Fortran `allocate`
Use fypp macro `@:DEALLOCATE(var1, var2)` for device-aware deallocation instead of standard Fortran `deallocate`
Files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: cubic · AI code reviewer
🔇 Additional comments (1)
src/simulation/m_ibm.fpp (1)
269-309: Index formulations for species volume fractions are correctly synchronized

The code correctly uses both index forms interchangeably: IGR writes to `q_cons_vf(E_idx + q)` while non-IGR writes to `q_prim_vf(advxb + q - 1)`, and since `advxb = E_idx + 1`, both expressions reference identical array positions. The conservative variable assignments stay in sync across all branches.
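The index equivalence claimed above is simple arithmetic and can be sanity-checked with a short sketch (Python here for brevity; the value of `E_idx` below is an arbitrary stand-in, not MFC's actual index):

```python
# Stand-in value: in MFC, E_idx is the energy-equation index and
# advxb = E_idx + 1 is the first advection-equation index.
E_idx = 5
advxb = E_idx + 1

# For every species index q, the IGR target E_idx + q and the
# non-IGR target advxb + q - 1 address the same array position.
assert all(E_idx + q == advxb + q - 1 for q in range(1, 11))
print("index forms agree for all q")
```

The equivalence holds for any `E_idx`, since both expressions reduce to `E_idx + q`.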
4 issues found across 3 files
<file name="src/simulation/m_ibm.fpp">
<violation number="1" location="src/simulation/m_ibm.fpp:162">
P2: GPU data mapping concern: `jac_sf` is accessed within the GPU parallel kernel but is not explicitly included in data clauses (copyin/copy/update). This could result in accessing host memory from device code or using stale data. Verify that `jac_sf%sf` is properly mapped to the device before this kernel executes.</violation>
<violation number="2" location="src/simulation/m_ibm.fpp:240">
P2: Inconsistent loop bounds: zeroing `jac` for IB markers uses `0..m, 0..n, 0..p` ranges, but copying to `jac_old` uses `idwbuff(...)%beg:...%end`. If `idwbuff` extends beyond the computational domain (e.g., includes ghost/halo cells), this may leave ghost regions of `jac_old` inconsistent after IB updates.</violation>
<violation number="3" location="src/simulation/m_ibm.fpp:291">
P1: The optional `q_cons_vf` argument is accessed without a `present(q_cons_vf)` guard when `igr` is true. If this subroutine is ever called with `igr` enabled but without passing `q_cons_vf`, this will cause undefined behavior. Consider adding `if (igr .and. .not. present(q_cons_vf))` error handling or restructuring the logic.</violation>
<violation number="4" location="src/simulation/m_ibm.fpp:982">
P1: Variables `rho_K`, `gamma_K`, `pi_inf_K`, and `qv_K` will be uninitialized when both `elasticity` and `igr` are true. The elasticity case is commented out, but these variables are still used in subsequent calculations (`max(rho_K, sgm_eps)`, division by `rho_K`, and `s_compute_pressure` call). Either uncomment and fix the elasticity call, or add a runtime check to prevent this code path.</violation>
</file>
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/simulation/m_ibm.fpp (1)
898-927: Uninitialized mixture variables when `igr` and `elasticity` are both true

The elasticity branch in the IGR path of `s_interpolate_image_point` remains commented out (lines 988–989), leaving `rho_K`, `gamma_K`, `pi_inf_K`, and `qv_K` uninitialized when both `igr` and `elasticity` are enabled. These uninitialized variables are then used in:

- `rho_K = max(rho_K, sgm_eps)` (line 997)
- Dynamic pressure accumulation with division by `rho_K` (lines 1003–1006)
- `s_compute_pressure` call passing `pi_inf_K`, `gamma_K`, `rho_K`, `qv_K` (lines 1008–1012)

This produces undefined behavior and garbage values for pressure and velocity at IBM ghost points.

Enable the elasticity path and declare the missing variables `G_K` (scalar) and `Gs` (array of size `num_fluids`) to match the pattern used in other routines like `m_sim_helpers.fpp`.
🧹 Nitpick comments (2)
src/simulation/m_ibm.fpp (2)
244-257: Moving-IBM interior pressure is now explicitly disabled for IGR

Gating the "moving IBM interior pressure" initialization with `if (.not. igr)` makes the current limitation (moving IBM + IGR not supported) explicit and prevents that heuristic from running under IGR.

This is reasonable as an interim step; just ensure this limitation is documented or guarded elsewhere (e.g., in case setup/validator) so users don't assume full moving-IBM support with IGR.
898-905: Add guard for optional `q_cons_vf` when IGR is enabled

The subroutine `s_interpolate_image_point` declares `q_cons_vf` as optional but dereferences it unconditionally inside the `if (igr)` block without checking `present()`. While the current only IGR call site (`s_ibm_correct_state`) always supplies `q_cons_vf`, this is fragile. Future callers might invoke this routine under IGR without providing `q_cons_vf`, causing undefined behavior.

Add a guard at the routine entry:

```fortran
if (igr .and. .not. present(q_cons_vf)) then
  call s_mpi_abort('s_interpolate_image_point: q_cons_vf must be present when igr is enabled')
end if
```

This hardens the API boundary and prevents silent errors.
Also applies to: 970-987
📒 Files selected for processing (2)
src/simulation/m_ibm.fpp (11 hunks), src/simulation/m_igr.fpp (2 hunks)
🔇 Additional comments (2)
src/simulation/m_igr.fpp (1)
373-398: IBM post-processing of `jac` and `jac_old` looks consistent

Zeroing `jac` inside IBM-marked cells over `0:m, 0:n, 0:p`, then calling `s_update_igr(jac_sf)` and finally re-syncing `jac_old` for Jacobi (`igr_iter_solver == 1`) matches the allocation/usage patterns of `jac`/`jac_old` and the IBM ghost-point interpolation.

No further issues spotted in this block.
src/simulation/m_ibm.fpp (1)
289-307: IGR-specific conservative updates in `s_ibm_correct_state` look coherent

For `igr` runs:

- The image-point interpolation now fills `alpha_rho_IP` / `alpha_IP` via the IGR path in `s_interpolate_image_point`.
- You correctly map those into `q_cons_vf` (single- and multi-fluid) at the ghost points, instead of going through `q_prim_vf`.
- The later `.not. igr` block that resets continuity and advective variables is skipped, so the IGR-consistent values aren't overwritten.

This matches the new conservative-variable-based design; no issues spotted.
Also applies to: 390-397
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## master #1095 +/- ##
==========================================
- Coverage 43.98% 43.81% -0.17%
==========================================
Files 71 71
Lines 20284 20363 +79
Branches 1982 1991 +9
==========================================
+ Hits 8922 8923 +1
- Misses 10225 10300 +75
- Partials 1137 1140 +3

☔ View full report in Codecov by Sentry.
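The totals in the coverage diff above are internally consistent; a quick arithmetic check (Python, using only the figures quoted in the table) confirms the percentages and deltas:

```python
# Figures quoted from the Codecov table above (master vs. this PR's head).
base_hits, base_lines = 8922, 20284
head_hits, head_lines = 8923, 20363

base_cov = 100.0 * base_hits / base_lines   # reported as 43.98%
head_cov = 100.0 * head_hits / head_lines   # reported as 43.81%

assert head_lines - base_lines == 79        # Lines +79
assert head_hits - base_hits == 1           # Hits +1
assert abs(base_cov - 43.98) < 0.02
assert abs(head_cov - 43.81) < 0.02
assert abs((head_cov - base_cov) - (-0.17)) < 0.02   # Coverage -0.17%
```

So the drop comes almost entirely from the 79 added lines, of which only one is covered by tests.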
@Cowsreal It would be nice to see a convergence result for IGR vs WENO. I.e., run the supersonic flow over a cylinder case at a moderately high resolution with WENO5, then run IGR at three or four resolutions approaching that moderately high resolution, and compare the surface pressure along the same axis. Depending on how automated/quick this is, it might be good to see the same for the airfoil (or do the airfoil instead. Since they're NACA profiles, you could compute the pressure lift/drag and compare it to published values). |
Yes, ok that would be cool to find a real study and compare it. I'll try to do that. |
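The pressure lift/drag comparison suggested above can be sketched generically. The following is a hedged illustration (not MFC code) of integrating surface pressure around a closed body to get pressure-only force coefficients; it assumes freestream along +x, ignores viscous stresses, and all inputs are made-up placeholder values:

```python
import math

def pressure_lift_drag(xs, ys, ps, rho_inf, u_inf, chord):
    """Integrate -p * n ds around a closed, counter-clockwise surface polygon.

    xs, ys: surface point coordinates; ps: pressure sampled at each point.
    Returns (Cl, Cd) normalized by freestream dynamic pressure and chord.
    """
    fx = fy = 0.0
    n_pts = len(xs)
    for i in range(n_pts):
        j = (i + 1) % n_pts
        dx, dy = xs[j] - xs[i], ys[j] - ys[i]
        p_mid = 0.5 * (ps[i] + ps[j])
        # Outward normal of a CCW polygon edge (dx, dy) is (dy, -dx),
        # with |n| ds already folded into the edge length.
        fx += -p_mid * dy        # drag direction (+x)
        fy += -p_mid * (-dx)     # lift direction (+y)
    q_inf = 0.5 * rho_inf * u_inf**2
    return fy / (q_inf * chord), fx / (q_inf * chord)

# Sanity check: uniform pressure on a closed surface gives zero net force.
theta = [2 * math.pi * k / 200 for k in range(200)]
xs = [math.cos(t) for t in theta]
ys = [math.sin(t) for t in theta]
cl, cd = pressure_lift_drag(xs, ys, [1.0] * 200, rho_inf=1.0, u_inf=1.0, chord=1.0)
assert abs(cl) < 1e-9 and abs(cd) < 1e-9
```

For the NACA comparison, `ps` would come from the IBM surface pressure at each resolution, and the resulting Cl/Cd could be plotted against the WENO5 reference and published values.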
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/simulation/m_ibm.fpp (1)
1029-1038: Incorrect volume fraction interpolation for multi-fluid IGR

The calculation of `alpha_IP(num_fluids)` at line 1037 is mathematically incorrect. Since `alpha_sum` accumulates `coeff * alpha_l`, the formula `(1._wp - alpha_sum)` produces incorrect interpolation weights.

Current behavior per cell:
- `alpha_sum = coeff * Σ(alpha_1..alpha_{n-1})`
- `alpha_IP(n) += 1 - coeff*local_sum`

Expected behavior:
- `alpha_IP(n) += coeff * (1 - local_sum) = coeff - coeff*local_sum`

The difference `(1 - coeff)` accumulates across cells, causing incorrect volume fraction values in multi-fluid IGR simulations.

🔎 Proposed fix

```diff
  alpha_rho_IP(num_fluids) = alpha_rho_IP(num_fluids) + coeff*q_cons_vf(num_fluids)%sf(i, j, k)
- alpha_IP(num_fluids) = alpha_IP(num_fluids) + (1._wp - alpha_sum)
+ alpha_IP(num_fluids) = alpha_IP(num_fluids) + coeff - alpha_sum
  end if
```
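A quick numeric sketch (Python, with made-up stencil coefficients) shows how the `(1 - coeff)` error accumulates across the interpolation cells:

```python
# Hypothetical trilinear stencil: 8 cells, interpolation weights sum to 1,
# and local_sum stands for sum(alpha_1..alpha_{n-1}) in each cell.
coeffs = [0.1, 0.2, 0.05, 0.15, 0.1, 0.1, 0.2, 0.1]
local_sums = [0.3] * 8

# Buggy accumulation: alpha_IP(n) += 1 - coeff*local_sum per cell.
buggy = sum(1.0 - c * ls for c, ls in zip(coeffs, local_sums))
# Fixed accumulation: alpha_IP(n) += coeff - coeff*local_sum per cell.
fixed = sum(c - c * ls for c, ls in zip(coeffs, local_sums))

# The fix yields the coeff-weighted complement, here 1 - 0.3 = 0.7:
assert abs(fixed - 0.7) < 1e-9
# The buggy form over-counts by sum(1 - coeff_i) = 7 in this example:
assert abs(buggy - fixed - 7.0) < 1e-9
```

With the fix, the last volume fraction interpolates to a physically valid value instead of a number near 8.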
🧹 Nitpick comments (3)
src/simulation/m_ibm.fpp (3)
172-179: Uninitialized read of `ip_grid(3)` in 2D mode

Line 174 unconditionally reads `gp%ip_grid(3)` before the 2D guard overwrites `l1`/`l2`. In 2D runs (`p == 0`), `ip_grid(3)` is never initialized by `s_compute_image_points`, making this a potentially undefined read. Move the `ip_grid(3)` access inside an `else` branch as suggested in past review.

🔎 Proposed fix

```diff
  j1 = gp%ip_grid(1); j2 = j1 + 1
  k1 = gp%ip_grid(2); k2 = k1 + 1
- l1 = gp%ip_grid(3); l2 = l1 + 1
-
  if (p == 0) then
    l1 = 0
    l2 = 0
+ else
+   l1 = gp%ip_grid(3)
+   l2 = l1 + 1
  end if
```
1058-1091: Features silently disabled with IGR

Surface tension (line 1058), Euler bubbles (line 1062), and QBMM (line 1077) interpolation are skipped when IGR is active. While this prevents crashes, users may not realize these features are inactive.
Consider adding validation at simulation setup to warn or abort when unsupported combinations are requested:
```fortran
! In case validation or initialization:
if (igr .and. surface_tension) &
    call s_mpi_abort("Surface tension is not supported with IGR")
if (igr .and. bubbles_euler) &
    call s_mpi_abort("Euler bubbles are not supported with IGR")
if (igr .and. qbmm) &
    call s_mpi_abort("QBMM is not supported with IGR")
```
36-39: Consider adding `s_interpolate_sigma_igr` to explicit public list

The new subroutine `s_interpolate_sigma_igr` is imported by `m_igr.fpp` but isn't listed in the explicit `public` declaration. While it's implicitly public (no module-level `private` statement), adding it explicitly would improve maintainability.

```diff
  public :: s_initialize_ibm_module, &
            s_ibm_setup, &
            s_ibm_correct_state, &
-           s_finalize_ibm_module
+           s_finalize_ibm_module, &
+           s_interpolate_sigma_igr
```
📒 Files selected for processing (1)
src/simulation/m_ibm.fpp
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop
**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under `&`
Use lower-case keywords and intrinsics (`do`, `end subroutine`, etc.)
Name modules with `m_<feature>` prefix (e.g., `m_transport`)
Name public subroutines as `s_<verb>_<noun>` (e.g., `s_compute_flux`) and functions as `f_<verb>_<noun>`
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid `goto` statements (except unavoidable legacy); avoid global state (`COMMON`, `save`)
Every variable must have `intent(in|out|inout)` specification and appropriate `dimension`/`allocatable`/`pointer`
Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
Use `!>` style documentation for header comments; follow Doxygen Fortran format with `!! @param` and `!! @return` tags
Use `implicit none` statement in all modules
Use `private` declaration followed by explicit `public` exports in modules
Use derived types with pointers for encapsulation (e.g., `pointer, dimension(:,:,:) => null()`)
Use `pure` and `elemental` attributes for side-effect-free functions; combine them for array ...
Files:
src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Do not place `stop` or `error stop` inside device code
Files:
src/simulation/m_ibm.fpp
src/**/*.fpp
📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)
src/**/*.fpp: Use `.fpp` file extension for Fypp preprocessed files; CMake transpiles them to `.f90`
Start module files with Fypp include for macros: `#:include 'macros.fpp'`
Use the fypp `ASSERT` macro for validating conditions: `@:ASSERT(predicate, message)`
Use fypp macro `@:ALLOCATE(var1, var2)` for device-aware allocation instead of standard Fortran `allocate`
Use fypp macro `@:DEALLOCATE(var1, var2)` for device-aware deallocation instead of standard Fortran `deallocate`
Files:
src/simulation/m_ibm.fpp
🧠 Learnings (19)
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `private` declaration followed by explicit `public` exports in modules
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Avoid `goto` statements (except unavoidable legacy); avoid global state (`COMMON`, `save`)
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `wp` (working precision) parameter from `m_precision_select` instead of hardcoded precision like `real*8`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Avoid outdated Fortran features: no `equivalence` statements, no `data` statements (use initialization expressions), use `character(len=N)` instead of `Character*N`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Call s_mpi_abort(<msg>) for errors, never use stop or error stop
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Every variable must have `intent(in|out|inout)` specification and appropriate `dimension` / `allocatable` / `pointer`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not place `stop` or `error stop` inside device code
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Applied to files:
src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: Code Cleanliness Check
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Build & Publish
🔇 Additional comments (5)
src/simulation/m_ibm.fpp (5)
244-257: LGTM! The interior pressure initialization is correctly guarded for non-IGR paths, and all loop indices (`j`, `k`, `l`) are properly declared in the `private` list for GPU parallelization.
289-291: LGTM! The IGR-specific interpolation call correctly passes the conservative variable field and elasticity-related parameters (`G_K`, `Gs`) as optional arguments.
298-316: LGTM! The conditional assignment correctly routes to conservative variables (`q_cons_vf`) for IGR and primitive variables (`q_prim_vf`) for the standard path. The multi-fluid handling properly distinguishes single-fluid and multi-fluid cases.
390-397: LGTM! The guard correctly prevents overwriting conservative variables that were already set for IGR at lines 298-308. For non-IGR paths, continuity and advection variables are properly assigned to `q_cons_vf`.
896-930: LGTM! The interface extension for IGR support is well-structured:
- Optional `q_cons_vf` parameter properly declared with `intent(IN)`
- Optional `G_K` and `Gs` parameters for elasticity support
- New local variables (`alpha_sum`, `pres`, `dyn_pres`, etc.) properly declared for IGR calculations
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="src/pre_process/m_check_ib_patches.fpp">
<violation number="1" location="src/pre_process/m_check_ib_patches.fpp:47">
P2: Error message references wrong patch. The condition `any(patch_ib(:)%moving_ibm > 0)` checks ALL patches, but the error message blames patch `iStr` (the current loop index), which may not be the offending patch. Consider either checking only the current patch `patch_ib(i)%moving_ibm > 0` or using a generic message that doesn't reference a specific patch number.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/simulation/m_ibm.fpp (2)
323-336: Moving IBM with IGR is now blocked at validation time
The `!TEMPORARY` comment indicates incomplete support for moving IBM pressure corrections with IGR. This is acceptable since the new validation in `m_check_ib_patches.fpp` (lines 47-49) now blocks this combination at pre-processing time, preventing users from running with an incomplete implementation.
1030-1038: Incorrect multi-fluid volume fraction interpolation for IGR
This issue was previously flagged. Since `alpha_sum` accumulates `coeff * alpha_l` (weighted), the formula at line 1038 computes `1 - coeff*local_sum` instead of the correct `coeff * (1 - local_sum)`.

The fix should use `coeff - alpha_sum` to correctly weight the last fluid's contribution.

🔎 Proposed fix

```diff
- alpha_IP(num_fluids) = alpha_IP(num_fluids) + (1._wp - alpha_sum)
+ alpha_IP(num_fluids) = alpha_IP(num_fluids) + coeff - alpha_sum
```
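The algebra behind this fix can be checked in a few lines of Python. The numbers are hypothetical; `coeff`, `local_sum`, and `alpha_sum` are stand-ins for the Fortran variables named above:

```python
# Minimal numeric check of the weighting identity behind the proposed fix.
coeff, local_sum = 0.25, 0.4       # one interpolation weight, one partial alpha sum
alpha_sum = coeff * local_sum      # what the loop accumulates: coeff * alpha_l

buggy = 1.0 - alpha_sum            # current code: 1 - coeff*local_sum
fixed = coeff - alpha_sum          # proposed fix: equals coeff*(1 - local_sum)
correct = coeff * (1.0 - local_sum)

print(abs(fixed - correct) < 1e-12)   # True: the fix matches the intended weighting
print(abs(buggy - correct) < 1e-12)   # False: the current form over-counts
```

Since each `coeff` is less than one, adding `1 - alpha_sum` per interpolation point inflates the last fluid's volume fraction well above its weighted share.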
🧹 Nitpick comments (1)
src/pre_process/m_check_ib_patches.fpp (1)
47-49: Consider improving the validation check placement and scope
The check has a few issues:

1. Redundant execution: This check runs once per active IB patch (inside the loop), but only needs to run once since it uses `any()` over all patches.
2. Uninitialized access: `patch_ib(:)` includes all `num_patches_max` patches, which may include uninitialized values for inactive patches. Should use `patch_ib(1:num_ibs)` to only check active patches.
3. Misleading error message: The message references `patch_ib("//trim(iStr)//")` where `iStr` is the current loop index, but the problematic patch could be any of the active patches.

🔎 Suggested improvement
Move the check outside the loop and improve the message:

```diff
 impure subroutine s_check_ib_patches

     integer :: i

+    ! Global check: IGR does not support moving immersed boundaries
+    @:PROHIBIT(igr .and. any(patch_ib(1:num_ibs)%moving_ibm > 0), "Cannot use &
+        moving immersed boundary with IGR. All patch_ib(:)%moving_ibm must be 0.")
+
     do i = 1, num_patches_max
         if (i <= num_ibs) then
             ! call s_check_patch_geometry(i)
             call s_int_to_str(i, iStr)

             @:PROHIBIT(patch_ib(i)%geometry == dflt_int, "IB patch undefined. &
                 patch_ib("//trim(iStr)//")%geometry must be set.")

-            @:PROHIBIT(igr .and. any(patch_ib(:)%moving_ibm > 0), "Cannot use &
-                moving immersed boundary with IGR. patch_ib("//trim(iStr)//")%moving_ibm must be 0.")
-
             ! Constraints on the geometric initial condition patch parameters
```
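The hoist-out-of-the-loop pattern suggested here can be sketched in Python. The `patches` records and `check_patches` helper are hypothetical stand-ins for `patch_ib` and the Fortran `@:PROHIBIT` checks:

```python
# Hypothetical patch records standing in for patch_ib(1:num_ibs).
patches = [{"geometry": 2, "moving_ibm": 0}, {"geometry": 3, "moving_ibm": 1}]

def check_patches(patches, igr):
    errors = []
    # Global, loop-invariant check: run once, blame no particular patch index.
    if igr and any(p["moving_ibm"] > 0 for p in patches):
        errors.append("Cannot use moving immersed boundary with IGR.")
    # Per-patch checks stay inside the loop and may cite the patch index.
    for i, p in enumerate(patches, start=1):
        if p["geometry"] is None:
            errors.append(f"IB patch undefined. patch_ib({i})%geometry must be set.")
    return errors

print(check_patches(patches, igr=True))
# → ['Cannot use moving immersed boundary with IGR.']
```

Running the invariant check once also avoids reading uninitialized entries past the active patch count.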
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/pre_process/m_check_ib_patches.fpp
- src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Georgia Tech | Phoenix (cpu)
- GitHub Check: Georgia Tech | Phoenix (gpu-acc)
- GitHub Check: Georgia Tech | Phoenix (gpu-omp)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Code Cleanliness Check
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Build & Publish
🔇 Additional comments (6)
src/simulation/m_ibm.fpp (6)
154-197: LGTM: `s_interpolate_sigma_igr` correctly interpolates sigma at ghost points
The new subroutine properly mirrors the existing image-point interpolation pattern. The GPU parallel loop has appropriate private variables, and the 2D case handling at lines 177-180 correctly sets `l1 = l2 = 0` to avoid out-of-bounds access.
245-258: LGTM: Interior pressure initialization correctly gated by IGR
The block properly skips interior pressure initialization when IGR is active, which is handled separately through the conservative variable pathway.
290-296: LGTM: IGR-specific image point interpolation call
The new branch correctly passes `q_cons_vf`, `G_K`, and `Gs` to enable conservative-variable interpolation for the IGR pathway.
299-317: LGTM: Conservative variable assignment for IGR ghost points
The logic correctly handles both single-fluid and multi-fluid cases when setting conservative variables at ghost points under IGR.
904-912: LGTM: Optional parameters for IGR and elasticity support
The new optional parameters `q_cons_vf`, `G_K`, and `Gs` enable the conservative-variable pathway for IGR while maintaining backward compatibility with existing calls.
1059-1092: LGTM: Feature guards for IGR compatibility
The guards correctly disable surface tension, Euler bubbles, and QBMM interpolation when IGR is active, preventing undefined behavior with unsupported physics combinations.
Actionable comments posted: 1
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/pre_process/m_check_ib_patches.fpp
- src/simulation/m_ibm.fpp
🚧 Files skipped from review as they are similar to previous changes (1)
- src/pre_process/m_check_ib_patches.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `private` declaration followed by explicit `public` exports in modules
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Avoid `goto` statements (except unavoidable legacy); avoid global state (`COMMON`, `save`)
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `wp` (working precision) parameter from `m_precision_select` instead of hardcoded precision like `real*8`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Avoid outdated Fortran features: no `equivalence` statements, no `data` statements (use initialization expressions), use `character(len=N)` instead of `Character*N`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Call s_mpi_abort(<msg>) for errors, never use stop or error stop
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Every variable must have `intent(in|out|inout)` specification and appropriate `dimension` / `allocatable` / `pointer`
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not place `stop` or `error stop` inside device code
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Applied to files:
src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Applied to files:
src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: Code Cleanliness Check
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Georgia Tech | Phoenix (gpu-omp)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Oak Ridge | Frontier (cpu)
- GitHub Check: Oak Ridge | Frontier (gpu-acc)
- GitHub Check: Georgia Tech | Phoenix (cpu)
- GitHub Check: Georgia Tech | Phoenix (gpu-acc)
- GitHub Check: Oak Ridge | Frontier (gpu-omp)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Build & Publish
This PR is being reviewed by Cursor Bugbot
```fortran
call s_compute_pressure(q_cons_vf(E_idx)%sf(i, j, k), &
                        q_cons_vf(alf_idx)%sf(i, j, k), &
                        dyn_pres, pi_inf_K, gamma_K, rho_K, &
                        qv_K, rhoYks, pres, T, pres_mag=pres_mag)
```
Uninitialized variables in IGR path for chemistry mode
The IGR code path in s_interpolate_image_point declares rhoYks and T (line 926-927) but never initializes them before passing to s_compute_pressure (line 1013-1016). In chemistry mode (compile-time flag), s_compute_pressure reads rhoYks to compute species fractions and uses T as a Newton solver initial guess. Since chemistry is not prohibited with IGR in the validator, using IGR + IBM with a chemistry-enabled build would result in incorrect pressure calculations due to uninitialized species fractions and potentially poor temperature solver convergence.
IGR is not supported with chemistry yet, so a guard needs to be added inside the validator. It's ok to leave rhoYks and T as is because they're never used inside s_compute_pressure.
Code Implementation 🛠️
Implementation: Initialize `rhoYks` and `T` before calling `s_compute_pressure` in the IGR interpolation path to avoid passing uninitialized values in chemistry-enabled builds.

```fortran
! Ensure chemistry-related arguments are initialized even if chemistry
! is not supported with IGR yet.
rhoYks = 0._wp
T = 300._wp
call s_compute_pressure(q_cons_vf(E_idx)%sf(i, j, k), &
                        q_cons_vf(alf_idx)%sf(i, j, k), &
                        dyn_pres, pi_inf_K, gamma_K, rho_K, &
                        qv_K, rhoYks, pres, T, pres_mag=pres_mag)
```
See review comment here
You're right, I should initialize them with values just for code safety anyways.
…or cases with IGR + chemistry enabled
@Cowsreal Do you have a visual showing what the output of the airfoil case you ran for convergence looks like? |
|
What happened to airfoils, circles, and spheres? what about 3D cases? Do those work for you? The current circle and airfoil cases look obviously incorrect (initial shock wave propagating from the surface). It's an IC + BC problem. |
Yes Ben, here are some movies. After setting the region around the airfoil itself to be 0 to ensure continuity in the initial conditions, we only see a shockwave when the velocity region encounters the airfoil IB itself. I examined the first few frames individually too. Frames 1-3 contain no shock waves. Frame 4 is when the velocity region's boundary encounters the airfoil's surface, and the circular shockwave that you see is generated. Both movies below are run at a grid resolution of 800x800. I will get to working on the circle case too. Here's IGR5. movie.mp4Here's WENO5. weno.mp4 |
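For anyone reproducing the setup described above, the zeroed velocity patch around the body can be sketched like this (grid size, freestream value, and clearance radius are all made-up placeholders, not the case-file values):

```python
import numpy as np

# Hypothetical sketch: zero the initial velocity in a disc enclosing the body
# so the initial condition is continuous at the immersed boundary.
nx, ny = 200, 200
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

u = np.full((nx, ny), 50.0)   # freestream x-velocity (made-up value)
xc, yc, r = 0.5, 0.5, 0.15    # assumed body center and clearance radius

mask = (X - xc)**2 + (Y - yc)**2 < r**2
u[mask] = 0.0                 # quiescent region around the immersed boundary
```

The moving velocity front then only generates a shock when it first reaches the quiescent patch, matching the behavior described above.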
|
/improve |
```fortran
subroutine s_interpolate_sigma_igr(jac)
    real(wp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:), intent(inout) :: jac
    integer :: j, k, l, r, s, t, i
    integer :: j1, j2, k1, k2, l1, l2
    real(wp) :: coeff, jac_IP
    type(ghost_point) :: gp

    ! At all ghost points, use its image point to interpolate sigma
    if (num_gps > 0) then
        $:GPU_PARALLEL_LOOP(private='[i, j, k, l, j1, j2, k1, k2, l1, l2, r, s, t, gp, coeff, jac_IP]')
        do i = 1, num_gps
            jac_IP = 0._wp
            gp = ghost_points(i)
            r = gp%loc(1)
            s = gp%loc(2)
            t = gp%loc(3)

            j1 = gp%ip_grid(1); j2 = j1 + 1
            k1 = gp%ip_grid(2); k2 = k1 + 1
            l1 = gp%ip_grid(3); l2 = l1 + 1

            if (p == 0) then
                l1 = 0
                l2 = 0
            end if

            $:GPU_LOOP(parallelism='[seq]')
            do l = l1, l2
                $:GPU_LOOP(parallelism='[seq]')
                do k = k1, k2
                    $:GPU_LOOP(parallelism='[seq]')
                    do j = j1, j2
                        coeff = gp%interp_coeffs(j - j1 + 1, k - k1 + 1, l - l1 + 1)
                        jac_IP = jac_IP + coeff*jac(j, k, l)
                    end do
                end do
            end do
            jac(r, s, t) = jac_IP
        end do
        $:END_GPU_PARALLEL_LOOP()
    end if
end subroutine s_interpolate_sigma_igr
```
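As a sanity check on what the loop above computes, here is a standalone NumPy sketch (all names and sizes are made up, not MFC code) of the same gather: each ghost point is overwritten with the coefficient-weighted sum of the 2×2(×2) cell values around its image point, so a constant field must be reproduced exactly:

```python
import numpy as np

def interpolate_sigma(jac, gp_loc, ip_grid, coeffs, two_d=True):
    """Illustrative analogue of s_interpolate_sigma_igr for one ghost point.

    jac     : 3-D field of sigma values
    gp_loc  : (r, s, t) indices of the ghost point
    ip_grid : (j1, k1, l1) lower corner of the cell holding the image point
    coeffs  : (2, 2, 2) interpolation weights (should sum to 1)
    """
    j1, k1, l1 = ip_grid
    l_range = [l1] if two_d else [l1, l1 + 1]  # p == 0 collapses the z loop
    jac_ip = 0.0
    for l in l_range:
        for k in (k1, k1 + 1):
            for j in (j1, j1 + 1):
                jac_ip += coeffs[j - j1, k - k1, l - l1] * jac[j, k, l]
    jac[gp_loc] = jac_ip
    return jac_ip

# Uniform interior field: interpolation must reproduce the constant value
jac = np.full((8, 8, 1), 3.0)
jac[0, 0, 0] = 99.0                 # stale ghost-point value to be overwritten
coeffs = np.zeros((2, 2, 2))
coeffs[:, :, 0] = 0.25              # bilinear weights in 2-D
val = interpolate_sigma(jac, (0, 0, 0), (3, 3, 0), coeffs)
```

With bilinear weights summing to one, the ghost point recovers the constant field value, which is the consistency property the real subroutine relies on.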
Suggestion: To prevent a data race in the GPU-parallel loop within s_interpolate_sigma_igr, modify it to use a separate read-only source array for interpolation instead of reading from and writing to jac simultaneously. Also, add a bounds check for image point indices to prevent out-of-bounds access. [possible issue, importance: 9]
Suggested replacement (the current code is quoted above):

```fortran
subroutine s_interpolate_sigma_igr(jac, jac_src)
    real(wp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:), intent(inout) :: jac
    real(wp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:), intent(in) :: jac_src
    integer :: j, k, l, r, s, t, i
    integer :: j1, j2, k1, k2, l1, l2
    real(wp) :: coeff, jac_IP
    type(ghost_point) :: gp

    if (num_gps > 0) then
        $:GPU_PARALLEL_LOOP(private='[i, j, k, l, j1, j2, k1, k2, l1, l2, r, s, t, gp, coeff, jac_IP]')
        do i = 1, num_gps
            jac_IP = 0._wp
            gp = ghost_points(i)
            r = gp%loc(1)
            s = gp%loc(2)
            t = gp%loc(3)
            j1 = gp%ip_grid(1); j2 = j1 + 1
            k1 = gp%ip_grid(2); k2 = k1 + 1
            l1 = gp%ip_grid(3); l2 = l1 + 1
            if (p == 0) then
                l1 = 0
                l2 = 0
            end if
            if (j1 < idwbuff(1)%beg .or. j2 > idwbuff(1)%end .or. &
                k1 < idwbuff(2)%beg .or. k2 > idwbuff(2)%end .or. &
                l1 < idwbuff(3)%beg .or. l2 > idwbuff(3)%end) cycle
            $:GPU_LOOP(parallelism='[seq]')
            do l = l1, l2
                $:GPU_LOOP(parallelism='[seq]')
                do k = k1, k2
                    $:GPU_LOOP(parallelism='[seq]')
                    do j = j1, j2
                        coeff = gp%interp_coeffs(j - j1 + 1, k - k1 + 1, l - l1 + 1)
                        jac_IP = jac_IP + coeff*jac_src(j, k, l)
                    end do
                end do
            end do
            jac(r, s, t) = jac_IP
        end do
        $:END_GPU_PARALLEL_LOOP()
    end if
end subroutine s_interpolate_sigma_igr
```
```fortran
if (ib) then
    $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
    do l = 0, p
        do k = 0, n
            do j = 0, m
                if (ib_markers%sf(j, k, l) /= 0) then
                    jac(j, k, l) = 0._wp
                end if
            end do
        end do
    end do
    $:END_GPU_PARALLEL_LOOP()
    call s_interpolate_sigma_igr(jac)
    ! If using Jacobi Iter, we need to update jac_old again
    if (igr_iter_solver == 1) then
        $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
        do l = idwbuff(3)%beg, idwbuff(3)%end
            do k = idwbuff(2)%beg, idwbuff(2)%end
                do j = idwbuff(1)%beg, idwbuff(1)%end
                    jac_old(j, k, l) = jac(j, k, l)
                end do
            end do
        end do
        $:END_GPU_PARALLEL_LOOP()
    end if
end if
```
Suggestion: In s_igr_iterative_solve, call s_interpolate_sigma_igr with jac_old as the source for interpolation. This ensures ghost-point values are calculated from the field before it is zeroed-out within the immersed boundary, preventing corrupted stencil values. [possible issue, importance: 9]
Suggested replacement (the current code is quoted above):

```fortran
if (ib) then
    $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
    do l = 0, p
        do k = 0, n
            do j = 0, m
                if (ib_markers%sf(j, k, l) /= 0) then
                    jac(j, k, l) = 0._wp
                end if
            end do
        end do
    end do
    $:END_GPU_PARALLEL_LOOP()
    call s_interpolate_sigma_igr(jac, jac_old)
    if (igr_iter_solver == 1) then
        $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
        do l = idwbuff(3)%beg, idwbuff(3)%end
            do k = idwbuff(2)%beg, idwbuff(2)%end
                do j = idwbuff(1)%beg, idwbuff(1)%end
                    jac_old(j, k, l) = jac(j, k, l)
                end do
            end do
        end do
        $:END_GPU_PARALLEL_LOOP()
    end if
end if
```
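The order-dependence this suggestion guards against can be shown with a serial toy model (the values and the `update` helper are hypothetical, not MFC code): when one ghost point lies inside another's interpolation stencil, reading from the array being written gives a different answer than reading from a frozen snapshot.

```python
import numpy as np

def update(jac, ghost_points, src=None):
    """Overwrite each ghost point with the average of its two neighbours,
    reading either from jac itself (in-place) or from a frozen copy src."""
    read = jac if src is None else src
    for g in ghost_points:
        jac[g] = 0.5 * (read[g - 1] + read[g + 1])

base = np.array([0.0, 10.0, 4.0, 6.0, 8.0])
gps = [1, 2]                     # ghost point 2's stencil contains ghost point 1

inplace = base.copy()
update(inplace, gps)             # gp 2 sees the already-updated value at index 1

buffered = base.copy()
update(buffered, gps, src=base)  # both ghost points read the untouched snapshot
```

On a GPU the per-ghost-point iterations run concurrently, so the in-place variant is a genuine data race rather than merely order-dependent, which is why a read-only source array (or atomics) is the safer pattern.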
Good point. Although this won't work for Gauss-Seidel because jac_old is not allocated in that case to save space. However, now that you bring this up, the rest of the ibm module also suffers from this, so perhaps we can try using atomic writes.
Question:
Good point. Although this won't work for Gauss-Seidel because jac_old is not allocated in that case to save space. However, now that you bring this up, the rest of the ibm module also suffers from this, so perhaps we can try using atomic writes.
Answer:
Add project-specific guidance so /improve avoids suggestions that rely on arrays/buffers that may not exist in some solver modes (e.g., Gauss-Seidel), and instead proposes safe alternatives (conditional allocation, in-place compute ordering, atomics, or per-solver branches).

```toml
[pr_code_suggestions]
extra_instructions = """\
For solver/algorithm variants (e.g., Gauss-Seidel vs others), do NOT suggest using a work array unless you can confirm it is allocated/available in that mode.
- If a suggestion would require an unallocated array (like jac_old in Gauss-Seidel), propose one of:
  (a) conditional allocation with clear guards,
  (b) reordering/in-place computation that preserves required values,
  (c) atomic writes / race-safe parallel updates (GPU/OMP) when relevant,
  (d) per-solver branching (igr_iter_solver dependent) with minimal code duplication.
- When proposing changes to GPU-parallel loops, explicitly call out any race conditions and only recommend atomics when correctness requires it.
- Prefer solutions that are correct for all solver modes; if not possible, state the mode limitation explicitly.
"""
```

If you want even fewer "speculative" refactors in performance-sensitive GPU code, you can also keep suggestions focused on correctness issues:

```toml
[pr_code_suggestions]
focus_only_on_problems = true
```

Relevant Sources:
|
@Cowsreal for that vortex on the airfoil case, what are you running there? Those vortices don't look like anything that I have seen with airfoils before. Is that video IGR or WENO? And how does one compare to the other? |
```python
"fluid_pp(1)%gamma": 1.0 / (gam_a - 1.0),
"fluid_pp(1)%pi_inf": 0.0,
"viscous": "T",
"fluid_pp(1)%Re(1)": 1 / mu,
```
Suggestion: Incorrect Reynolds-number assignment: `fluid_pp(1)%Re(1)` is set to `1 / mu`, but given `mu = rho0 * v0 * h / 2e5` the correct Reynolds number should be `rho0 * v0 * h / mu` (which yields 2e5 as intended), not `1 / mu`. Assign the physically-correct expression so Re is dimensionless and consistent. [logic error]

Severity Level: Critical 🚨
- ❌ Viscous-flow setup uses wrong Reynolds number.
- ⚠️ Example validation against IGR/WENO mismatches.
- ⚠️ Performance/timestep choices may be inconsistent.

```python
"fluid_pp(1)%Re(1)": rho0 * v0 * h / mu,
```
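A quick arithmetic check of the two expressions, with made-up freestream values standing in for the case file's actual `rho0`, `v0`, and `h` (whether MFC expects `Re` or `1/mu` here is for the maintainers to confirm; this only shows the two expressions differ):

```python
# Illustrative values only, not the case-file numbers
rho0, v0, h = 1.4, 3.0, 1.0

mu = rho0 * v0 * h / 2e5        # viscosity chosen so the target Re is 2e5

re_wrong = 1 / mu               # what the case file currently sets
re_right = rho0 * v0 * h / mu   # dimensionless Reynolds number, recovers 2e5
```

Unless `rho0 * v0 * h` happens to equal 1, the two values disagree, which is the inconsistency the suggestion flags.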
Steps of Reproduction ✅
1. Inspect examples/2D_IGR_forward_facing_step/case.py: mu is computed at line 14 (`mu = rho0 * v0 * h / 2e5`) and the Reynolds entry is set at line 93 (`"fluid_pp(1)%Re(1)": 1 / mu,`).
2. Run `python examples/2D_IGR_forward_facing_step/case.py` to emit the JSON (print at line 17). The emitted JSON contains `"fluid_pp(1)%Re(1)"` equal to the numeric value of `1 / mu`.
3. Calculate the expected Re for this example: since mu was set as `rho0 * v0 * h / 2e5` (line 14), the intended nondimensional Re should be `rho0 * v0 * h / mu == 2e5`. The current printed value `1 / mu` does not equal 2e5 and therefore demonstrates the incorrect formula in place.
4. Any solver or validation harness consuming this JSON will use the wrong Reynolds number for this example, as demonstrated by inspecting the printed JSON.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** examples/2D_IGR_forward_facing_step/case.py
**Line:** 93:93
**Comment:**
*Logic Error: Incorrect Reynolds-number assignment: `fluid_pp(1)%Re(1)` is set to `1 / mu`, but given `mu = rho0 * v0 * h / 2e5` the correct Reynolds number should be `rho0 * v0 * h / mu` (which yields 2e5 as intended), not `1 / mu`. Assign the physically-correct expression so `Re` is dimensionless and consistent.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
CodeAnt AI Incremental review completed. |


User description
Description
Added immersed boundaries compatibility with IGR. I deleted the relevant prohibition inside the Python case checker.
Type of change
Please delete options that are not relevant.
Scope
If you cannot check the above box, please split your PR into multiple PRs that each have a common goal.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes.
Provide instructions so we can reproduce.
Please also list any relevant details for your test configuration
Plots of mean difference between IGR5 and WENO5 on airfoil contour
Figure: This shows the average of pressure differences over the airfoil contour for all time. (i.e., (mean(p1(s) - p2(s)))(t))

Figure: This shows the average of density differences over the airfoil contour for all time. (i.e., (mean(rho1(s) - rho2(s)))(t))

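The contour statistics plotted above can be reproduced with a few NumPy lines, given pressure samples on the airfoil contour from each scheme (the arrays below are random stand-ins, not actual solver output):

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_t = 64, 10   # contour samples x time frames (made-up sizes)

# Stand-ins for IGR5 and WENO5 pressure sampled on the contour over time
p1 = 1.0 + 0.01 * rng.standard_normal((n_s, n_t))
p2 = 1.0 + 0.01 * rng.standard_normal((n_s, n_t))

# Mean over arclength s at each time t: (mean_s(p1 - p2))(t)
mean_diff = (p1 - p2).mean(axis=0)
```

Plotting `mean_diff` against the frame times gives the curve shown in the figures; the same reduction applied to density arrays gives the second figure.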
Movies of pressure profile around airfoil contour
IGR5 results
presContour.mp4
WENO5 results
presContour.mp4
Movies of pressure
Airfoil, IGR5
airfoil_500_500.mp4
Airfoil, WENO5
airfoil_500_500.mp4
Square and Circle have viscosity with Re = 25000
Circle, IGR5
pres.mp4
Circle, WENO5
pres.mp4
Square, IGR5
pres.mp4
Square, WENO5
pres.mp4
Test Configuration:
Checklist
- I ran `./mfc.sh format` before committing my code

If your code changes any code source files (anything in `src/simulation`)

To make sure the code is performing as expected on GPU devices, I have:
PR Type
Enhancement
Description
Integrate immersed boundary method with IGR solver
Add `s_update_igr` subroutine for ghost point interpolation

Modify state correction to handle IGR-specific variable updates
Remove IBM-IGR incompatibility restriction from validator
Diagram Walkthrough
File Walkthrough
m_ibm.fpp — Integrate IBM with IGR solver capabilities (src/simulation/m_ibm.fpp)
- `s_update_igr` subroutine to interpolate sigma at ghost points using image point data
- `s_ibm_correct_state` to conditionally skip interior pressure initialization when IGR is enabled
- for IGR instead of primitive variables
- variables at ghost points
- calculations when IGR is active
- `s_interpolate_image_point` signature to accept optional conservative variables parameter
- conservative variables and mixture properties

m_igr.fpp — Integrate IBM updates into IGR solver loop (src/simulation/m_igr.fpp)
- `use m_ibm` module import for IBM integration; `s_update_igr`; `jac_old` for Jacobi iteration method when IBM is active

case_validator.py — Remove IGR-IBM incompatibility restriction (toolchain/mfc/case_validator.py)
- `ib` and `igr` options
CodeAnt-AI Description
Enable IGR to work with stationary immersed boundaries and add a new IGR example
What Changed
Impact
- ✅ Run IGR with stationary immersed boundaries
- ✅ Clearer error when IGR is used with moving immersed boundaries
- ✅ New reproducible IGR + IBM example for testing

💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.
Summary by CodeRabbit
New Features
Bug Fixes
Chores
Tests