
[Cpp API Compatibility] Align some compat APIs with libtorch#78099

Merged
SigureMo merged 3 commits into PaddlePaddle:develop from youge325:cFix
Mar 10, 2026

Conversation

@youge325
Contributor

@youge325 youge325 commented Mar 2, 2026

PR Category

Execute Infrastructure

PR Types

Bug fixes

Description

Align the arange, narrow, and sum APIs.
One abs test is still not aligned. Because Paddle's and PyTorch's underlying implementations differ, Paddle's abs forces the tensor produced by a transpose to become contiguous and resets its strides from the shape, whereas PyTorch keeps the strides unchanged and returns a non-contiguous tensor. As a result, the two store data behind data_ptr in different orders. We will look into aligning this later; see PFCCLab/PaddleCppAPITest#9 for the diff.
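The stride mismatch described above can be illustrated without any tensor library. This is a plain-C++ sketch (not Paddle or PyTorch code): a row-major 2x3 tensor has strides {3, 1}; after a transpose its sizes become {3, 2} and its strides {1, 3}. PyTorch's abs keeps those transposed strides, while Paddle's abs makes the result contiguous, i.e. recomputes strides from the new shape.

```cpp
#include <cstdint>
#include <vector>

// Compute row-major (contiguous) strides for a given shape. This is
// what Paddle's abs effectively applies to the transposed shape {3, 2},
// yielding {2, 1} instead of PyTorch's preserved strides {1, 3}.
std::vector<int64_t> contiguous_strides(const std::vector<int64_t>& sizes) {
  std::vector<int64_t> strides(sizes.size(), 1);
  for (int64_t i = static_cast<int64_t>(sizes.size()) - 2; i >= 0; --i) {
    strides[i] = strides[i + 1] * sizes[i + 1];
  }
  return strides;
}
```

Since the two stride layouts index memory differently, element-by-element reads through data_ptr disagree even though the logical tensors are equal.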

Whether this causes precision changes

Copilot AI review requested due to automatic review settings March 2, 2026 15:23
@paddle-bot

paddle-bot bot commented Mar 2, 2026

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Mar 2, 2026
Contributor

Copilot AI left a comment


Pull request overview

This PR updates Paddle’s ATen compatibility layer to better align arange, narrow, and sum behaviors with libtorch/PyTorch expectations.

Changes:

  • Update sum dtype resolution to follow PyTorch-style promotion (integral inputs promote to int64 by default).
  • Add PyTorch-like bounds checks for narrow.
  • Adjust arange scalar handling to use floating-point conversion and add additional argument validation.
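The arange change in that last bullet can be sketched as a standalone helper (a hypothetical name, not Paddle's actual function): step must be non-zero and must point from start toward end, and the element count is computed in double precision, mirroring the switch to double for scalar conversion.

```cpp
#include <cmath>
#include <cstdint>
#include <stdexcept>

// PyTorch-style arange validation and sizing, as a minimal sketch.
// Throws on a zero step or a step whose sign cannot reach `end` from
// `start`; otherwise returns the number of elements produced.
int64_t arange_size(double start, double end, double step) {
  if (step == 0.0) {
    throw std::invalid_argument("arange: step must be nonzero");
  }
  if ((step > 0.0 && start > end) || (step < 0.0 && start < end)) {
    throw std::invalid_argument(
        "arange: upper and lower bounds inconsistent with step sign");
  }
  return static_cast<int64_t>(std::ceil((end - start) / step));
}
```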

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 8 comments.

File Description
paddle/phi/api/include/compat/ATen/ops/sum.h Changes default dtype resolution/promotion rules for sum to better match PyTorch.
paddle/phi/api/include/compat/ATen/ops/narrow.h Adds explicit bounds/dimension validation for narrow to match PyTorch behavior.
paddle/phi/api/include/compat/ATen/ops/arange.h Switches scalar conversions to double and adds new step validations for arange.


Comment on lines +29 to +39
// Match PyTorch promotion: integer inputs -> int64; others -> keep input
// dtype.
at::ScalarType resolved_dtype;
if (dtype.has_value()) {
  resolved_dtype = dtype.value();
} else {
  at::ScalarType input_dtype = self.scalar_type();
  resolved_dtype = at::isIntegralType(input_dtype, /*includeBool=*/true)
                       ? at::kLong
                       : input_dtype;
}

Copilot AI Mar 2, 2026


This change alters sum's implicit dtype promotion rules (integral -> int64, otherwise keep input dtype). There are existing C++ compat tests for sum, but none cover integral inputs or bool promotion. Adding/adjusting tests to assert the new promotion behavior (e.g., int32/bool inputs sum to int64 when dtype is not provided) would help prevent regressions.
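The promotion rule under discussion can be pinned down with stand-in types (an `enum` standing in for `at::ScalarType`; the helper names are illustrative, not Paddle's API): with no explicit dtype, integral and bool inputs resolve to int64, and floating inputs keep their own dtype.

```cpp
#include <optional>

// Minimal stand-in for at::ScalarType to make the rule testable.
enum class ScalarType { Bool, Int32, Int64, Float32, Float64 };

bool is_integral(ScalarType t, bool include_bool) {
  if (t == ScalarType::Bool) return include_bool;
  return t == ScalarType::Int32 || t == ScalarType::Int64;
}

// Resolve sum's output dtype: explicit dtype wins; otherwise integral
// (including bool) promotes to int64 and floating types pass through.
ScalarType resolve_sum_dtype(ScalarType input,
                             std::optional<ScalarType> dtype) {
  if (dtype.has_value()) return *dtype;
  return is_integral(input, /*include_bool=*/true) ? ScalarType::Int64
                                                   : input;
}
```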

Comment on lines +25 to +55
// Bounds checks matching PyTorch behavior
PD_CHECK(self.dim() > 0, "narrow() cannot be applied to a 0-dim tensor.");
PD_CHECK(length >= 0, "narrow(): length must be non-negative.");

// Normalize negative dim
int64_t ndim = self.dim();
if (dim < 0) dim += ndim;
PD_CHECK(dim >= 0 && dim < ndim,
         "Dimension out of range (expected to be in range of [",
         -ndim,
         ", ",
         ndim - 1,
         "], but got ",
         dim,
         ")");

int64_t cur_size = self.sizes()[dim];

// Wrap negative start (matching PyTorch: only wrap when start != cur_size)
if (start != cur_size && start < 0) {
  start += cur_size;
}

PD_CHECK(start >= 0 && start + length <= cur_size,
         "start (",
         start,
         ") + length (",
         length,
         ") exceeds dimension size (",
         cur_size,
         ").");

Copilot AI Mar 2, 2026


New bounds checks were added to narrow(), but there are no C++ compat tests asserting the error/edge-case behavior (0-dim tensor, negative length, out-of-range start/length, negative dim out of range). Adding tests for these cases would help ensure PyTorch-aligned behavior stays stable.
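Those edge cases can be exercised without a tensor type. This is a standalone sketch of the narrow() validation quoted above (names and signature are hypothetical); it returns the normalized (dim, start) pair and throws where the PD_CHECKs would fire.

```cpp
#include <cstdint>
#include <stdexcept>
#include <utility>
#include <vector>

// Validate and normalize narrow() arguments against a shape vector.
std::pair<int64_t, int64_t> check_narrow(const std::vector<int64_t>& sizes,
                                         int64_t dim, int64_t start,
                                         int64_t length) {
  if (sizes.empty()) {
    throw std::invalid_argument(
        "narrow() cannot be applied to a 0-dim tensor.");
  }
  if (length < 0) {
    throw std::invalid_argument("narrow(): length must be non-negative.");
  }
  const int64_t ndim = static_cast<int64_t>(sizes.size());
  if (dim < 0) dim += ndim;
  if (dim < 0 || dim >= ndim) {
    throw std::out_of_range("Dimension out of range");
  }
  const int64_t cur_size = sizes[dim];
  // Wrap negative start, matching PyTorch: only when start != cur_size.
  if (start != cur_size && start < 0) start += cur_size;
  if (start < 0 || start + length > cur_size) {
    throw std::out_of_range("start + length exceeds dimension size");
  }
  return {dim, start};
}
```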

Comment on lines +29 to +39
// Normalize negative dim
int64_t ndim = self.dim();
if (dim < 0) dim += ndim;
PD_CHECK(dim >= 0 && dim < ndim,
         "Dimension out of range (expected to be in range of [",
         -ndim,
         ", ",
         ndim - 1,
         "], but got ",
         dim,
         ")");

Copilot AI Mar 2, 2026


The dimension-out-of-range error message uses the normalized dim value after dim += ndim. For inputs where dim < -ndim, the message can become misleading (e.g., original dim = -ndim-1 will report got -1, which is within the stated valid range). Preserve the original dim for error reporting while still normalizing for the actual indexing logic.
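The suggested fix can be sketched as follows (function names are illustrative, not Paddle's helpers): normalize a copy of dim for indexing, but build the error message from the caller's original value, so that for `dim = -3, ndim = 2` the message reports `-3` rather than the wrapped `-1`.

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Build the out-of-range message from the caller-supplied dim.
std::string dim_error(int64_t original_dim, int64_t ndim) {
  std::ostringstream os;
  os << "Dimension out of range (expected to be in range of [" << -ndim
     << ", " << ndim - 1 << "], but got " << original_dim << ")";
  return os.str();
}

// Normalize dim for indexing; on failure, report the ORIGINAL dim.
// Returns true on success and writes the wrapped dim to *out.
bool normalize_dim(int64_t dim, int64_t ndim, int64_t* out,
                   std::string* error) {
  const int64_t wrapped = dim < 0 ? dim + ndim : dim;
  if (wrapped < 0 || wrapped >= ndim) {
    *error = dim_error(dim, ndim);  // not dim_error(wrapped, ndim)
    return false;
  }
  *out = wrapped;
  return true;
}
```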

Comment on lines +29 to +39
// Match PyTorch promotion: integer inputs -> int64; others -> keep input
// dtype.
at::ScalarType resolved_dtype;
if (dtype.has_value()) {
  resolved_dtype = dtype.value();
} else {
  at::ScalarType input_dtype = self.scalar_type();
  resolved_dtype = at::isIntegralType(input_dtype, /*includeBool=*/true)
                       ? at::kLong
                       : input_dtype;
}

Copilot AI Mar 2, 2026


The dtype-resolution logic is duplicated across both sum overloads. Consider factoring it into a small helper (e.g., a local lambda) to keep the promotion rules in one place and reduce the chance of the overloads diverging in future edits.

@youge325
Contributor Author

youge325 commented Mar 3, 2026

/re-run all-failed

@youge325
Contributor Author

youge325 commented Mar 3, 2026

/re-run all-failed

@youge325
Contributor Author

youge325 commented Mar 4, 2026

/re-run all-failed

Member

@SigureMo SigureMo left a comment


One abs test is still not aligned. Because Paddle's and PyTorch's underlying implementations differ, Paddle's abs forces the tensor produced by a transpose to become contiguous and resets its strides from the shape, whereas PyTorch keeps the strides unchanged and returns a non-contiguous tensor. As a result, the two store data behind data_ptr in different orders. We will look into aligning this later.

If aligning this turns out to be difficult, consider first detecting this case and throwing an exception that tells users this API's behavior cannot currently be aligned, together with a suggested workaround, to avoid hiding problems in the unaligned scenario (runtime issues are much harder to debug than compile-time ones).
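A guard in the spirit of this suggestion might look like the following (a hypothetical sketch, not Paddle code): until abs can match PyTorch's stride behavior for non-contiguous inputs, fail fast with a message that names the limitation and a workaround, rather than silently producing a differently ordered data_ptr.

```cpp
#include <stdexcept>

// Reject the unaligned case up front with an actionable message.
void check_abs_compat(bool input_is_contiguous) {
  if (!input_is_contiguous) {
    throw std::runtime_error(
        "abs(): non-contiguous inputs currently diverge from PyTorch's "
        "stride semantics in this compat layer; call .contiguous() on "
        "the input first as a workaround.");
  }
}
```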

@SigureMo SigureMo merged commit a365364 into PaddlePaddle:develop Mar 10, 2026
108 of 109 checks passed
@youge325 youge325 deleted the cFix branch March 10, 2026 04:00
@youge325
Contributor Author

Consider first detecting this case and throwing an exception that tells users this API's behavior cannot currently be aligned, together with a suggested workaround, to avoid hiding problems in the unaligned scenario (runtime issues are much harder to debug than compile-time ones)

Sounds good.


Labels

contributor External developers


3 participants