WIP: Standalone C-level tests for 1D convolution (GSoC 2026) #19452
toastmt wants to merge 10 commits into astropy:main
Conversation
Thank you for your contribution to Astropy! 🌌 This checklist is meant to remind the package maintainers who will review this pull request of some common things to look for.
How do you run it in CI? How do you ask users to run it?
Hi @pllim. For the actual GSoC project, the goal will be to automate this.
I don't think this idea is sustainable. astropy is a Python package first. It is enough to test it via the Python interface.
This is in response to the methodology outlined in the official GSoC 2026 project idea, "Hardening astropy's core stability," which proposes creating a fundamental test suite for the low-level C layer in order to make it more independent of the existing astropy tests. (https://openastronomy.org/gsoc/gsoc2026/#/projects?project=hardening_astropy's_core_stability) I am happy to adjust the technical approach of my proposal to align with whatever consensus the maintainer team reaches regarding the testing infrastructure.
Sorry for being so late. I think this approach is absolutely valid and desirable.
To restate the context: this GSoC proposal is a subset of a larger proposal to make astropy itself pure Python. That means moving the compiled components to a different package, which is only reasonable if they have sufficient test coverage to be decoupled from the Python interface. So, before pushing this further, @toastmt, how would you propose to integrate this into our automated test runners?
Hi @neutrinoceros. If we don't want to write a Cython wrapper for testing (in order to keep the C tests decoupled from Python), we could add a compile step directly to the CI workflow that builds the C test suite into standalone executables, and then have Python evaluate the tests via subprocess. That way the C test suite would be prepared for a future decoupling of the C core or a migration to Meson.
That seems reasonable to me. Feel free to give it a try.
This will annoy me and probably other devs because it pollutes the bin space for no reason. I stress again #19452 (comment)
Okay, let's reframe this as an experiment then.
I doubt it. I maintain a C pipeline elsewhere.
I've been giving this a try. I've implemented a pytest script that evaluates the C tests via subprocess. I created a new job in the YAML workflow file to compile the C test binary and then run the pytest file. I also inserted a mock NumPy header creation step in the CI workflow to decouple the C compilation from the Python environment. The remaining hurdle is fully isolating pytest from astropy's pyproject.toml and conftest.py, which pull in the full package. Open to suggestions on the cleanest approach here.
.github/workflows/ci_workflows.yml
test_c_extensions:
  # Standalone tests for the C math extensions of Astropy
  # Create a mock ndarrayobject header to decouple C logic from Python/NumPy
I would expect test binaries to not need numpy at all. The fact that you need a mock header to make it work makes me doubt the value of the approach; it's not decoupled if you need a mock.
I agree that a mock header isn't true decoupling. The issue is that the mock header is providing the type definitions that convolve.c needs. The alternative would be refactoring convolve.c to use standard C types like double instead of npy_float64, but that approach touches core C files across the repository. Is there a different approach you'd recommend for achieving true decoupling without modifying the core?
I think it makes sense to slightly edit the C modules in this way (replacing a type alias with its primitive definition): the benefit of decoupling outweighs the cost of refactoring, in my opinion. That is, I think it does, but maybe I'm not seeing the complete picture yet. Could you explain the benefit(s) of using npy_float64 instead of double?
npy_float64 is effectively just a typedef for double. However, npy_float64 guarantees 64-bit precision on every platform, while a plain double doesn't. We can use a _Static_assert in the header to provide the same guarantee for double. I could try to refactor the convolve.h header slightly to decouple the C file from NumPy.
Description
This Draft PR serves as a proof-of-concept accompanying my GSoC 2026 proposal: Hardening Astropy’s Core Stability.
I have implemented a standalone C testing file (test_convolve_1d.c) that directly interfaces with the convolve1d_c function in astropy/convolution/src/convolve.c.
What this code does:
Bypassing the Cython wrapper (_convolve.pyx), this test file directly validates the memory handling and mathematical edge cases of the C-core. It currently tests:
Standard array convolution accuracy.
NaN interpolation handling.
The bot=0 fallback behavior when an array consists entirely of NaNs.
Because this is a standalone executable, it can be compiled and run directly from the src directory without building the entire Python package.
I am looking forward to any feedback on this approach!