feat: Update pre-commit configuration and enhance test coverage #353

shenxianpeng merged 2 commits into main
Conversation
📝 Walkthrough

Consolidates and reorganizes test suites (merging several config tests, replacing a comprehensive engine test) and updates type hints in commit_check/util.py.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
commit_check/util.py (1)

257-268: ⚠️ Potential issue | 🟡 Minor

Confusing behavior when `suggest` is an empty string.

When `suggest` is an empty string (falsy but not `None`), the code prints `"commit-check does not support  yet."` (with the empty string interpolated) and exits with code 1. This looks unintended: an empty string most likely indicates a missing suggestion rather than an unsupported feature. Consider distinguishing between `None` (no suggestion configured) and an empty string (suggestion configured but empty):

💡 Suggested improvement

```diff
 def print_suggestion(suggest: Optional[str]) -> None:
     """Print suggestion to user

     :param suggest: what message to print out
     """
-    if suggest:
+    if suggest is not None and suggest.strip():
         print(
             f"Suggest: {GREEN}{suggest}{RESET_COLOR} ",
             end="",
         )
+        print("\n")
     else:
-        print(f"commit-check does not support {suggest} yet.")
-        raise SystemExit(1)
-    print("\n")
+        # No suggestion provided, just skip
+        pass
```
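If the revised behavior lands, it can be pinned with a couple of focused tests. This is a minimal sketch assuming the suggested `print_suggestion` above; the test names and suggestion text are illustrative, not part of this PR:

```python
from commit_check.util import print_suggestion


def test_print_suggestion_empty_string_is_skipped(capsys):
    # With the revised logic an empty string behaves like "no suggestion":
    # nothing is reported as unsupported and no SystemExit is raised.
    print_suggestion("")
    assert "does not support" not in capsys.readouterr().out


def test_print_suggestion_prints_non_empty(capsys):
    print_suggestion("use a conventional commit message")
    assert "use a conventional commit message" in capsys.readouterr().out
```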
🤖 Fix all issues with AI agents
In `@tests/config_test.py`:
- Around line 231-250: The permission test
test_load_config_file_permission_error is flaky on Windows and when running as
root; update it to skip on incompatible environments by checking platform and
privileges before altering file perms: in test_load_config_file_permission_error
(which calls load_config) add a guard that uses platform.system() == "Windows"
or (hasattr(os, "geteuid") and os.geteuid() == 0) to call pytest.skip (or apply
pytest.mark.skipif) so the test is only executed on Unix non-root environments
where os.chmod(0o000) can reliably trigger a PermissionError.
- Around line 340-343: The finally block that does sys.modules.clear() and
sys.modules.update(original_modules) is unsafe; replace this manual mutation
with unittest.mock.patch.dict to temporarily modify sys.modules (e.g., use
patch.dict(sys.modules, {...}, clear=False) or use it as a context manager /
setUp/tearDown) so the test no longer clears the global module table and
reliably restores modules even on exceptions or parallel test runs; update the
test that currently uses the finally block to use patch.dict around the section
that needs altered modules (refer to the finally block and the original_modules
variable to locate the change).
🧹 Nitpick comments (6)
tests/config_test.py (2)
224-228: Use specific exception type instead of bare `Exception`.

The static analysis tool correctly flags this as B017. Catching bare `Exception` makes the test fragile: it will pass even if an unexpected error occurs. Use the specific TOML parsing exception.

💡 Suggested fix

```diff
     try:
-        with pytest.raises(Exception):  # Should raise a TOML parsing error
+        # tomllib raises TOMLDecodeError, tomli raises TOMLDecodeError
+        # Both inherit from ValueError
+        with pytest.raises((ValueError,)):  # TOML parsing error
             load_config(f.name)
     finally:
         os.unlink(f.name)
```
253-282: Duplicate test method name `test_tomli_import_fallback`.

This test method (lines 253-282) has the same name as another test method in the `TestConfig` class (lines 140-207). While they live in different classes, so there is no actual conflict, identically named tests can cause confusion when reviewing test results.

tests/engine_test.py (4)
156-159: Remove duplicate assertions.

Lines 157-159 contain duplicate assertions that appear to be copy-paste artifacts. Each assertion is identical to line 157.

💡 Suggested fix

```diff
         result = validator.validate(context)
         assert result == ValidationResult.PASS
-        assert result == ValidationResult.PASS
-        assert result == ValidationResult.PASS
```
304-312: Remove duplicate assertions.

Lines 308-312 duplicate the assertions from lines 304-307. This appears to be a copy-paste error.

💡 Suggested fix

```diff
         # Called once for skip logic ("an"), once for value ("ae")
         assert mock_get_commit_info.call_count == 2
         assert mock_get_commit_info.call_args_list[0][0][0] == "an"
         assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
-        assert result == ValidationResult.PASS
-        # Called once for skip logic ("an"), once for value ("ae")
-        assert mock_get_commit_info.call_count == 2
-        assert mock_get_commit_info.call_args_list[0][0][0] == "an"
-        assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
```
986-1007: Unused mock parameter `mock_get_commit_info`.

The static analysis tool correctly identifies that `mock_get_commit_info` is declared but never used in this test. Either use it or remove it from the decorator stack.

💡 Suggested fix

```diff
-    @patch("commit_check.engine.get_commit_info")
     @patch("commit_check.engine.has_commits")
     @pytest.mark.benchmark
     def test_commit_message_validator_failure(
-        self, mock_has_commits, mock_get_commit_info
+        self, mock_has_commits
     ):
```
930-933: Prefer comparing enum members directly instead of to integers.

While comparing `ValidationResult.PASS == 0` works due to Python's enum behavior, it's more explicit and maintainable to compare enum members directly.

💡 Suggested improvement

```diff
     def test_validation_result_values(self):
         """Test ValidationResult enum values."""
-        assert ValidationResult.PASS == 0
-        assert ValidationResult.FAIL == 1
+        assert ValidationResult.PASS.value == 0
+        assert ValidationResult.FAIL.value == 1
```

Note: The existing `TestValidationResult` class (lines 26-31) already does this correctly with `.value`.
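For background, the bare-integer comparison only holds when the enum mixes in `int` (an `IntEnum`); a plain `Enum` member never compares equal to an integer, which is why the `.value` form is the unambiguous choice. A standalone illustration with generic names (not taken from the codebase):

```python
from enum import Enum, IntEnum


class PlainResult(Enum):
    PASS = 0
    FAIL = 1


class IntResult(IntEnum):
    PASS = 0
    FAIL = 1


assert PlainResult.PASS != 0        # plain Enum members never equal bare ints
assert PlainResult.PASS.value == 0  # .value comparison works for both kinds
assert IntResult.PASS == 0          # IntEnum members do equal their int values
```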
Commented code in tests/config_test.py:

```python
    def test_load_config_file_permission_error(self):
        """Test handling of file permission errors."""
        config_content = b"""
[checks]
test = true
"""
        with tempfile.NamedTemporaryFile(mode="wb", suffix=".toml", delete=False) as f:
            f.write(config_content)
            f.flush()

        try:
            # Remove read permissions to simulate permission error
            os.chmod(f.name, 0o000)

            with pytest.raises(PermissionError):
                load_config(f.name)
        finally:
            # Restore permissions and clean up
            os.chmod(f.name, 0o644)
            os.unlink(f.name)
```
Permission test may fail on certain platforms or CI environments.

Setting file permissions to 0o000 doesn't work reliably on Windows, and some CI environments run as root, where permission checks are bypassed. Consider skipping this test on incompatible platforms.

💡 Suggested improvement

```diff
+    @pytest.mark.skipif(
+        os.name == "nt" or os.geteuid() == 0,
+        reason="Permission test unreliable on Windows or when running as root",
+    )
     @pytest.mark.benchmark
     def test_load_config_file_permission_error(self):
```
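For completeness, a self-contained version of the guarded test along the lines of the prompt above might look like this. It is a sketch, not the PR's code: the helper name is made up, the body is rewritten around pytest's `tmp_path` fixture, and the `load_config` import path is assumed to be `commit_check.config`:

```python
import os
import platform

import pytest


def _can_drop_read_permission() -> bool:
    # chmod(0o000) only reliably causes PermissionError on Unix when not root.
    return (
        platform.system() != "Windows"
        and hasattr(os, "geteuid")
        and os.geteuid() != 0
    )


@pytest.mark.skipif(
    not _can_drop_read_permission(),
    reason="Permission test unreliable on Windows or when running as root",
)
def test_load_config_file_permission_error(tmp_path):
    from commit_check.config import load_config  # assumed import path

    cfg = tmp_path / "config.toml"
    cfg.write_text("[checks]\ntest = true\n")
    cfg.chmod(0o000)
    try:
        with pytest.raises(PermissionError):
            load_config(str(cfg))
    finally:
        cfg.chmod(0o644)
```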
Commented code in tests/config_test.py:

```python
        finally:
            # Restore original modules
            sys.modules.clear()
            sys.modules.update(original_modules)
```
Risky sys.modules manipulation may affect other tests.
Clearing and restoring sys.modules can cause issues if tests run in parallel or if exceptions occur before restoration completes. The current pattern with sys.modules.clear() followed by sys.modules.update() could leave the interpreter in an inconsistent state.
Consider using unittest.mock.patch.dict for safer module manipulation:
💡 Safer approach

```diff
         finally:
             # Restore original modules
-            sys.modules.clear()
-            sys.modules.update(original_modules)
+            # Only restore the modules we explicitly removed
+            for key in list(sys.modules.keys()):
+                if key not in original_modules:
+                    del sys.modules[key]
+            for key, value in original_modules.items():
+                if key not in sys.modules:
+                    sys.modules[key] = value
```
+ sys.modules[key] = value📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| finally: | |
| # Restore original modules | |
| sys.modules.clear() | |
| sys.modules.update(original_modules) | |
| finally: | |
| # Restore original modules | |
| # Only restore the modules we explicitly removed | |
| for key in list(sys.modules.keys()): | |
| if key not in original_modules: | |
| del sys.modules[key] | |
| for key, value in original_modules.items(): | |
| if key not in sys.modules: | |
| sys.modules[key] = value |
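To make the `patch.dict` suggestion concrete, a fallback test could be written roughly like this. It assumes `commit_check.config` wraps `import tomllib` in a `try/except ImportError` that falls back to `tomli`, and that the `tomli` backport is installed in the test environment:

```python
import importlib
import sys
from unittest.mock import patch


def test_config_falls_back_to_tomli_when_tomllib_is_hidden():
    # A sys.modules entry of None makes `import tomllib` raise ImportError,
    # forcing the fallback branch; patch.dict restores sys.modules afterwards
    # even if the assertions fail.
    with patch.dict(sys.modules, {"tomllib": None}):
        import commit_check.config as config
        importlib.reload(config)
        assert callable(config.toml_load)

    # Reload once more outside the patch so later tests see the normal import.
    import commit_check.config as config
    importlib.reload(config)
```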
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main     #353      +/-   ##
==========================================
+ Coverage   87.46%   93.51%   +6.05%
==========================================
  Files           8        8
  Lines         694      694
==========================================
+ Hits          607      649      +42
+ Misses         87       45      -42
```

☔ View full report in Codecov by Sentry.
Merging this PR will degrade performance by 10.88%
Pull request overview
This PR updates the pre-commit configuration and consolidates test coverage by merging separate comprehensive test files into existing test files. The changes aim to improve code organization and eliminate redundancy while maintaining test coverage.
Changes:
- Updated pre-commit hook versions (ruff from v0.14.10 to v0.14.14, commit-check from v2.2.1 to v2.2.2)
- Consolidated test files by merging engine_comprehensive_test.py, config_import_test.py, config_fallback_test.py, and config_edge_test.py into their respective main test files
- Fixed type annotations in util.py to use `Optional[dict]` instead of `dict | None` for Python 3.9+ compatibility (see the sketch after this list)
- Added new utility function tests to improve coverage of helper functions
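For reference, the compatibility point is that the PEP 604 `X | Y` spelling in annotations is only valid at runtime on Python 3.10+, so a codebase supporting 3.9 needs `typing.Optional` (or postponed evaluation of annotations). A small illustration; the function is generic and not the one in `util.py`:

```python
from typing import Optional


# Works on Python 3.9+: Optional[dict] is Union[dict, None].
def load_settings(path: Optional[str] = None) -> Optional[dict]:
    if path is None:
        return None
    return {"path": path}


# The PEP 604 form below only evaluates on Python 3.10+
# (or on 3.9 with `from __future__ import annotations`):
# def load_settings(path: str | None = None) -> dict | None: ...
```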
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| .pre-commit-config.yaml | Updated ruff-pre-commit to v0.14.14 and commit-check to v2.2.2 |
| commit_check/util.py | Fixed type annotations from union syntax to Optional for Python 3.9 compatibility |
| tests/engine_test.py | Consolidated comprehensive tests from engine_comprehensive_test.py; contains significant test duplication |
| tests/util_test.py | Added new TestHelperFunctions class to test previously untested utility functions |
| tests/config_test.py | Consolidated edge case and import fallback tests from separate test files |
| tests/engine_comprehensive_test.py | Deleted (merged into engine_test.py) |
| tests/config_import_test.py | Deleted (merged into config_test.py) |
| tests/config_fallback_test.py | Deleted (merged into config_test.py) |
| tests/config_edge_test.py | Deleted (merged into config_test.py) |
@copilot open a new pull request to apply changes based on the comments in this thread

@shenxianpeng I've opened a new pull request, #354, to work on those changes. Once the pull request is ready, I'll request review from you.

…354)

* Initial plan
* refactor: remove duplicate test classes and fix code style issues
  Co-authored-by: shenxianpeng <3353385+shenxianpeng@users.noreply.github.com>
* chore: Add 'copilot' to allowed branch types

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: shenxianpeng <3353385+shenxianpeng@users.noreply.github.com>
Co-authored-by: Xianpeng Shen <xianpeng.shen@gmail.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/config_test.py`:
- Around line 224-226: Replace the broad pytest.raises(Exception) in the failing
test around load_config(f.name) with the specific TOML parsing exception: import
TOMLDecodeError from tomllib if available else from tomli, then use
pytest.raises(TOMLDecodeError) to assert load_config(f.name) raises that
specific error; update the test that references load_config to import/alias the
correct TOMLDecodeError and use it in the context manager.
🧹 Nitpick comments (2)
tests/engine_test.py (1)
813-840: LGTM! Consider verifying no unexpected keys exist.

The test correctly validates all expected VALIDATOR_MAP entries. For completeness, you could also assert that the map doesn't contain unexpected keys:

💡 Optional enhancement

```diff
         for check, validator_class in expected_mappings.items():
             assert engine.VALIDATOR_MAP[check] == validator_class
+
+        # Verify no unexpected keys in VALIDATOR_MAP
+        assert set(engine.VALIDATOR_MAP.keys()) == set(expected_mappings.keys())
```
tests/config_test.py (1)
368-383: Consider avoiding `exec()` for import testing.

While `exec()` works for testing import behavior, it's flagged by security linters (S102) and makes debugging harder. Consider using `importlib` for a more maintainable approach.

💡 Alternative approach

```python
    @pytest.mark.benchmark
    def test_tomli_import_fallback_simulation(self):
        """Test tomli import fallback by simulating the ImportError condition."""
        import importlib
        import importlib.util

        # Test case 1: Verify tomllib is used on Python 3.11+
        if sys.version_info >= (3, 11):
            spec = importlib.util.find_spec("tomllib")
            assert spec is not None, "tomllib should be available"

        # Test case 2: Verify the config module loads successfully
        from commit_check.config import toml_load
        assert callable(toml_load)
```
Commented code in tests/config_test.py:

```python
        try:
            with pytest.raises(Exception):  # Should raise a TOML parsing error
                load_config(f.name)
```
Use a specific exception type instead of bare Exception.
Catching Exception is too broad and may mask unrelated errors. Use the specific TOML parsing error type for clearer test intent.
💡 Suggested fix

```diff
     try:
-        with pytest.raises(Exception):  # Should raise a TOML parsing error
+        # tomllib raises TOMLDecodeError, tomli raises TOMLDecodeError
+        from commit_check.config import toml_load
+        with pytest.raises((Exception,)):  # More specific: tomllib.TOMLDecodeError or tomli.TOMLDecodeError
             load_config(f.name)
```

Alternatively, import and use the specific error:

```python
try:
    from tomllib import TOMLDecodeError
except ImportError:
    from tomli import TOMLDecodeError

with pytest.raises(TOMLDecodeError):
    load_config(f.name)
```

🧰 Tools
🪛 Ruff (0.14.14)
[warning] 225-225: Do not assert blind exception: Exception
(B017)



Summary by CodeRabbit
Chores
Configuration
Tests