
feat: Update pre-commit configuration and enhance test coverage #353

Merged
shenxianpeng merged 2 commits into main from feature/add-tests on Jan 31, 2026

Conversation


@shenxianpeng shenxianpeng commented Jan 31, 2026

Summary by CodeRabbit

  • Chores

    • Bumped pre-commit hook versions (ruff and commit-check) and modernized internal type hints.
  • Configuration

    • Expanded allowed branch types to include "copilot" in the project config (see the sketch after this list).
  • Tests

    • Reorganized test suite: removed several legacy tests, added consolidated and extensive tests covering config parsing, import/fallback behavior, validation engine mappings, and utility helpers; improved edge-case and fallback coverage.
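As a rough sketch of the cchk.toml change, the allow_branch_types list now accepts the new branch type; only that key and the "copilot" entry are confirmed by this PR, and the surrounding entries are illustrative placeholders:

# Sketch of the cchk.toml change; the other branch types listed here
# are assumed placeholders, not the project's actual configuration.
allow_branch_types = [
    "feature",
    "bugfix",
    "copilot",  # newly allowed branch type added by this PR
]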


@shenxianpeng shenxianpeng requested a review from a team as a code owner January 31, 2026 00:17

netlify bot commented Jan 31, 2026

Deploy Preview for commit-check ready!

🔨 Latest commit b6c26ce
🔍 Latest deploy log https://app.netlify.com/projects/commit-check/deploys/697d4e3663e6dc0008ebee5a
😎 Deploy Preview https://deploy-preview-353--commit-check.netlify.app

@github-actions github-actions bot added the enhancement (New feature or request) label on Jan 31, 2026
@shenxianpeng shenxianpeng added the tests (Add test related changes) label and removed the enhancement (New feature or request) label on Jan 31, 2026

coderabbitai bot commented Jan 31, 2026

📝 Walkthrough

Consolidates and reorganizes test suites (merging several config tests, replacing a comprehensive engine test), updates type hints in commit_check/util.py, bumps two pre-commit hook versions, and adds "copilot" to allowed branch types in cchk.toml.

Changes

Cohort / File(s) / Summary

Pre-commit hooks: .pre-commit-config.yaml
Bumped hook revisions: astral-sh/ruff-pre-commit v0.14.10 → v0.14.14 and commit-check/commit-check v2.2.1 → v2.2.2.

Type hints: commit_check/util.py
Refined type annotations: use Optional[...] for some internal helpers; no runtime behavior changes (see the sketch after this table).

Config tests (consolidation): tests/config_test.py, tests/config_edge_test.py, tests/config_fallback_test.py, tests/config_import_test.py
Removed the separate config test files and introduced a consolidated tests/config_test.py containing combined edge-case, fallback, and import tests for TOML loading and fallback behavior.

Engine tests (reorg): tests/engine_comprehensive_test.py, tests/engine_test.py
Deleted the large monolithic engine test file and added/updated tests/engine_test.py to explicitly assert ValidationEngine.VALIDATOR_MAP contents and reorganize engine-related tests.

Utility tests (expansion): tests/util_test.py
Added extensive helper-function tests (e.g., _find_check, _load_toml, _find_config_file, validate_config), including cases for a missing toml library; note that some duplicated class blocks are present.

Config file: cchk.toml
Added "copilot" to the allow_branch_types list.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes


Suggested labels

enhancement

Poem

🐰 In burrows of code I hop and play,
Tests folded neat and type hints in sway.
Hooks nudged upward, branches gain "copilot" cheer,
A carrot-shaped commit — let's shift into gear! 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled).
  • Title Check: ✅ Passed. The title accurately reflects the main changes: updating pre-commit hook versions and consolidating/enhancing test coverage across multiple test files.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 88.89%, which is sufficient; the required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
commit_check/util.py (1)

257-268: ⚠️ Potential issue | 🟡 Minor

Confusing behavior when suggest is an empty string.

When suggest is an empty string (falsy but not None), the code prints "commit-check does not support yet." (with the empty string interpolated) and exits with code 1. This seems like unintended behavior—an empty string likely indicates a missing suggestion rather than an unsupported feature.

Consider distinguishing between None (no suggestion configured) and empty string (suggestion configured but empty):

💡 Suggested improvement
 def print_suggestion(suggest: Optional[str]) -> None:
     """Print suggestion to user
     :param suggest: what message to print out
     """
-    if suggest:
+    if suggest is not None and suggest.strip():
         print(
             f"Suggest: {GREEN}{suggest}{RESET_COLOR} ",
             end="",
         )
+        print("\n")
     else:
-        print(f"commit-check does not support {suggest} yet.")
-        raise SystemExit(1)
-    print("\n")
+        # No suggestion provided, just skip
+        pass
🧹 Nitpick comments (6)
tests/config_test.py (2)

224-228: Use specific exception type instead of bare Exception.

The static analysis tool correctly flags this as B017. Catching bare Exception makes the test fragile—it will pass even if an unexpected error occurs. Use the specific TOML parsing exception.

💡 Suggested fix
             try:
-                with pytest.raises(Exception):  # Should raise a TOML parsing error
+                # tomllib raises TOMLDecodeError, tomli raises TOMLDecodeError
+                # Both inherit from ValueError
+                with pytest.raises((ValueError,)):  # TOML parsing error
                     load_config(f.name)
             finally:
                 os.unlink(f.name)

253-282: Duplicate test method name test_tomli_import_fallback.

This test method (lines 253-282) has the same name as another test method in TestConfig class (lines 140-207). While they're in different classes so there's no actual conflict, having identically named tests can cause confusion when reviewing test results.

tests/engine_test.py (4)

156-159: Remove duplicate assertions.

Lines 158-159 duplicate the assertion on line 157 and appear to be copy-paste artifacts.

💡 Suggested fix
         result = validator.validate(context)
         assert result == ValidationResult.PASS
-        assert result == ValidationResult.PASS
-        assert result == ValidationResult.PASS

304-312: Remove duplicate assertions.

Lines 308-312 duplicate the assertions from lines 304-307. This appears to be a copy-paste error.

💡 Suggested fix
         # Called once for skip logic ("an"), once for value ("ae")
         assert mock_get_commit_info.call_count == 2
         assert mock_get_commit_info.call_args_list[0][0][0] == "an"
         assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
-        assert result == ValidationResult.PASS
-        # Called once for skip logic ("an"), once for value ("ae")
-        assert mock_get_commit_info.call_count == 2
-        assert mock_get_commit_info.call_args_list[0][0][0] == "an"
-        assert mock_get_commit_info.call_args_list[1][0][0] == "ae"

986-1007: Unused mock parameter mock_get_commit_info.

The static analysis tool correctly identifies that mock_get_commit_info is declared but never used in this test. Either use it or remove it from the decorator stack.

💡 Suggested fix
-    @patch("commit_check.engine.get_commit_info")
     @patch("commit_check.engine.has_commits")
     @pytest.mark.benchmark
     def test_commit_message_validator_failure(
-        self, mock_has_commits, mock_get_commit_info
+        self, mock_has_commits
     ):

930-933: Prefer comparing enum members directly instead of to integers.

While comparing ValidationResult.PASS == 0 works when the enum derives from IntEnum, it's more explicit and maintainable to compare the member's .value directly.

💡 Suggested improvement
     def test_validation_result_values(self):
         """Test ValidationResult enum values."""
-        assert ValidationResult.PASS == 0
-        assert ValidationResult.FAIL == 1
+        assert ValidationResult.PASS.value == 0
+        assert ValidationResult.FAIL.value == 1

Note: The existing TestValidationResult class (lines 26-31) already does this correctly with .value.
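For background, the bare integer comparison only succeeds when the enum derives from IntEnum; a plain Enum member never equals a raw int. A minimal illustration with hypothetical class names:

from enum import Enum, IntEnum


class IntStyleResult(IntEnum):  # hypothetical stand-in for an int-backed result enum
    PASS = 0


class PlainStyleResult(Enum):  # plain Enum shown for contrast
    PASS = 0


assert IntStyleResult.PASS == 0          # IntEnum members compare equal to ints
assert PlainStyleResult.PASS != 0        # plain Enum members never equal raw ints
assert PlainStyleResult.PASS.value == 0  # .value is explicit and works for both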

Comment on lines +231 to +250
    def test_load_config_file_permission_error(self):
        """Test handling of file permission errors."""
        config_content = b"""
[checks]
test = true
"""
        with tempfile.NamedTemporaryFile(mode="wb", suffix=".toml", delete=False) as f:
            f.write(config_content)
            f.flush()

        try:
            # Remove read permissions to simulate permission error
            os.chmod(f.name, 0o000)

            with pytest.raises(PermissionError):
                load_config(f.name)
        finally:
            # Restore permissions and clean up
            os.chmod(f.name, 0o644)
            os.unlink(f.name)

⚠️ Potential issue | 🟡 Minor

Permission test may fail on certain platforms or CI environments.

Setting file permissions to 0o000 doesn't work reliably on Windows, and some CI environments run as root where permission checks are bypassed. Consider skipping this test on incompatible platforms.

💡 Suggested improvement
+    @pytest.mark.skipif(
+        os.name == "nt" or os.geteuid() == 0,
+        reason="Permission test unreliable on Windows or when running as root",
+    )
     @pytest.mark.benchmark
     def test_load_config_file_permission_error(self):
🤖 Prompt for AI Agents
In `tests/config_test.py` around lines 231-250: the permission test
test_load_config_file_permission_error is flaky on Windows and when running as
root; update it to skip on incompatible environments by checking platform and
privileges before altering file perms: in test_load_config_file_permission_error
(which calls load_config) add a guard that uses platform.system() == "Windows"
or (hasattr(os, "geteuid") and os.geteuid() == 0) to call pytest.skip (or apply
pytest.mark.skipif) so the test is only executed on Unix non-root environments
where os.chmod(0o000) can reliably trigger a PermissionError.

Comment on lines +340 to +343
        finally:
            # Restore original modules
            sys.modules.clear()
            sys.modules.update(original_modules)

⚠️ Potential issue | 🟡 Minor

Risky sys.modules manipulation may affect other tests.

Clearing and restoring sys.modules can cause issues if tests run in parallel or if exceptions occur before restoration completes. The current pattern with sys.modules.clear() followed by sys.modules.update() could leave the interpreter in an inconsistent state.

Consider using unittest.mock.patch.dict for safer module manipulation:

💡 Safer approach
         finally:
             # Restore original modules
-            sys.modules.clear()
-            sys.modules.update(original_modules)
+            # Only restore the modules we explicitly removed
+            for key in list(sys.modules.keys()):
+                if key not in original_modules:
+                    del sys.modules[key]
+            for key, value in original_modules.items():
+                if key not in sys.modules:
+                    sys.modules[key] = value
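For reference, the patch.dict pattern mentioned above snapshots and restores sys.modules automatically, even if the test body raises; a minimal sketch, assuming the test only needs tomllib/tomli hidden (the test name and module choices are illustrative):

import sys
from unittest.mock import patch


def test_toml_import_fallback_sketch():
    # patch.dict records the affected sys.modules entries and restores
    # them on exit, so no manual clear()/update() bookkeeping is needed.
    with patch.dict(sys.modules, {"tomllib": None, "tomli": None}):
        # A None entry in sys.modules makes the import fail, which is
        # what exercises the fallback path; adapt the body to the real test.
        try:
            import tomllib  # noqa: F401
        except ImportError:
            pass  # fallback path taken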
🤖 Prompt for AI Agents
In `tests/config_test.py` around lines 340-343: the finally block that does
sys.modules.clear() and sys.modules.update(original_modules) is unsafe; replace
this manual mutation with unittest.mock.patch.dict to temporarily modify
sys.modules (e.g., use patch.dict(sys.modules, {...}, clear=False) or use it as
a context manager / setUp/tearDown) so the test no longer clears the global
module table and reliably restores modules even on exceptions or parallel test
runs; update the test that currently uses the finally block to use patch.dict
around the section that needs altered modules (refer to the finally block and
the original_modules variable to locate the change).


codecov bot commented Jan 31, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.51%. Comparing base (2ca2701) to head (b6c26ce).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #353      +/-   ##
==========================================
+ Coverage   87.46%   93.51%   +6.05%     
==========================================
  Files           8        8              
  Lines         694      694              
==========================================
+ Hits          607      649      +42     
+ Misses         87       45      -42     

☔ View full report in Codecov by Sentry.


codspeed-hq bot commented Jan 31, 2026

Merging this PR will degrade performance by 10.88%

❌ 1 regressed benchmark
✅ 143 untouched benchmarks
🆕 7 new benchmarks
⏩ 108 skipped benchmarks¹

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Performance Changes

Benchmark BASE HEAD Efficiency
🆕 test_load_config_file_permission_error N/A 340.3 µs N/A
🆕 test_tomli_import_fallback N/A 424.6 µs N/A
🆕 test_load_config_invalid_toml N/A 387.5 µs N/A
🆕 test_config_tomli_fallback_direct N/A 1.3 ms N/A
🆕 test_tomli_import_fallback_simulation N/A 1.2 ms N/A
🆕 test_import_paths_coverage N/A 326.6 µs N/A
test_commit_message_validator_valid_conventional_commit 509.1 µs 571.2 µs -10.88%
🆕 test_validation_engine_validator_map N/A 121.7 µs N/A

Comparing feature/add-tests (b6c26ce) with main (2ca2701)

Open in CodSpeed

Footnotes

  ¹ 108 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them on CodSpeed to remove them from the performance reports.

Copilot AI left a comment

Pull request overview

This PR updates the pre-commit configuration and consolidates test coverage by merging separate comprehensive test files into existing test files. The changes aim to improve code organization and eliminate redundancy while maintaining test coverage.

Changes:

  • Updated pre-commit hook versions (ruff from v0.14.10 to v0.14.14, commit-check from v2.2.1 to v2.2.2; see the sketch after this list)
  • Consolidated test files by merging engine_comprehensive_test.py, config_import_test.py, config_fallback_test.py, and config_edge_test.py into their respective main test files
  • Fixed type annotations in util.py to use Optional[dict] instead of dict | None for Python 3.9+ compatibility
  • Added new utility function tests to improve coverage of helper functions
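Illustratively, the bumped entries in .pre-commit-config.yaml would look like the following; only the repository URLs and revisions come from this PR, and the hook ids are assumptions for the sketch:

repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.14.14   # bumped from v0.14.10
    hooks:
      - id: ruff    # hook id assumed for illustration
  - repo: https://github.com/commit-check/commit-check
    rev: v2.2.2     # bumped from v2.2.1
    hooks:
      - id: check-message  # hook id assumed for illustration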

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 9 comments.

Summary per file:
  • .pre-commit-config.yaml: Updated ruff-pre-commit to v0.14.14 and commit-check to v2.2.2
  • commit_check/util.py: Fixed type annotations from union syntax to Optional for Python 3.9 compatibility
  • tests/engine_test.py: Consolidated comprehensive tests from engine_comprehensive_test.py; contains significant test duplication
  • tests/util_test.py: Added new TestHelperFunctions class to test previously untested utility functions
  • tests/config_test.py: Consolidated edge case and import fallback tests from separate test files
  • tests/engine_comprehensive_test.py: Deleted (merged into engine_test.py)
  • tests/config_import_test.py: Deleted (merged into config_test.py)
  • tests/config_fallback_test.py: Deleted (merged into config_test.py)
  • tests/config_edge_test.py: Deleted (merged into config_test.py)

@shenxianpeng (Contributor, Author) commented:

@copilot open a new pull request to apply changes based on the comments in this thread


Copilot AI commented Jan 31, 2026

@shenxianpeng I've opened a new pull request, #354, to work on those changes. Once the pull request is ready, I'll request review from you.

…354)

* Initial plan

* refactor: remove duplicate test classes and fix code style issues

Co-authored-by: shenxianpeng <3353385+shenxianpeng@users.noreply.github.com>

* chore: Add 'copilot' to allowed branch types

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: shenxianpeng <3353385+shenxianpeng@users.noreply.github.com>
Co-authored-by: Xianpeng Shen <xianpeng.shen@gmail.com>
@github-actions github-actions bot added the enhancement (New feature or request) label on Jan 31, 2026

@shenxianpeng shenxianpeng removed the enhancement (New feature or request) label on Jan 31, 2026
@shenxianpeng shenxianpeng merged commit 5eab803 into main on Jan 31, 2026
32 of 33 checks passed
@shenxianpeng shenxianpeng deleted the feature/add-tests branch January 31, 2026 00:36
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/engine_test.py (1)

813-840: LGTM! Consider verifying no unexpected keys exist.

The test correctly validates all expected VALIDATOR_MAP entries. For completeness, you could also assert that the map doesn't contain unexpected keys:

💡 Optional enhancement
         for check, validator_class in expected_mappings.items():
             assert engine.VALIDATOR_MAP[check] == validator_class
+
+        # Verify no unexpected keys in VALIDATOR_MAP
+        assert set(engine.VALIDATOR_MAP.keys()) == set(expected_mappings.keys())


tests/config_test.py (1)

368-383: Consider avoiding exec() for import testing.

While exec() works for testing import behavior, it's flagged by security linters (S102) and makes debugging harder. Consider using importlib for a more maintainable approach.

💡 Alternative approach
@pytest.mark.benchmark
def test_tomli_import_fallback_simulation(self):
    """Test tomli import fallback by simulating the ImportError condition."""
    import importlib
    import importlib.util
    
    # Test case 1: Verify tomllib is used on Python 3.11+
    if sys.version_info >= (3, 11):
        spec = importlib.util.find_spec("tomllib")
        assert spec is not None, "tomllib should be available"
    
    # Test case 2: Verify the config module loads successfully
    from commit_check.config import toml_load
    assert callable(toml_load)

Comment on lines +224 to +226
            try:
                with pytest.raises(Exception):  # Should raise a TOML parsing error
                    load_config(f.name)

⚠️ Potential issue | 🟡 Minor

Use a specific exception type instead of bare Exception.

Catching Exception is too broad and may mask unrelated errors. Use the specific TOML parsing error type for clearer test intent.

💡 Suggested fix
             try:
-                with pytest.raises(Exception):  # Should raise a TOML parsing error
+                with pytest.raises(TOMLDecodeError):  # specific TOML parsing error
                     load_config(f.name)

with TOMLDecodeError imported in a version-compatible way:

try:
    from tomllib import TOMLDecodeError
except ImportError:
    from tomli import TOMLDecodeError

with pytest.raises(TOMLDecodeError):
    load_config(f.name)
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 225-225: Do not assert blind exception: Exception

(B017)

🤖 Prompt for AI Agents
In `tests/config_test.py` around lines 224-226: replace the broad
pytest.raises(Exception) in the failing test around load_config(f.name) with the
specific TOML parsing exception: import TOMLDecodeError from tomllib if
available else from tomli, then use pytest.raises(TOMLDecodeError) to assert
load_config(f.name) raises that specific error; update the test that references
load_config to import/alias the correct TOMLDecodeError and use it in the
context manager.


Labels

tests (Add test related changes)


2 participants