feat: Add comprehensive tests #331
base: main
Conversation
✅ Deploy Preview for commit-check ready!
To edit notification comments on pull requests, go to your Netlify project configuration.
Walkthrough: Consolidates and reorganizes test coverage. Three standalone config tests and a comprehensive engine test were removed; their cases were merged into expanded/updated test modules for config, engine, main, rule_builder, and util.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~35 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/engine_test.py (2)
156-159: Remove duplicate assertions. Lines 157-159 contain identical assertions that should be consolidated into a single assertion.
Apply this diff to fix:
```diff
 result = validator.validate(context)
 assert result == ValidationResult.PASS
-assert result == ValidationResult.PASS
-assert result == ValidationResult.PASS
```
245-253: Remove duplicate assertions and comments. Lines 249-253 duplicate the assertions and comments from lines 245-248.
Apply this diff to fix:
```diff
 assert result == ValidationResult.PASS
 # Called once for skip logic ("an"), once for value ("ae")
 assert mock_get_commit_info.call_count == 2
 assert mock_get_commit_info.call_args_list[0][0][0] == "an"
 assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
-assert result == ValidationResult.PASS
-# Called once for skip logic ("an"), once for value ("ae")
-assert mock_get_commit_info.call_count == 2
-assert mock_get_commit_info.call_args_list[0][0][0] == "an"
-assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
```
🧹 Nitpick comments (6)
tests/engine_test.py (3)
867-1004: Add missing test markers and consolidate duplicate tests. Several issues in this section:
- Tests at lines 871, 888, 900, 916, 932, 948, 965, 981, and 993 are missing the `@pytest.mark.benchmark` decorator used consistently throughout the file.
- `TestCommitMessageValidatorEdgeCases.test_validate_with_valid_conventional_commit` (lines 871-882) duplicates the test at lines 62-72.

Consider consolidating these edge case tests into the existing test classes rather than creating separate edge case classes, as the distinction between "edge cases" and regular tests is unclear.
Apply this diff to add missing decorators:
```diff
 class TestCommitMessageValidatorEdgeCases:
     """Test edge cases for CommitMessageValidator."""

+    @pytest.mark.benchmark
     def test_validate_with_valid_conventional_commit(self):

 class TestSubjectCapitalizationEdgeCases:
     """Test additional subject capitalization validator edge cases."""

+    @pytest.mark.benchmark
     def test_validate_merge_commit_passes(self):
         """Test that merge commits pass capitalization check."""

+    @pytest.mark.benchmark
     def test_validate_conventional_commit_capitalized(self):

 class TestSubjectImperativeEdgeCases:
     """Test additional imperative validator edge cases."""

+    @pytest.mark.benchmark
     def test_validate_non_match_fallback(self):

 class TestSubjectLengthEdgeCases:
     """Test additional subject length validator edge cases."""

+    @pytest.mark.benchmark
     def test_validate_unknown_check_type(self):

 class TestAuthorValidatorEdgeCases:
     """Test additional author validator edge cases."""

+    @pytest.mark.benchmark
     def test_validate_regex_first_then_ignored(self):

 class TestMergeBaseEdgeCases:
     """Test additional merge base validator edge cases."""

+    @pytest.mark.benchmark
     def test_find_target_branch_cleans_remote_prefix(self):

 class TestCommitTypeEdgeCases:
     """Test additional commit type validator edge cases."""

+    @pytest.mark.benchmark
     def test_validate_merge_commit_when_not_allowed(self):

+    @pytest.mark.benchmark
     def test_validate_unknown_check_type(self):
```
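For reference, this is how a `pytest.mark` decorator binds to a test function: the mark is recorded on the function's `pytestmark` list, which is what lets `pytest -m benchmark` select marked tests. A minimal, self-contained sketch (assuming pytest is installed; the test name and body are illustrative, not the project's real test):

```python
import pytest


@pytest.mark.benchmark
def test_validate_merge_commit_passes():
    # Illustrative body; the real assertions live in tests/engine_test.py.
    assert True


# Applying the decorator stores the mark on the function itself.
mark_names = [mark.name for mark in test_validate_merge_commit_passes.pytestmark]
```

Because the marks live on the function object, a quick audit script (or a simple grep for `@pytest.mark.benchmark`) can verify that every test in the module carries the marker.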
1006-1301: Consolidate duplicate test coverage. This section duplicates significant test coverage from earlier in the file:
- `TestValidationResultComprehensive` (lines 1007-1012) duplicates `TestValidationResult` (lines 26-31)
- `TestValidationContextComprehensive` (lines 1015-1027) duplicates `TestValidationContext` (lines 34-49)
- `TestCommitMessageValidatorComprehensive` overlaps heavily with `TestCommitMessageValidator` (lines 60-140)
- Similar patterns for other validators
Consider consolidating these tests into the existing test classes rather than creating separate "comprehensive" test classes. The current organization makes it difficult to understand which tests provide unique coverage versus duplicates. If these tests were meant to replace earlier tests, the old tests should be removed.
1-1301: Overall assessment: Good coverage with organizational improvements needed. The test file provides comprehensive coverage of the engine module and follows the coding guideline to ensure tests run via pytest and cover commit, branch, author, and CLI behaviors. However, the file would benefit from:
Better organization: Having three separate test class hierarchies (base tests, edge case tests, and comprehensive tests) for the same validators creates confusion about which tests provide unique coverage.
Reduced duplication: Many tests in the "edge cases" and "comprehensive" sections duplicate existing test coverage. Consider consolidating into single, well-organized test classes per validator.
Consistent test markers: Ensure all tests have the `@pytest.mark.benchmark` decorator for consistency.

The tests themselves are well-written with appropriate mocking and assertions, but the file structure makes maintenance more difficult than necessary.
Based on learnings: ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors.
tests/util_test.py (1)
373-393: Consider using mocks instead of manipulating function attributes. Directly setting `print_error_header.has_been_called` (lines 375, 386) couples tests to implementation details and may be brittle. Consider using `mocker.patch` or tracking calls through `capsys` output instead:

```diff
-    def test_print_failure_first_call_prints_header(self, capsys):
+    def test_print_failure_first_call_prints_header(self, capsys, mocker):
         """Test that header is printed on first failure."""
-        print_error_header.has_been_called = False
+        mocker.patch("commit_check.util.print_error_header")
         check = {"check": "message", "error": "Invalid format", "suggest": "Use feat:"}
         _print_failure(check, "^feat:.*", "wrong message")
```

tests/config_test.py (2)
230-234: Optional: Consider simplifying the mock error message. Line 232 raises a `PermissionError` with a detailed message. While this works, Ruff suggests keeping exception messages concise (TRY003).

```diff
 if "no_perms.toml" in str(args[0]):
-    raise PermissionError("Permission denied")
+    raise PermissionError()
```

The test verifies the exception type, so the message content may not be critical.
362-365: Optional: Simplify mock exception message. Similar to line 232, line 364 has a detailed error message that triggers Ruff TRY003.

```diff
 if name == "tomllib":
-    raise ModuleNotFoundError("No module named 'tomllib'")
+    raise ModuleNotFoundError()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- tests/config_edge_test.py (0 hunks)
- tests/config_fallback_test.py (0 hunks)
- tests/config_import_test.py (0 hunks)
- tests/config_test.py (2 hunks)
- tests/engine_comprehensive_test.py (0 hunks)
- tests/engine_test.py (1 hunks)
- tests/main_test.py (2 hunks)
- tests/rule_builder_test.py (1 hunks)
- tests/util_test.py (2 hunks)
💤 Files with no reviewable changes (4)
- tests/config_fallback_test.py
- tests/engine_comprehensive_test.py
- tests/config_edge_test.py
- tests/config_import_test.py
🧰 Additional context used
📓 Path-based instructions (1)
tests/**
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Files:
- tests/util_test.py
- tests/main_test.py
- tests/engine_test.py
- tests/config_test.py
- tests/rule_builder_test.py
🧠 Learnings (4)
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to tests/** : Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Applied to files:
- tests/util_test.py
- tests/main_test.py
- tests/engine_test.py
- tests/config_test.py
- tests/rule_builder_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to commit_check/main.py : Expose CLI flags: --message, --branch, --help, --version, --config, --dry-run; support combining checks
Applied to files:
tests/util_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to commit_check/commit.py : Support the set of commit types: build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test
Applied to files:
tests/util_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to .commit-check.yml : Maintain default validation rules for the tool in .commit-check.yml
Applied to files:
- tests/util_test.py
- tests/engine_test.py
🧬 Code graph analysis (3)
tests/main_test.py (1)
commit_check/main.py (4)
`StdinReader` (14-26), `main` (127-225), `_get_message_content` (92-124), `read_piped_input` (18-26)
tests/engine_test.py (2)
commit_check/rule_builder.py (1)
`ValidationRule` (14-40)

commit_check/engine.py (24)

`CommitMessageValidator` (97-131), `ValidationContext` (27-32), `validate` (42-44), `validate` (100-114), `validate` (137-145), `validate` (241-250), `validate` (287-303), `validate` (309-329), `validate` (356-370), `validate` (393-418), `validate` (441-456), `ValidationResult` (19-23), `SubjectCapitalizationValidator` (167-189), `SubjectImperativeValidator` (192-213), `SubjectLengthValidator` (216-235), `AuthorValidator` (238-281), `MergeBaseValidator` (306-350), `_find_target_branch` (331-350), `CommitTypeValidator` (438-516), `ValidationEngine` (519-563), `BranchValidator` (284-303), `SignoffValidator` (353-387), `BodyValidator` (390-435), `validate_all` (545-563)
tests/config_test.py (1)
commit_check/config.py (1)
`load_config` (21-37)
🪛 Ruff (0.14.8)
tests/engine_test.py
1065-1065: Unused method argument: mock_get_commit_info
(ARG002)
tests/config_test.py
219-219: Do not assert blind exception: Exception
(B017)
232-232: Avoid specifying long messages outside the exception class
(TRY003)
279-279: Unused noqa directive (non-enabled: F401)
Remove unused noqa directive
(RUF100)
364-364: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run benchmarks
🔇 Additional comments (5)
tests/util_test.py (2)
11-15: LGTM! Good coverage of internal utilities. Testing internal/private functions is appropriate for comprehensive test coverage.
408-514: Excellent comprehensive edge case coverage! These tests thoroughly cover TOML loading errors, config file discovery, and YAML fallback scenarios. The use of `tmp_path` fixtures and proper error path testing aligns well with the PR objectives. Based on coding guidelines, confirming tests follow pytest patterns and cover the intended behaviors.
tests/main_test.py (1)
323-380: LGTM! Solid edge case coverage for main flow. These additional tests provide good coverage of:
- Git format fallback behavior
- FileNotFoundError handling
- OSError handling in file operations
- Stdin whitespace trimming
The mocking patterns are appropriate and tests follow pytest conventions.
Based on coding guidelines, tests cover CLI behaviors and use pytest -v compatible patterns.
tests/rule_builder_test.py (1)
160-273: Excellent validation of edge cases and defensive coding! These tests thoroughly verify:
- Proper handling of invalid config types (non-integer, non-list, non-string)
- Empty value handling
- Complete and minimal field serialization in `to_dict()`

The assertions correctly verify that invalid configurations don't create rules, which is good defensive behavior.
tests/config_test.py (1)
282-435: Excellent comprehensive coverage of import fallback paths! These tests thoroughly exercise the tomllib→tomli fallback mechanism and properly handle module state restoration. The systematic approach to testing different import scenarios is well-structured.
Based on coding guidelines, tests use pytest patterns and provide comprehensive coverage.
```python
def test_load_config_invalid_toml(self, tmp_path):
    """Test loading config with invalid TOML syntax."""
    invalid_config = tmp_path / "invalid.toml"
    invalid_config.write_text("invalid toml [[[")

    with pytest.raises(Exception):  # Could be TOMLDecodeError or similar
        load_config(str(invalid_config))
```
Specify the expected exception type.
The broad Exception catch on line 219 makes the test less precise. Consider catching the specific TOML parsing exception.
Apply this diff to be more specific:
```diff
 def test_load_config_invalid_toml(self, tmp_path):
     """Test loading config with invalid TOML syntax."""
     invalid_config = tmp_path / "invalid.toml"
     invalid_config.write_text("invalid toml [[[")
-    with pytest.raises(Exception):  # Could be TOMLDecodeError or similar
+    with pytest.raises((Exception,)):  # TOMLDecodeError varies by library
         load_config(str(invalid_config))
```

Or better yet, import and use the specific exception from your TOML library.
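A sketch of that stricter approach, assuming the project follows the common split between stdlib `tomllib` (Python 3.11+) and the `tomli` backport; `parse_toml` is an illustrative stand-in, not the project's `load_config`:

```python
# Version-fallback import of the TOML parser and its concrete error type.
try:
    import tomllib as toml_parser  # stdlib on Python 3.11+
except ModuleNotFoundError:
    import tomli as toml_parser  # backport for older interpreters

TOMLDecodeError = toml_parser.TOMLDecodeError


def parse_toml(text: str) -> dict:
    """Parse TOML text, letting the library's specific decode error propagate."""
    return toml_parser.loads(text)


# Invalid input raises the precise error type, so a test can assert on it
# instead of a blind `Exception`.
try:
    parse_toml("invalid toml [[[")
    raised = False
except TOMLDecodeError:
    raised = True
```

With the concrete error type imported, the test becomes `pytest.raises(TOMLDecodeError)` and no longer trips Ruff's B017.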
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.8)
219-219: Do not assert blind exception: Exception
(B017)
🤖 Prompt for AI Agents
In tests/config_test.py around lines 214 to 220, the test uses a broad
pytest.raises(Exception) which is too general; replace it with the specific TOML
parsing exception (e.g., TOMLDecodeError or tomli.TOMLDecodeError depending on
which TOML library the project uses) and add the corresponding import at the top
of the test module so the test asserts the precise parsing error is raised when
loading invalid TOML.
tests/config_test.py (outdated)
```python
# Force reimport of config module to restore original state
if "commit_check.config" in sys.modules:
    del sys.modules["commit_check.config"]
import commit_check.config  # noqa: F401
```
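The snippet above relies on a general CPython behavior: deleting a module's `sys.modules` entry forces the next import to re-execute the module and produce a fresh module object, which is how the test restores original state. A stdlib-only demonstration of the same pattern (using `json` purely as a convenient, side-effect-free example):

```python
import importlib
import sys

# First import (or fetch the already-cached module object).
first = importlib.import_module("json")

# Evict the cached entry; the next import re-executes the module's
# top-level code and binds a brand-new module object.
del sys.modules["json"]
second = importlib.import_module("json")

fresh_reimport = first is not second  # the two objects are distinct
```

The reimported module is fully functional; only the identity of the module object changes, which is exactly what the test needs to undo earlier monkeypatching of module-level state.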
Remove unused noqa directive.
The noqa: F401 directive on line 279 is not needed.
```diff
-import commit_check.config  # noqa: F401
+import commit_check.config
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧰 Tools
🪛 Ruff (0.14.8)
279-279: Unused noqa directive (non-enabled: F401)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In tests/config_test.py around line 279, the import line currently includes an
unnecessary " # noqa: F401" directive; remove the " # noqa: F401" suffix so the
line is just "import commit_check.config" (keeping the import intact) and run
linters/tests to confirm the unused noqa has been removed.
CodSpeed Performance Report: Merging #331 will degrade performance by 40.01%.
Benchmarks breakdown
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main     #331      +/-   ##
==========================================
+ Coverage   87.17%   93.29%   +6.12%
==========================================
  Files           8        8
  Lines         686      686
==========================================
+ Hits          598      640      +42
+ Misses         88       46      -42
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 0
♻️ Duplicate comments (2)
tests/engine_test.py (1)
1061-1066: Fix `@patch` decorator / test signature mismatch (currently breaks pytest). You have 2 patches but only 1 injected mock parameter, matching the CI failure ("takes 2 positional arguments but 3 were given"). Also aligns with the prior review note: the `get_commit_info` patch isn't used here, so drop it.

```diff
-    @patch("commit_check.engine.get_commit_info")
     @patch("commit_check.engine.has_commits")
     @pytest.mark.benchmark
-    def test_commit_message_validator_failure(self, mock_has_commits):
+    def test_commit_message_validator_failure(self, mock_has_commits):
         """Test CommitMessageValidator failure case."""
         mock_has_commits.return_value = True
```

tests/config_test.py (1)
214-221: Avoid `pytest.raises(Exception)`; assert the TOML decode error specifically. This is the same issue raised previously, and Ruff flags it (B017). Prefer `TOMLDecodeError` via tomllib/tomli fallback.

```diff
 import builtins
 import pytest
@@
 from unittest.mock import patch
 from commit_check.config import load_config, DEFAULT_CONFIG_PATHS
+
+try:
+    from tomllib import TOMLDecodeError
+except ImportError:  # pragma: no cover
+    from tomli import TOMLDecodeError
@@
 class TestConfigEdgeCases:
@@
     def test_load_config_invalid_toml(self, tmp_path):
         """Test loading config with invalid TOML syntax."""
         invalid_config = tmp_path / "invalid.toml"
         invalid_config.write_text("invalid toml [[[")
-        with pytest.raises(Exception):  # TOMLDecodeError varies by library
+        with pytest.raises(TOMLDecodeError):
             load_config(str(invalid_config))
```
🧹 Nitpick comments (1)
tests/engine_test.py (1)
865-1004: Consider deduplicating the new "EdgeCases" tests vs earlier ones to keep the suite lean. Several of these overlap existing tests in the same module (e.g., commit message / capitalization / imperative basics). Not a blocker, but it'll reduce maintenance and run time.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/config_test.py (2 hunks)
- tests/engine_test.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
tests/**
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Files:
- tests/config_test.py
- tests/engine_test.py
🧠 Learnings (2)
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to tests/** : Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Applied to files:
- tests/config_test.py
- tests/engine_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to .commit-check.yml : Maintain default validation rules for the tool in .commit-check.yml
Applied to files:
tests/engine_test.py
🧬 Code graph analysis (2)
tests/config_test.py (1)
commit_check/config.py (1)
`load_config` (21-37)
tests/engine_test.py (2)
commit_check/rule_builder.py (1)
`ValidationRule` (14-40)

commit_check/engine.py (24)

`CommitMessageValidator` (97-131), `ValidationContext` (27-32), `validate` (42-44), `validate` (100-114), `validate` (137-145), `validate` (241-250), `validate` (287-303), `validate` (309-329), `validate` (356-370), `validate` (393-418), `validate` (441-456), `ValidationResult` (19-23), `SubjectCapitalizationValidator` (167-189), `SubjectImperativeValidator` (192-213), `SubjectLengthValidator` (216-235), `AuthorValidator` (238-281), `MergeBaseValidator` (306-350), `_find_target_branch` (331-350), `CommitTypeValidator` (438-516), `ValidationEngine` (519-563), `BranchValidator` (284-303), `SignoffValidator` (353-387), `BodyValidator` (390-435), `validate_all` (545-563)
🪛 GitHub Actions: main
tests/engine_test.py
[error] 1-200: TestCommitMessageValidatorComprehensive.test_commit_message_validator_failure() takes 2 positional arguments but 3 were given. TypeError occurred within test suite during pytest execution.
🪛 Ruff (0.14.8)
tests/config_test.py
219-219: Do not assert blind exception: Exception
(B017)
232-232: Avoid specifying long messages outside the exception class
(TRY003)
363-363: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run benchmarks
🔇 Additional comments (1)
tests/config_test.py (1)
3-3: `import builtins` usage looks appropriate for the permission/import fallback tests.
@copilot fix ci failure
@shenxianpeng I've opened a new pull request, #332, to work on those changes. Once the pull request is ready, I'll request review from you.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/engine_test.py (2)
157-159: Remove duplicate assertions. Lines 157-159 repeat the same assertion three times. This appears to be a copy-paste error.
Apply this diff to fix:
```diff
 result = validator.validate(context)
 assert result == ValidationResult.PASS
-assert result == ValidationResult.PASS
-assert result == ValidationResult.PASS
```
249-253: Remove duplicate assertions. Lines 249-253 duplicate the assertions from lines 246-248.
Apply this diff to fix:
```diff
 assert result == ValidationResult.PASS
 # Called once for skip logic ("an"), once for value ("ae")
 assert mock_get_commit_info.call_count == 2
 assert mock_get_commit_info.call_args_list[0][0][0] == "an"
 assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
-assert result == ValidationResult.PASS
-# Called once for skip logic ("an"), once for value ("ae")
-assert mock_get_commit_info.call_count == 2
-assert mock_get_commit_info.call_args_list[0][0][0] == "an"
-assert mock_get_commit_info.call_args_list[1][0][0] == "ae"
```
♻️ Duplicate comments (1)
tests/engine_test.py (1)
1061-1083: Remove unused mock parameter (previously flagged). The `mock_get_commit_info` parameter is decorated but never used in the test body since the test provides `stdin_text` directly. This issue was previously identified but still appears in the code. Apply this diff to fix:

```diff
-    @patch("commit_check.engine.get_commit_info")
     @patch("commit_check.engine.has_commits")
     @pytest.mark.benchmark
     def test_commit_message_validator_failure(
-        self, mock_has_commits, mock_get_commit_info
+        self, mock_has_commits
     ):
```
🧹 Nitpick comments (3)
tests/engine_test.py (3)
871-882: Add missing benchmark decorator and consider consolidation. This test is missing the `@pytest.mark.benchmark` decorator that is consistently used throughout the file. Additionally, this test duplicates `test_commit_message_validator_valid_conventional_commit` at lines 62-72. Apply this diff to add the decorator:

```diff
+    @pytest.mark.benchmark
     def test_validate_with_valid_conventional_commit(self):
```

Consider whether this test class adds unique value or if it can be consolidated with the existing `TestCommitMessageValidator` class.
1007-1028: Remove duplicate test classes.
`TestValidationResultComprehensive` and `TestValidationContextComprehensive` duplicate tests already covered by `TestValidationResult` (lines 26-31) and `TestValidationContext` (lines 34-49) respectively. These duplicates don't add new coverage. Consider removing these duplicate test classes to reduce maintenance burden:

```diff
-# Tests from engine_comprehensive_test.py
-class TestValidationResultComprehensive:
-    @pytest.mark.benchmark
-    def test_validation_result_values(self):
-        """Test ValidationResult enum values."""
-        assert ValidationResult.PASS == 0
-        assert ValidationResult.FAIL == 1
-
-
-class TestValidationContextComprehensive:
-    @pytest.mark.benchmark
-    def test_validation_context_creation(self):
-        """Test ValidationContext creation."""
-        context = ValidationContext()
-        assert context.stdin_text is None
-        assert context.commit_file is None
-
-        context_with_data = ValidationContext(
-            stdin_text="test commit", commit_file="commit.txt"
-        )
-        assert context_with_data.stdin_text == "test commit"
-        assert context_with_data.commit_file == "commit.txt"
```
1030-1101: Consider consolidating duplicate validator tests.
`TestCommitMessageValidatorComprehensive` duplicates much of the coverage already provided by `TestCommitMessageValidator` (lines 60-140). While comprehensive testing is valuable, consolidating these into a single well-organized test class would improve maintainability and reduce duplication.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/engine_test.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
tests/**
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Files:
tests/engine_test.py
🧠 Learnings (2)
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to tests/** : Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Applied to files:
tests/engine_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to .commit-check.yml : Maintain default validation rules for the tool in .commit-check.yml
Applied to files:
tests/engine_test.py
🪛 Ruff (0.14.8)
tests/engine_test.py
1065-1065: Unused method argument: mock_get_commit_info
(ARG002)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run benchmarks
🔇 Additional comments (2)
tests/engine_test.py (2)
888-898: Verify merge commit format matches real-world usage. The test uses `"merge: something"`, which doesn't match typical git merge commit formats like `"Merge branch 'feature'"`. Confirm whether this lowercase format is intentionally tested or if the test should use a realistic merge commit message.
1223-1248: LGTM - excellent comprehensive coverage. This test validates all expected validator mappings in the `VALIDATOR_MAP`, ensuring completeness and correctness of the validator registration system. This is a valuable test for catching configuration errors.
9ed393c to 65a259b (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)
tests/config_test.py (1)
214-220: Specify the expected exception type. The broad `Exception` catch on line 219 makes the test less precise. The specific TOML parsing exception should be caught instead. This issue was already flagged in a previous review.
🧹 Nitpick comments (2)
tests/config_test.py (1)
337-434: Consider consolidating duplicate import fallback tests. The import fallback mechanism is tested multiple times across different test classes (lines 141-207, 240-274, 341-388, 389-434). While comprehensive coverage is valuable, consider consolidating these into a single, well-organized test class to reduce maintenance burden and improve clarity.
tests/main_test.py (1)
329-338: Consider using the pytest-mock fixture for cleaner test setup. The deeply nested `with patch()` statements (4 levels) make the test harder to read and maintain. Consider using the `mocker` fixture (already used elsewhere in the file) for a cleaner approach:

```python
def test_main_message_validation_git_format_used(self, mocker):
    """Test message validation uses git format B."""
    mocker.patch("sys.argv", ["commit-check", "--message"])
    mocker.patch.object(StdinReader, "read_piped_input", return_value=None)
    mocker.patch("commit_check.util.get_commit_info", return_value="feat: test")
    mocker.patch("commit_check.util.has_commits", return_value=True)

    result = main()
    assert result == 0
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- pyproject.toml (1 hunks)
- tests/config_edge_test.py (0 hunks)
- tests/config_fallback_test.py (0 hunks)
- tests/config_import_test.py (0 hunks)
- tests/config_test.py (2 hunks)
- tests/engine_comprehensive_test.py (0 hunks)
- tests/engine_test.py (1 hunks)
- tests/main_test.py (2 hunks)
- tests/rule_builder_test.py (1 hunks)
- tests/util_test.py (2 hunks)
💤 Files with no reviewable changes (4)
- tests/config_edge_test.py
- tests/config_import_test.py
- tests/config_fallback_test.py
- tests/engine_comprehensive_test.py
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
📓 Path-based instructions (1)
tests/**
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Files:
- tests/engine_test.py
- tests/config_test.py
- tests/main_test.py
- tests/util_test.py
- tests/rule_builder_test.py
🧠 Learnings (4)
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to tests/** : Ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors
Applied to files:
- tests/engine_test.py
- tests/config_test.py
- tests/main_test.py
- tests/util_test.py
- tests/rule_builder_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to .commit-check.yml : Maintain default validation rules for the tool in .commit-check.yml
Applied to files:
- tests/engine_test.py
- tests/util_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to commit_check/main.py : Expose CLI flags: --message, --branch, --help, --version, --config, --dry-run; support combining checks
Applied to files:
tests/util_test.py
📚 Learning: 2025-10-03T10:28:06.793Z
Learnt from: CR
Repo: commit-check/commit-check PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-10-03T10:28:06.793Z
Learning: Applies to commit_check/commit.py : Support the set of commit types: build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test
Applied to files:
tests/util_test.py
🧬 Code graph analysis (4)
tests/engine_test.py (2)
commit_check/rule_builder.py (1)
`ValidationRule` (14-40)

commit_check/engine.py (24)

`CommitMessageValidator` (97-131), `ValidationContext` (27-32), `validate` (42-44), `validate` (100-114), `validate` (137-145), `validate` (241-250), `validate` (287-303), `validate` (309-329), `validate` (356-370), `validate` (393-418), `validate` (441-456), `ValidationResult` (19-23), `SubjectCapitalizationValidator` (167-189), `SubjectImperativeValidator` (192-213), `SubjectLengthValidator` (216-235), `AuthorValidator` (238-281), `MergeBaseValidator` (306-350), `_find_target_branch` (331-350), `CommitTypeValidator` (438-516), `ValidationEngine` (519-563), `BranchValidator` (284-303), `SignoffValidator` (353-387), `BodyValidator` (390-435), `validate_all` (545-563)
tests/config_test.py (1)
commit_check/config.py (1)
- load_config (21-37)
tests/main_test.py (1)
commit_check/main.py (4)
- StdinReader (14-26)
- main (127-225)
- _get_message_content (92-124)
- read_piped_input (18-26)
tests/util_test.py (1)
commit_check/util.py (7)
- _find_check (27-32)
- _print_failure (35-41)
- _find_config_file (164-181)
- _load_toml (151-161)
- validate_config (184-207)
- print_error_header (220-236)
- print_suggestion (257-269)
🪛 Ruff (0.14.8)
tests/engine_test.py
1065-1065: Unused method argument: mock_get_commit_info
(ARG002)
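Ruff's ARG002 here flags a `@patch`-injected mock that the test body never touches. A common fix is to underscore-prefix the parameter, as in this sketch (the patch target and test name are stand-ins, not the actual ones in engine_test.py):

```python
from unittest import mock


@mock.patch("os.getcwd")  # stand-in target; the real test patches an engine helper
def test_validator_passes(_mock_get_commit_info):
    # The leading underscore marks the injected mock as intentionally unused,
    # which satisfies ARG002 while the patch stays active for the test body.
    assert True
```

Keeping the parameter (rather than dropping it) is required because `mock.patch` passes the mock positionally to the decorated function.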
tests/config_test.py
219-219: Do not assert blind exception: Exception
(B017)
232-232: Avoid specifying long messages outside the exception class
(TRY003)
363-363: Avoid specifying long messages outside the exception class
(TRY003)
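B017 fires because `pytest.raises(Exception)` passes for any failure whatsoever. Asserting a specific exception type, optionally with a `match` pattern, keeps the test meaningful. A minimal sketch, independent of the project's code:

```python
import pytest


def parse_positive(value: str) -> int:
    n = int(value)  # raises ValueError for non-numeric input
    if n <= 0:
        raise ValueError("value must be positive")
    return n


def test_rejects_negative():
    # Name the exact exception (and a message pattern) instead of a blind Exception.
    with pytest.raises(ValueError, match="positive"):
        parse_positive("-3")
```

The same shape applies at line 219 of config_test.py: replacing `Exception` with the concrete error the loader raises removes the B017 warning.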
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: install (3.13, ubuntu-24.04)
- GitHub Check: install (3.14, ubuntu-24.04)
- GitHub Check: install (3.11, macos-latest)
- GitHub Check: install (3.9, ubuntu-24.04)
- GitHub Check: install (3.12, ubuntu-24.04)
- GitHub Check: install (3.9, windows-latest)
- GitHub Check: install (3.13, macos-latest)
- GitHub Check: install (3.13, windows-latest)
- GitHub Check: install (3.12, windows-latest)
- GitHub Check: install (3.14, macos-latest)
- GitHub Check: install (3.10, windows-latest)
- GitHub Check: install (3.9, macos-latest)
- GitHub Check: install (3.10, macos-latest)
- GitHub Check: install (3.14, windows-latest)
- GitHub Check: install (3.11, windows-latest)
- GitHub Check: install (3.11, ubuntu-24.04)
- GitHub Check: install (3.10, ubuntu-24.04)
- GitHub Check: Run benchmarks
🔇 Additional comments (5)
tests/config_test.py (1)
281-335: LGTM! The tomllib fallback test correctly verifies the import fallback mechanism by blocking tomllib and ensuring tomli is used. The cleanup in the finally block is appropriate.

tests/engine_test.py (1)
868-1301: LGTM! The added test classes provide excellent coverage of edge cases and validator behaviors. The tests are well structured, with:
- Appropriate use of mocking and patching
- Clear test names and docstrings
- Proper assertions
- Good coverage of pass/fail scenarios
Based on coding guidelines: ensure tests run via pytest -v and cover commit, branch, author, and CLI behaviors.
tests/util_test.py (1)
350-514: LGTM! The additional test coverage for utility functions is comprehensive and well structured:
- Proper use of the tmp_path fixture for file operations
- Good edge case coverage (nonexistent files, invalid TOML, permission errors)
- Clear test names and assertions
The tests properly exercise the utility functions' error handling paths.
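For reference, the tmp_path pattern noted above looks like this in a pytest test (the test and loader names are illustrative, not the actual ones in util_test.py):

```python
import pathlib
import tempfile


def test_load_toml_missing_file(tmp_path: pathlib.Path):
    # tmp_path is pytest's per-test temporary directory fixture, so the
    # test never touches the real filesystem layout.
    missing = tmp_path / "no-such.toml"
    assert not missing.exists()
    # A loader such as _load_toml would be expected to report the missing
    # file (or return an empty config) rather than crash at this point.


# Outside pytest, the fixture can be emulated with tempfile:
with tempfile.TemporaryDirectory() as d:
    test_load_toml_missing_file(pathlib.Path(d))
```

Because pytest creates a fresh directory per test, cases like "nonexistent file" and "invalid TOML" stay isolated from each other.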
tests/main_test.py (1)
363-380: LGTM! The additional test coverage for _get_message_content and StdinReader properly exercises edge cases like OSError handling and whitespace stripping.

tests/rule_builder_test.py (1)
160-273: LGTM! The additional test coverage for RuleBuilder and ValidationRule is comprehensive:
- Edge case handling for invalid config values (non-integer, non-list, non-string, empty values)
- Verification that invalid configs are gracefully handled by returning empty rule lists
- Complete coverage of to_dict() serialization with all fields and with minimal fields

The tests follow good practices, with clear test names and appropriate assertions.
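A to_dict() serialization of this kind is often a thin dataclass wrapper. A generic sketch under that assumption (the fields here are illustrative; the real ValidationRule schema differs):

```python
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class Rule:
    # Illustrative fields only; the actual ValidationRule has its own schema.
    regex: str = ""
    error: str = ""
    suggest: Optional[str] = None

    def to_dict(self) -> dict:
        # asdict serializes every field, so a minimal rule and a fully
        # populated rule produce dicts with the same keys.
        return asdict(self)


minimal = Rule().to_dict()
full = Rule(regex=r"^feat", error="bad type", suggest="use feat:").to_dict()
assert minimal.keys() == full.keys()
```

This is why testing both the all-fields and minimal-fields cases matters: it pins down that defaults are serialized rather than dropped.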
Summary by CodeRabbit
Tests
Chores