Conversation
Docstrings generation was requested by @laike9m in #98 (comment). The following files were modified:

- `tests/assets/challenges/basic-foo-pyright-config/question.py`
- `tests/conftest.py`
- `tests/test_identical.py`
- `views/challenge.py`
- `views/views.py`
Note: Review skipped — CodeRabbit bot authored PR detected.
laike9m left a comment:

Great work on refactoring the type-checking logic and handling Pyright configurations! I found one bug in `tests/test_identical.py`.
```python
challenge = Challenge(
    name=challenge_name, level=Level(level), code=challenge_code
)
```

This line should use `path.open()` instead of `solution_file.open()`. Currently, it always reads the content of `solution_file` regardless of the `path` argument.
laike9m left a comment:

The pull request introduces docstrings and refactors the Pyright type-checking logic to support custom configurations within challenge files.

I found a bug in `tests/test_identical.py` where the helper function incorrectly reads from the `solution_file` fixture instead of the provided `path` argument, which would cause the test to always pass incorrectly.

I also suggested a few improvements in `views/challenge.py` to restore the "All tests passed" message and ensure consistent line number reporting.
```python
with solution_file.open() as f:
    challenge_code = f.read()
```

The helper function `get_test_code` should use the `path` argument passed to it instead of the `solution_file` fixture from the outer scope. Using `solution_file` here causes the test to compare the solution's test code with itself when called for the question file, making the test ineffective.

Suggested change:

```python
with path.open() as f:
    challenge_code = f.read()
```
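For readers outside the PR, the failure mode described above can be reproduced in isolation. This is a hypothetical sketch (the names mirror the review's description; file contents are invented), not the project's actual test code:

```python
from pathlib import Path
import tempfile

# Hypothetical standalone reconstruction of the bug pattern the review
# describes; file names and contents are made up for illustration.
tmp = Path(tempfile.mkdtemp())
solution_file = tmp / "solution.py"
question_file = tmp / "question.py"
solution_file.write_text("def foo() -> int:\n    return 1\n")
question_file.write_text("def foo(): ...\n")

def get_test_code_buggy(path: Path) -> str:
    # Bug: reads the captured outer-scope solution_file, ignoring `path`.
    with solution_file.open() as f:
        return f.read()

def get_test_code_fixed(path: Path) -> str:
    # Fix: read the file the caller actually asked for.
    with path.open() as f:
        return f.read()

# The buggy helper returns the same content for both files, so an
# equality check between question and solution always passes.
print(get_test_code_buggy(question_file) == get_test_code_buggy(solution_file))  # True
print(get_test_code_fixed(question_file) == get_test_code_fixed(solution_file))  # False
```

With the fix, the comparison actually exercises both files, so a divergence between question and solution is detected.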
```python
passed = True
for error_line in error_lines:
    if error_line.startswith("[pyright-config]"):
        continue
    passed = False

error_lines.append(f"\nFound {len(error_lines)} errors")
```

The previous implementation provided a clear "All tests passed" message when the challenge was successful. The new logic always shows "Found X errors", which is less encouraging for users when they succeed. Restoring the success message improves the user experience.

Suggested change:

```python
# Error for pyright-config will not fail the challenge
passed = not any(
    not line.startswith("[pyright-config]") for line in error_lines
)
if passed:
    error_lines.append("\nAll tests passed")
else:
    error_lines.append(f"\nFound {len(error_lines)} errors")
```
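As an aside, the double-negated `any()` in the suggestion is logically equivalent to a plain `all()` by De Morgan's laws, which some may find easier to read. A small sketch with made-up error lines:

```python
error_lines = [
    "[pyright-config] unrecognized option ignored",
    'error: Expression of type "str" is not assignable to "int"',
]

# De Morgan: not any(not P(x) for x) == all(P(x) for x)
passed_any = not any(not line.startswith("[pyright-config]") for line in error_lines)
passed_all = all(line.startswith("[pyright-config]") for line in error_lines)
print(passed_any, passed_all)  # False False: a real type error is present

# With only config warnings, the challenge still counts as passed.
only_config = ["[pyright-config] unrecognized option ignored"]
print(all(line.startswith("[pyright-config]") for line in only_config))  # True
```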
laike9m left a comment:

The PR adds docstrings and refactors some logic to support per-challenge Pyright configuration. I found a bug in the test suite and some opportunities for optimization and UX improvement.
```python
with solution_file.open() as f:
    challenge_code = f.read()
```

The nested function `get_test_code` should use its parameter `path` to open the file. Currently, it always reads `solution_file`, which makes the comparison between solution and question test code ineffective.

Suggested change:

```python
with path.open() as f:
    challenge_code = f.read()
```
```python
user_code_lines_len = len(user_code.splitlines())
for line in stdout.splitlines():
```

Calculate the number of lines in `test_code` once outside the loop to avoid redundant calculations.

Suggested change:

```python
user_code_lines_len = len(user_code.splitlines())
test_code_lines_len = len(test_code.splitlines())
for line in stdout.splitlines():
```
```python
if line_number <= user_code_lines_len:
    error_lines.append(error_line % line_number)
elif line_number <= user_code_lines_len + len(test_code.splitlines()):
```

Use the pre-calculated line count here.

Suggested change:

```python
elif line_number <= user_code_lines_len + test_code_lines_len:
```
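The hoisting suggested here is a standard loop-invariant move. A minimal sketch (invented snippets; `user_code` and `test_code` stand in for the view's variables) of how reported line numbers map back to the two code regions:

```python
user_code = "a: int = 1\nb: int = 2"            # 2 lines
test_code = "assert a\nassert b\nassert a + b"  # 3 lines

# Compute the invariant line counts once, before the loop.
user_code_lines_len = len(user_code.splitlines())
test_code_lines_len = len(test_code.splitlines())

for line_number in (1, 3, 6):
    if line_number <= user_code_lines_len:
        region = "user code"
    elif line_number <= user_code_lines_len + test_code_lines_len:
        region = "test code"
    else:
        region = "out of range"
    print(line_number, region)
# 1 user code / 3 test code / 6 out of range
```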
```python
        continue
    passed = False

error_lines.append(f"\nFound {len(error_lines)} errors")
```

Restore the "All tests passed" message when the challenge is successful, and fix the pluralization of "errors".

Suggested change:

```python
if passed:
    error_lines.append("\nAll tests passed")
else:
    error_lines.append(f"\nFound {len(error_lines)} error{'s' if len(error_lines) > 1 else ''}")
```
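The inline pluralization in the suggestion can also be factored into a small helper; `count_noun` is a hypothetical name, not part of the codebase:

```python
def count_noun(n: int, noun: str) -> str:
    # Naive English pluralization: appends "s" whenever n != 1.
    return f"{n} {noun}{'' if n == 1 else 's'}"

print(count_noun(1, "error"))  # 1 error
print(count_noun(3, "error"))  # 3 errors

# Usage in the error summary:
error_lines = ["error: bad type", "error: missing return"]
error_lines.append(f"\nFound {count_noun(len(error_lines), 'error')}")
print(error_lines[-1])
```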
laike9m left a comment:

The pull request adds valuable docstrings and refactors the type-checking logic to support custom Pyright configurations. However, there is a critical bug in the test suite and some UX/efficiency improvements that should be addressed.
```python
with solution_file.open() as f:
    challenge_code = f.read()
```

Bug: The helper function `get_test_code` should use the `path` argument to open the file. Currently, it always reads from `solution_file` (the fixture), which means it compares the solution's test code with itself, making the test ineffective.

Suggested change:

```python
with path.open() as f:
    challenge_code = f.read()
```
```python
if line_number <= user_code_lines_len:
    error_lines.append(error_line % line_number)
elif line_number <= user_code_lines_len + len(test_code.splitlines()):
```

Optimization: Calculate `test_code_lines_len = len(test_code.splitlines())` once outside the loop to avoid redundant calculations for every error line.

Suggested change:

```python
elif line_number <= user_code_lines_len + test_code_lines_len:
```
```python
        continue
    passed = False

error_lines.append(f"\nFound {len(error_lines)} errors")
```

UX: Restore the "All tests passed" message for successful challenges and improve the pluralization of the error count.

Suggested change:

```python
if passed:
    error_lines.append("\nAll tests passed")
if error_lines:
    error_lines.append(f"\nFound {len(error_lines)} error{'s' if len(error_lines) > 1 else ''}")
```
These files were kept as they were:

- `tests/test_challenge.py`

These file types are not supported:

- `docs/Contribute.md`