
📝 Add docstrings to feat_patch #131

Open

coderabbitai[bot] wants to merge 1 commit into main from coderabbitai/docstrings/6cab7ca

Conversation

coderabbitai bot commented Jan 12, 2026

Docstrings generation was requested by @laike9m in #98 (comment).

The following files were modified:

  • tests/assets/challenges/basic-foo-pyright-config/question.py
  • tests/conftest.py
  • tests/test_identical.py
  • views/challenge.py
  • views/views.py

These files were kept as they were:

  • tests/test_challenge.py

These file types are not supported:

  • docs/Contribute.md
ℹ️ Note

CodeRabbit cannot perform edits on its own pull requests yet.


coderabbitai bot commented Jan 12, 2026

Important

Review skipped

CodeRabbit bot-authored PR detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

laike9m

This comment was marked as outdated.

laike9m

This comment was marked as outdated.

laike9m

This comment was marked as outdated.

laike9m left a comment

Great work on refactoring the type-checking logic and handling Pyright configurations!

I found one bug in tests/test_identical.py.

challenge = Challenge(
    name=challenge_name, level=Level(level), code=challenge_code
)

This line should use path.open() instead of solution_file.open(). Currently, it always reads the content of solution_file regardless of the path argument.
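
For context, a minimal sketch of the change being asked for; the helper name and signature below are illustrative assumptions, only path and solution_file come from the review:

    from pathlib import Path

    def read_challenge_code(path: Path) -> str:
        """Hypothetical reduced helper: read from the path that was passed in."""
        with path.open() as f:  # previously read solution_file, ignoring `path`
            return f.read()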

laike9m

This comment was marked as off-topic.

laike9m left a comment

The pull request introduces docstrings and refactors the Pyright type-checking logic to support custom configurations within challenge files.

I found a bug in tests/test_identical.py where the helper function incorrectly reads from the solution_file fixture instead of the provided path argument, which would cause the test to always pass incorrectly.

I also suggested a few improvements in views/challenge.py to restore the "All tests passed" message and ensure consistent line number reporting.

Comment on lines +23 to +24
with solution_file.open() as f:
    challenge_code = f.read()
The helper function get_test_code should use the path argument passed to it instead of the solution_file fixture from the outer scope. Using solution_file here causes the test to compare the solution's test code with itself when called for the question file, making the test ineffective.

Suggested change:
-with solution_file.open() as f:
-    challenge_code = f.read()
+with path.open() as f:
+    challenge_code = f.read()
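
To make the failure mode concrete, here is a hedged sketch of what the test might look like; only get_test_code, path, and solution_file come from the review, while the fixture names, the test body, and the section marker are assumptions:

    from pathlib import Path

    def test_question_and_solution_share_test_code(solution_file: Path, question_file: Path):
        """Sketch of the identical-test-code check (structure assumed, not the real test)."""

        def get_test_code(path: Path) -> str:
            # The buggy version opened solution_file here, so both calls below
            # read the same file and the assertion could never fail.
            with path.open() as f:
                challenge_code = f.read()
            # Assumed convention: the test section follows a marker comment.
            return challenge_code.split("## End of your code ##", 1)[-1]

        assert get_test_code(solution_file) == get_test_code(question_file)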

Comment on lines +282 to +288
passed = True
for error_line in error_lines:
    if error_line.startswith("[pyright-config]"):
        continue
    passed = False

error_lines.append(f"\nFound {len(error_lines)} errors")
The previous implementation provided a clear "All tests passed" message when the challenge was successful. The new logic always shows "Found X errors", which is less encouraging for users when they succeed. Restoring the success message improves the user experience.

Suggested change:
-passed = True
-for error_line in error_lines:
-    if error_line.startswith("[pyright-config]"):
-        continue
-    passed = False
-error_lines.append(f"\nFound {len(error_lines)} errors")
+# Error for pyright-config will not fail the challenge
+passed = not any(
+    not line.startswith("[pyright-config]") for line in error_lines
+)
+if passed:
+    error_lines.append("\nAll tests passed")
+else:
+    error_lines.append(f"\nFound {len(error_lines)} errors")
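
As a sanity check on the suggested logic, a small self-contained sketch (the sample lines are made up) showing how [pyright-config]-prefixed lines are excluded from the pass/fail decision:

    def challenge_passed(error_lines: list[str]) -> bool:
        """A challenge passes when every reported line is a [pyright-config] notice."""
        return not any(
            not line.startswith("[pyright-config]") for line in error_lines
        )

    # Made-up examples of collected lines:
    assert challenge_passed([])                                             # nothing reported
    assert challenge_passed(["[pyright-config] unknown option 'strictX'"])  # config noise only
    assert not challenge_passed(['3:1 - error: "foo" is not defined'])      # a real error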

laike9m left a comment

The PR adds docstrings and refactors some logic to support per-challenge Pyright configuration. I found a bug in the test suite and some opportunities for optimization and UX improvement.

Comment on lines +23 to +24
with solution_file.open() as f:
    challenge_code = f.read()
The nested function get_test_code should use its parameter path to open the file. Currently, it always reads solution_file, which makes the comparison between solution and question test code ineffective.

Suggested change:
-with solution_file.open() as f:
-    challenge_code = f.read()
+with path.open() as f:
+    challenge_code = f.read()

Comment on lines +252 to 253
user_code_lines_len = len(user_code.splitlines())
for line in stdout.splitlines():
Calculate the number of lines in test_code once outside the loop to avoid redundant calculations.

Suggested change:
-user_code_lines_len = len(user_code.splitlines())
-for line in stdout.splitlines():
+user_code_lines_len = len(user_code.splitlines())
+test_code_lines_len = len(test_code.splitlines())
+for line in stdout.splitlines():


if line_number <= user_code_lines_len:
    error_lines.append(error_line % line_number)
elif line_number <= user_code_lines_len + len(test_code.splitlines()):
Use the pre-calculated line count here.

Suggested change:
-elif line_number <= user_code_lines_len + len(test_code.splitlines()):
+elif line_number <= user_code_lines_len + test_code_lines_len:
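
The snippets above suggest pyright is run on a single buffer made of the user's code followed by the challenge's test code, so reported line numbers have to be mapped back onto the right region. A minimal sketch of that mapping under this assumption, computing both lengths once as the review suggests:

    def describe_error_location(line_number: int, user_code: str, test_code: str) -> str:
        """Map a line number in the combined buffer back to user code or test code."""
        user_code_lines_len = len(user_code.splitlines())
        test_code_lines_len = len(test_code.splitlines())  # computed once, not per error line
        if line_number <= user_code_lines_len:
            return f"user code, line {line_number}"
        elif line_number <= user_code_lines_len + test_code_lines_len:
            return f"test code, line {line_number - user_code_lines_len}"
        return "outside the submitted code"

    # Example: 3 lines of user code followed by 2 lines of test code.
    assert describe_error_location(2, "a\nb\nc", "x\ny") == "user code, line 2"
    assert describe_error_location(4, "a\nb\nc", "x\ny") == "test code, line 1"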

        continue
    passed = False

error_lines.append(f"\nFound {len(error_lines)} errors")
Restore the 'All tests passed' message when the challenge is successful, and fix the pluralization of 'errors'.

Suggested change:
-error_lines.append(f"\nFound {len(error_lines)} errors")
+if passed:
+    error_lines.append("\nAll tests passed")
+else:
+    error_lines.append(f"\nFound {len(error_lines)} error{'s' if len(error_lines) > 1 else ''}")
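
A tiny, purely illustrative sketch of the pluralization being suggested:

    def error_summary(count: int) -> str:
        """Format the error count with singular/plural agreement."""
        return f"Found {count} error{'s' if count > 1 else ''}"

    assert error_summary(1) == "Found 1 error"
    assert error_summary(3) == "Found 3 errors"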

laike9m left a comment

The pull request adds valuable docstrings and refactors the type-checking logic to support custom Pyright configurations. However, there is a critical bug in the test suite and some UX/efficiency improvements that should be addressed.

Comment on lines +23 to +24
with solution_file.open() as f:
    challenge_code = f.read()
Bug: The helper function get_test_code should use the path argument to open the file. Currently, it always reads from solution_file (the fixture), which means it compares the solution's test code with itself, making the test ineffective.

Suggested change:
-with solution_file.open() as f:
-    challenge_code = f.read()
+with path.open() as f:
+    challenge_code = f.read()


if line_number <= user_code_lines_len:
    error_lines.append(error_line % line_number)
elif line_number <= user_code_lines_len + len(test_code.splitlines()):
Optimization: Calculate test_code_lines_len = len(test_code.splitlines()) once outside the loop to avoid redundant calculations for every error line.

Suggested change:
-elif line_number <= user_code_lines_len + len(test_code.splitlines()):
+elif line_number <= user_code_lines_len + test_code_lines_len:

        continue
    passed = False

error_lines.append(f"\nFound {len(error_lines)} errors")
UX: Restore the 'All tests passed' message for successful challenges and improve the pluralization of the error count.

Suggested change:
-error_lines.append(f"\nFound {len(error_lines)} errors")
+if passed:
+    error_lines.append("\nAll tests passed")
+if error_lines:
+    error_lines.append(f"\nFound {len(error_lines)} error{'s' if len(error_lines) > 1 else ''}")
