
feat: enable coverage check for all languages #1525

Open

mohammedahmed18 wants to merge 1 commit into main from feat/enable-coverage-check-all-languages

Conversation

@mohammedahmed18
Contributor

Summary

  • Remove the Python-only gate (if is_python() else True) on coverage checking in function_optimizer.py
  • JS/TS optimizations now also require test coverage >= 60% threshold before proceeding, matching Python behavior
  • Coverage infrastructure for Jest and Vitest already exists and is functional — only the validation was skipped
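The gate removal amounts to dropping a language check, roughly like this (a minimal sketch: `COVERAGE_THRESHOLD`, the `coverage_ok` helper, and the surrounding logic are illustrative stand-ins, not the real function_optimizer.py code):

```python
# Sketch of the guard removal described above; names and structure are
# simplified stand-ins for the real codeflash code.
COVERAGE_THRESHOLD = 60.0  # percent, per the PR description


def coverage_ok(coverage_percent: float, language: str) -> tuple[bool, bool]:
    # Before this PR: non-Python languages skipped the check entirely.
    before = coverage_percent >= COVERAGE_THRESHOLD if language == "python" else True
    # After this PR: the same threshold applies to every language.
    after = coverage_percent >= COVERAGE_THRESHOLD
    return before, after


# A 45%-covered JS function used to pass the gate; now it fails it.
print(coverage_ok(45.0, "javascript"))  # (True, False)
```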

Test plan

  • Verify coverage collection already works for Jest (--coverage --coverageReporters=json) and Vitest (--coverage --coverage.reporter=json)
  • Verify coverage_critic() and JestCoverageUtils handle JS/TS coverage data correctly
  • Run JS/TS optimization end-to-end to confirm coverage is checked and reported
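For reference, Jest's json reporter emits Istanbul-format coverage data: a map from file path to per-file objects whose "s" key holds statement hit counts. A sketch of how statement coverage could be computed from such a file (`jest_coverage_percent` is a hypothetical helper for illustration, not the actual JestCoverageUtils API):

```python
import json


def jest_coverage_percent(coverage_json_path: str) -> float:
    """Compute overall statement coverage from an Istanbul-style JSON file.

    Assumes the layout Jest's json reporter produces: top-level keys are
    file paths, and each entry's "s" map records hit counts per statement.
    """
    with open(coverage_json_path) as f:
        data = json.load(f)
    total = covered = 0
    for file_data in data.values():
        hits = file_data.get("s", {})
        total += len(hits)
        covered += sum(1 for count in hits.values() if count > 0)
    return 100.0 * covered / total if total else 0.0
```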

🤖 Generated with Claude Code

Remove the Python-only gate on coverage checking so JS/TS optimizations
also require adequate test coverage before proceeding.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@claude
Contributor

claude bot commented Feb 18, 2026

PR Review Summary

Prek Checks

All checks passed. Ruff check and ruff format both passed with no issues.

Mypy

No new type errors introduced by this PR. The 146 mypy errors reported on function_optimizer.py are all pre-existing (confirmed by checking the same lines on main).

Code Review

No critical issues found. This is a clean, minimal change:

  • First hunk (line ~1907): Removes if is_python() else True guard, so coverage_critic() is now called unconditionally for all languages.
  • Second hunk (line ~2350): Removes if is_python() and guard, so coverage validation now applies to all languages in the LSP/force path as well.

Both changes are consistent. coverage_critic() accepts CoverageData | None and is language-agnostic — it simply checks coverage >= COVERAGE_THRESHOLD. The PR description confirms JS/TS coverage infrastructure (Jest --coverage / Vitest --coverage) already exists and produces CoverageData objects.

Test Coverage

File                                          PR Branch   Main Branch   Delta
codeflash/optimization/function_optimizer.py  18%         18%           0%
  • No coverage regression. Coverage is unchanged at 18% for the modified file.
  • The low absolute coverage is pre-existing — function_optimizer.py is a 3000+ line orchestration module that's difficult to unit test in isolation.
  • The changed lines (removing is_python() guards) are in code paths that require full optimization pipeline setup to exercise, so they are not covered by unit tests on either branch.
  • Test suite: 2374 passed, 57 skipped, 8 failed (all failures are pre-existing in test_tracer.py, unrelated to this PR).

Last updated: 2026-02-18

@aseembits93
Contributor

@claude why are some ci workflows failing?

@claude
Contributor

claude bot commented Feb 18, 2026

Claude finished @aseembits93's task in 2m 32s.


CI Failure Analysis

  • Check CI status for this PR
  • Investigate failing workflows
  • Summarize findings

4 out of 19 workflows failed. All 4 failures are caused by the CodeFlash AI backend service (app.codeflash.ai) returning errors — not by the code changes in this PR. Here's the breakdown:

  • E2E - Tracer Replay: Backend returned 500 (Internal server error) for test generation and optimization requests. The final failure is an AssertionError: Failed to find coverage data for funcA, because the optimization never completed successfully.
  • E2E - Async: Backend returned 500 (failed to generate valid async test code) and later hit ReadTimeout (app.codeflash.ai port 443, read timeout=90s). All 3 retry attempts failed.
  • E2E - JS CommonJS Function: Backend hit ReadTimeout on app.codeflash.ai during test generation for fibonacci. All 3 retry attempts failed with "NO TESTS GENERATED".
  • E2E - JS ESM Async: Backend hit ReadTimeout on app.codeflash.ai during test generation for processItemsSequential. All 3 retry attempts failed with "NO TESTS GENERATED".

Summary: These are all infrastructure/backend flakiness issues — the CodeFlash API service was returning 500 errors and timing out during this CI run. None of the failures are related to the code change in this PR (removing the is_python() guard on coverage checking). The 15 other workflows (including Lint, Mypy, unit tests, Windows unit tests, and several other E2E tests) all passed.

These E2E tests are inherently flaky since they depend on an external AI service. A re-run should resolve them if the backend is healthy.

