for more information, see https://pre-commit.ci
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main    #1071   +/-   ##
=======================================
  Coverage   91.93%   91.93%
=======================================
  Files          51       51
  Lines        7664     7664
=======================================
  Hits         7046     7046
  Misses        618      618
=======================================
```
@melonora I summon you as a "Windows x Dask" expert. Do you have any insight into why this specific Windows job fails (the dask computational graph shows double the size): https://github.com/scverse/spatialdata/actions/runs/21940345294/job/63754609282?pr=1071? The failing test means that there may be some performance penalty in that environment. Anyway, since the result is correct and the test doesn't fail for other OS versions or for more recent Dask, this will not block the merge.
I will check when back at home, but yes I have my suspicion |
Thanks I will fix the docs meanwhile. |
Docs are green! |
Reporting: I think the failing test is fundamentally flawed, depending on its actual purpose. We are testing dask internals here rather than ensuring that we don't have more entry points into dask than expected in our codebase. My suggestion would be either to ensure that there are no more calls than expected from the spatialdata side (e.g. if you write, there should be only one entry point), or to test that the number of times chunks are accessed falls within a range of 2*chunks and, if that test fails, report it upstream to dask. @LucaMarconato WDYT?
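To make the second option concrete, here is a minimal sketch of a tolerance-based check. The array shape, chunking, and the 2x threshold are illustrative only, not the actual spatialdata test:

```python
import dask.array as da

# Illustrative array: 4 chunks of 50x50 (not the actual spatialdata data).
arr = da.ones((100, 100), chunks=(50, 50))

n_chunks = arr.npartitions           # number of chunks in the collection
n_tasks = len(arr.__dask_graph__())  # total tasks in the dask graph

# Instead of pinning an exact dask-internal task count (which changes
# between dask versions and OSes), allow up to 2 * n_chunks; a larger
# graph would indicate extra entry points on our side, or a dask
# regression worth reporting upstream.
assert n_tasks <= 2 * n_chunks, f"{n_tasks} tasks for {n_chunks} chunks"
```

A range-based assertion like this would have tolerated the doubled graph seen on the failing Windows job while still catching pathological growth.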
Fixes #1039
I changed the test setup to use `uv add`, which actually makes sense and results in a single solve per env, as I've been preaching since the dawn of time. Don't use `[uv] pip install` in CI, people. Ever. Why does nobody trust me on that?
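For illustration, the difference in a CI step might look like the following config fragment (the extras flag shown is an assumption, not this repo's actual workflow):

```shell
# Preferred: one locked resolve for the whole environment.
# `uv sync` solves pyproject.toml once and installs from uv.lock,
# so every CI job sees the same resolved set of packages.
uv sync --all-extras

# Avoid in CI: per-invocation resolution with no lockfile, which can
# resolve differently across jobs and re-solve on every step.
# uv pip install -e ".[test]"
```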