[docs] add MathOptInterface.jl to the list of packages #92

Merged
giordano merged 1 commit into JuliaTesting:main from odow:patch-1
Feb 23, 2026

Conversation

@odow (Contributor) commented Feb 23, 2026

Hey!

I started using this in https://github.com/jump-dev/MathOptInterface.jl

Some may find our workflow interesting.

There are a lot of tests in MOI, and it is almost entirely bottlenecked by compilation time. (I should probably experiment with Base.Experimental.@optlevel.)

We shard the tests into directories in /test, and then use ParallelTestRunner to parallelise over the files in each directory.

Each file is a module that automatically detects and runs all of the functions whose names start with test_.
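To illustrate the pattern (a minimal sketch under assumed names, not MOI's actual runner): a test module whose runner finds and calls every zero-argument function named `test_*` defined inside it.

```julia
# Hedged sketch of the "module of test_ functions" pattern described
# above. ExampleTests and runtests are illustrative names.
module ExampleTests

test_addition() = @assert 1 + 1 == 2

test_strings() = @assert uppercase("moi") == "MOI"

function runtests()
    count = 0
    for name in names(@__MODULE__; all = true)
        # Only call functions whose names start with "test_".
        startswith(string(name), "test_") || continue
        getfield(@__MODULE__, name)()
        count += 1
    end
    return count
end

end # module

ExampleTests.runtests()
```

Adding a test is then just defining one more `test_*` function; no registration step is needed.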

I find this works much better than @testset, where I kept leaking stuff into setup and global scope and it was hard to maintain.

Now adding a test is: choose a file, add a function.

You can improve parallelisation by splitting tests into a new file.

The biggest downside is that it's easy to add two functions with the same name that overwrite each other. So there's a test to avoid that:
https://github.com/jump-dev/MathOptInterface.jl/blob/ea3cf8ca0060d776bb351fe4f7e1698d308f105c/test/General/test_errors.jl#L430-L453
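A hedged sketch of that safeguard (illustrative only, not MOI's actual implementation at the link above): scan the test source files for `function test_*` definitions and report any name that appears more than once, since a duplicate silently overwrites the earlier definition.

```julia
# Hedged sketch: detect test_ functions defined more than once across
# a set of source files. The function name and regex are assumptions
# for illustration.
function duplicate_test_names(files)
    seen = Dict{String,String}()   # test name => file it was first seen in
    duplicates = String[]
    for file in files
        for line in eachline(file)
            m = match(r"^function (test_\w+)", line)
            m === nothing && continue
            name = m.captures[1]
            if haskey(seen, name)
                push!(duplicates, "$name defined in $(seen[name]) and $file")
            else
                seen[name] = file
            end
        end
    end
    return duplicates
end
```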

@giordano (Collaborator)

Thanks!

@giordano giordano merged commit 10225a7 into JuliaTesting:main Feb 23, 2026
2 checks passed
@vchuravy (Member)

@odow (Contributor, Author) commented Feb 23, 2026

On 32-bit it parallelised only over a single thread. Forcing more jobs made it much slower. And it was faster still to not use PTR at all.

@odow odow deleted the patch-1 branch February 23, 2026 20:29
@giordano (Collaborator)

> On 32-bit it parallelised only over a single thread.

You can always pass --jobs to force however many parallel jobs you want. That's what I usually do on macOS: https://github.com/NumericalEarth/Breeze.jl/blob/63af70c3fbb3934a57467ff03f74e2a4156134bf/.github/workflows/CI.yml#L96-L100. But it's not clear to me why the default would be a single job with a 32-bit Julia. Maybe it's because available_memory() in

```julia
function default_njobs(; cpu_threads = Sys.CPU_THREADS, free_memory = available_memory())
    jobs = cpu_threads
    memory_jobs = Int64(free_memory) ÷ (2 * 2^30)
    return max(1, min(jobs, memory_jobs))
end
```

reports 2 GB?

@giordano (Collaborator) commented Feb 23, 2026

Ah, this is hilarious, the problem is

```julia
memory_jobs = Int64(free_memory) ÷ (2 * 2^30)
```

because on 32-bit Julia:

```julia
julia> 2 * 2^30
-2147483648
```

This needs to be fixed 😃
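For context: on 32-bit Julia the default integer type is Int32, so `2 * 2^30 == 2^31` wraps around to `typemin(Int32)`, making `memory_jobs` negative, and `max(1, min(jobs, memory_jobs))` then collapses to 1. A hedged sketch of a fix (an assumption, not necessarily the patch merged upstream; the `available_memory()` default is dropped so the sketch is self-contained) is to compute the 2 GiB divisor in Int64:

```julia
# The wrap-around can be reproduced on any platform with Int32 math:
# Int32(2) * Int32(2)^30 overflows to typemin(Int32) == -2147483648.
#
# Hedged fix sketch: force the 2 GiB divisor into Int64 so it cannot
# overflow, even when the native Int is Int32.
const TWO_GIB = Int64(2) * 2^30   # 2147483648, safe on 32- and 64-bit

function default_njobs_fixed(; cpu_threads, free_memory)
    jobs = cpu_threads
    memory_jobs = Int64(free_memory) ÷ TWO_GIB
    return max(1, min(jobs, memory_jobs))
end
```

With a negative divisor the original version always returned 1 job, which would explain the single-threaded behaviour odow saw on 32-bit.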

@giordano (Collaborator) commented Feb 23, 2026

> Forcing more made it much slower.

Wait, I just noticed this, and it's surprising: there's some extra compilation latency involved when using separate parallel processes, but why would the overall speedup be substantially different between 32- and 64-bit Julia builds? 🤔

@odow (Contributor, Author) commented Feb 23, 2026

Nah, I meant: given that PTR used only one thread, there is overhead to using PTR compared to not using it. There's a similar overhead on 64-bit, but there the parallelism was a net win.
