Conversation
I sent a PR for documentation purposes. I'm not planning to merge it for now.
It may be something very specific to the Remote codebase; at least I am not seeing any significant improvement over main when compiling my work project. Compile times are pretty much the same, with maybe slightly faster test loading (9.8s vs 10.3s in …).
@michallepicki I could not measure such a large difference in Livebook either. My measurements tell me that this is faster because it avoids the single-core bottleneck of the code server, so the more modules and the more cores, the more likely this will yield positive results. The overall goal is to expose this as an option.
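For intuition, here is a minimal sketch of the bottleneck being described; it is not from the PR, and the module names and count are made up. Module definition work fans out across all schedulers, but each resulting module is registered through the single code server process, which is where the serialization shows up as module count and core count grow:

```elixir
# Hypothetical micro-benchmark (not from the PR): defining work spreads
# across cores via Task.async_stream, but registering each module still
# funnels through the single code server process.
defmodule CodeServerBottleneck do
  def run(n \\ 200) do
    body =
      quote do
        def hello, do: :world
      end

    {micros, _} =
      :timer.tc(fn ->
        1..n
        # max_concurrency defaults to System.schedulers_online/0
        |> Task.async_stream(fn i ->
          Module.create(:"Elixir.Demo#{i}", body, __ENV__)
        end)
        |> Stream.run()
      end)

    IO.puts("Defined #{n} modules in #{div(micros, 1000)}ms")
  end
end
```

Running this with a larger `n` on a many-core machine is where, per the comment above, the code server serialization would be expected to dominate.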
Have you measured memory usage of this change? It seems to use significantly more memory than the current behaviour, at least with the project I'm trying. When I compile the project inside an Ubuntu VM with 16 GiB of memory, the OOM killer always reaps the compilation. In htop I see memory usage of up to 21 GiB, whereas the current behaviour barely uses 500 MiB.
Tested with the latest OTP 27 and 28 releases.
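For anyone else reproducing this, one rough way to see how much of that growth lives inside the BEAM itself (as opposed to OS-level numbers from htop) is `:erlang.memory/1`. The file path below is hypothetical and the snippet is only a sketch:

```elixir
# Rough BEAM-side memory probe. The path is hypothetical; the 21 GiB
# figure in the report above comes from htop, not from this snippet.
before = :erlang.memory(:total)
Code.compile_file("lib/big_module.ex")
grew = :erlang.memory(:total) - before
IO.puts("BEAM memory grew by ~#{div(grew, 1024 * 1024)} MiB")
```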
This explores a different approach to executing module definitions: it uses the evaluator (interpreted) rather than the compiler (compiled). I tried this a long time ago and it was never faster, but I assume a combination of the JIT and other optimizations has since made it viable.
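As a conceptual illustration only (the module names are made up and this is not the PR's implementation), the difference between the two paths can be pictured with the existing `Code.compile_string/1` and `Code.eval_string/1` APIs:

```elixir
compiled = "defmodule CompiledGreeter do def hello, do: :world end"
evaluated = "defmodule EvaluatedGreeter do def hello, do: :world end"

# Today's path: the source, module body included, goes through the compiler.
[{CompiledGreeter, _bytecode}] = Code.compile_string(compiled)

# Closer in spirit to the explored approach: the top level runs through the
# evaluator, yet defmodule still produces an ordinary compiled module.
Code.eval_string(evaluated)

:world = CompiledGreeter.hello()
:world = EvaluatedGreeter.hello()
```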
Early experiments are very promising: this mode is 5x faster for Remote's codebase compared to 1.19 and 3x faster than main (which already includes other improvements).
The next step is to expose this as an option and do some additional testing to verify stacktraces are solid. Note this does not change the generated artefact in any way: each function in the module is still compiled and optimized within the generated `.beam` file.
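One way to sanity-check that claim is to inspect the chunks of the produced bytecode with `:beam_lib.info/1`; a regular `.beam` binary carries the usual chunks either way. This is only a sketch and `DemoCheck` is a made-up name:

```elixir
# Sanity check: the generated artefact is a regular .beam binary with the
# usual chunks, regardless of how the module body was executed.
[{DemoCheck, bytecode}] =
  Code.compile_string("defmodule DemoCheck do def f, do: :ok end")

bytecode
|> :beam_lib.info()
|> Keyword.fetch!(:chunks)
|> Enum.map(fn {chunk_id, _offset, _size} -> List.to_string(chunk_id) end)
|> IO.inspect(label: "chunks")  # e.g. ["AtU8", "Code", "ExpT", ...]
```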