Hi!

Recently I checked Profile-Guided Optimization (PGO) improvements on multiple projects. The results are here. According to these tests, PGO helps improve performance in many cases. That's why I think trying to optimize Blaze (its Rust part) with PGO could be a good idea.
I can suggest the following action points:
- Perform PGO benchmarks on Blaze. If they show improvements, add a note to the documentation about possible Blaze performance improvements with PGO.
- Provide an easier way (e.g. a build option) to build Blaze with PGO. This can help end-users and maintainers, since they would be able to optimize Blaze for their own workloads.
- Optimize the pre-built binaries with PGO.
Since Blaze is a library, I can suggest the following way to optimize it with PGO:
1. Prepare a binary that exercises Blaze's Rust part (I don't know how hard this is to implement for Blaze)
2. Compile this binary with cargo-pgo (link below)
3. Run this binary on a sample workload
4. Collect the profiles and recompile the library with them
5. Benchmark the usual vs the PGO-optimized build
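The steps above can be sketched with cargo-pgo roughly as follows. The binary name and workload file are placeholders, and the exact profile output paths depend on the cargo-pgo version:

```shell
# One-time setup
cargo install cargo-pgo
rustup component add llvm-tools-preview

# Steps 1-2: build the wrapper binary with PGO instrumentation
cargo pgo build

# Steps 3-4: run it on a sample workload; the instrumented binary
# dumps .profraw files into the target directory
./target/release/blaze-wrapper sample-workload.parquet

# Step 5: recompile with the collected profiles, then benchmark
# this build against a regular `cargo build --release` build
cargo pgo optimize build
```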
Maybe testing Post-Link Optimization techniques (like LLVM BOLT) would be interesting too (Clang and Rustc already use BOLT in addition to PGO), but I recommend starting with regular PGO.
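If BOLT turns out to be worth testing later, cargo-pgo has a `bolt` subcommand that can combine it with PGO. This is only a sketch: it assumes LLVM BOLT is installed, and the binary name and workload are placeholders:

```shell
# PGO round: instrument, run, collect profiles
cargo pgo build
./target/release/blaze-wrapper sample-workload.parquet

# BOLT round on top of the PGO-optimized build:
# build a PGO-optimized binary instrumented for BOLT, run it again,
# then produce the final PGO + BOLT optimized binary
cargo pgo bolt build --with-pgo
./target/release/blaze-wrapper sample-workload.parquet
cargo pgo bolt optimize --with-pgo
```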
For Rust projects, I recommend starting to experiment with PGO via cargo-pgo.
Here are some examples of how PGO optimization is integrated in other projects:
It sounds attractive; however, it's not easy to build a binary for profiling, because the native code is built into a lib and dynamically loaded into Spark through JNI. We also tried some profiling approaches like flamegraphing, which don't need a binary. I doubt whether PGO can work like that?
> It sounds attractive; however, it's not easy to build a binary for profiling, because the native code is built into a lib and dynamically loaded into Spark through JNI
You can try to build a "wrapping" binary only for the native part (without the JNI stuff), run it on a sample workload, collect the profiles, and then use these profiles during the normal Blaze compilation (PGO-optimize the native part, then build the JNI stuff around it).
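A minimal sketch of such a wrapping binary. Here `execute_plan` is a hypothetical stand-in, stubbed out so the sketch compiles; in a real wrapper it would call the actual Blaze native entry points on representative input data:

```rust
// Hypothetical "wrapping" binary that drives Blaze's native Rust part
// directly (no JNI) so PGO instrumentation profiles can be collected.

// Stub standing in for a real Blaze entry point (assumption: the real
// wrapper would call the actual native execution functions instead).
fn execute_plan(batch: &[i64]) -> i64 {
    batch.iter().sum()
}

fn main() {
    // Sample workload: run the hot path repeatedly so the PGO
    // instrumentation counters observe representative behavior.
    let batch: Vec<i64> = (0..1_000).collect();
    let mut checksum = 0;
    for _ in 0..100 {
        checksum += execute_plan(&batch);
    }
    println!("checksum: {checksum}");
}
```

Running this instrumented binary produces the `.profraw` files that the optimized rebuild of the library then consumes.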
> We also tried some profiling approaches like flamegraphing, which don't need a binary. I doubt whether PGO can work like that?
In theory, yes: it's possible via Sampling PGO (see the Clang docs). The same instructions should be valid for Rustc as well, but I haven't tested such a scenario yet; most of my experience is with instrumentation PGO.
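A rough sketch of what sampling PGO could look like here, assuming Linux `perf` with LBR support and the AutoFDO `create_llvm_prof` tool are available; the rustc flag is nightly-only and the binary name is a placeholder:

```shell
# Record branch samples while running a sample workload
# (no instrumented build needed, just a release binary)
perf record -b -- ./blaze-wrapper sample-workload.parquet

# Convert the perf data into an LLVM sample profile
create_llvm_prof --binary=./blaze-wrapper --out=blaze.prof

# Rebuild using the sample profile (unstable rustc flag,
# hence the nightly toolchain)
RUSTFLAGS="-Zprofile-sample-use=blaze.prof" cargo +nightly build --release
```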
UPD: According to the Clang documentation, it's also possible to do instrumentation PGO with shared libraries; the PGO profiles will be dumped for each library. The pydantic-core project uses this approach to build a PGO-optimized version of its library.
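For the JNI case above, that means the instrumented `libblaze.so` can dump its own profile even when loaded by the JVM. A sketch, assuming the library name and paths (standard LLVM profile-runtime behavior; the `%m` pattern gives each loaded binary/library its own profile file):

```shell
# Direct each loaded binary/shared library to its own .profraw file
export LLVM_PROFILE_FILE="/tmp/blaze-pgo/%m.profraw"

# ... run the Spark job that loads the instrumented libblaze.so ...

# Merge the raw profiles into one file usable for the optimized rebuild
llvm-profdata merge -o blaze.profdata /tmp/blaze-pgo/*.profraw
```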