Recently, I expanded my contributions from the JS surface layer down to the lower-level compiler pipeline, which takes bytecode and produces machine instructions, or produces a new graph for a further compiler stage.
These compilers are called Maglev and Turbolev.
This post will describe how I expanded my contribution area and merged my first four CLs for the Turbolev project.
Background
V8 and JIT
V8 is the JavaScript engine that powers Chrome, Node.js, Deno, and more. It has a multi-tier JIT compiler. Just-in-time (JIT) compilation is compilation (of computer code) during execution of a program (at run time) rather than before execution. [FYI: Wikipedia]
For example, the code below is compiled to bytecode and executed in the virtual machine, i.e., it does not start out as actual machine instructions.
Code:
Bytecode:
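(The original post showed these as screenshots.) As a hedged illustration, here is a simple function together with the approximate Ignition bytecode it generates. The bytecode listing is illustrative; the real output comes from running `d8 --print-bytecode` and varies across V8 versions:

```javascript
// A simple function; V8's Ignition interpreter first compiles it to bytecode.
function add(a, b) {
  return a + b;
}

// Approximate bytecode (illustrative; get the real listing with
// `d8 --print-bytecode add.js` -- output differs between V8 versions):
//   Ldar a1        ; load argument b into the accumulator
//   Add a0, [0]    ; accumulator = a + accumulator, with feedback slot [0]
//   Return         ; return the accumulator
console.log(add(1, 2)); // prints 3
```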
This bytecode acts as machine instructions for V8’s Virtual Machine.
When the function becomes “hot” enough (i.e., is called frequently), the JIT compiler optimizes it into native machine instructions for faster execution.
The function add is compiled as:
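(The machine-code listing here was an image in the original post.) As a sketch of how a function becomes “hot”: repeatedly calling it drives the tiering manager to promote it to optimized code. The loop below is my own example; the call-count thresholds are internal heuristics, and `--trace-opt` is a d8 flag for watching tier-up decisions:

```javascript
// Calling a function many times makes it "hot", which lets V8's tiering
// manager promote it from interpreted bytecode to optimized machine code.
// (Run with `d8 --trace-opt hot.js` to watch tier-up decisions; exact
// thresholds are internal and change between versions.)
function add(a, b) {
  return a + b;
}

let sum = 0;
for (let i = 0; i < 100000; i++) {
  sum = add(sum, 1); // hot loop: repeated calls trigger JIT optimization
}
console.log(sum); // prints 100000
```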
The demonstration above used V8’s current top-tier JIT pipeline: Turbofan and Turboshaft.
Maglev and Turbofan
As mentioned, V8 has a multi-tier JIT compiler. Maglev is a mid-tier JIT compiler. Turbofan and Turboshaft produce highly optimized code but take longer to compile.
| Compiler | Compile speed | Optimization | Complexity |
| --- | --- | --- | --- |
| Maglev | Fast | Lower optimization | Simple |
| Turbofan | Slower | Higher optimization | Complex |
V8’s tiering manager decides which compiler to use for a given function according to its decision logic.
Turbolev
The name comes from maglev + turboshaft. As you might guess, it’s a project aimed at replacing the Turbofan compiler frontend, using Maglev’s IR as the new starting point.
The previous top-tier JIT compiler used Turbofan as its frontend and Turboshaft as its backend. Turbofan employed a Sea of Nodes for optimization, rather than the more conventional CFG (Control Flow Graph)-based approach. While theoretically powerful, it had some practical drawbacks in complexity and maintainability.
To address these issues, the idea is to build the Turboshaft backend graph from the mid-tier compiler, Maglev.
For more about the background and beginnings of this project, see: Land ahoy: leaving the Sea of Nodes · V8 by Darius :).
How did I join this project?
For a few years, I contributed to the JS feature surface layer of the V8 project. However, I realized that memory management and the actual JIT compilers were still a ‘black box’ to me. Meanwhile, opportunities to implement new ECMAScript features were becoming harder to find as an external contributor. So I decided to look inside the black box and expand my contribution area.
I remembered that Darius had reviewed the JIT part of my Float16Array CLs, and he had recently published an article on v8.dev about Turbolev. I sent him an email to ask whether I could help with something.
Hi Darius,
I’ve noticed that it’s getting a bit tough to find contribution opportunities related to ECMAScript features as an external contributor. So I was wondering—are there any areas in the compiler side (like Turboshaft or Maglev), or GC/memory stuff, where I could possibly help out?
I’d love to learn more and hopefully get involved in a broader range of work in V8.
–
Regards,
Seokho
He generously suggested the Turbolev project! (With a very detailed explanation :))
Following his guidance, I decided to join the project. First, I wanted to find the easiest task.
The first target was Math.atan2. It seemed like a good candidate because there was an existing code path for Math.pow that handled a similar IEEE 754 binary operation.
I created a draft CL and emailed some questions about it to Darius, Marja, Victor, and Leszek. They kindly explained things to me and helped me run Pinpoint, the performance measurement tool for Googlers.
Contributions
Starting with optimizing Math.atan2, I contributed four CLs.

I optimized:
- Math.atan2
- Math.sqrt
- Array.prototype.at
… and some refactors.
I’ll introduce one of them:
Implement Math.sqrt for Turbolev - support the Math.sqrt function in V8’s mid- and top-tier compilers.
It yielded a 61% performance improvement.

In the code review, Victor said it was amazing!

Let me explain how I optimized this:
First, I added a Maglev IR node.
Then I registered MathSqrt in the Maglev graph builder, maglev-graph-builder.h.
That macro creates a reducer function declaration, so the C++ compiler told me to create an implementation.
This reducer attempts to simplify or convert a high-level operation (like a JS call to Math.sqrt) into a more optimized, lower-level IR node (Float64Sqrt), which is then used in “turbolev” for the top-tier JIT and in “maglev” for the mid-tier JIT.
For mid-tier Maglev (arm64):
Now, when it JIT compiles this code, the Float64Sqrt node generates the machine instruction fsqrt on arm64. Similarly, on x64 it generates the sqrtsd SSE instruction.
For top-tier Turbolev compilation:
The Float64Sqrt node is consumed by the turbolev-graph-builder.
This creates a Float64Sqrt Turboshaft graph node, using the corresponding Maglev node as input. From here, Turboshaft’s powerful CFG-based backend can perform further optimizations like loop unrolling, store elimination, and advanced register allocation.
So, when Math.sqrt becomes hot and stable, the tiering manager can trigger a Turbolev compilation. (While this will likely generate the same machine instruction, the key benefit is that Turboshaft can now apply its powerful optimizations to all the surrounding code.)
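As a hedged illustration of why this lowering matters (the loop below is my own example, not code from the CL): once Math.sqrt lowers to a Float64Sqrt node, a hot numeric loop compiles down to one hardware sqrt instruction per iteration instead of a generic builtin call.

```javascript
// Hot loop over Math.sqrt: after tier-up, each call can lower to a single
// hardware sqrt instruction (fsqrt on arm64, sqrtsd on x64) rather than
// a call into the generic Math.sqrt builtin.
function sumOfSqrts(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += Math.sqrt(i);
  }
  return total;
}

console.log(sumOfSqrts(4)); // 0 + 1 + sqrt(2) + sqrt(3)
console.log(Math.sqrt(16)); // prints 4
```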
Conclusion
Through this work, I’ve gained much more familiarity with the Maglev and Turbolev areas of V8. There are still many unimplemented functions and opportunities to help. I’ll continue contributing to get more comfortable here, and then expand to other areas like WASM or memory management!
Thanks to Darius, Marja, Victor, and Leszek for the great guidance and code reviews :).
To Reader
Thank you for reading this post! If you find an error, a misspelling, or something awkward, please feel free to contribute here!


