PHP 8 adds a JIT compiler to PHP’s core which has the potential to speed up performance dramatically. There are some side notes to be made about the actual impact on real-life web applications, which is why I ran some benchmarks on how the JIT performs (I’ve listed all relevant references in the footnotes as well).
I wanted to dedicate a blog post to setting up the JIT as well, since there are a few things to talk about.
Honestly, setting up the JIT is one of the most confusing ways of configuring a PHP extension I've ever seen. Luckily there are some configuration shorthands available that make it easier to set up. Still, it's good to know about the JIT config in depth, so here goes.
First of all, the JIT will only work if opcache is enabled. This is the default for most PHP installations, but you should make sure that opcache.enable is set to 1 in your php.ini file. Enabling the JIT itself is done by specifying opcache.jit_buffer_size in php.ini.
Note that if you're running PHP via the command line, you can also pass these options via the -d flag, instead of adding them to php.ini:
php -dopcache.enable=1 -dopcache.jit_buffer_size=100M
If this directive is excluded, the default value is set to 0, and the JIT won't run. If you're testing the JIT in a CLI script, you'll need to use opcache.enable_cli instead to enable opcache:
php -dopcache.enable_cli=1 -dopcache.jit_buffer_size=100M
The difference between opcache.enable and opcache.enable_cli is that the first one should be used if you're running, for example, the built-in PHP server. If you're actually running a CLI script, you'll need opcache.enable_cli.
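To make that concrete, here's what both variants look like (the script name and port are just placeholders for this example). The first command serves a site through the built-in web server, the second runs a script directly:
php -dopcache.enable=1 -dopcache.jit_buffer_size=100M -S localhost:8000
php -dopcache.enable_cli=1 -dopcache.jit_buffer_size=100M my-script.php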
Before continuing, let's ensure the JIT actually works. Create a PHP script that's accessible via the browser or the CLI (depending on where you're testing the JIT), and look at the output of opcache_get_status():
var_dump(opcache_get_status()['jit']);
The output should be something like this:
array:7 [
"enabled" => true
"on" => true
"kind" => 5
"opt_level" => 4
"opt_flags" => 6
"buffer_size" => 4080
"buffer_free" => 0
]
If enabled and on are true, you're good to go!
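If you'd rather have a script tell you directly, a minimal sketch like this works (check-jit.php is just an example name, and it assumes the opcache extension itself is loaded):
<?php

$status = opcache_get_status();

// The JIT is only active when opcache reports both flags as true.
if (is_array($status) && isset($status['jit']) && $status['jit']['enabled'] && $status['jit']['on']) {
    echo 'The JIT is enabled and running.' . PHP_EOL;
} else {
    echo 'The JIT is not active.' . PHP_EOL;
}
Run it the same way as before: php -dopcache.enable_cli=1 -dopcache.jit_buffer_size=100M check-jit.php.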
Next, there are several ways to configure the JIT (and this is where we'll get into the configuration mess). You can configure when the JIT should run, how much it should try to optimise, etc. All of these options are configured using a single (!) config entry: opcache.jit. It could look something like this:
opcache.enable=1
opcache.jit=1255
Now, what does that number mean? Mind you: this is not a bit mask; each digit simply represents another configuration option. The RFC lists the following options:
#O — Optimization level
0 | don't JIT
1 | minimal JIT (call standard VM handlers)
2 | selective VM handler inlining
3 | optimized JIT based on static type inference of individual function
4 | optimized JIT based on static type inference and call tree
5 | optimized JIT based on static type inference and inner procedure analyses
#T — JIT trigger
0 | JIT all functions on first script load
1 | JIT function on first execution
2 | Profile on first request and compile hot functions on second request
3 | Profile on the fly and compile hot functions
4 | Compile functions with @jit tag in doc-comments
5 | Tracing JIT
#R — register allocation
0 | don't perform register allocation
1 | use local linear-scan register allocator
2 | use global linear-scan register allocator
#C — CPU specific optimization flags
0 | none
1 | enable AVX instruction generation
One small gotcha: the RFC lists these options in reverse order, so the first digit represents the C value, the second the R, and so on. Why there simply weren't four separate configuration entries added is beyond my comprehension; probably to make configuring the JIT faster… right?
Anyways, internals propose 1255 as the best default: it will do maximum jitting, use the tracing JIT, use a global linear-scan register allocator (whatever that might be), and enable AVX instruction generation.
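To make the digit order concrete, here's 1255 broken down digit by digit, read left to right as C, R, T, O (the annotations simply restate the tables above):
; C = 1: enable AVX instruction generation
; R = 2: use global linear-scan register allocator
; T = 5: tracing JIT
; O = 5: optimized JIT based on static type inference and inner procedure analyses
opcache.jit=1255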
So your ini settings (or -d flags) should have these values:
opcache.enable=1
opcache.jit_buffer_size=100M
opcache.jit=1255
Keep in mind that opcache.jit is optional, by the way. The JIT will use a default value if that property is omitted.
Which default, you ask? That would be opcache.jit=tracing.
Hang on, that's not the strange bitmask-like structure we saw earlier? That's right: after the original RFC passed, internals recognised that the bitmask-like options weren't all that user-friendly, so they added two aliases which are translated to the numeric value under the hood. There's opcache.jit=tracing and opcache.jit=function.
The difference between the two is that the function JIT will only try to optimise code within the scope of a single function, while the tracing JIT can look at the whole stack trace to identify and optimise hot code. Internals recommend using the tracing JIT, because it will almost always give the best results.
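To picture what "hot code" means here, consider a contrived example like the sketch below (illustration only; whether and when anything actually gets compiled depends on runtime profiling). The function JIT looks at square() on its own, while the tracing JIT can follow the whole loop and its calls:
<?php

function square(int $x): int
{
    return $x * $x;
}

$sum = 0;

// A hot loop: this runs often enough for the JIT to consider compiling it.
for ($i = 0; $i < 1_000_000; $i++) {
    $sum += square($i);
}

echo $sum . PHP_EOL;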
So the only option you actually need to set to enable the JIT with its optimal configuration is opcache.jit_buffer_size, but if you want to be explicit, listing opcache.jit wouldn't be such a bad idea:
opcache.enable=1
opcache.jit_buffer_size=100M
opcache.jit=tracing