For the past few days, I've been diving deep into testing Drupal 8's experimental new BigPipe feature, which allows Drupal page requests for authenticated users to be streamed and loaded in stages. Cached elements (usually the majority of a page) are loaded almost immediately, so the end user can interact with the main elements on the page very quickly, while the remaining uncacheable elements are loaded in as Drupal is able to render them.
Here's a very quick demo of an extreme case, where a particular bit of content takes five seconds to load; BigPipe hugely improves the usability and perceived performance of the page by streaming the majority of the page content from cache immediately, then streaming the harder-to-generate parts as they become available (click to replay):
Drupal BigPipe demo - click to play again.
BigPipe takes advantage of streaming PHP responses (using flush() to flush the output buffer at various times during a page load), but to ensure the stream is delivered all the way from PHP through to the client, you need to make sure your entire webserver and proxying stack streams the response directly, with no buffering. Since I maintain Drupal VM and support Apache and Nginx as webservers, as well as Varnish as a reverse caching proxy, I experimented with many different configurations to find the optimal way to stream responses through any part of this open source stack.
And because my research dug up a bunch of half-correct, mostly-untested assumptions about output buffering with PHP requests, I figured I'd set things straight in one comprehensive blog post.
Testing output buffering
I've seen a large number of example scripts used to test output_buffering on Stack Overflow and elsewhere, and many of them assume output buffering is disabled completely. Rather than doing that, I decided to make a slightly more robust script for my testing purposes, and to document all the different bits for completeness:
<?php
// Set a valid header so browsers pick it up correctly.
header('Content-type: text/html; charset=utf-8');
// Emulate the header BigPipe sends so we can test through Varnish.
header('Surrogate-Control: BigPipe/1.0');
// Explicitly disable caching so Varnish and other upstreams won't cache.
header("Cache-Control: no-cache, must-revalidate");
// Setting this header instructs Nginx to disable fastcgi_buffering and disable
// gzip for this request.
header('X-Accel-Buffering: no');
$string_length = 32;
echo 'Begin test with a ' . $string_length . ' character string...<br />' . "\r\n";
// For 3 seconds, repeat the string.
for ($i = 0; $i < 3; $i++) {
  $string = str_repeat('.', $string_length);
  echo $string . '<br />' . "\r\n";
  echo $i . '<br />' . "\r\n";
  flush();
  sleep(1);
}
echo 'End test.<br />' . "\r\n";
?>
If you place this file into a web-accessible docroot, then load the script in your terminal using PHP's CLI, you should see output like the following (click to replay):
PHP response streaming via PHP's CLI - click to play again.
And if you view it in the browser? By default, you won't see a streamed response. Instead, you'll see nothing until the entire page loads (click to replay):
PHP response not streaming via webserver in the browser - click to play again.
That's good, though: we now have a baseline. We know that the script works on PHP's CLI, but either our webserver or PHP is not streaming the response all the way through to the client. If you change $string_length to 4096, and are using a normal PHP/Apache/Nginx configuration, you should see the following (click to replay):
PHP response streaming via webserver in the browser - click to play again.
The rest of this post will go through the steps necessary to ensure the response is streamed through your entire stack.
PHP and output_buffering
Some guides say you have to set output_buffering = Off in your php.ini configuration in order to stream a PHP response. In some circumstances this is useful, but typically, if you're calling flush() in your PHP code, PHP will flush the output buffer as soon as the buffer fills (the default value is 4096, which means PHP will flush its buffer in 4096-byte chunks).
For many applications, 4096 bytes of buffering is a good tradeoff between transport efficiency and responsiveness, but you can lower the value if you need to send back much smaller responses (e.g. tiny JSON responses like {"setting": 1}).
One setting you definitely do need to disable, however, is zlib.output_compression. Set zlib.output_compression = Off in php.ini and restart PHP-FPM to make sure gzip compression is disabled.
There are edge cases where the above doesn't hold absolutely true... but in most real-world scenarios, you won't need to disable PHP's output_buffering to enable streaming responses.
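For reference, here's a minimal sketch of the relevant php.ini settings described above; the values are illustrative starting points, not a drop-in configuration:
; Leave PHP's output buffer at the default 4096-byte chunk size (lower it only if
; you need to stream very small responses).
output_buffering = 4096
; Keep PHP-level gzip compression off so flushed output isn't held back.
zlib.output_compression = Off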
Nginx configuration
I recommend using Nginx with PHP-FPM for the most flexible and performant configuration, but I still run both Apache and Nginx in production for various reasons. Nginx has a small advantage over Apache for PHP usage in that it doesn't carry the cruft of the old mod_php approach, where PHP was integrated directly into the webserver; the proxied request approach (using FastCGI) has always been Nginx's default, and it is well optimized.
All you have to do to make streaming responses work with Nginx is set the header X-Accel-Buffering: no in your response. Once Nginx recognizes that header, it automatically disables gzip and fastcgi_buffering for that response only:
header('X-Accel-Buffering: no');
You can also manually disable gzip (gzip off) and buffering (fastcgi_buffering off) for an entire server block, but that's overkill, and it would harm performance in any case where you don't need to stream the response.
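For completeness, here's a minimal sketch of how those directives fit into an Nginx config; the PHP location block and PHP-FPM address are assumptions based on a typical FastCGI setup, not Drupal VM's actual configuration:
server {
  ...
  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
    # Only needed if you can't send the X-Accel-Buffering: no header per response.
    gzip off;
    fastcgi_buffering off;
  }
}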
Apache configuration
Because there are many different ways of integrating PHP with Apache, it's best to discuss how streaming works with each technique:
mod_php
Apache's mod_php seems to be able to handle streaming out of the box, without needing to disable deflate/gzip for requests. No configuration changes required.
mod_fastcgi
When configuring mod_fastcgi, you must add the -flush option to your FastCgiExternalServer directive; otherwise, if you have mod_deflate/gzip enabled, Apache will buffer the entire response and only deliver it to the client at the end:
# If using PHP-FPM on TCP port 9000.
FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -flush -host 127.0.0.1:9000 -pass-header Authorization
mod_fcgid
I've never configured Apache and PHP-FPM using mod_fcgid, and it seems cumbersome to do so; however, according to the Drupal BigPipe environment docs, you can disable output buffering for PHP responses by setting:
FcgidOutputBufferSize 0
mod_proxy_fcgi
If you use mod_proxy_fcgi with PHP-FPM, then you have to disable gzip in order to have responses streamed:
SetEnv no-gzip 1
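As a reference, here's a minimal sketch of how that might look in a virtual host; the FilesMatch handler and the PHP-FPM address are assumptions based on a typical mod_proxy_fcgi-over-TCP setup:
# Hand .php requests to PHP-FPM over TCP via mod_proxy_fcgi.
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>
# Disable mod_deflate/gzip so Apache streams the response instead of buffering it.
SetEnv no-gzip 1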
In all of the above cases, PHP's own output buffering will still take effect, up to the default output_buffering setting of 4096 bytes. You can always change this value to something lower if absolutely necessary, but in real-world applications (like Drupal's use of BigPipe), many flushed output chunks will be larger than 4096 bytes, so you might not need to change the setting.
Varnish configuration
Varnish buffers output by default, and you have to explicitly disable this behavior for streamed responses by setting do_stream on the backend response inside vcl_backend_response. Drupal, following Facebook's lead, uses the header Surrogate-Control: BigPipe/1.0 to flag a response as needing to be streamed. You need to use Varnish 3.0 or later (see the Varnish blog post announcing streaming support in 3.0), and make the following changes:
Inside your Varnish VCL:
sub vcl_backend_response {
  ...
  if (beresp.http.Surrogate-Control ~ "BigPipe/1.0") {
    set beresp.do_stream = true;
    set beresp.ttl = 0s;
  }
}
Then make sure you output the header anywhere you need to stream a response:
header('Surrogate-Control: BigPipe/1.0');
Debugging output streaming
During the course of my testing, I ran into a strange and nasty networking issue with a VMware Vagrant box, which was causing HTTP responses delivered through the VM's virtual network to be buffered no matter what, while responses inside the VM itself worked fine. After trying to debug it for an hour or two, I gave up and rebuilt the VM in VirtualBox instead of VMware, where I couldn't reproduce the issue; then I rebuilt it again in VMware and couldn't reproduce it there either... so take this as a warning: your entire stack (including any OS, network, and virtualization layers) has to be functioning properly for streaming to work!
To debug PHP itself, and make sure PHP is delivering the stream even when your upstream webserver or proxy is not, you can analyze packet traffic routed through PHP-FPM on port 9000 (it's a lot harder to debug via UNIX sockets, which is one of many reasons I prefer defaulting to TCP for PHP-FPM). I used the following command to sniff port 9000 on localhost while making requests through Apache, Nginx, and Varnish:
tcpdump -nn -i any -A -s 0 port 9000
You can press Ctrl-C to exit once you're finished sniffing packets.
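To check the stream end to end from the client side, curl with its buffer disabled also works well; the URL below is just a placeholder for wherever you put the test script:
# -N (--no-buffer) makes curl print output as it arrives instead of buffering it.
curl -N http://example.test/streaming-test.php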
Comments
Awesome info. I have definitely had these issues in the past.
Have you switched to VMWare for your go-to for virtual boxes? I've been tempted to pay for it and the vagrant plugin.
Yes, for the most part, due to slightly better performance (see Is VMware better than VirtualBox for Vagrant web development?); however, with VMware's decision to fire most of its desktop/Fusion staff lately, and what looks like a decision to put the desktop product in mothballs, I'm not sure if I'll stick with it too much longer.
VirtualBox has gotten so much more stable/reliable that I don't feel too bad using it when I have to anymore.
Is there any form of compression or plans for new ways of doing compression that would work with streaming responses? It would be awesome if the response could still be delivered compressed, even if that was just compressing each chunk that's sent back. Or is the size of those chunks so small that compression ends up not being worth it anyway?
That's a good question... I haven't actually dug down too deep there, but it seems like it might be what you suspect—the overhead from compressing such small chunks might not be worth the tiny gain in transfer efficiency. But I'd rather hear from someone who has dug further into the code in Apache and Nginx.
Which version of NGINX are you using? We are trying to reproduce this use case but with no success. Could you maybe publish the whole nginx configuration? Thank you. Btw. we are using the 1.10.1 version.
I've been doing some testing of my own in the past week with some interesting results. I used PHP 7.0 with php7.0-fpm to run both nginx 1.10 and Apache 2.4.23 (with mod_proxy_fcgi) in an Ubuntu 16.04-based Docker container.
I tested both the script from this post and the Drupal big_pipe module itself.
TL;DR
- mod_proxy_fcgi works with gzip and deflate enabled
- drupal big_pipe module over HTTP/2 fails on Apache
Long version:
- Apache mod_proxy_fcgi works just fine with gzip and deflate enabled here. For me, the php output_buffering setting seems to be the big limiter. Especially for the big_pipe_demo module, this made a big difference, since the demo Blocks are smaller than 4096 and with the default setting, the bigpipe streaming isn't visible. Lowering to 512 gives expected results for big_pipe_demo. Nginx also just worked out-of-the-box for me.
Tips for changing/checking output_buffering:
nginx: make .user.ini file in root directory (next to .htaccess) with contents: output_buffering=512
apache: change/make .htaccess to include php_value output_buffering 512 and make sure your apache config file includes AllowOverride All in the appropriate <Directory> block
-> test via phpinfo() (search for output_buffering)
- It works over HTTP/2 with the test script from this post, but NOT with the big_pipe/big_pipe_demo module in Apache (nginx works fine in both cases). So it seems Drupal/big_pipe is doing something strange that clashes with mod_http2. The strange thing is that it doesn't even stream the main content: the page remains white until all placeholders have been resolved (as if big_pipe/chunking was disabled). I haven't found the core issue yet, but I will post another comment here when I do, and I am open to suggestions ;)
Thank you Jeff for this blogpost, it was very helpful in getting big_pipe up and running!
What about ob_start() as the very first line?
Is this not recommended anymore?
<?php
ob_start();
Header('. . .');
// php code to execute
// end of script
You seem to be the only one talking about the buffering and flushing issues! I'm using proxy fcgi and sending a response via text/event-stream (for server-sent events), and gzip enabled/disabled really does the same thing. It used to work with mod_fastcgi (with the flush option), but it is driving me nuts as I *have* to switch to proxy fcgi now that mod_fastcgi isn't included in Ubuntu 17.04. The ONLY way that fixes it is to send about 32K (based on some testing? Could actually be 64K). Altering PHP's buffering has NO effect.
FcgidOutputBufferSize 0 works
Disabling gzip isn’t necessary anymore in order for BigPipe to work with mod_proxy_fcgi. Just set the mod_proxy_fcgi parameter flushpackets=on (Apache 2.4.31 and later). The d.o. documentation now includes the new information - https://www.drupal.org/docs/8/core/modules/big-pipe/bigpipe-environment…
You should use both ob_flush() and flush() when you're testing in the browser, because output buffering is hardcoded to Off in the CLI SAPI; that's why you can see output immediately there.
Dude!.... Much Much Thanks for this article.
You helped me sort my issue with nginx.
3 days of search and this is the only article that explains sse and configuration requirements for it!..
Thanks a ton!!..
Thanks for this. It is about the only document I found that helped me understand.
Hey there, Brother Jeff! Let me tell ya, I've been flexing my brain muscles big time, trying to get these wild events to stream live and in real-time, instead of them sneaking up on me in bursts, brother. Then, BAM! I stumbled upon your bodacious post, and guess what, brother? I found my championship move: header('X-Accel-Buffering: no');, oh yeah!
In my arena, I'm tag-teaming with Nginx and PHP-FPM, handling a Hulk-sized load of thousands of sales orders every minute, brother. But here's the twist, turning off output buffering and compression server-wide? That's like stepping into the ring with both hands tied, brother – it would totally body slam our system's performance. Then, out of nowhere, the header('X-Accel-Buffering: no'); move came to the rescue, and it was a perfect slam dunk!
So, I just had to drop a massive leg drop of thanks on you, Jeff! You saved the day, brother! THANK YOU!!!!
Haha I'm glad I could help body slam that buffering config option then :)
Hi Jeff
I think this is a silly question but after looking around, I am not sure what file I need to put "header('X-Accel-Buffering: no');" in. I am guessing it is in nginx.conf but I am not sure where in there to put it.