lighty's life

lighty developer blog

More Threaded IO

After a long night we finally have everything in place for threaded stat() calls. Not only that, we also have a new network backend for all those platforms which have problems with posix-aio. You need to have glib2-2.6.0 or higher installed.

The new options are:

server.max-stat-threads = 4
server.max-write-threads = 8
server.network-backend = "gthread-aio"

Depending on the backend, your OS and the number of disks, you might want to raise the two values, but keep in mind that raising them too far causes problems: past a certain point performance decreases again.

The performance ranking of the different backends is: linux-aio-sendfile, posix-aio, gthread-aio, …

Along the way, linux-aio-sendfile and posix-aio should behave better under highly concurrent load now. They even got some stats: 1261 551

Time for benchmarks: check my earlier article about lighty-1-5-0-and-linux-aio, try to generate the same set of testfiles, and use http_load to generate random load. It is important that you use more files than you can cache in memory.
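A minimal sketch of generating such a testfile set (filenames, counts and sizes here are illustrative, not the ones from the original article):

```python
import os

def make_testfiles(root, count, size):
    # write `count` files of `size` random bytes each; pick count * size
    # well above your RAM so the kernel cannot cache the whole working set
    os.makedirs(root, exist_ok=True)
    for i in range(count):
        with open(os.path.join(root, "file-%d.bin" % i), "wb") as f:
            f.write(os.urandom(size))

def write_urllist(count, host, path):
    # http_load takes a plain file with one URL per line and fetches from
    # it at random, which gives us the random read load we want
    with open(path, "w") as f:
        for i in range(count):
            f.write("http://%s/file-%d.bin\n" % (host, i))
```

Then run something like `http_load -parallel 100 -seconds 60 urls.txt` against the server.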

Threaded Stat()

Just as a proof of concept I implemented a threaded stat() call. It is a bit of a hack currently, but it looks promising when I look at the performance data:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.00    0.00   26.60   68.40    0.00    0.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   0.60 66.90  1.60 13019.20   22.40     6.36     0.01   190.39     6.10   88.20  14.49  99.28
sdb          0.00   0.60 66.60  1.60 13061.60   22.40     6.38     0.01   191.85    14.09  208.82  14.67 100.04

In the earlier test we tried the same without an async stat() but with fcgi-stat-accel. With the threaded stat() the code moved into lighttpd itself, which removes the external communication and lets lighttpd manage everything internally.

name              Throughput    util% iowait%
----------------- ------------- ----- -------
no stat-accel     12.07MByte/s   81%
stat-accel (tcp)  13.64MByte/s   99%  45.00%
stat-accel (unix) 13.86MByte/s   99%  53.25%
threaded-stat     14.32MByte/s   99%  68.40%

(larger is better)
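The idea behind the threaded stat() can be sketched in a few lines (a hypothetical illustration of the technique, not lighttpd's actual C implementation): hand the blocking stat() to a small worker pool so the event loop never waits on disk metadata.

```python
import os
from concurrent.futures import ThreadPoolExecutor

# worker pool sized like server.max-stat-threads
stat_pool = ThreadPoolExecutor(max_workers=4)

def stat_async(path):
    # returns immediately; the Future resolves to an os.stat_result
    # (or raises OSError) once a worker finished the blocking call
    return stat_pool.submit(os.stat, path)
```

The event loop keeps serving other connections and only picks the request up again once the Future is done.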

Accelerating Small File-Transfers

Thanks to some help from our #lighttpd IRC-channel we solved another long-standing problem:

As lighttpd is an event-based web server, we have problems when it comes to blocking operations. In 1.5.0 we added async sendfile() operations, which helps a lot for large files. For small files most of the time is spent on the initial stat() call, which has no async interface.

Fobax submitted a nice solution for this problem: move the stat() into a FastCGI app which responds with X-LIGHTTPD-send-file: and hands the request back to lighttpd. The FastCGI app can block and spend some time there while lighttpd moves on with the other requests. When the FastCGI app returns, the information for the stat() call is in the fs buffers and lighttpd doesn't block on the stat() anymore.
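The core of such a stat accelerator fits in a few lines. This is a hypothetical Python sketch of the idea (the real fcgi-stat-accel is a C FastCGI app; the function name and error handling here are illustrative):

```python
import os

def stat_accel(doc_root, uri):
    # do the blocking work here, outside the event loop: after this the
    # file's metadata (and first block) sit in the kernel's fs cache
    path = os.path.join(doc_root, uri.lstrip("/"))
    try:
        os.stat(path)                 # the blocking stat() happens in this app
        with open(path, "rb") as f:
            f.read(1)                 # touch the file to warm the cache
    except OSError:
        return "Status: 404 Not Found\r\n\r\n"
    # hand the request back: lighttpd serves the file itself, now without
    # blocking on the stat()
    return "X-LIGHTTPD-send-file: %s\r\n\r\n" % path
```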

All this is documented by darix in the wiki at HowtoSpeedUpStatWithFastcgi.

This works with mod_fastcgi in 1.4.0 or with mod-proxy-core in 1.5.0 + aio.

Lighttpd Powers 6 Alexa Top 250 Sites

Reading the latest statistics from netcraft's Webserver Survey: lighttpd is the #12 most used webserver software package.

But who is running lighttpd, and for what purpose?

Compression of Dynamic Content

It looks like a few changes won’t make it into trunk/ before I leave for vacation. But you should know what is in the pipeline and what you want to wait for:

  • HTTP Response filtering is implemented
  • HTTP/1.1 chunking becomes a module
  • compression of dynamic content

This will add compression not only for mod_proxy_core and its backends (FastCGI, SCGI, HTTP, AJP13) but also to internally generated content like the directory listings.
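To illustrate the chunking part: HTTP/1.1 chunked transfer-coding is what lets a stream-based server send dynamic content whose length isn't known up front. A minimal encoder sketch of the wire format (not lighttpd's module, just an illustration):

```python
def chunk_encode(chunks):
    # each chunk: size in hex, CRLF, the data, CRLF; a zero-sized chunk
    # terminates the body (RFC 2616 section 3.6.1)
    for data in chunks:
        if data:  # an empty chunk would end the body prematurely
            yield b"%x\r\n%s\r\n" % (len(data), data)
    yield b"0\r\n\r\n"
```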

With these changes we will become more and more stream-based. Or as JDD called it: The Web is a Pipe.

Voy a Buenos Aires

… with only a few words of Spanish learnt so far, I'll be on my way to Buenos Aires on Dec 30th, celebrating New Year's Eve there with some friends.

If you want to get patches into trunk/, ping darix or jakabosky on IRC. They will decide if a patch can go in or has to wait for me.

If you would like to get a session on lighty, or just want to meet me and invite me to an asado somewhere in Argentina between Dec 31st and Jan 20th, drop me a mail at :)

Can someone tell me if the topic was correct Spanish?

1.5.0 Goes Cmake

It is a tradition by now to change the build system of lighttpd on each major release.
For now we have the autotools as the user-visible build system and SCons as the system for the developers.

Currently we are testing CMake as a replacement for the SCons part.

Faster Web 2.0

In Faster FastCGI I talked about using temp-files in /dev/shm to reduce the overhead of large FastCGI requests. Robert implemented it right away and it is available in the latest pre-release.

Woken up far too early and having the first coffee I shared some ideas on how this could be useful to accelerate AJAX based applications.
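The temp-file trick can be sketched like this (an illustrative sketch, assuming /dev/shm is a tmpfs mount as on most Linux systems; it falls back to the normal temp dir elsewhere):

```python
import os
import shutil
import tempfile

def spool_body(stream, spool_dir="/dev/shm"):
    # buffer a large request/response body in a RAM-backed temp file so
    # spooling it never touches the disk; an anonymous TemporaryFile is
    # unlinked immediately, so nothing is left behind on a crash
    if not os.path.isdir(spool_dir):
        spool_dir = None              # fall back to the default temp dir
    tmp = tempfile.TemporaryFile(dir=spool_dir)
    shutil.copyfileobj(stream, tmp)
    tmp.seek(0)
    return tmp
```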

1.5.0 Works on Win32 Natively - Again

Half a year ago I was traveling a bit and tried to get lighty to compile natively on win32.

Some time has passed and I concentrated on the other stuff in the 1.5.0 tree, leaving the nasty win32 code in place for someone to pick up. Ben Harper aka rogojin has picked it up and released a win32 installer for the latest pre-release.

A simple test shows that static files are working nicely and that HTTP proxying with mod-proxy-core works too. Nice work, Ben.

PRE-RELEASE: lighttpd-1.5.0-r1477.tar.gz


Robert Jakabosky fixed and improved mod-proxy-core a lot since the last pre-release:


I added native support for POSIX AIO, which might bring async I/O to more platforms. While Linux AIO is pretty stable, the POSIX AIO support is pretty experimental. Perhaps it compiles for you.

I tried to compile it on Linux and FreeBSD. To enable it:

server.network-backend = "posix-aio"

Try it

Check if it compiles and works for you.