lighty's life

lighty developer blog

Faster Web 2.0

In Faster FastCGI I talked about using temp files in /dev/shm to reduce the overhead of large FastCGI requests. Robert implemented it right away, and it is available in the latest pre-release.
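If you want to see whether it helps your setup, pointing the request temp-file location at /dev/shm should be enough. A minimal sketch, assuming the stock server.upload-dirs directive is the knob in question and that /dev/shm is available (it is Linux-specific):

server.upload-dirs = ( "/dev/shm" )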

Having woken up far too early and with the first coffee in hand, I shared some ideas on how this could be used to accelerate AJAX-based applications.

1.5.0 Works on Win32 Natively - Again

Half a year ago I was traveling a bit and tried to get lighty to compile natively on Win32.

Some time has passed, and I concentrated on the other stuff in the 1.5.0 tree, leaving the nasty Win32 code in place for someone to pick up. Ben Harper, aka rogojin, has picked it up and released a Win32 installer for the latest pre-release.

A simple test shows that static files are served nicely and that HTTP proxying with mod-proxy-core works too. Nice work, Ben.

PRE-RELEASE: lighttpd-1.5.0-r1477.tar.gz

mod-proxy-core

Robert Jakabosky has fixed and improved mod-proxy-core a lot since the last pre-release.
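In case you want to play with it, an HTTP proxy setup with mod-proxy-core looks roughly like the sketch below; the proxy-core.protocol and proxy-core.backends directive names are written down from memory of the 1.5 tree, and the URL pattern and backend address are only placeholders:

$HTTP["url"] =~ "^/app/" {
    proxy-core.protocol = "http"
    proxy-core.backends = ( "127.0.0.1:8080" )
}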

POSIX Async IO

I added native support for POSIX AIO, which might bring async IO to more platforms. While the Linux AIO support is pretty stable, the POSIX AIO support is still experimental. Perhaps it compiles for you.

I tried to compile it on Linux and FreeBSD.

server.network-backend = "posix-aio"

Try it

Check if it compiles and works for you.

www.lighttpd.net/download/lighttpd-1.5.0-r1477.tar.gz
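A minimal config along these lines should be enough for a quick test; the document root and port are just placeholders, only the network-backend line is specific to this pre-release:

server.document-root   = "/var/www"
server.port            = 8080
server.network-backend = "posix-aio"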

Faster FastCGI

While I was removing bogus data-copy operations from the mod-proxy-core code, I stumbled over a simple question:

Why do we copy the HTTP response data from the backends at all?

In most cases we are just forwarding it without touching it.

COMET Meets mod_mailbox

Some time ago we got a request on how to implement COMET with lighttpd. I responded with an idea for a mod_multiplex, which would let the client open a COMET channel and give the backend the possibility to feed multiple channels at once for the client to poll for new data.

Basically it would separate the HTTP request-response cycle from the underlying connection. HTTP would be used to open the connection and to reopen it in case it went away, but otherwise it would just be a data channel for the JavaScript/AJAX content that we want to send to the client when WE (the content provider) want to.

Linux AIO and Large Files

The benchmarks so far only showed results for small files (100 KByte). Time to add larger files to the pool and to talk about the chunk size.

Lighty 1.5.0 and Linux-aio

1.5.0 will be a big win for all users. It will be more flexible in its handling and will bring a huge improvement for static files thanks to async IO.

The following benchmarks show an increase of 80% for the new linux-aio-sendfile backend compared to the classic linux-sendfile one.
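If you want to reproduce the numbers on your own hardware, switching between the two backends is a one-line change in the config; the backend names are the ones mentioned above:

server.network-backend = "linux-aio-sendfile"
# the baseline for comparison:
# server.network-backend = "linux-sendfile"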