Finally: this is my third attempt at solving the upload problem (buffering the full request content in memory).

I tried to attack the problem in several ways:
- implement flow control by disabling the fd-events on the incoming socket for a while (see the sketch after this list). This would have given us nicer traffic shaping too, but it was too complex to do in one step and delayed the release of 1.4.0 a lot.
- split the receiving phase into two steps: first read the header and start the backend process, then read the request content and forward it to the backend. This will be picked up again in one of the next releases and will bring back the upload-progress bar.
- finally, I implemented the idea that was raised several times on the list and in the IRC channel: keep it simple and buffer to disk.
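For illustration, here is a minimal sketch of the fd-event idea from the first item, assuming an epoll-based event loop; the helper name is hypothetical and this is not lighttpd's actual internals:

```c
#include <sys/epoll.h>

/* hypothetical helper: enable or disable read events on a fd */
static int fdevent_set_read(int epfd, int fd, int want_read) {
    struct epoll_event ev = { 0 };
    ev.data.fd = fd;
    ev.events  = want_read ? (EPOLLIN | EPOLLERR | EPOLLHUP)
                           : (EPOLLERR | EPOLLHUP);
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}

/* backend queue filling up -> stop reading from the client for a while:
 *     fdevent_set_read(epfd, client_fd, 0);
 * backend drained its queue -> resume reading:
 *     fdevent_set_read(epfd, client_fd, 1);
 */
```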
This has now been added to the Subversion tree and will be part of 1.4.5.
When someone sends more than 64kB to the server, the content is no longer buffered in memory as before but is stored in tempfiles in /var/tmp/ in chunks of 1MB. The full content is buffered before it is forwarded to the backend. As soon as a chunk has been forwarded to the backend it is freed, either by removing the tempfile on disk or by freeing the buffer in memory. This allows us to handle the upload of an ISO image pretty easily without the process growing past 3MB in my case; in 1.4.4 the same upload resulted in a huge lighttpd process.
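To make the scheme concrete, here is a simplified sketch of the spill-to-disk buffering; the 64kB threshold and 1MB chunk size come from the description above, while the names, tempfile pattern, and error handling are assumptions and not lighttpd's actual code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MEM_LIMIT  (64 * 1024)   /* bodies up to 64kB stay in memory (path omitted here) */
#define CHUNK_SIZE (1024 * 1024) /* larger bodies spill to 1MB tempfile chunks */

typedef struct {
    char   path[64]; /* tempfile backing this chunk */
    int    fd;
    size_t used;     /* bytes written into this chunk so far */
} disk_chunk;

/* open a fresh chunk as a tempfile under /var/tmp/ */
static int chunk_open(disk_chunk *c) {
    snprintf(c->path, sizeof(c->path), "/var/tmp/upload-XXXXXX");
    c->fd   = mkstemp(c->path); /* hypothetical name pattern; must end in XXXXXX */
    c->used = 0;
    return c->fd;
}

/* append body data, rolling over to a new tempfile at the 1MB mark */
static int chunk_append(disk_chunk *c, const char *buf, size_t len) {
    while (len > 0) {
        size_t room = CHUNK_SIZE - c->used;
        size_t n    = len < room ? len : room;
        if (write(c->fd, buf, n) != (ssize_t)n) return -1;
        c->used += n; buf += n; len -= n;
        if (c->used == CHUNK_SIZE) {
            close(c->fd);
            /* ...hand c->path over to the backend queue here... */
            if (chunk_open(c) < 0) return -1;
        }
    }
    return 0;
}

/* as soon as a chunk has been forwarded to the backend, free it */
static void chunk_free(disk_chunk *c) {
    close(c->fd);
    unlink(c->path); /* the disk space is reclaimed immediately */
}
```

Because each chunk is unlinked right after it has been forwarded, the peak disk usage stays close to one chunk plus whatever the backend has not yet consumed, which is what keeps the lighttpd process small even for ISO-sized uploads.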
One problem remains with FastCGI, one we have seen before: the backend might terminate the connection with an EPIPE (connection closed by remote side) on the last packet we have to send. So far I have no idea why; at the protocol level, what we send looks fine. That is the only problem that is left.
I encourage everyone to test it, even though it is currently only available from SVN.