Hi David,
>> Does this mean NT is now writing the uploaded file to disk as it comes in instead of storing the entire file in memory first?
Nope. It was searching for a header value in the incoming POST, even after the POST header had been completely received. As the incoming POST got bigger the search took longer, which in turn placed a growing demand on the CPU. Since the value was already set by the time the end of the header was reached, it was trivial to stop searching once the whole header had arrived.
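To make that concrete, here's roughly the shape of the fix as I'd sketch it in Python. The names and the Content-Length example are mine for illustration, not NT's internals:

```python
HEADER_END = b"\r\n\r\n"

class PostReceiver:
    """Sketch: accumulate an incoming POST, but search for header
    values only until the end of the header has been seen."""

    def __init__(self):
        self.buffer = b""
        self.header_done = False
        self.content_length = None  # example of a header value being searched for

    def on_data(self, chunk: bytes):
        self.buffer += chunk
        if self.header_done:
            return  # the fix: without this early-out, every new chunk
                    # re-triggered a scan of the ever-growing buffer
        end = self.buffer.find(HEADER_END)
        scan_region = self.buffer if end < 0 else self.buffer[:end]
        # search only the header region for the value we want
        for line in scan_region.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                self.content_length = int(line.split(b":", 1)[1])
        if end >= 0:
            self.header_done = True  # header complete: stop searching for good
```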
>> If not, is that something that could be added to the NT7 enhancement list?
I understand the appeal of this, but it's a very non-trivial amount of work if you think about it. Remember the incoming POST has a bunch of fields attached to it, so starting to "save the file" before the end of the POST has arrived means parsing the stream as it comes in. Then, presumably, filtering out the actual file contents and replacing them with some placeholder value so the rest of the program keeps on working. Even this description is a gross simplification - there are lots of edge cases to handle as well.
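For the curious, here's a minimal Python sketch of just one slice of that job - spooling a single file part to disk while watching for the multipart boundary. It assumes the part headers have already been consumed, and it ignores most of the real edge cases (multiple parts, quoted boundaries, chunked transfer encoding, and so on):

```python
class MultipartFileStreamer:
    """Sketch of the idea, not NT's code: write the file payload to
    disk as it arrives instead of buffering the whole POST in RAM."""

    def __init__(self, boundary: bytes, out_path: str):
        self.delim = b"\r\n--" + boundary  # marks the end of a part's body
        self.out = open(out_path, "wb")
        self.tail = b""                    # carry-over bytes, in case the
                                           # delimiter straddles two chunks
        self.done = False

    def on_body_data(self, chunk: bytes):
        if self.done:
            return
        data = self.tail + chunk
        end = data.find(self.delim)
        if end >= 0:
            self.out.write(data[:end])     # file payload ends here
            self.out.close()
            self.done = True
        else:
            # hold back enough bytes that a delimiter split across two
            # network chunks is still found on the next call
            keep = len(self.delim) - 1
            if len(data) > keep:
                self.out.write(data[:-keep])
                self.tail = data[-keep:]
            else:
                self.tail = data
```

The `tail` carry-over is exactly the kind of edge case I mean - get it wrong and a boundary arriving split across two packets silently corrupts the saved file.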
Clearly RAM is at a premium, especially if folk start uploading very large files to the server (potentially crashing it if it runs out of RAM). I have started implementing a system that would allow you to set the maximum size of a POST / incoming file (thus at least avoiding part of the problem), but what you are suggesting is somewhat more complex.
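The size cap itself is conceptually simple - count bytes as they arrive and refuse the POST once it passes the limit. A rough Python sketch of the idea (the 20 MB figure and the names are placeholders, not an actual NT setting):

```python
MAX_POST_BYTES = 20 * 1024 * 1024  # hypothetical 20 MB cap

class BoundedPostBuffer:
    """Sketch: count bytes as they arrive and refuse a POST that
    exceeds the configured maximum, before it can exhaust RAM."""

    def __init__(self, limit: int = MAX_POST_BYTES):
        self.limit = limit
        self.received = 0
        self.parts = []

    def on_data(self, chunk: bytes):
        self.received += len(chunk)
        if self.received > self.limit:
            self.parts.clear()  # free what has been buffered so far
            raise ValueError("POST exceeds maximum size (server would answer 413)")
        self.parts.append(chunk)
```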
Probably an easier solution is to stream a large POST, as-is, to a file while it is coming in - but then it still needs to be parsed and so on, and it may be hard to do that without bringing it into memory at some point.
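As a rough illustration of that approach, Python's standard library happens to have a SpooledTemporaryFile that keeps small payloads in memory and transparently rolls over to disk past a threshold; something similar could be done in any language. Here `read_chunk` is a stand-in for however the socket delivers data:

```python
import tempfile

def spool_post_to_disk(read_chunk, spool_threshold=1 * 1024 * 1024):
    """Sketch of the 'stream the whole POST to a file' idea: stays in
    memory for small posts, rolls over to disk past the threshold."""
    spool = tempfile.SpooledTemporaryFile(max_size=spool_threshold)
    while True:
        chunk = read_chunk()
        if not chunk:  # empty read signals end of the POST
            break
        spool.write(chunk)
    spool.seek(0)
    return spool  # parse from here in bounded-size reads, not all at once
```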
Never say never, though - perhaps it is possible to handle large posts in a separate (memory-friendly) way. It's certainly something I'll think about.
cheers
Bruce