>> If this is correct and we were contemplating deploying with say 2000 concurrent users then that would seem to imply that we would need lots of smaller servers
No, this is an incorrect conclusion. Remember we're not talking about RAM per _user_ here, but RAM per thread. A thread lasts around one tenth of a second, so if you had support for, say, 100 threads, that's roughly 1000 requests per second. Your 2000 users would each need to be generating a request every 2 seconds to get near that number.
If you planned to have 2000 users "using" the system (at the "same time"), then let's assume they each do something every 10 seconds. In truth that's a LOT faster than they're likely to use it. So that's about 200 requests per second.
But if the 2000 users are really spread out over the day, each using it for, say, 2 hours, then you can divide that by 5 again (assuming a 10-hour day) - so around 40 requests per second.
In other words, what you're really planning for is "requests per second" - and that has to do with usage patterns, not the absolute number of users.
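To make that arithmetic concrete, here's a quick back-of-envelope sketch (plain Python, nothing Clarion- or NetTalk-specific). The user counts, intervals, and thread lifetime are the assumed numbers from the paragraphs above, not measurements:

```python
# Turning "number of users" into the number that actually matters:
# requests per second. All figures are assumptions from the discussion.

def requests_per_second(users, seconds_between_requests):
    """Average request rate for users who each fire a request
    every `seconds_between_requests` seconds."""
    return users / seconds_between_requests

# Worst case: 2000 users all generating a request every 2 seconds.
print(requests_per_second(2000, 2))    # 1000.0 - the capacity ceiling

# 2000 "concurrent" users doing something every 10 seconds.
print(requests_per_second(2000, 10))   # 200.0

# Spread over a 10-hour day with each user active for 2 hours,
# only about a fifth of them are active at any moment.
print(requests_per_second(2000 // 5, 10))  # 40.0

# Capacity side: if a thread lives ~0.1 sec, then 100 threads
# handle about 100 / 0.1 = 1000 requests per second.
threads, thread_lifetime_secs = 100, 0.1
print(threads / thread_lifetime_secs)  # 1000.0
```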
But wait, there's more.
Each _process_ on the computer can consume up to 2 gigs of RAM, but you can run multiple processes on the same box. For example, if your box had multiple IP addresses you could use
www.whatever.com and www2.whatever.com, and then run a copy of the exe for each name, each bound to its own IP address. So you don't necessarily need multiple servers to support multiple instances of the EXE.
If you can't get multiple IP addresses, then multiple ports can be used (although that's not as "clean").
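To illustrate the multiple-instances idea in a runnable form (plain Python standing in for multiple copies of a web server exe; the loopback addresses and ports are made-up examples), here's a minimal sketch of the same handler bound to two different addresses. In real life each binding would be its own process, so each gets its own 2 gig address space:

```python
# Sketch: one handler, two bindings - in production these would be two
# separate processes, e.g. one exe bound per IP address or port.
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        host, port = self.server.server_address
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"served by the instance on %s:%d\n"
                         % (host.encode(), port))

bindings = [("127.0.0.1", 8080),   # the "www" instance
            ("127.0.0.1", 8081)]   # the "www2" instance (2nd IP or port)

for addr in bindings:
    httpd = HTTPServer(addr, Hello)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    print("listening on %s:%d" % addr)

input("press Enter to stop\n")
```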
>> But I agree, we need a 64 bit application and not having one is a negative. I will write to SV.
Be careful of over-worrying about this limitation at this point in time. I've mentioned it because it's the only "absolute" limit that I'm aware of (everything else ultimately depends on the power of the box and the size of your bandwidth).
Our server is currently handling around 20,000 hits per day, and it barely gets above idle. (We have a large number of devices that "phone home" with diagnostic and backup information every day, or sometimes multiple times a day.)
>> Remember, there's only one program shared by all and if a new version requires changes to existing data bases you must make the changes "all at once" if possible. Otherwise you'll have to work out a plan for doing it in a way that doesn't disturb the customers (too much).
In practical terms, timing is everything, and some of the other effects of scale ameliorate this to some extent. For example:
Client A now has (effectively) their own DB and server instance, so they can be updated independently of the others, at a time most suitable for them.
Bear in mind that Clarion allows the program to use a "subset" of the fields on the server (in SQL). So if you need a new field, say BX, it can be added to the databases (using a simple SQL script) well before you actually deploy the new web server exe.
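As a concrete (if hypothetical) example of that "add the column early" step, here's a sketch using Python's built-in sqlite3 so it runs anywhere; against your real backend it would just be the equivalent ALTER statement in a script. The Customers table and the BX column type are made-up names for illustration:

```python
# Sketch: add the new BX column well before the new exe ships.
# Old exes keep working because they only SELECT the subset of
# fields they were built with.
import sqlite3

con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS Customers (Id INTEGER, Name TEXT)")

try:
    # The "simple SQL script" part:
    con.execute("ALTER TABLE Customers ADD COLUMN BX TEXT")
except sqlite3.OperationalError:
    pass  # column was already added on an earlier run

con.commit()
con.close()
```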
If you have a load-balancer then you can do a rolling update (sketched after the list):
a) update the backends so they are ready
b) take one server offline, update the exe, restart the exe
c) rinse and repeat with the other servers.
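Sketched as a loop (plain Python; the backend names and the load-balancer calls are all placeholders - real LBs each have their own admin API or config-reload mechanism):

```python
# Hypothetical rolling update across the backends behind a load balancer.
# Assumes step (a) is done: databases and new exes are already staged.
import time

BACKENDS = ["web1", "web2", "web3"]   # made-up server names

def set_backend_enabled(name, enabled):
    """Stub: tell the load balancer to include or drain this backend."""
    print(("enabling" if enabled else "draining"), name)

def update_backend(name):
    """Stub: stop the service, copy the pre-staged exe, start it again."""
    print("swapping exe on", name)

for name in BACKENDS:
    set_backend_enabled(name, False)  # b) take one server offline
    time.sleep(5)                     #    let in-flight requests finish
    update_backend(name)              #    update the exe, restart it
    set_backend_enabled(name, True)   # c) rinse and repeat with the rest
```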
If you have a single web server then yes, at some point you need to:
a) stop it
b) copy over the new exe
c) restart it.
This can happen quite quickly (within a few seconds), especially if the data is "pre-prepared". And it's possible to "dump" the sessionqueue to, say, an XML file when closing the server, and re-load it on startup, so users don't actually notice the changeover (even if they are literally doing something at the time).
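NetTalk's sessionqueue has its own structure, so this is just the shape of the idea in plain Python: serialise the live sessions to XML on shutdown, read them back on startup, and nobody loses their place. The session fields are made-up examples:

```python
# Sketch: dump sessions to XML on close, reload them on open.
import xml.etree.ElementTree as ET

def save_sessions(sessions, path="sessions.xml"):
    root = ET.Element("sessions")
    for sid, data in sessions.items():
        node = ET.SubElement(root, "session", id=sid)
        for key, value in data.items():
            ET.SubElement(node, "value", name=key).text = str(value)
    ET.ElementTree(root).write(path)

def load_sessions(path="sessions.xml"):
    sessions = {}
    for node in ET.parse(path).getroot():
        sessions[node.get("id")] = {v.get("name"): v.text for v in node}
    return sessions

# On shutdown:
save_sessions({"abc123": {"user": "fred", "page": "orders"}})
# On the next startup:
assert load_sessions() == {"abc123": {"user": "fred", "page": "orders"}}
```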
Of course getting this fancy presupposes that you don't have "quiet times", perhaps weekends or at night, when pretty much no-one uses the server anyway.
In short - it's not that hard to handle this case, and if you are scaling up that large there are pretty straightforward things you can do to make it relatively comfortable.
cheers
Bruce