> A general question in regard to HTTP (web) servers and how they perform
> on multicore hardware.
> The most popular web servers are Apache and Microsoft IIS. Is anyone
> here involved with heavy-traffic web sites who has had problems with
> web server reliability, scalability, or heavy load?
For static content the choice of web server is largely irrelevant: to serve
high-volume static content you use a content-delivery network, and CDNs
typically run neither Apache nor IIS; nor do you care what systems they use.
> What about dynamic content and
> the back-ends that support it, i.e. CGI, Server Pages, etc. Which
> technology is the most practical and scalable?
Dynamic content is what you actually care about. In practice, the
application-server "front end" scales pretty well, since it is essentially
stateless: all it really has to do is issue requests to middle-tier /
back-end systems and stick bits of HTML together with a bit of application
logic. This work tends to be memory-intensive rather than CPU-intensive, so
you end up buying boxes with relatively few cores and a big load of RAM.
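To make that concrete, here is a minimal sketch of what a front-end tier does; the service names and request shape are invented for illustration, not from any real product. Because each request is handled independently, you scale this tier by just putting more identical boxes behind a load balancer:

```python
# Sketch of a stateless "front end": fan out to (hypothetical) middle-tier
# services, then glue the returned HTML fragments together.

def fetch_fragment(service, request):
    # Stand-in for an RPC/HTTP call to a middle-tier service; in a real
    # system this call is where the interesting scaling problems live.
    return "<div>%s:%s</div>" % (service, request["user"])

def render_page(request):
    # A bit of application logic plus string concatenation -- that is
    # essentially the front end's whole job.
    fragments = [fetch_fragment(s, request)
                 for s in ("header", "cart", "footer")]
    return "<html><body>%s</body></html>" % "".join(fragments)

print(render_page({"user": "alice"}))
```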
The trick really is scaling the middle tier and the databases, which is
nowhere near as obvious. Anything that involves taking distributed locks
tends to break in a big enough system; you just can't do it scalably and
reliably. Likewise, if you want strict consistency, there is a limit to how
big the system can get. How big? Probably bigger than you care about (unless
you're Facebook, Google, etc.).
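One common way around distributed locks is optimistic concurrency: nobody blocks on a lock, and a writer that lost the race simply retries against a version check. The sketch below is illustrative only; a single in-process dict stands in for a replicated store, and the names are made up:

```python
# Optimistic concurrency sketch: compare-and-set on a version number
# instead of holding a (distributed) lock.

class VersionedStore:
    def __init__(self):
        self._data = {}  # key -> (version, value)

    def read(self, key):
        return self._data.get(key, (0, None))

    def compare_and_set(self, key, expected_version, value):
        version, _ = self._data.get(key, (0, None))
        if version != expected_version:
            return False  # someone else wrote first; the caller retries
        self._data[key] = (version + 1, value)
        return True

def increment(store, key):
    # Retry loop instead of lock acquisition: read, modify, CAS.
    while True:
        version, value = store.read(key)
        if store.compare_and_set(key, version, (value or 0) + 1):
            return
```

The point is that contention costs a retry rather than a blocked lock holder, which fails much more gracefully as the system grows.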
Conveniently, the sites whose back-end scaling requirements exceed what
"normal" database systems can deliver usually don't have strict consistency
requirements either, which works out quite nicely for them.
RDBMSs that support ACID transactions, such as Oracle and MySQL, are geared
towards high-consistency rather than high-scale applications. There are a
lot of technologies for handling less consistent ("eventually consistent")
applications; too many, in fact: there are no standards and it's all a bit
confusing.
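To show what "eventual consistency" buys you, here is a sketch of a grow-only counter, one of the standard tricks in this space (a simple CRDT). Each replica increments only its own slot, and merging two replicas is a per-slot max, so replicas can sync in any order and still converge. This is a generic illustration, not any particular product's API:

```python
# Grow-only counter: per-replica slots, merged by taking the max per slot.

def increment(counter, replica_id):
    counter[replica_id] = counter.get(replica_id, 0) + 1

def merge(a, b):
    # Per-slot max is commutative, associative, and idempotent -- the
    # three properties that make "sync whenever, in any order" safe.
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in set(a) | set(b)}

def value(counter):
    return sum(counter.values())
```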
Apologies if I have digressed somewhat :)