[MUD-Dev] Re: TECH: Distributed Muds

Brian Hook bwh at wksoftware.com
Fri Apr 20 12:20:58 CEST 2001


At 12:02 PM 4/20/01 -0400, Derek Snider wrote:

> I don't see how you lose scalability with a system that was
> specifically designed to be scalable.

You're scalable up to the limits defined by the hardware architecture.
Once you reach that point, you need to buy another mega-uber-system.
The level of granularity is extremely coarse, even though this may not
be what you'd intuitively expect.

Let's say you design your server to support 1000 users on an
8-processor SMP system.  Later on you find that you need to support
more like 1300 users.  You either buy another SMP system (possibly
with just 2 processors that you expand later) or you cap the load on
that system.
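
To put rough numbers on that granularity (back-of-the-envelope only;
the per-processor figure is just the 1000/8 from above, and it
assumes linear scaling, which SMP rarely delivers):

    # Rough capacity math for the SMP example above.
    USERS_PER_PROC = 1000 // 8          # 125 users per processor

    def procs_needed(users):
        return -(-users // USERS_PER_PROC)   # ceiling division

    print(procs_needed(1300))   # 11 -- an 8-way box plus at least
                                # a 3-way (realistically a 4-way),
                                # or you cap the original system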

If you architect everything from the outset to use TCP/IP as your
backbone, you suffer some obvious inefficiencies, but at least you've
architected things such that adding a new resource means plugging
another machine into a subnet.
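
As a toy illustration of what "plugging in a new machine" can look
like, here's a sketch of a registry that new game-server nodes
announce themselves to over TCP; the port number and the one-line
protocol are invented for the example:

    import socket

    REGISTRY_PORT = 7000     # made up for this sketch
    workers = []             # (host, port) of registered game nodes

    def run_registry():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", REGISTRY_PORT))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            # A new box announces the port its game service uses.
            port = int(conn.recv(16).decode().strip())
            workers.append((addr[0], port))
            conn.sendall(b"OK\n")
            conn.close()
            # Adding capacity is now just another entry in the pool.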

> Well, we were talking about a system with a high maximum (512) or
> hopefully no maximum.

There are very, very few systems that offer scalability that high.
That's entering supercomputer territory, not just enterprise servers.
The turnaround time from order to delivery on those is extremely
long, and they tend to run less-than-mainstream operating systems.

The high-end Sun HPC systems have relatively slow processors and scale
up to 64 processors.  The new IBM 430 also caps out at 64 processors.
The IBM zSeries is primarily cluster based.  HP and Compaq are pretty
much out of the race.  SGI's 3800 caps out at 512 processors, but that
is NOT a cheap system (many millions of dollars) and the turnaround
for assembly is fairly long.

And, of course, presumably you have this money to buy, maintain and
develop for the system before you have a commercial product to provide
income.

Also, in terms of clock speed, the high-end SMP systems lag behind
commodity products fairly significantly (no, clock speed isn't
everything, but the difference between a 333MHz UltraSPARC and even a
750MHz Xeon is definitely felt).

> A system built to handle 100,000 simultaneous players shouldn't
> require all 512 processors.

You'd be surprised.

> If you have a sudden need for resources, it is doubtful that you
> would remove those resources when demand goes down -- you know that
> demand is likely to return at some point in time, and it's better to
> have extra, than to not have enough.

That's not necessarily true.  If you have systems that can handle
typical peak loads, but you know you may encounter flash crowds of
heavy traffic (e.g. in EQ when a new server goes up), you could
reroute resources to that cluster until the furor dies down, then
remove them.
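
In code terms, something like this sketch, where the cluster and node
names are purely illustrative:

    class Cluster:
        def __init__(self, nodes):
            self.nodes = set(nodes)

        def lend(self, other, node):
            # Move a node's worth of capacity to the busier cluster.
            self.nodes.remove(node)
            other.nodes.add(node)

    stable    = Cluster({"srv1", "srv2", "srv3", "srv4"})
    new_shard = Cluster({"srv5"})

    stable.lend(new_shard, "srv4")    # flash crowd at launch
    # ... furor dies down ...
    new_shard.lend(stable, "srv4")    # reclaim the borrowed node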

Obviously having maxed-out resources at all times would be ideal, as
would a 1024-processor supercomputer, but neither is always a
practical solution.

The _practical_ solution, to me, is to target clusters, because they
can scale from 1 to N machines.  Not to mention they force you to be
aware of implicit assumptions about topology, bandwidth and latency
from the get-go.
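
For instance, the dumbest workable way to let the same code run on 1
machine or N is to shard players by a stable hash (a generic
technique, nothing exotic):

    import zlib

    NODES = ["mud0", "mud1", "mud2"]   # grows as you plug in boxes

    def node_for(player_name):
        # crc32 is stable across runs, unlike the built-in hash()
        return NODES[zlib.crc32(player_name.encode()) % len(NODES)]

    print(node_for("Bubba"))   # a player always maps to the same node

Note the catch: changing len(NODES) remaps almost every player, which
is exactly the kind of topology assumption this approach forces you
to confront up front (consistent hashing is the usual fix).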

Brian Hook




