[MUD-Dev] Re: MUD Development Digest
J C Lawrence
claw at under.engr.sgi.com
Wed Apr 8 13:22:21 CEST 1998
On Sat, 4 Apr 1998 09:39:55 PST8PDT
Justin McKinnerney <xymox at toon.org> wrote:
> First, I agree with your first comment. I have heard people claim
> their disk-based muds run faster than memory-based, but this is all
> semantics. If a disk-based game does run notably faster than its
> memory-based equivalent, it's simply due to the fact that the
> operating system is doing a better job of memory management (perhaps
> doing cache alignment, for instance), which wouldn't be surprising
> considering some of the code for MUDs I have seen.
No. Typically it has nothing to do with the OS's memory support,
but with working __around__ the OS's memory model, as it is
inefficient for the task at hand. The performance gains are due to
the fact that the disk-based game uses intelligent caching, which
holds the game's working set to a (reasonably) minimal number of
pages. It's not very difficult to put the entire world in RAM and then
chew up all your performance with page faults -- or to get it back
again by accepting some disk IO but losing the constant page faults.
> I agree with you that a RAID is a good thing to have, but I do not
> believe that it will completely eliminate the possible need for a
> separate process to do disk IO on a large multiuser system. While
> it's true most RAID systems have hardware-level memory caching, the
> operating system still blocks all accesses to the hardware. This
> causes a sleep state in your process, and if you have enough access
> going on, even a large buffer can be heavily burdened.
Not if you configure the RAID sub-system to do lazy writes (i.e. report
success back to the caller as soon as the data is received, not after
it is written). At that point the latency of the IO calls is strictly
controlled by the latency of your communication stream with the RAID
sub-system and has no dependency on the physical disk speed at all
(presuming you are not saturating the cache at the disk end -- which
you'd better not be; if you are, you need a bigger cache).
Note also that several of the more intelligent OS/disk support designs
allow disk IO to be multiplexed at the user-space call level, only
serialising the calls (if at all) -- or the data streams, as by then
they are cache-derived -- as they head down the wire (not necessary
under SCSI). Ergo no contention.
> Going with a separate process for all blocking IO guarantees that you
> will not lose any processor cycles due to sleep states you did not
> design your application to have. This enables you to continue
> executing, say, the command of another user while the disk IO for
> the first user is being processed. Then once you've gone through your
> round-robin queue (or however you set it up) and come back to the
> first user, the disk IO should be ready.
<kof>
> And remember! Non-blocking synchronous IO is your friend. :)
True.
>> -----Original Message-----
Writing as list owner:
Please use the standard quoting format of this list and intersperse
your new text with selected quotes from the original message (example
above). Quoting the entire message again is a waste of bandwidth and
is *NOISE*.
--
J C Lawrence Internet: claw at null.net
(Contractor) Internet: coder at ibm.net
---------(*) Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...