[MUD-Dev] Re: MUD Development Digest
Justin McKinnerney
xymox at toon.org
Mon Apr 6 08:29:56 CEST 1998
> -----Original Message-----
> From: mud-dev [mailto:mud-dev at null.net]On Behalf Of Nathan F Yospe
> Sent: Sunday, April 05, 1998 7:46 PM
> To: mud-dev at null.net
> Subject: [MUD-Dev] Re: MUD Development Digest
>
> [Stuff deleted]
>
> Just to note that I use a mixed approach, with my own memory management...
I
> have not gotten enough of the engine done to give it a real test, but...
it
> seems that reliance of system paging is a doomed approach, unless you are
> very careful about how you allocate memory. Likewise, writing small bits
to
> disk is going to slow everything down. This can be sidestepped by
offloading
> the writing of small bits to a seperate thread, but... writing big chunks
is
> nice.
How would relying on system paging be a doomed approach, exactly?
It seems to me that dealing with memory management at user level rather than
allowing the system to do it at kernel level would be far less efficient,
especially on the more proven operating systems (such as Solaris or IRIX).
Besides, unless you know for certain that the total size of all running
processes (plus whatever tables the kernel is handling) is definitely less
than total physical memory (meaning you should give yourself 8-16 megs of
slack in most operating systems for the file system itself), it seems to me
a pipe dream to try to make sure everything stays running in memory only.
And even if you do keep it smaller, many UN*X implementations are smart
about paging out "dead" memory to keep it free for running processes. The
only exception I can think of would be Linux, where I don't think any smart
paging is done to keep memory clear for running or potential new processes
that may need it (it only pages when forced, unless smart paging is
something in the 2.1 kernel?).
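For what it's worth, here is roughly what "letting the kernel do it" can
look like in code: map a big region and hand the pager hints with madvise(2)
instead of shuffling memory around yourself. This is only a sketch; the
sizes are made up, and some flavors spell the flag MAP_ANON rather than
MAP_ANONYMOUS.

/* Sketch: lean on the kernel's pager, just give it hints. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;      /* 64 megs of world data (made up) */
    char *world = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (world == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    world[0] = 1;                       /* touch a page so it is resident */

    /* A zone goes idle: tell the kernel these pages are "dead" and let
     * it reclaim them for other processes as it sees fit. */
    if (madvise(world, 16 * 1024 * 1024, MADV_DONTNEED) == -1)
        perror("madvise");

    munmap(world, len);
    return 0;
}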
> [Stuff deleted]
>
> I'm not sure what version of UN*X you are using... or which version of NT,
> for that matter! This is quite the opposite of my experiences with
> commercial, network serving operating systems. The threads in IRIX 6.2 seem
> to work quite well, as do these Solaris threads. Heck, even the ones in the
> developer's version of Rhapsody I've test-run worked. The ones in NT 4.0 had
> some itty bitty problems... like the eleventh thread and on never getting a
> reasonable share of the processor, and in fact never getting ANY of it
> unless one of the others deliberately gives it up. But this is NOT the place
> for an OS war... one simple statement should suffice: If you are on Solaris,
> use threads. On IRIX, go ahead and use them, you might not be able to tell
> if they are threads or processes (sorry, J C, but that's been my
> experience), and on Linux... pray. Can't speak for the others.
Yes, I agree that Solaris and IRIX threads are quite nice. However, that is
not always the case elsewhere. Plus, the threads for those respective
operating systems are only supported by their respective compilers (to my
knowledge, GCC does not support the thread libraries on those platforms).
And personally, I'm a big fan of GCC.
Outside of that, however, there are other UN*X variants which aren't so nice
about threads. And some, while they are heading that way (Linux), still
aren't quite there.
Threads quite often also make debugging something of an adventure. This is
actually something I'm currently dealing with in my work on Flight Unlimited
2 (getting threadlock under one compiler and a complete bailout under the
other).
Threads *can* be quite a good thing at times, but I tend to avoid them
unless there is some specific purpose I can put them to. In this case, using
separate processes for blocking IO calls is not uncommon in any way, and is
quite proven to do the job rather nicely. So I really don't see the point in
going so ballistic on optimization that sanity is put at risk. I see this
all the time in the game industry, and it's one of those things you would
think experience would have argued against by now. :)
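To make that concrete, here is a rough sketch of the separate-process
pattern (the file name and buffer size are invented): the child owns the
blocking disk writes, and the parent just drops small bits into a pipe and
goes back to serving players.

/* Sketch: offload blocking disk writes to a child process via a pipe. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    int pfd[2];
    pid_t pid;

    if (pipe(pfd) == -1) { perror("pipe"); exit(1); }
    if ((pid = fork()) == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                     /* child: do the blocking IO */
        char buf[512];
        ssize_t n;
        int fd = open("mudlog.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd == -1) { perror("open"); _exit(1); }
        close(pfd[1]);                  /* child only reads the pipe */
        while ((n = read(pfd[0], buf, sizeof buf)) > 0)
            write(fd, buf, n);          /* blocks here, not in the parent */
        close(fd);
        _exit(0);
    }

    close(pfd[0]);                      /* parent only writes the pipe */
    write(pfd[1], "player state chunk\n", 19);
    /* ...back to the main loop; the child drains the pipe at its leisure. */
    close(pfd[1]);                      /* EOF lets the child finish */
    waitpid(pid, NULL, 0);
    return 0;
}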
As for NT, my statement wasn't that their threads are particularly good in
any way, just that attempting to port a multi-process application to NT is a
pain in the <insert favorite exclamatory here>. So you're probably safer
trying to use threads under NT than trying to use their rather broken IPC
implementation.
> [Stuff deleted]
>
> There are non-blocking disk access calls for most flavors of unix...
It's my understanding that the way this works, under those UN*X flavors
which support it, is that a request is made to the file system, at which
point whatever part of the file is already cached is returned, and a request
is queued to cache the remainder of the file.
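If I had to guess at an interface, the POSIX aio(3) calls are one such
family (whether they're the calls you mean is an assumption on my part). A
quick sketch, with an invented file name; some systems want -lrt at link
time:

/* Sketch: queue a read, keep running, poll for completion. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    struct aiocb cb;
    int fd = open("worldmap.dat", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) == -1) { perror("aio_read"); return 1; }

    while (aio_error(&cb) == EINPROGRESS)
        ;   /* the game loop would run here instead of spinning */

    printf("read %ld bytes without stalling the main loop\n",
           (long) aio_return(&cb));
    close(fd);
    return 0;
}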
This seems a bit risky and has the potential of taking a long time to read a
file, since the file cache is generally the first thing to be evicted from
memory when memory is needed (and it is simply dropped rather than swapped,
as paging the file cache out would be silly).
Yes, this may be reasonable with a RAID (as the file cache on the hardware
is not likely to go anywhere), but the data still has to be brought into
physical memory before it is returned to user space, and if you are eating
enough physical memory, once again you are left with the same problem.
I would sooner recommend blocking file IO on a thread than non-blocking file
IO. But my recommendation of blocking IO in a separate process still stands,
as non-blocking synchronous IPC is completely risk-free. :)
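Something along these lines is what I mean by blocking file IO on a thread
(a sketch only; the job structure, file name, and polling protocol are all
invented):

/* Sketch: blocking read(2) pushed onto a worker thread; the main loop
 * polls a flag instead of stalling. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct io_job {
    const char     *path;
    char            buf[4096];
    ssize_t         nread;
    int             done;               /* worker sets this when finished */
    pthread_mutex_t lock;
};

static void *io_worker(void *arg)
{
    struct io_job *job = arg;
    int fd = open(job->path, O_RDONLY);
    ssize_t n = (fd == -1) ? -1 : read(fd, job->buf, sizeof job->buf);
    if (fd != -1)
        close(fd);

    pthread_mutex_lock(&job->lock);     /* publish the result safely */
    job->nread = n;
    job->done = 1;
    pthread_mutex_unlock(&job->lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct io_job job;

    job.path = "zone42.dat";
    job.done = 0;
    pthread_mutex_init(&job.lock, NULL);
    pthread_create(&tid, NULL, io_worker, &job);

    /* ...run the game loop, checking job.done under job.lock each tick... */

    pthread_join(tid, NULL);            /* or keep polling, never block */
    printf("loaded %ld bytes in the background\n", (long) job.nread);
    return 0;
}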
- Justin -
> Nathan F. Yospe - Aimed High, Crashed Hard, In the Hanger, Back Flying Soon
> Jr Software Engineer, Textron Systems Division (On loan to Rocketdyne Tech)
> (Temporarily on Hold) Student, University of Hawaii at Manoa, Physics Dept.
> yospe#hawaii.edu nyospe#premier.mhpcc.af.mil http://www2.hawaii.edu/~yospe/