[MUD-Dev] Multi-threading ( was: TECH DGN: Re: a few mud server design questions (long))

Sean Kelly sean at ffwd.cx
Mon Jul 30 20:42:57 CEST 2001


From: "Jon Lambert" <tychomud at ix.netcom.com>
>
> The ideal is NOT "one thread per CPU" any more than the ideal is "one
> process per CPU".  The ideal is that a CPU should always have a unit
> of work available and ready to run.  No more than one unit of work
> and it should be ripe for the plucking at the exact moment when the
> CPU becomes available.  Naturally we're always going to fall short
> of that ideal.  :-)

Good point.  A more careful restatement of what I'd intended would be: one
active thread per process.  The idea is to avoid unnecessary
context-switching, not pine for the days of DOS :)

> What you want
> to maximize is utilization.  And that occurs when there are enough
> threads to hide latency not cause latency.  It's more than 1, and
> it's not like you have a real choice in the matter of context
> switches anyways if you're running NT, *nix, VMS or OS/390.  You
> probably run them because they context switch. :-P

Definitely.  Pre-emptive multithreading is a Good Thing.  My original goal
was not to discount it so much as to advise against unnecessarily dividing
tasks.  In a single-CPU system, the only difference between doing a bunch of
tasks sequentially and doing them "simultaneously" within a process is that
in the latter case the CPU breaks the tasks into little bits which it
executes sequentially, with a measurable cost to switch between tasks.  So
the total time it takes to complete the "simultaneous" processing is
slightly greater than the total time to complete the same work in
explicitly sequential order.
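
To put numbers on that, here's a minimal sketch of my own (modern C++ with
std::thread, so purely an illustration) that times the same CPU-bound work
run sequentially and then split across threads.  On a single CPU, or with
the process pinned to one core, the threaded run should come out slightly
slower, since the work is merely interleaved with scheduling overhead added;
on a multi-core box it will of course win.

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hypothetical CPU-bound "task": burn cycles and publish the result so
    // the compiler can't optimize the loop away.
    static volatile unsigned long sink;
    static void burn_cycles() {
        unsigned long x = 0;
        for (unsigned long i = 0; i < 50000000UL; ++i)
            x += i * i;
        sink = x;
    }

    int main() {
        using steady = std::chrono::steady_clock;
        const int kTasks = 4;

        // Run the tasks one after another.
        auto t0 = steady::now();
        for (int i = 0; i < kTasks; ++i)
            burn_cycles();
        auto seq = steady::now() - t0;

        // Run the same tasks "simultaneously" in separate threads.
        t0 = steady::now();
        std::vector<std::thread> threads;
        for (int i = 0; i < kTasks; ++i)
            threads.emplace_back(burn_cycles);
        for (auto& t : threads)
            t.join();
        auto par = steady::now() - t0;

        auto ms = [](steady::duration d) {
            return (long long)std::chrono::duration_cast<
                std::chrono::milliseconds>(d).count();
        };
        std::printf("sequential: %lld ms, threaded: %lld ms\n",
                    ms(seq), ms(par));
    }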

But things get sticky once you consider that some operations involve
significant delays (disk and database lookups) during which the program could
be doing other things, and that sometimes the illusion of
simultaneity is more important than total processing time.
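
For instance, here's a small sketch of hiding that kind of latency.  The
slow_lookup function and its 200 ms sleep are just stand-ins for a real disk
or database hit, not anything from an actual server:

    #include <chrono>
    #include <cstdio>
    #include <future>
    #include <string>
    #include <thread>

    // Hypothetical blocking lookup with significant latency.
    static std::string slow_lookup(const std::string& key) {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        return "value-for-" + key;
    }

    int main() {
        // Kick off the lookup on another thread instead of blocking on it.
        auto pending = std::async(std::launch::async, slow_lookup,
                                  std::string("player42"));

        // Keep the "game loop" busy while the lookup is in flight.
        for (int tick = 0; tick < 10; ++tick) {
            std::printf("tick %d\n", tick);
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }

        // Collect the result; most of the 200 ms wait was hidden by the
        // work above.
        std::printf("lookup result: %s\n", pending.get().c_str());
    }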

> The measure of how overloaded a system is is the length of the run
> queue, not the number of threads or processes in the system.  Most of
> our desktops will rarely hit a run queue length of over 2.  Right
> now as I'm typing this there are 119 threads running on my system
> with an average run queue length of slightly less than 1.  CPU is
> trivial, around 5%.  4% of that is the mud, BTW.

Yup.  The thread-pool scheme is powerful and mimics this behavior --
generally you've only got one thread running at a given time while all the
others are waiting for something to finish.

> Now if one designs an application in such a way as to attempt to
> guarantee that all threads are busy then that design will likely
> guarantee the worst possible performance on a single-processor
> machine.  That's an embarrassingly parallel case.

Exactly.  And this is often the situation programmers envision when they
first learn about multithreading.  "Wow, I can devote a separate thread to
every task in the system!"  This isn't even considering the problem of
synchronization.

> I use thread pooling for ALL operations that do I/O whether it be
> database or network I/O.  It's not one thread per user or one thread
> per I/O request, it's a fixed pool of N pre-created threads serving
> all the requests for a particular class of I/O.  N is of course
> something that can be tuned depending on where and what I eventually
> end up running it on.  :-)

Heck yes.  Thread pooling is good.  I'll say it again: "thread pooling is
Good."

>   "Threads are for people who understand how their apps should work
>   better than the brain dead OS does." - me

Exactly :)  Which is why my first advice to someone who mentions threads is
"hold your horses, do you *really* need threads to do this?"  It's one thing
to know what they are.  It's another to use them correctly.


Sean

_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev
