[MUD-Dev] Multi-threaded mud server.

Caliban Tiresias Darklock caliban at darklock.com
Mon May 17 03:13:24 CEST 1999


On 05:23 AM 5/17/99 +0000, I personally witnessed Ross Nicoll jumping up to
say:
>On Sun, 16 May 1999, Caliban Tiresias Darklock wrote:
>
>> So your context switching immediately goes through the roof, doesn't it? 
>
>Sorry, this is me not explaining what I meant very well, I think. As I
>understood it, most MUDs allocated a new thread per user. However, your
>later suggestions do seem definitely better than mine...

I think that's the most obvious initial model to want when you do this sort
of thing, but I also think it's been beaten to death on the
context-switching issue. Consider forty players on forty threads: if ten of
them get together and talk, and each says something in sequence, you have
ONE HUNDRED context switches. One for each of the nine players who hear the
statement, and one to return success to the speaker's thread, making ten
context switches per statement times ten statements. Yuck yuck
yuck.
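A back-of-the-envelope sketch of that arithmetic (just a toy cost model;
the function names are invented for illustration):

```python
# Hypothetical cost model for a thread-per-player server: delivering a
# statement to each listener wakes that player's thread (one context
# switch), and one more switch hands control back to the speaker.

def switches_per_statement(listeners):
    return listeners + 1  # one per listener, plus the return switch

def switches_for_conversation(players):
    # Each player says one statement in sequence; the other players
    # all hear it.
    return players * switches_per_statement(players - 1)

print(switches_for_conversation(10))  # ten talkers -> 100 switches
```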

Now, the way I understood your approach was more like: if you have forty
players online, ten in each of four threads, then if the ten players in
thread 1 all go talk to one another in the same place: NO context
switching. All in the same thread. Big win. But if three from thread 1 and
three from thread 2 and two each from threads 3 and 4 go talk, you have
four context switches per statement. Big win over the thread-per-player
situation, certainly, but still problem-ridden and still a big loss over
all players being in one thread.
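The same toy model, extended to players grouped onto threads (again a
sketch only; the thread assignments below are invented):

```python
# When a statement's speaker and listeners span several threads, each
# *other* thread involved costs one switch to deliver, plus one switch
# back to the speaker's thread. If everyone shares one thread, nothing
# switches at all.

def switches_per_statement(thread_ids):
    threads = set(thread_ids)        # threads hosting speaker + listeners
    if len(threads) == 1:
        return 0                     # all in the speaker's thread: free
    return (len(threads) - 1) + 1    # deliver to the others, then return

print(switches_per_statement([1] * 10))                        # -> 0
print(switches_per_statement([1, 1, 1, 2, 2, 2, 3, 3, 4, 4]))  # -> 4
```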

>> If I'm correct in this understanding, and please correct me if I'm not,
>> then the "proper" way to implement such a scheme would be for each thread
>> to accomplish specific *tasks* instead of handling specific *objects*. 
>
>An interesting idea, but I think it would require too much inter-process
>communication, and in particular may not load balance very well...

If amount of IPC is a problem, why not stay monolithic and not have any? ;)	
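For what it's worth, the monolithic version really can be tiny. A minimal
single-threaded sketch using Python's stdlib selectors module (the port,
buffer size, and `outboxes` bookkeeping are all invented for illustration):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
outboxes = {}  # conn -> bytes queued for that player (illustrative)

def broadcast(outboxes, sender, data):
    # Queue data for every player except the sender. The whole server
    # runs on one thread, so delivery is plain dictionary bookkeeping:
    # no cross-thread handoff anywhere.
    for conn in outboxes:
        if conn is not sender:
            outboxes[conn] += data

def accept(server):
    conn, _addr = server.accept()
    conn.setblocking(False)
    outboxes[conn] = b""
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if not data:                     # player disconnected
        sel.unregister(conn)
        conn.close()
        outboxes.pop(conn, None)
        return
    broadcast(outboxes, conn, data)  # everyone else "hears" it

def run(port=4000):
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", port))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        for key, _mask in sel.select():
            key.data(key.fileobj)
```

Ten players chatting through broadcast() costs zero context switches: the
all-in-one-thread case, bought by giving up the ability to use more than
one processor.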

>> A related thought (and the one I'm still considering) would be a thread for
>> each area. When no one is in area X, that area can be safely unloaded from
>> memory; thus, with five players online, you don't need to have eighty areas
>> loaded. Just five, at most. All the other threads can be halted altogether. 
>
>Now this I like, as it kinda goes right around most of the database write
>problems... however, load balancing again becomes a problem, and players
>would have to be moved between threads, which is a problem in itself...

The problem here, as I see it, is that you don't know how much of a load
any given player represents until long after you've assigned him to a
thread. Maybe he's logged on and doing nothing except hanging out and
watching the chat messages scroll up the screen; he needs less processor
time than the guy who's ripsawing through one area after another with his
fear-me-or-die character that has the Great and Powerful Oz in a bottle
around his neck. But you don't know this until you've seen the history for
a while. 

Now, it is probably a lot more efficient to take the guys watching the
scrolling chat messages and shove them all into one thread. But first you
have to watch them for a while. And the guy who's tearing across the MUD
raising Cain, Abel, Methuselah, and Gandhi would probably benefit greatly
from having his *own* thread. That way, when he goes off to the bathroom or
stops to watch Hercules on UPN, you can swap him out of the way and get
other people working on that same processor. 

Or, the short form of the above: How do you balance the load before you
measure it?
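One possible answer is the boring one: start with a guess, measure, and
periodically regroup. A hedged sketch (the cost figures, threshold, and
player names below are all made up):

```python
# Regroup players by their measured cost over the last sampling window:
# everyone under the threshold (the chat-watchers) shares one thread,
# and each player over it gets a thread to himself.

def rebalance(cost_ms, idle_threshold=5.0):
    idle = [p for p, c in cost_ms.items() if c <= idle_threshold]
    busy = sorted((p for p, c in cost_ms.items() if c > idle_threshold),
                  key=cost_ms.get, reverse=True)
    groups = []
    if idle:
        groups.append(idle)           # all the watchers on one thread
    groups.extend([p] for p in busy)  # each heavy hitter on his own
    return groups

print(rebalance({"watcher1": 1.0, "watcher2": 2.0, "ozboy": 250.0}))
# -> [['watcher1', 'watcher2'], ['ozboy']]
```

The catch stated above still stands, of course: the first window of
measurements is itself a guess.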

-----
| Caliban Tiresias Darklock            caliban at darklock.com 
| Darklock Communications          http://www.darklock.com/ 
| U L T I M A T E   U N I V E R S E   I S   N O T   D E A D 
| 774577496C6C6E457645727355626D4974H       -=CABAL::3146=- 


_______________________________________________
MUD-Dev maillist  -  MUD-Dev at kanga.nu
http://www.kanga.nu/lists/listinfo/mud-dev



