[MUD-Dev] Re: async i/o and threads (was: Re: lurker emerges)
James Wilson
jwilson at rochester.rr.com
Mon Aug 10 22:37:21 CEST 1998
-----Original Message-----
From: J C Lawrence <claw at under.engr.sgi.com>
To: mud-dev at kanga.nu <mud-dev at kanga.nu>
Date: Monday, August 10, 1998 3:04 PM
Subject: [MUD-Dev] Re: lurker emerges
>On Sun, 9 Aug 1998 02:06:56 -0400
>James Wilson<jwilson at rochester.rr.com> wrote:
>
>> my first issue is basic server control flow: select() vs. threading
>> vs...? There are some startling numbers at the thttpd web site
>> (<url: http://www.acme.com/software/thttpd>) showing the vast
>> difference between single-threaded, select()-based http servers and
>> servers based on other models (using one-thread-per-connection or a
>> thread pool). This was a bit of a shock to me as I am quite enamored
>> of threading (probably more because of its challenges than because
>> of its necessity, I am forced to admit). I am curious as to what
>> approaches the list readers feel are tenable, and what their pros
>> and cons are.
>
>You need to read the following:
>
> http://www.kanga.nu/~petidomo/lists/mud-dev/1998Q2/msg01208.html
>
>The concerns and the mechanics are not unique to Linux.
Thanks for the reference. I read it a couple of days ago when I was poking
through the archive, but it's more useful to me now after the recent
discussions.
Portability question: does the select() available with the mingw32 system
(which might come straight out of winsock, I don't know) have any gotchas?
I can get about 20-30 connections accepted per second, with the client on
the local machine; that seems plenty fast to me.
With respect to the idea of two sets of sockets, one 'active' and one
'inactive', should one divide the active sockets into active-read and
active-write sockets? That is, perhaps a socket's getting sent a lot of
data on a regular basis but doesn't send much back; it would be a bit of a
waste to check it for read-availability as often as write-availability.
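To make that concrete, here is roughly how I'd picture dividing the interest
per socket with select() (a rough C sketch; the conn struct and its fields
are made up): every socket is always watched for readability, but only
watched for writability while its outgoing buffer is nonempty.

#include <stddef.h>
#include <sys/select.h>

/* made-up per-connection record */
struct conn {
    int          fd;
    size_t       out_len;     /* bytes waiting in this socket's output buffer */
    struct conn *next;
};

/* Build the two interest sets for select(): always watch for readability,
 * but only watch for writability while there is pending output.
 * Returns the highest fd seen, or -1 if the list is empty. */
static int build_fd_sets(struct conn *conns, fd_set *rset, fd_set *wset)
{
    struct conn *c;
    int maxfd = -1;

    FD_ZERO(rset);
    FD_ZERO(wset);
    for (c = conns; c != NULL; c = c->next) {
        FD_SET(c->fd, rset);
        if (c->out_len > 0)
            FD_SET(c->fd, wset);
        if (c->fd > maxfd)
            maxfd = c->fd;
    }
    return maxfd;
}

/* usage: maxfd = build_fd_sets(conns, &rset, &wset);
 *        select(maxfd + 1, &rset, &wset, NULL, &timeout);          */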
I've been trying to suss out how the async i/o model would work, especially
if any given request might be cpu-intensive... please bear with me if this
is a 'duh'. I am imagining:

1. The i/o thread(s) use(s) select() to find the readable and writeable
sockets. For a readable socket, it does a non-blocking read() and whatever
amount of data comes in goes into a bucket; if the data in the bucket is a
complete request, something is done with it (see 2, below). For a writeable
socket, there's a buffer containing stuff you want to send out; if it's
nonempty, you do a non-blocking write() and adjust the buffer according to
how much actually got write()ed. (A sketch of this read/write pump follows
point 2.)
2. (a) If you're single-threading, then every time you get a complete
request you go off somewhere and process it. Possibly you could do something
with timeouts so this processing time is bounded, but this seems pretty
hairy; otherwise I guess you have to ensure that not much time will be spent
in the 'real work'. When you do output to sockets, you just dump the data in
the socket's buffer; I'm not down with the technicalities of buffering, so
I'm not sure if the buffer should grow dynamically, drop the data, or fail
in some way that allows the data to be resent when the buffer's got some
room. (The growing-buffer option is sketched below.)
(b) If you're multi-threading, things are much the same; threads respond to
complete requests and write their output into per-socket buffers. The catch
here is synchronizing with the main thread; you could mutex each socket
buffer, but then a popular socket (where lots of worker threads are writing
into it) could make the i/o thread sit there waiting for the lock for a long
time. Maybe there's a way around this? (One idea is sketched below.)
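Here's roughly what I have in mind for the read/write pump in point 1 (a
sketch only; the fixed-size bucket and all the names are arbitrary):

#include <unistd.h>
#include <errno.h>
#include <string.h>

#define BUCKET_SIZE 4096

/* made-up per-socket buffer: a bucket for incoming bytes or pending output */
struct buf {
    char   data[BUCKET_SIZE];
    size_t len;
};

/* Non-blocking read: whatever came in gets appended to the bucket.
 * Returns -1 when the peer closed or a real error happened. */
static int pump_read(int fd, struct buf *in)
{
    ssize_t n;

    if (in->len == sizeof in->data)
        return 0;                  /* bucket full; wait for the parser to drain it */
    n = read(fd, in->data + in->len, sizeof in->data - in->len);
    if (n > 0) {
        in->len += (size_t)n;
        return 0;                  /* caller checks for a complete request */
    }
    if (n == 0)
        return -1;                 /* connection closed */
    return (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) ? 0 : -1;
}

/* Non-blocking write: send as much pending output as the kernel will take
 * right now, then slide the remainder to the front of the buffer. */
static int pump_write(int fd, struct buf *out)
{
    ssize_t n;

    if (out->len == 0)
        return 0;
    n = write(fd, out->data, out->len);
    if (n > 0) {
        memmove(out->data, out->data + (size_t)n, out->len - (size_t)n);
        out->len -= (size_t)n;
        return 0;
    }
    return (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) ? 0 : -1;
}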
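On the buffering question in 2(a), the 'grow dynamically' option might look
something like this (a sketch under the assumption that growing is
acceptable; the names are made up, and what to do when realloc() fails, drop
the connection or stall the sender, is still an open policy question):

#include <stdlib.h>
#include <string.h>

/* made-up grow-able output buffer */
struct dynbuf {
    char  *data;
    size_t len;
    size_t cap;
};

/* Append n bytes, doubling capacity as needed.  Returns 0 on success,
 * -1 if memory ran out (policy for that case is left to the caller). */
static int dynbuf_append(struct dynbuf *b, const char *src, size_t n)
{
    if (b->len + n > b->cap) {
        size_t newcap = b->cap ? b->cap : 1024;
        char  *p;

        while (newcap < b->len + n)
            newcap *= 2;
        p = realloc(b->data, newcap);
        if (p == NULL)
            return -1;
        b->data = p;
        b->cap  = newcap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}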
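And for the locking worry in 2(b), one way around it that occurs to me is to
keep two buffers per socket and only ever hold the lock long enough to
append or to swap them, so neither the workers nor the i/o thread holds it
across an actual write(). A sketch, using POSIX threads and the grow-able
buffer above (all names made up):

#include <pthread.h>

/* made-up per-socket output queue: workers fill 'pending' under the lock,
 * the i/o thread drains 'sending' without holding any lock at all */
struct outq {
    pthread_mutex_t lock;
    struct dynbuf   pending;
    struct dynbuf   sending;
};

/* worker thread: the critical section is just one append */
static int outq_post(struct outq *q, const char *data, size_t n)
{
    int rc;

    pthread_mutex_lock(&q->lock);
    rc = dynbuf_append(&q->pending, data, n);
    pthread_mutex_unlock(&q->lock);
    return rc;
}

/* i/o thread: once the previous batch has been fully written, swap the two
 * buffers under the lock and then write() from 'sending' at leisure */
static void outq_take(struct outq *q)
{
    pthread_mutex_lock(&q->lock);
    if (q->sending.len == 0) {
        struct dynbuf tmp = q->sending;   /* keeps its old capacity for reuse */
        q->sending = q->pending;
        q->pending = tmp;
    }
    pthread_mutex_unlock(&q->lock);
}

That way the i/o thread waits at most for one memcpy, never for a whole
burst of writes.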
The reason I'm paranoid about the amount of time spent in processing is
user scripts; I'm sure I can't trust people not to write inefficient or
infinite-looping code, and it'd sure be nice to have the latter locked away
in a thread so they don't prevent others from logging in (or me from killing
the runaway). I'm not sure how this could be accomplished cleanly in a
single thread. Also, if one uses a disk-based system, 'zone faults' would
seem to be a great thing to put in a thread separate from the socket i/o
thread.
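For the 'zone fault' case, I'm picturing a little work queue between the
socket thread and a disk thread, roughly like this (a sketch only;
zone_load() and the struct names are invented, and the queue is LIFO just
to keep it short):

#include <pthread.h>
#include <stdlib.h>

/* made-up request handed from the socket thread to the disk thread */
struct zone_job {
    int              zone_id;
    struct zone_job *next;
};

static struct zone_job *queue_head = NULL;
static pthread_mutex_t  queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   queue_cond = PTHREAD_COND_INITIALIZER;

/* socket thread: note the missing zone and go straight back to select() */
void request_zone(int zone_id)
{
    struct zone_job *j = malloc(sizeof *j);

    if (j == NULL)
        return;
    j->zone_id = zone_id;
    pthread_mutex_lock(&queue_lock);
    j->next = queue_head;
    queue_head = j;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* disk thread: sleep until there's a job, then do the slow load off to the side */
void *zone_loader(void *arg)
{
    (void)arg;
    for (;;) {
        struct zone_job *j;

        pthread_mutex_lock(&queue_lock);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_cond, &queue_lock);
        j = queue_head;
        queue_head = j->next;
        pthread_mutex_unlock(&queue_lock);

        /* zone_load(j->zone_id);   the hypothetical slow disk work goes here */
        free(j);
    }
    return NULL;
}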
I'm a bit concerned about the overhead of using a thread pool, though,
since my two target platforms (win32 and linux) use heavyweight threads.
Has anyone tried a bytecode vm that implements user-level threads, setting
up a scheduler that timeslices between 'processes', switches context on
blocking i/o, and so on? Maybe this could give you lightweight, portable
threading while keeping the whole process in a single thread? I'm imagining
something where the main thread is either chomping bytecode or checking for
io-ready sockets.
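The main loop of that single-OS-thread VM might look roughly like this (a
very rough sketch; vm_step(), the process list, and SLICE are all invented,
and the write set is left out for brevity):

#include <sys/time.h>
#include <sys/select.h>

#define SLICE 1000                 /* bytecodes per timeslice */

enum proc_state { RUNNABLE, WAITING_IO, DEAD };

/* made-up user-level 'process': bytecode state plus scheduling bookkeeping */
struct proc {
    enum proc_state state;
    int             waiting_fd;    /* valid while state == WAITING_IO */
    struct proc    *next;
    /* ... program counter, stack, environment ... */
};

/* invented: execute one bytecode; returns WAITING_IO (and sets waiting_fd)
 * when the process tries to read a socket that has no data ready */
extern enum proc_state vm_step(struct proc *p);

void scheduler(struct proc *procs)
{
    struct proc *p;

    for (;;) {
        /* give each runnable process a bounded slice of bytecodes, so a
         * runaway script can only eat one slice before others get a turn */
        for (p = procs; p != NULL; p = p->next) {
            int i;
            for (i = 0; i < SLICE && p->state == RUNNABLE; i++)
                p->state = vm_step(p);
        }

        /* poll the sockets without blocking and wake any process whose
         * fd became readable */
        {
            fd_set rset;
            struct timeval tv;
            int maxfd = -1;

            tv.tv_sec = 0;
            tv.tv_usec = 0;
            FD_ZERO(&rset);
            for (p = procs; p != NULL; p = p->next) {
                if (p->state == WAITING_IO) {
                    FD_SET(p->waiting_fd, &rset);
                    if (p->waiting_fd > maxfd)
                        maxfd = p->waiting_fd;
                }
            }
            if (maxfd >= 0 && select(maxfd + 1, &rset, NULL, NULL, &tv) > 0) {
                for (p = procs; p != NULL; p = p->next)
                    if (p->state == WAITING_IO && FD_ISSET(p->waiting_fd, &rset))
                        p->state = RUNNABLE;
            }
        }
    }
}

If nothing is runnable, select() could be given a real timeout instead of
zero so the server sleeps rather than spins.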
James