[MUD-Dev] Re: lurker emerges
Chris Gray
cg at ami-cg.GraySage.Edmonton.AB.CA
Sun Aug 9 10:05:44 CEST 1998
[James Wilson:]
>hello, I've been perusing some of your fascinating archives for a little
>while and thought I'd add my two cents (or maybe more than that). I sent a
>similar message a day or so ago, not noticing the big bold message in the
>list policy that told me I couldn't post yet, so that message is lost in
>/dev/null. Duh. Fortunately I have had the opportunity to do a little more
>research on what I want to talk about. I have been puttering around with mud
>development for years, and keep going around and around on the same
>questions. Here are two of them, in a hopefully-not-too-blathery degree of
>verbosity.
Welcome! Don't worry about blathering. Some of us are famous for it. My
boss even told me the other day that I need to write less. And he's
trying very hard to be polite (we just restructured - he's new to me).
>As I see it, not having gone the whole nine and tested it out, using a
>single, select()ing thread to do all your stuff would work fine if each
>operation is bounded by some small amount of time. That is, if you spend
>too much time in processing the received action, your responsiveness to
>pending requests goes down. I'm not sure how or if this issue is solved
>in the select()-based http servers, and am looking at the source code to
>try to suss it out. How do they deal with a request for a bigass file?
>Do all the ripe sockets wait to be select()ed while the bigass file is
>sent on its merry way? News at 11.
As you say in your later post, this is handled by making the data sockets
non-blocking, and, when a send would block, adding that socket to
select's set of 'writeFDs' to find out when it becomes writable again.
Note, however, that that doesn't completely answer your question. If the
system has lots of network buffers available, the server can spend quite
a while reading data and pushing it into that socket before the socket's
buffer finally fills up. All of that time is latency in handling other
connection requests.
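For reference, the core of that technique looks roughly like the sketch
below. It's only a sketch: 'struct conn', the field names and try_send()
are my own invention, and error handling is trimmed to the bone.

#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

struct conn {
    int fd;             /* the data socket */
    const char *buf;    /* data still to be sent */
    size_t left;        /* bytes remaining */
    int want_write;     /* 1 => put fd into select's writeFDs */
};

/* Mark the socket non-blocking so write() returns instead of stalling. */
static void make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Push as much as the kernel will accept; when the socket buffer fills,
   remember to watch this fd in writeFDs and return to the select loop. */
static void try_send(struct conn *c)
{
    while (c->left > 0) {
        ssize_t n = write(c->fd, c->buf, c->left);
        if (n > 0) {
            c->buf += n;
            c->left -= (size_t)n;
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            c->want_write = 1;   /* select() will say when it drains */
            return;
        } else {
            return;              /* real error: caller closes the socket */
        }
    }
    c->want_write = 0;           /* all queued; stop watching for write */
}

The main loop then FD_SETs c->fd in writeFDs whenever c->want_write is
set, and calls try_send() again when select() reports it writable.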
I was surprised to see the note on the thttpd page that Linux can't
pass fd's from one process to another. That was a subject that came up
earlier this week. The man pages on Linux contain info on how to do it,
so I had assumed it worked. Perhaps not, or perhaps the thttpd people
had only looked at quite an old version of Linux.
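For what it's worth, what the man pages describe is descriptor passing
over a Unix-domain socket with sendmsg() and SCM_RIGHTS, roughly as in
the sketch below. I haven't tried this on a current Linux kernel, so
take it as an illustration of the interface, not a claim that it works
there; send_fd() and 'sock' are just my names for things.

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass 'fd_to_pass' to the process at the other end of the connected
   AF_UNIX socket 'sock'.  The receiver uses recvmsg() and reads the
   new descriptor back out of the SCM_RIGHTS control message. */
static int send_fd(int sock, int fd_to_pass)
{
    char dummy = 'x';                   /* must send at least one byte */
    struct iovec iov;
    struct msghdr msg;
    struct cmsghdr *cmsg;
    union {
        struct cmsghdr align;           /* ensures proper alignment */
        char buf[CMSG_SPACE(sizeof(int))];
    } control;

    iov.iov_base = &dummy;
    iov.iov_len = 1;

    memset(&msg, 0, sizeof(msg));
    memset(&control, 0, sizeof(control));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control.buf;
    msg.msg_controllen = sizeof(control.buf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}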
In general, I think it's clear that threading will only actually increase
your speed if your system has multiple CPUs. You may well choose to use
threading for other reasons (e.g. on WIN32 you often have no choice).
Threading has overhead that a single-threaded design doesn't, and there
isn't much you can do about that.
How you do things would depend on your goals, I'd say. If the goal of
an http server is to maximize throughput, then the 'select'-based model
with non-blocking sockets would seem to be it. If the goal is to minimize
latencies, and correspondingly reduce the 'connection refused' messages,
then you'd want to alter the model a bit by not writing all of a really
big output to any one socket in a single pass. I.e. don't wait until the
socket blocks; have some other limit, determined by your processing
times, and when that is hit, switch from that socket back to your main
one, and then to any other ready sockets, before returning to the
write-pending one. Something like the sketch below is what I have in
mind.
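(Again only a sketch, reusing the same hypothetical per-connection
record as before; MAX_CHUNK is a made-up tuning knob you'd pick from
measured processing times.)

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

#define MAX_CHUNK 4096          /* per-pass byte budget - tune to taste */

struct conn {
    int fd;
    const char *buf;            /* data still to be sent */
    size_t left;                /* bytes remaining */
    int want_write;             /* 1 => keep fd in select's writeFDs */
};

/* Send at most MAX_CHUNK bytes this pass, then yield back to the
   select loop so other ready sockets (and new connections) get served
   before we come back to this one. */
static void send_some(struct conn *c)
{
    size_t budget = MAX_CHUNK;

    while (c->left > 0 && budget > 0) {
        size_t chunk = c->left < budget ? c->left : budget;
        ssize_t n = write(c->fd, c->buf, chunk);
        if (n <= 0) {
            if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                break;          /* kernel buffer full; writeFDs will tell us */
            return;             /* real error: caller closes the connection */
        }
        c->buf += n;
        c->left -= (size_t)n;
        budget -= (size_t)n;
    }
    c->want_write = (c->left > 0);  /* still pending => stay in writeFDs */
}

The only difference from the pure throughput version is the budget: you
get back to select() after a bounded amount of work, so one
multi-megabyte reply can't starve everyone else.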
On the issue of problems with a non-locking database model, I'll leave
that to others, who have put a lot more thought into it than I have. My
one little thought is that you want to be clear on where the responsibility
for consistency lies. Typically, the database layer is responsible
for consistency only within the objects it knows about, and not among
them. It is some upper layer (e.g. scenario code) that is responsible
for keeping inter-object consistency correct.
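A toy example of what I mean by that split (all the names and the
array-of-ints representation are made up purely for illustration):

#define MAX_ITEMS 16

typedef struct obj {
    int id;
    int items[MAX_ITEMS];       /* ids of contained objects */
    int count;
} obj_t;

/* Database layer: it can check and store one object at a time, but it
   has no idea that two objects' contents lists are related. */
static int db_write(obj_t *o)
{
    if (o->count < 0 || o->count > MAX_ITEMS)
        return -1;              /* intra-object consistency check */
    /* ... persist the single object here ... */
    return 0;
}

/* Scenario layer: this is where inter-object consistency lives.  Moving
   an item touches two objects, and it is this code's job to make sure
   both changes happen (and both get written), not the database's. */
int move_item(obj_t *from, obj_t *to, int item)
{
    int i;

    if (to->count >= MAX_ITEMS)
        return 0;                       /* no room at the destination */
    for (i = 0; i < from->count; i++) {
        if (from->items[i] == item)
            break;
    }
    if (i == from->count)
        return 0;                       /* item wasn't there */

    from->items[i] = from->items[--from->count];
    to->items[to->count++] = item;

    if (db_write(from) != 0 || db_write(to) != 0)
        return -1;                      /* scenario code must recover */
    return 1;
}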