Event handling (was: request for comments)
s001gmu at nova.wright.edu
Fri Jan 9 11:12:10 CET 1998
On Thu, 8 Jan 1998, Vadim Tkachenko wrote:
> s001gmu at nova.wright.edu wrote:
> >
> > On Mon, 5 Jan 1998, Vadim Tkachenko wrote:
> > >
> > >Jon A. Lambert wrote:
> >
> > ...
> >
> > >> Variations on event-driven design
> > >
> > >Can you please elaborate on that?
> >
> > I've spent the last month or so of my free time tossing around ideas
> > for our event driver, and I think I've finally settled on a general
> > model. I have yet to implement and test all of it, so I'm sure I'll
> > be fine-tuning it, but here's where it stands for the moment:
> >
> > Definitely multi-threaded, though not as heavily as JC's model (so far).
> > I'm aiming for a statically sized pool of about 10 threads to
> > handle events, though that number is most certainly not set in
> > stone.
>
> Well, in my model there is no limit on the number of threads, usually
> the number will consist of:
>
> - Some small number of threads to wrap everything up (main server,
> logger, console, etc.)
I was ignoring threads outside the event driver, as the original question
was a request for expansion on event drivers, not MT techniques... ;)
FWIW, I'll have at least 3-4 other threads running, total.
> - One thread per connection listener (it's possible to install more than
> one protocol adaptor, and therefore, connection listener)
> - One thread per incoming connection
so... at least 2 threads per user? What happens if you have a couple
hundred users? This wouldn't work on a Unix system, unless someone has a
thread lib w/o the system-imposed max # of threads.
> - N threads for db connections (implemented as a resource pool with
> minSpare/maxSpare)
> - As many threads as needed to implement some timed actions.
>
> > Pending events are stored in a queue wrapped in a pretty simple structure
> > containing a reference to the event object, an event ID, and the event's
> > timestamp (along with all the stuff necessary to make a doubly linked list).
> > Call it a Tuple if you like.
>
> May I call it a Queue? This is a basic element in my model (as well as
> everybody else's, I guess), but on the lower level.
Queue for the overall structure (a bunch of <things> in a list). Each
<thing> is what I was saying you could call a Tuple. I called it:
class EWrapper {
public:
    event *E;                // the event itself
    EWrapper *next, *prev;   // doubly linked list hooks
    EWrapper(event *e);
    ~EWrapper();
};
the list of EWrapper instances is a Queue; the EWrapper itself is,
technically, a Tuple.
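For completeness, the Queue around it is just list management. A rough
sketch (only EWrapper and event come from above; everything else here is
a made-up name):

class EQueue {
public:
    EQueue() : head(0), tail(0) {}
    void push(event *e) {                // append at the tail
        EWrapper *w = new EWrapper(e);
        w->prev = tail;
        w->next = 0;
        if (tail) tail->next = w; else head = w;
        tail = w;
    }
    event *pop() {                       // pull from the head; 0 if empty
        if (!head) return 0;
        EWrapper *w = head;
        head = w->next;
        if (head) head->prev = 0; else tail = 0;
        event *e = w->E;
        delete w;                        // the wrapper dies, the event lives
        return e;
    }
private:
    EWrapper *head, *tail;
};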
> > Each 'tick', all events scheduled to be handled are pulled off the main
> > queue into a secondary queue, where the thread pool attacks them (in no
> > guaranteed order), each thread grabbing an event and calling its Handle
> > method. Threads go back to sleep once all events in the secondary queue
> > have been handled.
>
> No ticks in my model - everything is completely asynchronous (though
> synchronized). Each object takes care of itself, performing any
> operations as soon as possible. The bottleneck is expected at the
> database transaction (request? Sorry, I'm not as familiar with DBs as I
> wish), so the abovementioned resource pool engages scheduling (some time
> in the future I may think about PriorityQueue instead of simple Queue).
I believe either transaction or request is perfectly valid in the
context. :)
The decision to go with ticks and a limited number of threads to handle
the events for a tick stems directly from concerns about system load. I
don't want to design a game that _requires_ it be the only significant
process running on a server. Having large numbers of threads, especially
if the end implementation uses a kernel threads library, only adds to the
CPU usage for that process.
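To make the tick scheme concrete, here's roughly the dispatch I'm
picturing, in pthreads terms (a sketch only; pop_due() and the other
names beyond Handle() are made up, and error checking and event cleanup
are omitted):

#include <pthread.h>

extern EQueue mainQ;                 // pending events, by timestamp
extern EQueue tickQ;                 // this tick's events
pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  work  = PTHREAD_COND_INITIALIZER;

void run_tick(long now) {            // called once per tick
    pthread_mutex_lock(&qlock);
    // pop_due() is hypothetical: pull events stamped <= now off mainQ
    while (event *e = mainQ.pop_due(now))
        tickQ.push(e);
    pthread_cond_broadcast(&work);   // wake the pool
    pthread_mutex_unlock(&qlock);
}

void *worker(void *) {               // each of the ~10 pool threads
    for (;;) {
        pthread_mutex_lock(&qlock);
        event *e;
        while ((e = tickQ.pop()) == 0)
            pthread_cond_wait(&work, &qlock);  // sleep between ticks
        pthread_mutex_unlock(&qlock);
        e->Handle();                 // handle outside the queue lock
    }
}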
> > How many threads, executing potentially I/O intensive DB Queries, can run
> > simultaneously w/o bogging down the server?
>
> Well, a while ago I was told to implement a Web crawler for a site,
> and the first thought was just to go to the root URL, read and parse
> it and, as links appear, spawn other threads using the same algorithm.
>
> Obviously, you can parse the HTTP stream much faster than the server
> produces it, and, well, www.best.com (a guinea pig for that task)
> started to fail when something like the 50th request thread was
> started. Funny, but on my side (Linux, JDK 1.02) nothing bad happened.
As I said, it's very much a result of a lot of factors, including, but
not limited to, CPU speed, the threads implementation, compiler, OS, the
DB in use... etc.
> > but for my sanity I decided on a static pool, with a boot-time
> > configurable number of threads. Heck, I could prolly pretty easily
> > add a command to allow an admin to add some threads to the pool on the
> > fly, but I prefer to minimize the cost of thread creation,
> > localizing it to a boot-time expense.
>
> I'd suggest shifting to a minSpare/maxSpare thread management strategy.
> Maybe, with some limit.
> Once again, the source code is available :-)
I am unfamiliar with the minSpare/maxSpare management strategy... the
O'Reilly Pthreads book didn't discuss it, and as that's the only book I
really have available... :) Care to give a brief tutorial, or point me
to a web page? I am a tinkerer at heart, and would prefer an
algorithm/design strategy to source code, as I prefer to code my own
stuff. :)
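From the name alone, my guess (and it is purely a guess) is: keep at
least minSpare threads sitting idle so a burst of events never waits on
thread creation, and let idle threads beyond maxSpare exit. Something
like (all names and numbers made up):

#include <pthread.h>

const int minSpare = 2;              // floor on idle threads
const int maxSpare = 8;              // ceiling on idle threads

int idle = 0;                        // threads currently awaiting work
pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

extern void *worker(void *);         // the event-handling loop

// Call whenever a thread takes work (idle just dropped) or finishes
// an event (idle is about to rise).
void adjust_pool() {
    pthread_mutex_lock(&pool_lock);
    while (idle < minSpare) {        // grow: spawn fresh spare threads
        pthread_t t;
        pthread_create(&t, 0, worker, 0);
        pthread_detach(t);
        idle++;
    }
    // shrink: a worker that sees idle > maxSpare when it finishes
    // decrements idle and returns from worker(), ending its thread.
    pthread_mutex_unlock(&pool_lock);
}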
> > For the lockless v. locked model, my main concern was/is how many threads,
> > executing at the same tick, are going to be handling events targeting the
> > same data? I figure there won't be too many events executed
> > simultaneously that target the same data.
>
> Why bother? Change the event handling strategy to asynchronous.
mmm.. even in an asynchronous event handler, you can get two events
targeting the same data executing at 'the same time'. This requires
either a lockless/rollback method or a data-locking method.
I should probably restate my main argument, because the more I think about
it, the more the above argues for a lockless model... ;) I think my real
motivation is that a locked model is easier to implement... IMHO.
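At bottom the locked model is just a mutex per chunk of game data, along
these lines (a sketch; the class is made up, and a real version needs a
consistent lock order for events touching several objects, or you trade
contention for deadlock):

#include <pthread.h>

class GameObject {                   // hypothetical target of an event
public:
    GameObject() : hitpoints(100) { pthread_mutex_init(&lock, 0); }
    void damage(int amount) {
        pthread_mutex_lock(&lock);   // two events hitting the same
        hitpoints -= amount;         // object serialize right here
        pthread_mutex_unlock(&lock);
    }
private:
    int hitpoints;
    pthread_mutex_t lock;
};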
> BTW, question to all: why is the timeslice-, or tick-, based event
> handling model still out there? See, I started to work seriously with
> parallel tasks on OS/2, which is multithreaded by default, unlike UNIX
> until recently, and I see no problems doing asynchronous event handling -
> for 4 years now.
>
> Why?
Well, when you come down to it, even a so-called asynchronous model is
still time-slice or tick-based; the system's clock just has a very fine
grain. I suppose it's a throwback to paper RPGs. I played a lot of them
for many years (really haven't played much the past couple... *sigh*), and
it's hard to break out of the mold they establish. Most (if not all?) use
some form of a turn-based system, to allow for a changing time scale w/i
the game. At times, whole weeks of game time pass in mere minutes of real
time, and at times it takes hours to work through a 20 second combat.
We are building our game as a turn-based game, so it makes sense that all
events w/i the game are handled on a turn-based event driver. I see no
problem with asynchronous event handling for some admin commands, and will
prolly flag them as "to be handled immediately", but when I tell my
character to "cast fireball at bubba", it becomes trivial to implement the
delay inherent to the action by scheduling the event to go off 3 ticks
down the road.
Yes, you could just as easily let bubba's character wait for however long
and then generate a 'fireball is cast at bubba from boffo' event, to be
handled immediately. Either way works. It's just a matter of where you
choose to do your time and potential interrupt handling.
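In code, the fireball case comes out to almost nothing (a sketch;
character, schedule(), and the exact shape of the event base class are
stand-ins for whatever we end up with):

class character;                     // stand-in for our player class
extern void schedule(event *e, long tick);  // insert into the main queue

class event {                        // assumed shape of the base class
public:
    virtual void Handle() = 0;
    virtual ~event() {}
};

class FireballEvent : public event {
public:
    FireballEvent(character *c, character *t) : caster(c), target(t) {}
    void Handle() {                  // runs when its tick comes up
        // resolve the spell against the target here
    }
private:
    character *caster, *target;
};

void cmd_cast(character *boffo, character *bubba, long now) {
    schedule(new FireballEvent(boffo, bubba), now + 3);  // 3 ticks out
}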
> > It will happen, there's no doubt about that, but
> > I feel the majority of events (executing at the same tick) will be aimed
> > at different targets. This assumption may well be proven wrong,
> > but for now I can only go with it and see what happens. :)
>
> Let me point out one possible common target: if you use the concept of
> context, or environment, and it's not a fixed value, then ...
>
> Like, time, weather in the area, gravity, etc.
mmm.. definitely something to consider. Thanks. Anything else I missed,
anyone? ;)
> > If it proves to be the case
> > that a large enough % of the events target the same data, I may adopt some
> > schemes to try and shuffle events targeting the same data into the same
> > threads, so they are executed sequentially by that thread, removing the
> > issue.
>
> Then caching data makes a good sense.
I plan on having active data in memory... The point of my original comment
was that rather than having two threads contending for a lock on a piece
of commonly used data, I can simplify by having all events for that tick,
targeting that data, handled sequentially by one thread, freeing one
thread for use by other events targeting other data.
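The shuffling could be as cheap as hashing the target as events come off
the main queue, giving each pool thread its own secondary queue instead
of the single shared one (a sketch; target_id() is made up):

const int NTHREADS = 10;             // the pool size from above
extern EQueue tickQ[NTHREADS];       // one secondary queue per thread

void dispatch(event *e) {
    // events with the same target land in the same queue, so one
    // thread handles them back to back with no lock contention
    int slot = e->target_id() % NTHREADS;
    tickQ[slot].push(e);
}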
> > I haven't spent much time on graceful shut-down of the queue, or
> > storing of the event list to rebuild a queue, but they are next on the
> > hit parade.
>
> Two options:
>
> - save the event list and process it on startup
> - process the whole list on shutdown. But this situation may be more
> difficult because events being processed may produce some more effects
> and so on - for example, if you shout (RL) and then try to 'process the
> event while shutting down', you're likely to:
>
> - Hear the echoes,
> - Which may, in turn, cause the avalanche,
> - Which may, in turn, kill a lot of people around,
> - Which, in turn, creates a LOT of other events to process,
> - and so on.
*nod* good point. I just realized that one of the other things we
planned to do would work nicely with saving events and processing them at
restart... We planned on setting it up so that if you log out outside of
an inn or other "long-term storage" type safe-room, your character just
sits there... so, in the event of a... less than expected shutdown, all
events are saved, and all characters out and about are saved as such. On
reboot, all characters are put back, and all events are reloaded. That
rather nicely takes care of problems like "cast fireball boffo" being
reloaded with neither the caster nor the target being around. I can see
players being unhappy with the way things would work out, though. It
might be better to just let the events die.
-Greg