request for comments (was: Mud-Dev FAQ)
s001gmu at nova.wright.edu
Thu Jan 8 11:44:01 CET 1998
On Mon, 5 Jan 1998, Vadim Tkachenko wrote:
>
>Jon A. Lambert wrote:
...
>> Variations on event-driven design
>
>Can you please elaborate on that?
I've spent the last month or so of my free time tossing around ideas for our
event driver, and I think I've finally settled on a general model. I have yet
to implement and test all of it, so I'm sure I'll be fine-tuning it, but here's
where it stands for the moment:
Definitely multi-threaded, though not as heavily as JC's model (so far).
I'm aiming for a statically sized pool of about 10 threads to handle events,
though that number is most certainly not set in stone.
All events are descended from a base event class, and the driver has
knowledge of only that base class. (A fundamental of OO design, but worth
stating.)
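In rough C++ terms (an untested sketch; the names are just placeholders I made up), I'm picturing something like:

  // The driver only ever sees this interface.
  class Event {
  public:
      virtual ~Event() {}
      // Each concrete event supplies its own handling logic.
      virtual void Handle() = 0;
  };

  // Example of a derived event the driver never needs to know about.
  class MoveEvent : public Event {
  public:
      void Handle() { /* move the object, notify the room, etc. */ }
  };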
Pending events are stored in a queue wrapped in a pretty simple structure
containing a reference to the event object, an event ID, and the event's
timestamp (along with all the stuff necessary to make a doubly linked list).
Call it a Tuple if you like.
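The queue entry is basically just this (again a sketch; a library list class would do the same job):

  // One node in the pending-event queue.
  struct PendingEvent {
      Event*        event;      // the event object itself
      unsigned long id;         // event ID
      unsigned long timestamp;  // the tick it's scheduled to fire on
      PendingEvent* prev;       // doubly linked list plumbing
      PendingEvent* next;
  };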
Each 'tick', all events scheduled to be handled are pulled off the main
queue into a secondary queue, where the thread pool attacks them (in no
guaranteed order), each thread grabbing an event and calling its Handle
method. Threads go back to sleep once all events in the secondary queue
have been handled.
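To make that concrete, here's a rough, untested sketch of the dispatch and the worker loop (written with standard C++ threads for brevity; all the names are made up, and there's no shutdown path yet):

  #include <condition_variable>
  #include <deque>
  #include <mutex>
  #include <thread>
  #include <vector>

  class EventPool {
  public:
      explicit EventPool(int nthreads) {
          // Statically sized pool, created once at boot time.
          for (int i = 0; i < nthreads; ++i)
              workers.emplace_back([this] { WorkerLoop(); });
      }

      // Called once per tick with every event whose timestamp has come due.
      void DispatchTick(std::deque<Event*> due) {
          {
              std::lock_guard<std::mutex> lock(mtx);
              for (Event* e : due)
                  secondary.push_back(e);
          }
          cv.notify_all();   // wake the pool
      }

  private:
      void WorkerLoop() {
          for (;;) {
              Event* e;
              {
                  std::unique_lock<std::mutex> lock(mtx);
                  // Sleep until the scheduler hands us something to do.
                  cv.wait(lock, [this] { return !secondary.empty(); });
                  e = secondary.front();
                  secondary.pop_front();
              }
              e->Handle();   // grab an event and handle it; no ordering guarantee
          }
      }

      std::deque<Event*> secondary;        // events due this tick
      std::mutex mtx;
      std::condition_variable cv;
      std::vector<std::thread> workers;    // never joined -- shutdown is TBD
  };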
I will most likely use a locked model for access to the DB, but I need to
do a bit -o- research and thinking before I commit to one model or the other.
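By a "locked model" I mean roughly this kind of thing (hypothetical sketch, nothing committed to yet):

  #include <mutex>
  #include <string>

  // Each object guards its own state, so two events hitting the same
  // object in the same tick serialize instead of stomping on each other.
  class GameObject {
  public:
      std::string GetName() {
          std::lock_guard<std::mutex> lock(mtx);
          return name;
      }
      void SetName(const std::string& n) {
          std::lock_guard<std::mutex> lock(mtx);
          name = n;
      }
  private:
      std::mutex mtx;
      std::string name;
  };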
The main design concerns I dealt with in coming to the decisions I've made:
How many threads, executing potentially I/O intensive DB queries, can run
simultaneously w/o bogging down the server? This is obviously tied to
the machine/OS/thread implementation/DB version the end design is run on,
but for my sanity I decided on a static pool, with a boot-time configurable
number of threads. Heck, I could probably pretty easily add a command to
allow admin to add some threads to the pool on the fly, but I prefer to
minimize the cost of thread creation, localizing it to a boot-time expense.
For the lockless v. locked model, my main concern was/is how many threads,
executing at the same tick, are going to be handling events targeting the
same data? I figure there won't be too many events executed simultaneously
that target the same data. It will happen, there's no doubt about that, but
I feel the majority of events (executing at the same tick) will be aimed at
different targets. This assumption may well be proven wrong, but for now I
can only go with it and see what happens. :) If it proves to be the case
that a large enough % of the events target the same data, I may adopt some
schemes to try and shuffle events targeting the same data into the same
threads, so they are executed sequentially by that thread, removing the
issue.
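One such scheme might look like this (purely hypothetical, untested): give each worker its own queue and route events by a hash of the target's ID, so events touching the same data land on the same thread and run back to back.

  #include <cstddef>
  #include <deque>
  #include <functional>
  #include <vector>

  struct TargetedEvent {
      Event*        event;
      unsigned long target_id;   // ID of the data this event will touch
  };

  // Route each due event to a per-worker queue chosen by hashing its target.
  void RouteByTarget(const std::vector<TargetedEvent>& due,
                     std::vector<std::deque<Event*>>& per_worker_queues) {
      const std::size_t n = per_worker_queues.size();
      for (const TargetedEvent& te : due) {
          std::size_t worker = std::hash<unsigned long>()(te.target_id) % n;
          per_worker_queues[worker].push_back(te.event);
      }
  }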
My reasoning is that, combined with balancing the number of threads
correctly, this will allow the overhead of locking data to be outweighed
by the performance gain of multithreading. Too much locking and the
threads execute in series, losing all the performance increase; too few
threads and the locking overhead outweighs the performance gain.
Current, unanswered questions:
What happens if it takes more than one tick to process all the events in
the secondary queue? This says either that there are not enough threads to
process all the events quickly enough, or that there are too many events! :)
I'm thinking I'll just put the events in the same secondary queue, and have
the threads do a bit of prioritizing... only if there are no threads
executing events for the previous tick will they start processing the new
tick's events... dunno. I need to think about it some more. :)
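If I go that way, the bookkeeping might look something like this (just one possibility, untested, names made up): tag events with their tick, track how many are still in flight per tick, and refuse to hand out a newer tick's events while an older tick has anything queued or in flight.

  #include <deque>
  #include <map>

  struct TickQueues {
      std::map<unsigned long, std::deque<Event*>> queued;  // oldest tick first
      std::map<unsigned long, int> in_flight;              // events being handled

      // Caller is assumed to hold whatever lock protects this structure.
      Event* Next(unsigned long* tick_out) {
          for (auto& entry : queued) {
              if (!entry.second.empty()) {
                  Event* e = entry.second.front();
                  entry.second.pop_front();
                  ++in_flight[entry.first];
                  *tick_out = entry.first;
                  return e;
              }
              if (in_flight[entry.first] > 0)
                  return nullptr;   // older tick still being handled; don't start newer
          }
          return nullptr;           // nothing to do right now
      }

      void Done(unsigned long tick) { --in_flight[tick]; }
  };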
I haven't spent much time on graceful shut-down of the queue, or storing of
the event list to rebuild a queue, but they are next on the hit parade.
Did I miss anything glaring? :)
-Greg