[MUD-Dev] Re: TECH: Distributed Muds

J C Lawrence claw at 2wire.com
Fri Apr 27 19:19:12 CEST 2001


On Fri, 27 Apr 2001 00:04:43 -0600 
Christopher Kohnert <cjkohner at brain.uccs.edu> wrote:

> J C Lawrence wrote:
>> I've never liked the base concept of a select() looping process
>> that tried to do real work between select() calls.  It can work
>> to be sure, but it has always smelled of playing with fire.
 
>> I prefer having a thread dedicated to socket IO, pulling in and
>> buffering the IO as received, and then handling compleated blocks
>> off to one or more queues (depends on whether you do pre-sorts)
>> for the other game systems to pick up and process appropriately.
>> There are several advantages, not the least of which is that you
>> decouple your command/entry model from your processing model.

> I've considered this model before. It seems to me that unless you
> truly use a lightweight process model (e.g. a pthreads or similar)
> with some shared memory, it could lead to wasted time and space in
> copying these blocks to the server processes which handle
> them. 

The key word in my text above is "thread": the data segment is
shared, and I can get away with only a single buffer copy from the
socket to the command buffer (and vice versa).  Going the
multi-processing route wouldn't make sense in this case, as there
would be little gain and much added complexity.
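
In rough C/pthreads terms that hand-off might look something like
the sketch below.  It's illustrative only (the queue, the names, and
the sizes are all made up, not code from any actual server): a
dedicated IO thread reads from a client socket, buffers partial
input, and hands completed newline-terminated command blocks off to
a mutex-protected queue for the game threads to drain.

  #include <pthread.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  struct cmd_node {
      struct cmd_node *next;
      char             text[512];      /* one completed command block */
  };

  struct cmd_queue {
      pthread_mutex_t  lock;
      pthread_cond_t   ready;
      struct cmd_node *head, *tail;
  };

  static struct cmd_queue command_queue = {
      PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, NULL, NULL
  };

  static void queue_push(struct cmd_queue *q, struct cmd_node *n)
  {
      n->next = NULL;
      pthread_mutex_lock(&q->lock);
      if (q->tail) q->tail->next = n; else q->head = n;
      q->tail = n;
      pthread_cond_signal(&q->ready);       /* wake a game thread     */
      pthread_mutex_unlock(&q->lock);
  }

  /* IO thread body: the only copy is socket -> command buffer.       */
  static void *io_thread(void *arg)
  {
      int    fd   = *(int *) arg;
      char   partial[512];
      size_t used = 0;

      for (;;) {
          ssize_t n = read(fd, partial + used, sizeof partial - used);
          if (n <= 0)
              break;                        /* EOF or error           */
          used += (size_t) n;

          char *nl;
          while ((nl = memchr(partial, '\n', used)) != NULL) {
              size_t len = (size_t) (nl - partial);   /* without '\n' */
              struct cmd_node *node = malloc(sizeof *node);
              if (node == NULL)
                  break;
              memcpy(node->text, partial, len);
              node->text[len] = '\0';
              queue_push(&command_queue, node);
              memmove(partial, nl + 1, used - len - 1);
              used -= len + 1;
          }
          /* (Over-long lines aren't handled in this sketch.)         */
      }
      return NULL;
  }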

> I'd love the ability to separate the network IO from the server
> process entirely, ie.  separate processes altogether, but the
> copying problem has sort of turned me off from it. Named pipes,
> regular pipes, SHM... ugh (let alone the maintenance). Unless I'm
> really missing something about the IPC capabilities of our good
> ol' unixes.

For the multi-processing tack, one approach is to use ring buffers.
It's easiest to consider these in terms of files, and then to just
transplant that model into a SHM block.

  Basically you have a shared memory block which maintains a head
  and a tail pointer into a file.  The head is where new records are
  written and the tail is where records are read: i.e. a FIFO.  The
  tail chases the head.  If the tail catches the head it sleeps for
  a bit, hoping the head will run away.

  The simple approach gives an endlessly growing file, as the head
  and tail pointers only ever run forward.

  Now of course each record in the file has a record delimiter which
  indicates the length of the record and other details.  Extend this
  record format so that it can also indicate a jump to a new offset.
  Such a jump record would carry the literal file offset at which
  the next record will be written (there's a rough sketch of this
  framing in code after the diagrams below).

  Now, if the head gets too far in front of the tail, and the tail
  has a nice lot of space between it and the start of the file, then
  instead of writing a new record on the end of the file like
  before, the head writes a record that points back to the beginning
  of the file, and then writes the new record there.

  As such you end up with something that looks like:

    ......TrrrrrrrrrrrrrrrrrrrrH   (Tail is chasing head)

    'H' is the head, 'T' is the tail, 'r' are records, and '.' are
    records that the tail has already processed.

    rH....TrrrrrrrrrrrrrrrrrrrrJ   (Jump back to start/loop).

  'J' is a jump back to the start of the file.

  Now it gets a bit nasty if the head catches up with the tail:

    rrrrrrrrrrHTrrrrrrrrrrrrrrrJ

  In which case just put in a new jump which goes back to the end of
  the file:

    rrrrrrrrrrKTrrrrrrrrrrrrrrrJrH

  'K' is a jump back to the end of the file.

  The tail will now finally catch up to the jump 'J':

    rrrrrrrrrrK...............TJrH

  And will then jump back to the beginning:

    .TrrrrrrrrK.................rH

  And chase forward and jump at 'K':

    .........TK.................rH
    ............................TH
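
Pulling that together, the record framing and the tail's
jump-following read might look something like the C sketch below.
Again, this is illustrative only; the field and function names are
made up, and the writer side is left out.

  #include <stdint.h>
  #include <stdio.h>

  enum rec_type { REC_DATA = 1, REC_JUMP = 2 };

  struct rec_hdr {
      uint32_t type;      /* REC_DATA or REC_JUMP                     */
      uint32_t len;       /* payload bytes (REC_DATA only)            */
      uint64_t jump_to;   /* target offset (REC_JUMP only)            */
  };

  /* Tail side: read the next record at *tail, following any jumps
   * ('J' or 'K' in the diagrams above).  Returns the payload length,
   * or -1 when nothing is readable yet, in which case the caller
   * sleeps a bit and retries, as described above.                    */
  static long read_next(FILE *f, uint64_t *tail, void *buf, size_t bufsz)
  {
      struct rec_hdr h;

      for (;;) {
          if (fseek(f, (long) *tail, SEEK_SET) != 0) return -1;
          if (fread(&h, sizeof h, 1, f) != 1)        return -1;

          if (h.type == REC_JUMP) {         /* chase the head around  */
              *tail = h.jump_to;
              continue;
          }
          if (h.len > bufsz)                     return -1;
          if (fread(buf, 1, h.len, f) != h.len)  return -1;
          *tail += sizeof h + h.len;        /* step past this record  */
          return (long) h.len;
      }
  }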

Using file IO, this is pretty simple and easy.  Using shared memory
it's not quite so easy: unlike files, shared memory blocks have
definite size limits and can't be grown.  If you can safely
guarantee that the record buffer represented by the ring buffer will
never grow larger than X (the size of your allocation), this is not
a problem (and you don't need the extra complexity of the 'K' jump
above).
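
In that fixed-size case the whole thing collapses to an ordinary
circular buffer with wrapping head and tail indices.  A rough C
sketch, assuming POSIX shm_open()/mmap() and using made-up names (a
real version also needs a lock or memory barriers between the two
processes, which I've elided to keep this short):

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define BUF_SIZE 65536u           /* the 'X' guarantee from above   */

  struct shm_ring {
      volatile uint32_t head;       /* next byte the producer writes  */
      volatile uint32_t tail;       /* next byte the consumer reads   */
      uint8_t           data[BUF_SIZE];
  };

  static struct shm_ring *ring_attach(const char *name, int create)
  {
      int fd = shm_open(name, (create ? O_CREAT : 0) | O_RDWR, 0600);
      if (fd < 0)
          return NULL;
      if (create && ftruncate(fd, sizeof(struct shm_ring)) != 0) {
          close(fd);
          return NULL;
      }
      void *p = mmap(NULL, sizeof(struct shm_ring),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);
      return p == MAP_FAILED ? NULL : (struct shm_ring *) p;
  }

  /* Producer side: copy len bytes in, wrapping at the end of the
   * block.  Returns 0 when there isn't enough free space, in which
   * case the caller waits for the consumer to catch up.              */
  static int ring_put(struct shm_ring *r, const void *src, uint32_t len)
  {
      uint32_t used = r->head - r->tail;    /* free-running counters  */
      if (len > BUF_SIZE - used)
          return 0;
      for (uint32_t i = 0; i < len; i++)
          r->data[(r->head + i) % BUF_SIZE] = ((const uint8_t *) src)[i];
      r->head += len;                       /* publish the new head   */
      return 1;
  }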

--
J C Lawrence                                       claw at kanga.nu
---------(*)                          http://www.kanga.nu/~claw/
--=| A man is as sane as he is dangerous to his environment |=--
_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


