Of disk swapping, database structure & project management

clawrenc at cup.hp.com
Sun Apr 13 08:58:24 CEST 1997


In <199704120607.GAA344071 at out1.ibm.net>, on 04/12/97 
   at 02:05 AM, "Jon A. Lambert" <jlsysinc at ix.netcom.com> said:

>> From: Greg Munt <greg at uni-corn.demon.co.uk>

>> Has anyone experimented with swapping unused parts of the mud database to 
>> disk, reading it back in when needed? What sort of format is it stored in 
>> (eg binary, ASCII, etc)?

>Yes!  Take a look at the COLD project for this in action.  It works
>remarkably well.  Their DB is binary, with a compression option.  It
>uses NDBM as an indexing method (a glorified? ISAM).  It also uses
>an object caching mechanism that implements reference counting and
>writing of dirty objects.  A backup mechanism is in place and works
>semi-asynchronously.  Recovery of a corrupted database is not for
>the faint of heart.  Practical use of the database by external
>applications is not likely.  The DB can be decompiled and recompiled
>to ASCII files.  There are some good ideas here.
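
(For flavour, the NDBM arrangement Jon describes boils down to
something like the toy C sketch below -- illustrative only, not
COLD's actual code.  Each object is serialised to a binary blob and
stored under its object id as the key.)

  /* Toy sketch of NDBM as an object index.  Not COLD's code. */

  #include <ndbm.h>
  #include <fcntl.h>
  #include <stdio.h>

  int main(void)
  {
      DBM   *db;
      datum  key, val, out;
      long   id = 42;                     /* object id: the index key */
      char   blob[] = "binary object image goes here";

      db = dbm_open("objects", O_RDWR | O_CREAT, 0600);
      if (db == NULL)
          return 1;

      key.dptr  = (char *) &id;
      key.dsize = sizeof id;
      val.dptr  = blob;
      val.dsize = sizeof blob;
      dbm_store(db, key, val, DBM_REPLACE);   /* write/overwrite */

      out = dbm_fetch(db, key);               /* read back by id */
      if (out.dptr != NULL)
          printf("object %ld: %d bytes\n", id, (int) out.dsize);

      dbm_close(db);
      return 0;
  }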

Some very good ideas -- many of which I tried to steal.  The tack I'm
taking now is a little different, however (and partly illuminated in a
couple of recent posts to the r.g.m.* loop vs event driven thread).

As mentioned earlier, everything lives in the DB.  This includes all
the event data, such as the Event Queue.  Fairly obviously, however,
with a decently high event rate the physical IO rate to the DB would
become extreme.  So, my database code is getting reworked for the
umpteenth time.

The background is that I want a fully transactional database which
supports nested transactions, rollbacks, and what I call
dependency-specific rollbacks: the ability to roll back the complete
backward dependency tree of events, each of which was required for a
later event, WITHOUT having to roll back the entire DB.  Thus one
specific object (and everything it interacted with) could be rolled
back while leaving all the rest of the game untouched.
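
In skeleton form, and with entirely invented names (struct event,
undo_event() and the rest are illustration, not my actual
interfaces), that backward walk might look like the following.  The
real structure is a tree -- an event may have several dependencies --
but a single parent link keeps the sketch short:

  /* Roll back one event and the backward chain of events it
   * depended on, most recent first, leaving all unrelated events
   * (and the rest of the DB) untouched. */

  struct event {
      long          id;
      struct event *depends_on;    /* the event this one required */
      int           undone;
  };

  extern void undo_event(struct event *ev);   /* revert its changes */

  void rollback_dependency_chain(struct event *ev)
  {
      while (ev != NULL && !ev->undone) {
          undo_event(ev);
          ev->undone = 1;
          ev = ev->depends_on;
      }
  }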

How this is going to work:  The DB of course caches aggressively.  I
want to change mere aggression into flat out ferocity.  This thing is
going to cache like its bleedin' life depended on it.  An important
side effect is that the maximum life of a cache member without being
written to disk is going to grow to the multi-decaminute mark.  The
primary candidates for such long cache residency will of course be the
objects containing the Event Queue and Event List data.  Aligned with
this will be a periodic full cache flush (hourly?) -- everything in
the cache will be committed, but the cache entries will remain.
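
In outline the write-back policy is something like the sketch below.
The names (cache_entry, write_to_db() etc) are invented for
illustration; this isn't the server's real cache code:

  /* Dirty objects may sit unwritten for tens of minutes; the
   * periodic full flush commits every dirty entry but evicts
   * nothing -- the entries stay resident. */

  #include <time.h>

  struct cache_entry {
      long    obj_id;
      void   *data;
      int     dirty;
      time_t  last_write;             /* last committed to disk */
  };

  #define MAX_UNWRITTEN (30 * 60)     /* multi-decaminute, in seconds */

  extern void write_to_db(struct cache_entry *e);

  /* Run periodically: commit entries whose dirty data is old enough. */
  void age_writer(struct cache_entry *tab, int n, time_t now)
  {
      int i;
      for (i = 0; i < n; i++)
          if (tab[i].dirty && now - tab[i].last_write > MAX_UNWRITTEN) {
              write_to_db(&tab[i]);
              tab[i].dirty = 0;
              tab[i].last_write = now;
          }
  }

  /* The (hourly?) full flush: commit everything, keep it all cached. */
  void flush_all(struct cache_entry *tab, int n, time_t now)
  {
      int i;
      for (i = 0; i < n; i++)
          if (tab[i].dirty) {
              write_to_db(&tab[i]);
              tab[i].dirty = 0;
              tab[i].last_write = now;
          }
  }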

Now, presuming that the server is so silly as to crash, or someone
trips over the power cord and yanks it out, what will happen is that
the server will load the DB and roll back the entire DB to the last
full cache flush.  Bingo!  Everything restarts exactly as it was
then, with all the pending/processing event data restored.
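
The startup path after a crash then reduces to (again, names invented
for illustration):

  /* Hypothetical recovery path: load the DB, find the mark left by
   * the last full cache flush, and roll everything after it back.
   * The Event Queue and Event List objects come back in their
   * checkpointed state, so pending events simply resume. */

  extern void load_db(void);
  extern long last_checkpoint(void);      /* id of last full flush */
  extern void rollback_to(long mark);     /* discard later commits */

  void recover(void)
  {
      load_db();
      rollback_to(last_checkpoint());
  }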

The reason for the dependency-specific rollbacks is purely for
debugging.  It allows the question, "How did this object get into
this state?" to be resolved without bringing the entire rest of the
game to a stop.  With the very loosely associated event model I have
(ie little to no ability to track the physical or causal source of
logged events) this seems necessary.

>I am working on similar ideas that involve integrating an object
>persistent store  with a relational database.  

*This* I am interested in, but have yet to see a persistent store I
like.  Texas has a really good base idea with its runtime pointer
swizzling, but a few of the philips are real killers.
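
For those who haven't seen it, the core trick amounts to roughly the
toy below -- nothing like Texas' actual implementation, which
swizzles a whole page's worth of references at page-fault time, and
fault_in() is an assumed helper:

  /* On disk a reference is a persistent id; once the target object
   * is resident, the id is overwritten ("swizzled") with the real
   * in-memory address, so later dereferences are just pointers. */

  union ref {
      long  pid;                   /* persistent id, on-disk form */
      void *ptr;                   /* in-memory form, after swizzle */
  };

  extern void *fault_in(long pid); /* load the object if necessary */

  void swizzle(union ref *r)
  {
      r->ptr = fault_in(r->pid);
  }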

--
J C Lawrence                           Internet: claw at null.net
(Contractor)                           Internet: coder at ibm.net
---------------(*)               Internet: clawrenc at cup.hp.com
...Honorary Member Clan McFUD -- Teamer's Avenging Monolith...



