[MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence

brian price brianleeprice at hotmail.com
Thu May 10 19:26:30 CEST 2001


Before considering what type of backend datastore to use for a
MUD/MORPG server, it may prove useful to (re)examine what the
requirements are.

It is possible today to approach the 'ideal' condition where nearly
all necessary server data can be maintained in RAM.  In backend-store
terms, this implies that the majority of operations will be writes
rather than reads.  Reads, when necessary, will generally pull in an
entire set of objects at once rather than querying for a single object
or record (e.g. reading in an entire zone).

In order to approach this ideal condition, it helps to use objects
with small, efficient memory footprints.  The design of such objects
generally results in at least part of the object model having a 'tall'
class hierarchy rather than a flat one; in any case, such a design
will generally require a large number of classes.

For purposes of fault tolerance, we need a datastore that can
periodically be backed up in a fast and efficient manner, preferably
without stalling the server.  Note that transaction capability (in db
terms) is *not* a requirement; the ability to generate checkpoints by
periodically writing out the 'dirtied' (changed) portions of the
database will satisfy the backup requirement.  A checkpoint can be
restored simply by starting with the last full backup and applying,
in order, the saved changes since that backup occurred (this can be
done offline).
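As a rough sketch of that (offline) restore step, assuming
hypothetical loadFullBackup and applyDelta helpers; the point is just
that replaying the saved changes in order reproduces the checkpointed
state:

  #include <string>
  #include <vector>

  struct Datastore;  // opaque here

  // Assumed helpers, not part of any existing library.
  Datastore* loadFullBackup(const std::string& path);
  void applyDelta(Datastore& ds, const std::string& deltaPath);

  // Restore = last full backup + every saved change set, in order.
  Datastore* restore(const std::string& backupPath,
                     const std::vector<std::string>& deltasInOrder)
  {
      Datastore* ds = loadFullBackup(backupPath);
      for (const std::string& delta : deltasInOrder)
          applyDelta(*ds, delta);
      return ds;
  }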

Thus our datastore requirements are:

  1) frequent writes of dirty objects
  2) infrequent reads of collections of objects
  3) large number of classes
  4) deep inheritance trees
  5) fast and efficient backups through the use of checkpoints or equivalent

An off-the-shelf or open source RDBMS is not overkill given these
requirements; in fact, the entire class of available RDBMS solutions
is underpowered, inefficient, slow, and expensive for the job.  An
oft-touted feature of most RDBMSs - SQL - is, in this case, completely
unnecessary *and* imposes a significant performance hit for zero gain.
Worse, the use of efficient objects and the resultant class bloat is
practically impossible to represent in RDB terms without investing an
insane amount of development time.

The Object Relational DBMS approach is nearly as bad - even if you can
afford one.  While they will automate the class mappings, thus saving
dev time, their general purpose implementations and mapping systems
will still impose unnecessary performance penalties.

IMO, the best solution is a persistent object store.  Not a
full-fledged OODB (whatever that is), but a collection-based storage
and retrieval system for serializable objects.  In C++, such a system
is fairly easy to develop: combine an RTTI-based object persistence
layer with the idea of 'data objects' using the proxy/accessor pattern
(to hide object memory presence and control object memory lifetime),
an object cache, and a simple db store consisting of one index (object
ids) and one table with records of the form <object id>, <class id>,
<serialized object data>.
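To make the shape of that concrete, here is a minimal C++ sketch of
the record layout plus the proxy/accessor idea.  The names, the
factory-per-class registration standing in for the RTTI layer, and the
byte-vector serialization are all placeholders, not a finished design:

  #include <cstddef>
  #include <cstdint>
  #include <map>
  #include <memory>
  #include <vector>

  using ObjectId = std::uint64_t;
  using ClassId  = std::uint32_t;

  // One table: records of the form <object id>, <class id>,
  // <serialized object data>, indexed by object id.
  struct Record {
      ObjectId               objectId;
      ClassId                classId;
      std::vector<std::byte> data;
  };

  struct Persistent {
      virtual ~Persistent() = default;
      virtual ClassId classId() const = 0;
      virtual std::vector<std::byte> serialize() const = 0;
  };

  // A factory keyed by class id stands in for the RTTI-based
  // persistence layer: it rebuilds the right concrete type from a
  // stored record.
  using Factory =
      std::unique_ptr<Persistent> (*)(const std::vector<std::byte>&);

  class ObjectStore {
  public:
      void registerClass(ClassId cid, Factory f) { factories_[cid] = f; }

      void put(ObjectId oid, const Persistent& obj) {
          table_[oid] = Record{oid, obj.classId(), obj.serialize()};
      }

      std::unique_ptr<Persistent> get(ObjectId oid) const {
          const Record& r = table_.at(oid);
          return factories_.at(r.classId)(r.data);
      }

  private:
      std::map<ObjectId, Record> table_;      // object id index
      std::map<ClassId, Factory> factories_;
  };

  // Proxy/accessor: callers hold an id, not a raw pointer, so the
  // store controls when the object is actually resident in memory.
  class ObjectRef {
  public:
      ObjectRef(ObjectStore& store, ObjectId oid)
          : store_(store), oid_(oid) {}

      Persistent& operator*() {
          if (!cached_) cached_ = store_.get(oid_);  // fault in on use
          return *cached_;
      }

  private:
      ObjectStore&                store_;
      ObjectId                    oid_;
      std::unique_ptr<Persistent> cached_;
  };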

If a map of dirty datastore regions is maintained, a simple datastore
can create a checkpoint by: locking the datastore; copying the dirty
map and clearing the 'in-use' map; locking all dirty regions;
unlocking the datastore; then copying and unlocking each dirty region
in the copied map.  This can be implemented as a fast and efficient
operation that minimizes the potential for stalls.
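Here's a sketch of that checkpoint sequence; the Region and Datastore
types and the copyOut call are invented for illustration, and real
code would of course need error handling and a real dirty-region map:

  #include <mutex>
  #include <vector>

  struct Region {
      std::mutex lock;
      void copyOut();   // assumed: writes this region to the checkpoint
      // ... region data ...
  };

  struct Datastore {
      std::mutex           lock;   // global datastore lock
      std::vector<Region*> dirty;  // the 'in-use' dirty-region map
  };

  void checkpoint(Datastore& ds)
  {
      std::vector<Region*> copied;
      {
          // Lock the datastore, copy the dirty map and clear the
          // in-use map, lock all dirty regions, unlock the datastore.
          std::lock_guard<std::mutex> g(ds.lock);
          copied.swap(ds.dirty);
          for (Region* r : copied)
              r->lock.lock();
      }

      // Copy out and unlock each dirty region in the copied map;
      // other writers were only stalled for the short window above.
      for (Region* r : copied) {
          r->copyOut();
          r->lock.unlock();
      }
  }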

I've heard all the arguments against OODBMS over the years and all the
arguments for RDBMS, and in this case at least, *none* of them hold
any water.

Brian Price
-= have compiler, will travel =-

_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


