[MUD-Dev] Comments on the DB layer
Jon A. Lambert
jlsysinc at ix.netcom.com
Wed May 7 01:34:16 CEST 1997
> From: clawrenc at cup.hp.com
<stuff snipped - you know what you spaketh, I hope I remembereth>
This poses some interesting problems:
1) How long do you keep old objects in the DB? If your transactions are
numerous, you might end up with a large DB consisting mostly of old objects.
2) If the full 128 bits is part of the key, then your indexes, trees, hashes, or
whatever you're using get larger and your searches get longer. Searches also
get longer if many old objects accumulate, per 1) above. (See the sketch after
this list.)
3) I can see how you would get counts of killed mobiles by checking
how many old objects of that type were dead. I don't see how you
XREF with the weapons or spells, unless you store this info with the dead
mobile object, or the weapon or spell object undergoes a state change
requiring it to be stored with the same transaction time. Perhaps logging
certain events would be easier, though that is limited because you are guessing
at what your potential queries will be.
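A quick sketch of what I mean in 2), assuming the ObjID stays 64 bits and the
other 64 bits is the transaction stamp (the names and layout are mine, purely
for illustration):

    /* Hypothetical versioned key: the transaction stamp rides along with
       the ObjID, so every index entry doubles in width, and every retained
       old image of an object keeps its own entry. */
    #include <cstdint>

    struct VersionedKey {
        uint64_t objId;      /* the object's permanent identity   */
        uint64_t tranStamp;  /* transaction time of this image    */
    };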
I have 64-bit ObjIDs, and they are generated by the RDB now (convenient and
consistent, but with some overhead on object creation). I use a timestamp field in
the RDB, also automatic, but it is not part of the "loaded" object. It exists
solely in the RDB and is very efficient.
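Roughly, the split looks like this (column and member names are made up for
illustration; the real schema doesn't matter):

    /* The RDB assigns the ObjID and maintains the timestamp column itself,
       something along the lines of:

           CREATE TABLE objects (
               obj_id      BIGINT    NOT NULL PRIMARY KEY,  -- RDB-generated
               last_update TIMESTAMP NOT NULL,              -- RDB-maintained
               image       BLOB                             -- loaded state
           );

       The in-core object never carries the timestamp at all. */
    #include <cstdint>
    #include <vector>

    struct LoadedObject {
        uint64_t          objId;  /* copied from the RDB-generated key */
        std::vector<char> image;  /* attribute data actually loaded    */
        /* no timestamp member -- that column lives only in the RDB    */
    };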
Class Versioning happens through SQL DDL. Attributes that are removed
are removed from all instanced objects. Attributes that are added
are added to all objects as nulls. Methods reside in the class along with
class instance attributes. (That ColdC vs "real OOP" thing we discussed
earlier ;-) ) Versioning can be expensive if done late in a class's life, but
this is part of interactive programming and not a runtime thing.
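In rough terms, assuming one table per class and one column per attribute, a
versioning pass looks something like the sketch below. The execDDL() call and
the VARCHAR column type are made-up stand-ins for the real RDB interface:

    #include <cstdio>
    #include <string>
    #include <vector>

    /* stand-in: hand a DDL statement to the RDB (here it just prints) */
    void execDDL(const std::string& stmt)
    {
        std::printf("%s;\n", stmt.c_str());
    }

    void versionClass(const std::string& classTable,
                      const std::vector<std::string>& dropped,
                      const std::vector<std::string>& added)
    {
        /* a removed attribute disappears from every instanced object */
        for (const std::string& attr : dropped)
            execDDL("ALTER TABLE " + classTable + " DROP COLUMN " + attr);

        /* an added attribute shows up on every instanced object as NULL */
        for (const std::string& attr : added)
            execDDL("ALTER TABLE " + classTable + " ADD COLUMN " + attr
                    + " VARCHAR(255)");
    }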
I've been having a real bitch of a time with the DB recovery thing myself.
This is distantly related to your transactional recovery, I think.
I have been trying to keep a log of (TranID, ObjectImage) records, interspersed
with TranID commit records and, finally, Object-Cache-to-Disk commit records
(a la DB2). The theory is that if I pull the plug on the machine, then upon reboot I
can read the log back to the last Object-Cache-to-Disk commit that encompasses
only completed transactions (assuming the disk head doesn't take a big bite).
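For what it's worth, here is the shape of what I'm attempting. The record
layout and names below are my own sketch, not anything standard, and the
actual re-apply step is application-specific:

    #include <cstdint>
    #include <set>
    #include <vector>

    enum RecordType {
        OBJECT_IMAGE,   /* (TranID, object image), written as the tran runs  */
        TRAN_COMMIT,    /* the TranID finished                               */
        CACHE_COMMIT    /* object cache flushed to disk (DB2-ish checkpoint) */
    };

    struct LogRecord {
        RecordType        type;
        uint64_t          tranId;  /* OBJECT_IMAGE and TRAN_COMMIT */
        std::vector<char> image;   /* OBJECT_IMAGE only            */
    };

    /* On reboot: find the newest checkpoint, then re-apply only those
       images whose transactions committed after it; anything from an
       unfinished transaction is discarded. */
    void recover(const std::vector<LogRecord>& log)
    {
        std::size_t checkpoint = 0;
        for (std::size_t i = 0; i < log.size(); ++i)
            if (log[i].type == CACHE_COMMIT)
                checkpoint = i;                 /* remember the last one */

        std::set<uint64_t> committed;
        for (std::size_t i = checkpoint; i < log.size(); ++i)
            if (log[i].type == TRAN_COMMIT)
                committed.insert(log[i].tranId);

        for (std::size_t i = checkpoint; i < log.size(); ++i)
            if (log[i].type == OBJECT_IMAGE && committed.count(log[i].tranId))
                { /* re-apply this image to the RDB */ }
    }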
Two problems are apparent. The log buffer may not be completely flushed;
this I can handle, since I can roll back to the previous Object-Cache-to-Disk
commit and update the RDB with the last valid ObjectImage. The other
problem is rather embarrassing: my lovely OS decides that files open for
write access at the time of a crash are no longer viable. There must be a
way around this. My trusty mainframe never made this decision. I don't
really want to keep closing and reopening the log file. Perhaps I've
missed a simple concept here?
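For the flush half of it, the best I've come up with is to force the thing to
disk after every commit record rather than waiting for a close. The sketch
below assumes a POSIX-ish fsync(); on other platforms the equivalent call
differs, and whether the OS still declares the open file non-viable after a
crash is exactly the part I haven't solved:

    #include <cstdio>
    #include <unistd.h>     /* fsync(), fileno() */

    void writeCommitRecord(FILE* logFile, const void* rec, size_t len)
    {
        fwrite(rec, 1, len, logFile);   /* append the commit record       */
        fflush(logFile);                /* drain the stdio buffer         */
        fsync(fileno(logFile));         /* force the OS cache to the disk */
    }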