[MUD-Dev] Persistent storage.... My current idea.
J C Lawrence
claw at under.engr.sgi.com
Fri Apr 3 16:04:31 CEST 1998
On Fri, 3 Apr 1998 15:47:57 PST8PDT
J C Lawrence <claw at under.engr.sgi.com> wrote:
> Forwarded to the list for Cynbe:
> Every object is identified by an abstract pointer.
...
> This abstract pointer is in fact an integer containing two
> bitfields: one specifies the object's size to the nearest power of
> two.
> For each size octave, I maintain a db file which is essentially an
> array of records of that size.
> Another bitfield in the abstract pointer gives the offset of the
> record in question within the relevant file.
I'll note without comment (sorry, not enough time right now for a real
value add): This is interestingly similar to the systems used by the
Texas Persistent Store, and not terribly far from some of the
underpinnings of Arjuna.
Good Stuff.
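For my own reference, the handle scheme above decodes to something
like the sketch below.  This is my reconstruction, not Cynbe's code:
the objref name, the fixed 5-bit/27-bit split, and the per-octave file
naming are all just illustration.

/* Reconstruction of the abstract-pointer handle: one bitfield for the
 * size octave (object size rounded up to a power of two), one for the
 * record offset within that octave's db file. */
#include <stdint.h>
#include <stdio.h>

#define OCTAVE_BITS 5u                     /* octaves 0..31            */
#define OFFSET_BITS (32u - OCTAVE_BITS)    /* 27-bit record offset     */

typedef uint32_t objref;

static objref make_ref(unsigned octave, uint32_t offset)
{
    return ((objref)octave << OFFSET_BITS)
         | (offset & ((1u << OFFSET_BITS) - 1u));
}

static unsigned ref_octave(objref r) { return r >> OFFSET_BITS; }
static uint32_t ref_offset(objref r) { return r & ((1u << OFFSET_BITS) - 1u); }

/* Record size for an octave: 1 << octave bytes. */
static size_t octave_size(unsigned octave) { return (size_t)1 << octave; }

int main(void)
{
    objref r = make_ref(10, 4242);   /* a 1K-byte object, record #4242 */
    printf("octave %u (%lu bytes), record %u of file objects.%u\n",
           ref_octave(r), (unsigned long)octave_size(ref_octave(r)),
           ref_offset(r), ref_octave(r));
    return 0;
}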
> The encoding I use actually allocates bigger offset fields to the
> smaller octaves, since one is more likely to have (say) 1,000,000
> 8-byte objects in a system than 1,000,000 1-megabyte objects. This
> is pretty important in the 32-bit implementation, less so in the
> 64-bit implementation.
I also found the size/performance trade-offs and design considerations
charted in the tdbm docs of interest in this particular area.
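One obvious way to get the "bigger offset fields for smaller octaves"
effect is to let the top bit of the ref pick between two layouts, so
the small octaves keep a 27-bit offset while the big ones make do with
26.  Again, a guess at the shape, not Cynbe's actual encoding:

/* Octaves 0..15 (objects up to 32K bytes) get a 27-bit offset --
 * comfortably more than 1,000,000 8-byte objects -- while octaves
 * 16..31 get 26 bits, which is still plenty for megabyte-class objects. */
#include <assert.h>
#include <stdint.h>

typedef uint32_t objref;

static objref make_ref2(unsigned octave, uint32_t offset)
{
    if (octave < 16u) {               /* 0 | 4-bit octave | 27-bit offset */
        assert(offset < (1u << 27));
        return ((objref)octave << 27) | offset;
    }                                 /* 1 | . | 4-bit octave-16 | 26-bit offset */
    assert(octave < 32u && offset < (1u << 26));
    return 0x80000000u | ((objref)(octave - 16u) << 26) | offset;
}

static unsigned ref2_octave(objref r)
{
    return (r & 0x80000000u) ? 16u + ((r >> 26) & 0xFu)
                             : (r >> 27) & 0xFu;
}

static uint32_t ref2_offset(objref r)
{
    return (r & 0x80000000u) ? r & ((1u << 26) - 1u)
                             : r & ((1u << 27) - 1u);
}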
> The resulting design lets me read any object in the db into memory
> in -exactly- one read(), which is pretty much provably optimal, and
> likely to be at least twice the performance of many index-file based
> designs. (A factor of two in disk access speed just might matter to
> some people... :)
This is very sweet. Damn. I've been trying to polish my persistent
store model to reduce the bandwidth requirements, and have been walking
around this basic idea in very tight, concentrated circles for weeks
without ever cottoning on to it.
<<Yes, I'm going to be opening Murkle to the public if I can get the
damned thing working again....>>
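As I read it, the load path boils down to something like the
following: decode the ref, then pull the record out of the per-octave
file with a single pread().  My reconstruction again; the helper names
and the octave_fd[] table of open per-octave db files are assumptions.

/* One-read() object load: compute the byte position from the ref and
 * read exactly one record in one call. */
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

typedef uint32_t objref;

static unsigned ref_octave(objref r)        { return r >> 27; }
static uint32_t ref_offset(objref r)        { return r & ((1u << 27) - 1u); }
static size_t   octave_size(unsigned oct)   { return (size_t)1 << oct; }

static void *load_object(objref r, const int octave_fd[32])
{
    size_t size = octave_size(ref_octave(r));
    off_t  pos  = (off_t)ref_offset(r) * (off_t)size;
    void  *buf  = malloc(size);

    if (buf == NULL)
        return NULL;
    /* pread() folds the seek and the read into a single call. */
    if (pread(octave_fd[ref_octave(r)], buf, size, pos) != (ssize_t)size) {
        free(buf);                /* short read or I/O error */
        return NULL;
    }
    return buf;                   /* caller owns the record bytes */
}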
> Meanwhile, I keep a cache buffer in ram of recently accessed
> objects: I'm not comfortable with the time and space overhead of LRU
> management, so instead I manage them essentially as a FIFO queue
> that objects cycle through once loaded into cache: They are loaded
> at one end and deleted at the other. Since deleted objects will
> still be in the OS file system cache in general, frequently used
> objects will get 'read' back in very quickly, giving (I hope, no
> performance studies yet) much of the performance of an LRU system in
> practice, without much of the overhead.
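To make sure I follow the FIFO scheme, here is roughly what I picture.
My guess at the shape only, and a real version would hash the lookups
rather than scan:

/* Fixed ring of cache slots.  Objects enter at the head; when the ring
 * wraps, whatever sits in the next slot is flushed and dropped,
 * regardless of how recently it was touched.  Lookups never reorder
 * anything, which is what keeps the bookkeeping cheap next to true LRU. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t objref;

struct cache_slot { objref ref; void *data; };

#define CACHE_SLOTS 4096
static struct cache_slot cache[CACHE_SLOTS];
static size_t cache_head;                 /* next slot to overwrite */

static void *cache_lookup(objref r)
{
    for (size_t i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].data != NULL && cache[i].ref == r)
            return cache[i].data;
    return NULL;                          /* miss: go read the db file */
}

static void cache_insert(objref r, void *data)
{
    struct cache_slot *slot = &cache[cache_head];

    if (slot->data != NULL) {
        /* Eviction point: write back if dirty, then drop.  The OS file
         * cache will usually still hold the bytes, so a hot object
         * comes back cheaply on its next miss. */
        free(slot->data);
    }
    slot->ref  = r;
    slot->data = data;
    cache_head = (cache_head + 1) % CACHE_SLOTS;
}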
I basically use a lightweight trellis table for the LRU cache stuff,
and then evaluate it only as needed. I haven't profiled it, but it
*seems* cheap. (Note to self: run profiler on DB state code)
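By "evaluate it only as needed" I mean roughly this: stamp objects on
access and only go hunting for a victim when an eviction is actually
required.  The sketch below is just an illustration of that idea, not
the actual trellis code:

/* Lazily evaluated LRU: cheap per-access stamp, and the ordering is
 * only computed when space is actually needed. */
#include <stddef.h>
#include <stdint.h>

struct cached { uint32_t ref; void *data; uint64_t stamp; };

static uint64_t clock_ticks;             /* bumped on every cache access */

static void touch(struct cached *c) { c->stamp = ++clock_ticks; }

/* Linear scan for the stalest entry, done only at eviction time. */
static struct cached *pick_victim(struct cached *tab, size_t n)
{
    struct cached *victim = &tab[0];
    for (size_t i = 1; i < n; i++)
        if (tab[i].stamp < victim->stamp)
            victim = &tab[i];
    return victim;
}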
--
J C Lawrence Internet: claw at null.net
(Contractor) Internet: coder at ibm.net
---------(*) Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...