[DGD] 1.2.133
Shentino
shentino at gmail.com
Fri Dec 21 18:41:10 CET 2007
On Dec 20, 2007 5:26 PM, Felix A. Croes <felix at dworkin.nl> wrote:
> Shentino <shentino at gmail.com> wrote:
>
> >[...]
> > Would this include "mopping up" references to destructed objects?
> > Like, say, swapped-out object X has references to newly destructed
> > object Y. Does your "garbage collection" cover combing out old Y refs
> > from X's dataspace? Also, if so, how aggressively are they mopped up?
> > Is it like a "shoot on sight" deal on swapin? I'm just curious,
> > that's all.
>
> "Full" garbage collection is about self-referential datastructures
> composed of arrays, mappings and light-weight objects. These can
> only be removed when the object they're in is swapped out, so a
> healthy swaprate is actually a good thing for a mud.
>
> Persistent objects have special handling. Check what the
> DESTRUCTED macro in interpret.h does and you'll understand.
>
>
> > Also, something I wanted to ask you.
> >
> > I did some snooping in the source and found something very surprising.
> >
> > Namely, that swap_fragment seems to have more influence on runtime
> > memory usage than cache_size. The d_swapout seems to pick some
> > certain fraction to "swap out" ALL THE TIME, not just when it's over
> > cache_size.
>
> Swap_fragment works the way you think it does, and cache_size is
> the size of the sector cache which is something altogether different.
>
> Most people expect DGD swapping to work like OS swapping. They
> want to see a lower limit on the resident set size below which
> swapping does not occur, and a higher limit having to do with
> the maximum amount of memory DGD is supposed to allocate. The
> problem here is that OS swapping doesn't work well. As I've
> mentioned above, it is a good thing in DGD that swapping happens
> whenever there is activity, and a maximum resident set size would
> be too abrupt a barrier. In the case of an operating system, you
> really don't want it to swap at all since the impact on performance,
> once it starts, is large.
>
> What this means is that you're stuck with a vague parameter and
> a process that will use more memory when it's busy. I plan to
> tweak it in the future, though. :)
I was actually hoping to be able to tweak it in both directions :)
From this (= swapout, - not swapout):
(frag 4, bias 0)
= - - -
= = - - - - - -
= = = - - - - - - - -
To this:
(frag 4, bias -4)
- - - -
= - - - - - - -
= = - - - - - - - - - -
OR this:
(frag 4, bias +4)
= = - -
= = = - - - - -
= = = = - - - - - - -
Or maybe even this if I'm just plain mean (swap fragment very high,
but also a strong bias)
(frag very high, bias +8)
= = - -
= = - - - - - -
= = - - - - - - - - - -
Or perhaps this if I'm just plain demented
(frag 1, bias -16)
- - - -
= = = = - - - -
= = = = = = = = - - - -
You know those "more bars than one" commercials we got in the US for
cellphones? Well, it's sorta like that. I'd like to not just change
the SLOPE of the swapouts/nobjects graph, but also the Y-intercept.
I would actually enjoy having a "minimum of 1 dataspace swapout per
exec round" rule, even if the number of "resident objects" is well
below swap_fragment... which is exactly what the current "n /=
fragment" prevents, given the "chop" effect of integer division :P
Maybe something like "(n +/- bias) /= fragment" with a properly sane
upper bound on the bias.
I made a tweak recently to (n + fragment - 1) / fragment, and got a
very tidy "vacuum of laziness" that didn't leave a swap_fragment's
worth of objects sitting around in limbo.
In the above case, with the new parameter "swap_bias", having a
positive swap bias equal to the swap fragment would "fake" the object
count enough that the quotient would always be at least 1.
I hope my crappy ascii bar graphs are illustrating what I fear my
words may not :P