[DGD] Object upgrading
Raymond Jennings
shentino at gmail.com
Thu Jul 5 07:29:41 CEST 2018
On Tue, Jul 3, 2018 at 6:11 AM <bart at wotf.org> wrote:
> On Tue, 3 Jul 2018 05:59:00 -0700, Dread Quixadhal wrote
> > Just out of curiosity, aside from memory requirements, is there any
> > real issue with just keeping a data structure (in a daemon) to track
> > both a list of clones of objects, as well as the last updated
> > timestamp for each?
>
> On DGD? Well, you can't really set memory requirements 'aside', because they
> also directly affect the cost of swapping the specific object/daemon in and
> out (and DGD needs to swap every object out at times, even if only to the
> swap cache, because part of its maintenance/garbage collection happens at
> swapout/swapin).
>
> Hence, on DGD it can have some real performance impact beyond just using a bit
> more memory.
>
> But beyond that, on Hydra, following this approach results in threads
> colliding on that shared data, and hence potentially in a lot of rollbacks
> when cloning many objects.
>
> >
> > It seems like you should be able to sort such a data set by the last
> > updated timestamp and then cull anything whose time is newer than
> > the last time you ran an update cycle. Assuming you’re not
> > creating new objects faster than you can get around to updating old
> > ones, each pass would make that list shorter and shorter until it
> > got to them all.
>
> Well, you do want to call_touch all objects and clones before allowing
> anything else to run. The simple reason is that not doing so will result in
> objects getting called with updated code but outdated data. The entire point
> of call_touch and calling an upgrade function is to ensure this data gets
> migrated on the first call to an object/clone.
>
> So, you can do 'lazy' data upgrading by means of call_touch, but you can't
> do the call_touch pass itself lazily.
>
> >
> > If there’s a fast way to find clones without having to keep track
> > of them, perhaps you could simply build the list of “yet-to-be-
> > updated” clones as you compile an object (and thus need it to be
> > updated), and throw the list away when the last clone is done, to
> > save memory.
>
> Fast is relative, but Shentino's approach works and does not have the memory
> penalty.
Just to give credit where it's due:
The "find_object loop" is easy enough to come up with on your own, but it was
actually suggested on the DGD mailing list by then-Skotos engineer Par Winzell
on April 4, 2004. It was offered as an alternative way of keeping track of
clones once there were more than N of them, where N is some power of 2 and,
I surmise, identical to the maximum array size specified in the config file.
Kudos to Par Winzell.
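For reference, a minimal sketch of that loop as I understand it, assuming
plain DGD kfuns and the ST_OTABSIZE constant from <status.h>: clone names
have the form "<master>#<index>", so probing every possible object-table
index finds every clone without keeping a list of them.

    # include <status.h>

    /* Collect all existing clones of a master by probing every possible
     * object-table slot; slower than a tracked list, but it needs no
     * memory beyond the result array. */
    object *find_clones(string master)
    {
        object obj, *clones;
        int i, sz;

        clones = ({ });
        sz = status()[ST_OTABSIZE];      /* object table size from the config */
        for (i = 0; i < sz; i++) {
            obj = find_object(master + "#" + i);
            if (obj) {
                clones += ({ obj });
            }
        }
        return clones;
    }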
> Bart
> --
> https://www.bartsplace.net/
> https://wotf.org/
> https://www.flickr.com/photos/mrobjective/