[DGD] Object upgrading

Dread Quixadhal quixadhal at gmail.com
Tue Jul 3 14:59:00 CEST 2018


Just out of curiosity, aside from memory requirements, is there any real issue with just keeping a data structure (in a daemon) that tracks both a list of clones of each object and the last-updated timestamp for each?

It seems like you should be able to sort such a data set by the last-updated timestamp and then cull anything whose timestamp is newer than the last time you ran an update cycle.  Assuming you're not creating new objects faster than you can get around to updating old ones, each pass would make that list shorter and shorter until it got through them all.
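As a minimal sketch of such a tracking daemon (hypothetical function and variable names, using DGD's time() for timestamps; note that DGD automatically removes destructed objects from mappings, which keeps the table tidy for free):

```
/* clone_d.c - hypothetical tracking daemon sketch */
mapping clones;		/* ([ object clone : int last_updated ]) */

static void create()
{
    clones = ([ ]);
}

void register_clone(object obj)
{
    clones[obj] = time();	/* record (or refresh) the timestamp */
}

/* return clones not touched since the last update cycle */
object *stale_clones(int last_cycle)
{
    object *objs, *todo;
    int *stamps;
    int i, sz;

    objs = map_indices(clones);
    stamps = map_values(clones);
    todo = ({ });
    for (i = 0, sz = sizeof(objs); i < sz; i++) {
	if (stamps[i] <= last_cycle) {
	    todo += ({ objs[i] });
	}
    }
    return todo;
}
```

An update cycle would then walk stale_clones(), patch each one, and call register_clone() again to refresh its timestamp.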

If there's a fast way to find clones without having to keep track of them, perhaps you could simply build the list of "yet-to-be-updated" clones when you compile an object (and thus need its clones updated), and throw the list away when the last clone is done, to save memory.


From: bart at wotf.org
Sent: Tuesday, July 3, 2018 4:34 AM
To: All about DGD and Hydra
Subject: Re: [DGD] Object upgrading

On Mon, 2 Jul 2018 23:55:28 -0700, Raymond Jennings wrote
> 
> After doing the paperwork for patchers, when a non inheritable object
> is compiled, the next thing I do is atomically enter a loop to
> call_touch the object and all of its clones so that the next attempt
> to access them will get it pulled over for a JIT patch.
> 
> Once the clones are marked, the last thing I do before I end the task
> is suspend the system and arrange to have the object's upgrade
> function called, after which the system releases itself.

I do not understand this particular step. Wouldn't this happen anyway, because the object that needs a call to its upgrade function is also handled by call_touch?

Second question: why suspend the system here, and not at the beginning of this
entire process?

<snip> 

> 
> My own concern is that the most obstructive part of the process is 
> the loop that marks all the clones.  Actually finding all the clones 
> is a bit of a chore.  For the time being I'm using a loop that does 
> a find_object sweep through the object table.
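A sweep like the one described above might look like this (a sketch, assuming the usual DGD convention that clones are named "<master>#<index>" and that the index space is bounded by the object table size, available via status() with ST_OTABSIZE from <status.h>):

```
# include <status.h>

/* mark every clone of the given master for a JIT patch */
void mark_clones(string master)
{
    object obj;
    int i, sz;

    sz = status()[ST_OTABSIZE];
    for (i = 0; i < sz; i++) {
	obj = find_object(master + "#" + i);
	if (obj) {
	    call_touch(obj);	/* patched on next access */
	}
    }
}
```

The cost is proportional to the size of the object table rather than to the number of clones, which is why it is obstructive for large tables.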

I've looked at this approach, and I use it as a fallback to reconstruct object
and clone data. Its advantage is of course that it does not end up pulling in
all clones, which my current alternative does: that alternative uses a linked
list maintained in the objects/clones themselves, so it has to call each clone
to find the pointer to the next clone, if any. I use a loop that only pulls in
a limited number of objects/clones per pass, and then uses a call_out to itself
for the next batch. (Note that the object handling this is excluded from
call_out blocking while the system is suspended.)

Obviously, the above loop fetches the next clone pointer before doing a
call_touch on the current clone.
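The batched walk described above can be sketched as follows (hypothetical function names; each clone is assumed to expose its successor via a query_next_clone() call, and BATCH_SIZE is an arbitrary tuning knob):

```
# define BATCH_SIZE	100	/* clones handled per task */

/* walk the clone list in batches, marking each clone for a JIT patch */
static void touch_clones(object obj)
{
    object next;
    int i;

    for (i = 0; obj && i < BATCH_SIZE; i++) {
	/* fetch the next pointer BEFORE call_touch, since touching
	 * the current clone may trigger its patch */
	next = obj->query_next_clone();
	call_touch(obj);
	obj = next;
    }
    if (obj) {
	/* more clones left: continue in a fresh task */
	call_out("touch_clones", 0, obj);
    }
}
```

Splitting the work across call_outs keeps any single task short, at the price of having to exempt this object from call_out blocking while the system is suspended.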

I considered using this setup to just call the appropriate upgrade functions
directly, but that did not work out very well; those functions can be
relatively expensive and can fail.

But that solution is not exactly perfect either, as it ends up pulling in all
objects and clones affected by the recompile.

Being able to find all clones relatively quickly is something I also use
elsewhere, and it does not require a global data structure for keeping track
of this.

Bart.
--
https://www.bartsplace.net/
https://wotf.org/
https://www.flickr.com/photos/mrobjective/

