[DGD] Persistent Users
Blain
blain20 at gmail.com
Thu Sep 22 16:05:40 CEST 2016
The most efficient way to handle read/write permissions is not to have
any, and to allow only admin users and admin/system code to run in your
game instance. Builders and coders can create game content on another
instance where read/write permissions are in play.
I see memory, be it RAM or swap, as what's active and the data stored on
disk as what's inactive. A player that isn't logged in is inactive and
should be removed from active memory.
The ability to keep all players in memory seems neat and may simplify some
code, but it's just a novelty to me. I'd much rather save a player's data
in one central location, unique to that player.
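For instance, a minimal sketch of what I mean (the paths and variable
names are made up for illustration, and where such files may be written
is up to the mudlib; save_object and restore_object are the DGD kfuns):

    /* player.c: each player's data lives in one file, unique to them */
    private string name;      /* account name; doubles as the file key */
    private string password;  /* password hash, never the plain text */
    private mapping body;     /* whatever character data the game keeps */

    private string datafile()
    {
        return "/usr/Game/data/players/" + name + ".o";
    }

    void save_player()
    {
        save_object(datafile());  /* writes this object's variables */
    }

    int restore_player(string who)
    {
        name = who;
        return restore_object(datafile());  /* 0 if no saved data */
    }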
On Sep 22, 2016 8:32 AM, <bart at wotf.org> wrote:
> On Thu, 22 Sep 2016 13:17:43 +0100, Gary wrote
>
> <snip>
>
> > The other issue is one of technical limitations. Assuming a given
> > user account can have multiple characters, or that players are
> > allowed multiple user accounts, the number of users and player
> > bodies to persist will (on a successful mud anyway) continue to grow.
> >
> > DGD appears to limit array_size to a maximum of unsigned short / 2,
> > 32768 items[^1]. That looks like a compile-time limit, and a prior
> > post by Dworkin said:
> >
> > > The limit exists for muds with guest programmers.
>
> IIRC it's actually < USHRT_MAX/2, which is 32767 max :-)
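>
> (For reference, the value under discussion is the array_size line in
> the DGD config file; a made-up excerpt, with the cap enforced when DGD
> reads the config at startup:
>
>     array_size = 4000;   /* max elements per array, capped at 32767 */
>
> so "increasing it" means raising that line, up to the hard maximum.)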
>
> But anyway, there is a bit more to this. Making arrays very large makes
> them very inefficient; generally it is a good idea to keep arrays
> smaller than 512 items, or at least smaller than 1024 items, for
> performance reasons. First of all, modifying arrays (adding/removing
> elements) may result in reallocations, which simply become more
> expensive and difficult when arrays get very large.
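>
> To illustrate the reallocation cost (a made-up sketch, not code from my
> mudlib): appending in a loop copies the array over and over, while the
> allocate() kfun reserves the final size once.
>
>     /* grows by copying: every += builds a new, larger array */
>     mixed *grow_slow(int n)
>     {
>         mixed *arr;
>         int i;
>
>         arr = ({ });
>         for (i = 0; i < n; i++) {
>             arr += ({ i });
>         }
>         return arr;
>     }
>
>     /* allocate once, then assign in place */
>     mixed *grow_fast(int n)
>     {
>         mixed *arr;
>         int i;
>
>         arr = allocate(n);
>         for (i = 0; i < n; i++) {
>             arr[i] = i;
>         }
>         return arr;
>     }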
>
> It is much better to look for a solution where you distribute this data
> over multiple arrays when it becomes too big to reasonably fit in a
> single array.
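>
> For example (a sketch with made-up names; a real hash would spread the
> names more evenly), a user list can be split into small per-bucket
> arrays held in a mapping, so no single array grows past a few hundred
> entries:
>
>     private mapping buckets;    /* ([ int : string * ]) */
>
>     static void create()
>     {
>         buckets = ([ ]);
>     }
>
>     private int bucket_of(string name)
>     {
>         return name[0] & 0x1f;    /* crude: 32 buckets by first letter */
>     }
>
>     void add_user(string name)
>     {
>         int b;
>         string *list;
>
>         b = bucket_of(name);
>         list = buckets[b] ? buckets[b] : ({ });
>         buckets[b] = list + ({ name });
>     }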
>
> Additionally, making very big arrays also makes the object containing
> that array big. This makes swapping less efficient, so past some point
> it really pays to distribute those multiple arrays over multiple
> objects.
>
> This exact issue set is what triggered me to implement a database-like
> setup for containing the account/user data. My setup is configured to
> support up to 16777216 users right now, and can be reconfigured to
> support more users than there are humans on this planet (and hence more
> users than can be counted with an unsigned int :-). That may be
> complete overkill for most purposes, but it means that 1. I never have
> to worry about it again, and 2. for more 'normal' use, supporting maybe
> a few 100k users, all arrays stay rather small and very efficient, same
> for the objects containing those arrays (or actually, mappings, but
> that doesn't matter for this discussion).
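>
> As a sketch of how such a layout keeps everything small (object paths
> and the exact split are made up for illustration, not my actual code):
> 16777216 is 256 * 256 * 256, so a 24-bit user id can be routed through
> levels of at most 256 entries each, with the leaves living in separate
> objects so no single object grows big.
>
>     /* userdb.c: route the top 8 bits of a 24-bit uid to a leaf object */
>     private mapping shards;    /* ([ int : object ]), at most 256 entries */
>
>     static void create()
>     {
>         shards = ([ ]);
>     }
>
>     private object shard_of(int uid)
>     {
>         int key;
>         object shard;
>
>         key = (uid >> 16) & 0xff;
>         shard = shards[key];
>         if (!shard) {
>             shard = clone_object("/usr/Game/obj/usershard");
>             shards[key] = shard;
>         }
>         return shard;
>     }
>
>     void set_account(int uid, mapping data)
>     {
>         /* the shard splits its 16-bit remainder the same way */
>         shard_of(uid)->store(uid & 0xffff, data);
>     }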
>
> >
> > So increasing that would solve, for all practical purposes, the limit
> > on user accounts in userd without changing the kernel library to use
> > a mapping of arrays, on the assumption that malicious or innocently
> > negligent guest programmers are not an issue.
>
> Making things big like that introduces more potential for issues, even
> when you do not have a malicious guest programmer. It at the very least
> causes a situation where you must be very aware of the impact on
> performance of every array you manage, as large arrays are expensive,
> much more so than having a nested array. You will also have to ensure
> you are not by accident causing objects to become very large. The issue
> of restoring a very large object from a state dump, or reading in a .o
> file from a really huge object, was solved recently, after I ran into a
> variety of such issues triggering crashes in DGD; but even though DGD
> does not crash on those things anymore, it really is a very, very good
> idea to avoid those situations completely.
>
> Splitting up and distributing largish data sets is really the only good
> solution here, and when data sets get really large, splitting up means
> more than a nested array or mapping: it means spreading the data over
> multiple objects.
>
> >
> > Some policy would need to be in place to prune accounts should the
> > number of inactive accounts grow so large that the storage space or
> > sheer object count begins to cause issues.
>
> Increasing the object count takes some memory, but tends to have far
> less negative effect on overall performance and behavior. But yes,
> you'll have to take care of removing stale data at some point.
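>
> A pruning pass can also be spread over call_outs, so no single
> execution thread touches too much data (a sketch; the bucket layout and
> the one-year cutoff are made up):
>
>     # define NBUCKETS 256
>
>     /* every record is a mapping that carries a "last_login" time */
>     private mapping buckets;    /* ([ int : ([ string : mapping ]) ]) */
>
>     static void prune_bucket(int b)
>     {
>         mapping bucket;
>         string *names;
>         int i, cutoff;
>
>         bucket = buckets[b];
>         cutoff = time() - 365 * 24 * 3600;
>         names = map_indices(bucket);
>         for (i = 0; i < sizeof(names); i++) {
>             if (bucket[names[i]]["last_login"] < cutoff) {
>                 bucket[names[i]] = nil;    /* drops the entry */
>             }
>         }
>         if (b + 1 < NBUCKETS) {
>             call_out("prune_bucket", 0, b + 1);  /* next bucket, new thread */
>         }
>     }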
>
> >
> > I don't see disk storage as a realistic issue; whether object count
> > would be, however, I'm not sure at this time.
>
> Both can be an issue, but object count is easy to increase at the
> expense of some memory. When your swap file gets very big, you'll at
> some point (for me, on my hardware, at around 2.2GB of swap) run into
> the issue that you are spending too much time copying data back from
> the state dump to the swap file after having done a state dump. This is
> mostly a problem when there is too little time between two state dumps
> to fully copy everything back to swap, but it does have an impact on
> the performance of your system in general.
>
> >
> > As for why I'm considering persisting users rather than saving
> > password and other account information to files: partly, it just
> > avoids load/save and permissions.
> >
> > To upgrade accounts or do other processing, you can just loop over
> > all users in userd (likely spread over execution threads if user
> > counts are high enough that swapping out would be important), as
> > opposed to looping over all the account files and creating/destroying
> > objects.
>
> Yes, that is one of the big advantages I also noticed. Even though I do
> not keep the actual user objects loaded, the data is there in a loaded
> state, and I can just go over it without having to create/destroy a
> user object to access the data for each account.
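>
> In that style, an upgrade pass can walk the stored records directly (a
> sketch; the record layout is invented for the example):
>
>     /* bucket: ([ string name : mapping record ]); backfill a new field
>        on every record without creating/destroying user objects */
>     void upgrade_accounts(mapping bucket)
>     {
>         mapping *records;
>         int i;
>
>         records = map_values(bucket);
>         for (i = 0; i < sizeof(records); i++) {
>             if (records[i]["created"] == nil) {
>                 records[i]["created"] = 0;    /* mappings are shared by
>                                                  reference, so this updates
>                                                  the stored record */
>             }
>         }
>     }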
>
> >
> > Ignoring the array size limit under the assumption that a rebuilt
> > driver with increased limits will solve that, the issue I can think
> > of is that should a state dump need to be reverted for any reason,
> > users who changed their passwords may suddenly find they need to use
> > their old password rather than the latest. They may not remember the
> > old password.
> >
>
> I mostly see an issue in wanting to start the system without a state
> dump, but that is partially a design issue. In my setup it makes sense
> to be able to do this without account and player information getting
> lost, but the entire concept of such a restart may not apply to a truly
> persistent world.
>
> > I see that as a minor issue as long as there's any form of password
> > reset support which would be useful for general forgetfulness anyway.
> >
>
> That is a good idea to have anyway.
>
> Regarding making things scalable.. don't overdo it, but also don't count on
> things always staying small. The moment they outgrow what you planned for
> is
> likely going to coincide with high popularity of your mud, and that is the
> worst possible moment for trying to fix such issues.
>
> Bart.
> --
> http://www.flickr.com/photos/mrobjective/
> http://www.om-d.org/
>
> ____________________________________________
> https://mail.dworkin.nl/mailman/listinfo/dgd