[DGD] Persistent Users
Blain
blain20 at gmail.com
Thu Sep 22 17:02:54 CEST 2016
Bart, if your password and permissions info is stored in a database
separate from the user object, then that's still not in the swap. ;) It's
just in a large database file.
I get that people don't mind putting all of their eggs in one basket.
Keeping individual files on disk is still one big basket: the disk. And
it's not like the OS or HDD itself can't cause corruption. But on Ancient
Anguish, we had a rash of problems for nearly a year where individual
player files were getting corrupted because the HDD was caching writes
but, in some cases, never completing them. Imagine if one large file were
being corrupted that way. A lot of this really depends on the technology
involved and whether or not the data is recoverable or replaceable from
backup. If you can retrieve the info for a single user from a corrupted
swapfile, then there's no problem. I can't; I'm not savvy enough. But I
can handle individual files all day long.
On Sep 22, 2016 9:30 AM, <bart at wotf.org> wrote:
> On Thu, 22 Sep 2016 09:05:40 -0500, Blain wrote
> > The most efficient way to handle read/write perms is to not have any
> > and allow only admin users and admin and system code to run in your game
> > instance. Builders and coders can create game content on another instance
> > with read/write permissions in play.
>
> That goes a long way toward reducing those issues, but does not solve them.
> It is still pretty trivial to create some code which in a very non-obvious
> way affects the game on request from a non-admin user, get it tested and
> introduced into the live game, and then trigger the non-obvious behavior.
>
> While what you describe can be a very good policy whenever practical, and one
> I'd recommend strongly for every mud, it is wrong to believe this removes the
> issue with read/write access. Instead of having this issue with interactive
> users and with code, you are left with having it with code alone.
>
> Being able to restrict what code does, i.e., managing the read/write
> permissions that code has, is the other half of the solution, and that still
> requires managing read/write access.
>
>
> >
> > I see memory, be it RAM or swap, as what's active and the data
> > stored on disk as what's inactive. A player that isn't logged in is
> > inactive and should be removed from active memory.
>
> DGD is a swap/disk based system. The 'memory is active, disk is inactive'
> approach is really not very appropriate. Though you can make things work that
> way, you end up fighting the design of the driver instead of using it the way
> it was intended.
>
> >
> > The ability to keep all players in memory seems neat and may
> > simplify some code, but it's just a novelty to me. I'd much rather
> > save a player's data in one central location, unique to that player.
>
> That data is in a central place, unique to that player. The place is called
> 'player object' :-)
>
> But yeah, I get where you are coming from, and this really does take
> rethinking a few things when coming from a more traditional LPMud-style
> approach, for example.
>
> As to the user subsystem I'm using: the user/account data is logically still
> in one central place, and kept unique per user. It acts just like a real
> database (the central place) with a per-user row of data (the per-user unique
> part). All data can be exported in JSON notation to a file, so I can throw
> away the state dump and still restore users if I want to. The advantages are
> that it scales to support more users than there are people on this planet,
> allows uniform access and access control on all user/account data, lets you
> access and modify that data using a subset of SQL, and ensures that a user
> object does not contain data which would be interesting for others to
> 'steal', nor any persistent data which could be used to subtly impact the
> system. The user object simply does not have direct access to the permission
> data or password data; it can only access this data through controlled means
> instead of having it as variables in the object.
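>
> A minimal sketch of that idea (the names and paths below are made up for
> illustration, not taken from my actual code): the user object holds no
> password variable at all; it can only ask a central daemon for a yes/no
> answer.
>
>     /* /usr/System/sys/accountd.c -- hypothetical account daemon */
>     private mapping accounts;   /* user name : ({ password, permissions }) */
>
>     static void create()
>     {
>         accounts = ([ ]);
>     }
>
>     /* callers never see the stored password, they only get a yes/no;
>        a real version would store and compare a salted hash instead */
>     int check_password(string name, string attempt)
>     {
>         mixed *entry;
>
>         entry = accounts[name];
>         if (!entry) {
>             return 0;
>         }
>         return entry[0] == attempt;
>     }
>
>     /* in the user object: no password or permission variables to steal */
>     int login(string name, string attempt)
>     {
>         return "/usr/System/sys/accountd"->check_password(name, attempt);
>     }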
>
> Bart.
>
> >
> > On Sep 22, 2016 8:32 AM, <bart at wotf.org> wrote:
> >
> > > On Thu, 22 Sep 2016 13:17:43 +0100, Gary wrote
> > >
> > > <snip>
> > >
> > > > The other issue is one of technical limitations. Assuming a given
> > > > user account can have multiple characters, or that players are
> > > > allowed multiple user accounts, the number of users and player
> > > > bodies to persist will (on a successful mud anyway) continue to grow.
> > > >
> > > > DGD appears to limit array_size to a maximum of unsigned short / 2,
> > > > 32768 items[^1]. That looks like a compile-time limit, and a prior
> > > > post by Dworkin said:
> > > >
> > > > > The limit exists for muds with guest programmers.
> > >
> > > IIRC it's actually < USHRT_MAX/2, which is 32767 max :-)
> > >
> > > But anyway, there is a bit more to this. Making arrays very large makes
> > > them very inefficient, and generally it is a good idea to keep arrays
> > > smaller than 512 items, or at least smaller than 1024 items, for
> > > performance reasons. First of all, modifying arrays (adding/removing
> > > elements) may result in reallocations, which simply become more expensive
> > > and difficult when arrays get very large.
> > >
> > > It is much better to look for a solution where you distribute this data
> > > over multiple arrays when it becomes too big to reasonably fit in a single
> > > array.
> > >
> > > Additionally, making very big arrays also makes the object containing that
> > > array big. This makes swapping less efficient, so after some point, it
> > > really pays to distribute those multiple arrays over multiple objects.
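> > >
> > > As a minimal sketch of what I mean (made-up names, not actual code from
> > > my lib): instead of one huge array, bucket the entries on a simple hash
> > > of the key, so every individual mapping stays small.
> > >
> > >     # define NBUCKETS 256          /* keeps each inner mapping small */
> > >
> > >     private mapping buckets;       /* bucket number : ([ name : data ]) */
> > >
> > >     static void create()
> > >     {
> > >         buckets = ([ ]);
> > >     }
> > >
> > >     private int bucket(string name)
> > >     {
> > >         int i, h;
> > >
> > >         /* a cheap string hash; anything with a decent spread will do */
> > >         for (i = 0; i < strlen(name); i++) {
> > >             h = (h * 31 + name[i]) % NBUCKETS;
> > >         }
> > >         return h;
> > >     }
> > >
> > >     void set_entry(string name, mixed data)
> > >     {
> > >         int b;
> > >
> > >         b = bucket(name);
> > >         if (!buckets[b]) {
> > >             buckets[b] = ([ ]);
> > >         }
> > >         buckets[b][name] = data;
> > >     }
> > >
> > >     mixed get_entry(string name)
> > >     {
> > >         mapping m;
> > >
> > >         m = buckets[bucket(name)];
> > >         return m ? m[name] : nil;
> > >     }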
> > >
> > > This exact issue set is what triggered me to implement a database-like
> > > setup for containing the account/user data. My setup is configured to
> > > support up to 16777216 users right now, and can be reconfigured to support
> > > more users than there are humans on this planet (and hence more users than
> > > can be counted with an unsigned int :-). That may be complete overkill for
> > > most purposes, but it means that 1. I never have to worry about it again,
> > > and 2. for more 'normal' use, supporting maybe a few 100k users, all
> > > arrays stay rather small and very efficient, and the same goes for the
> > > objects containing those arrays (or actually mappings, but that doesn't
> > > matter for this discussion).
> > >
> > > >
> > > > So increasing that would, for all practical purposes, remove the limit
> > > > on user accounts in userd without changing the kernel library to use
> > > > a mapping of arrays, on the assumption that malicious or innocently
> > > > negligent guest programmers are not an issue.
> > >
> > > Making things big like that introduces more potential for issues, even
> > > when you do not have a malicious guest programmer. At the very least it
> > > causes a situation where you must be very aware of the performance impact
> > > of every array you manage, as large arrays are expensive, much more so
> > > than a nested array. You will also have to ensure you are not accidentally
> > > causing objects to become very large. The issue of restoring a very large
> > > object from a state dump, or reading in a .o file from a really huge
> > > object, was solved recently after I ran into a variety of such issues
> > > triggering crashes in DGD, but even though DGD no longer crashes on those
> > > things, it really is a very good idea to avoid those situations
> > > completely.
> > >
> > > Splitting up and distributing largish data sets is really the only good
> > > solution here, and when data sets get really large, splitting up means
> > > more than a nested array or mapping; it means spreading the data over
> > > multiple objects.
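> > >
> > > Continuing the earlier sketch (still made-up names): once the buckets
> > > themselves would make a single object too big, the same kind of hash can
> > > pick one of several storage clones, so no one object ever holds all the
> > > data. Here STORAGE would be the bucketed object from the previous sketch,
> > > compiled as a clonable.
> > >
> > >     # define NSHARDS 16
> > >     # define STORAGE "/usr/System/obj/storage"
> > >
> > >     private object *shards;
> > >
> > >     static void create()
> > >     {
> > >         int i;
> > >
> > >         shards = allocate(NSHARDS);
> > >         for (i = 0; i < NSHARDS; i++) {
> > >             shards[i] = clone_object(STORAGE);
> > >         }
> > >     }
> > >
> > >     private object shard(string name)
> > >     {
> > >         /* a simple hash like before, here just the first character */
> > >         return shards[name[0] % NSHARDS];
> > >     }
> > >
> > >     void set_entry(string name, mixed data)
> > >     {
> > >         shard(name)->set_entry(name, data);
> > >     }
> > >
> > >     mixed get_entry(string name)
> > >     {
> > >         return shard(name)->get_entry(name);
> > >     }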
> > >
> > > >
> > > > Some policy would need to be in place to prune accounts should the
> > > > number of inactive accounts grow so large that the storage space or
> > > > sheer object count begins to cause issues.
> > >
> > > Increasing the object count takes some memory, but tends to have far less
> > > negative effect on overall performance and behavior. But yes, you'll have
> > > to take care of removing stale data after some point.
> > >
> > > >
> > > > I don't see disk storage as a realistic issue, whether object count
> > > > would be however I'm not sure at this time.
> > >
> > > Both can be an issue, but the object count is easy to increase at the
> > > expense of some memory. When your swap file gets very big, you'll at some
> > > point (for me, on my hardware, at around 2.2GB of swap) run into the issue
> > > that you are spending too much time copying data back from the state dump
> > > to the swap file after having done a state dump. This is mostly a problem
> > > when there is too little time between two state dumps to fully copy
> > > everything back to swap, but it does have an impact on the performance of
> > > your system in general.
> > >
> > > >
> > > > As for why I'm considering persisting users rather than saving password
> > > > and other account information to files: partly, it just avoids
> > > > load/save and permissions.
> > > >
> > > > To upgrade accounts or do other processing you can just loop over all
> > > > users in userd (likely spread over execution threads if user counts are
> > > > high enough that swapping out would be important), as opposed to looping
> > > > over all the account files and creating/destroying objects.
> > >
> > > Yes, that is one of the big advantages I also noticed. Even though I do
> > > not keep the actual user objects loaded, the data is there in a loaded
> > > state, and I can just go over it without having to create/destroy a user
> > > object to access the data for each account.
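> > >
> > > As a rough sketch of such a pass (the function names are made up), the
> > > loop can be spread over execution threads with call_out, so each chunk
> > > runs in its own thread and untouched data can stay swapped out:
> > >
> > >     # define CHUNK 100     /* accounts processed per execution thread */
> > >
> > >     /* started with call_out("upgrade_accounts", 0, all_names, 0) */
> > >     static void upgrade_accounts(string *names, int offset)
> > >     {
> > >         int i, end;
> > >
> > >         end = offset + CHUNK;
> > >         if (end > sizeof(names)) {
> > >             end = sizeof(names);
> > >         }
> > >         for (i = offset; i < end; i++) {
> > >             upgrade_one(names[i]);   /* hypothetical per-account upgrade */
> > >         }
> > >         if (end < sizeof(names)) {
> > >             call_out("upgrade_accounts", 0, names, end);
> > >         }
> > >     }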
> > >
> > > >
> > > > Ignoring the array size limit under the assumption that a rebuilt driver
> > > > with increased limits will solve that, the issue I can think of is that
> > > > should a state dump need to be reverted for any reason, users who
> > > > changed their passwords may suddenly find they need to use their old
> > > > password rather than the latest. They may not remember the old password.
> > > >
> > >
> > > I mostly see an issue with wanting to start the system without a state
> > > dump, but that is partially a design issue. In my setup it makes sense to
> > > be able to do this without account and player information getting lost,
> > > but the entire concept of such a restart may not apply to a truly
> > > persistent world.
> > >
> > > > I see that as a minor issue as long as there's any form of password
> > > > reset support which would be useful for general forgetfulness anyway.
> > > >
> > >
> > > That is a good idea to have anyway.
> > >
> > > Regarding making things scalable: don't overdo it, but also don't count on
> > > things always staying small. The moment they outgrow what you planned for
> > > is likely going to coincide with high popularity of your mud, and that is
> > > the worst possible moment for trying to fix such issues.
> > >
> > > Bart.
> > > --
> > > http://www.flickr.com/photos/mrobjective/
> > > http://www.om-d.org/
> > >
>
>
> --
> http://www.flickr.com/photos/mrobjective/
> http://www.om-d.org/
>