[MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence

Daniel.Harman at barclayscapital.com Daniel.Harman at barclayscapital.com
Tue May 15 11:27:09 CEST 2001


-----Original Message-----
From: brian price [mailto:brianleeprice at hotmail.com]
Sent: 14 May 2001 15:36
To: mud-dev at kanga.nu
Subject: RE: [MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence


> From: Daniel.Harman at barclayscapital.com
> To: mud-dev at kanga.nu
> Subject: RE: [MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence
> Date: Fri, 11 May 2001 13:35:16 +0100
> Reply-To: mud-dev at kanga.nu

> -----Original Message-----
> From: brian price [mailto:brianleeprice at hotmail.com]
> Sent: 11 May 2001 00:27
> To: mud-dev at kanga.nu
> Subject: [MUD-Dev] [Tech] MUDs, MORPGs, and Object Persistence

>> For purposes of fault tolerance, we need a datastore that can
>> periodically be backed up in a fast and efficient manner,
>> preferably without stalling the server.  Note that transaction
>> capability (in db terms) is *not* a requirement, the capability of
>> generating checkpoints by writing out the 'dirtied' (changed)
>> portions of the database periodically will satisfy the datastore
>> backup requirement.  Checkpoints can be restored simply by starting
>> with the last full backup and applying (in order) the saved changes
>> since that backup occurred (can be done offline).

>> I disagree with you on a lot of points here, but I'd start here. I
>> think transactions are important in a MUD. It's the best way to
>> prevent duplicates through synchronisation problems (which is how
>> most of the EQ ones I heard about worked). If someone giving an
>> item to someone else can be made with a call to a single
>> transactional 'switchObjOwnership()' type method, then you aren't
>> going to get either dupes or item loss when passing items.

> You can only get dupes or item loss in a system that does not
> capture coherent snapshots of the persistence state during safe
> periods.  Consider: A hands item to B; both objects A and B (and in
> some cases the item object itself) are dirtied (persistent state
> changed).  If you create a checkpoint prior to the exchange, any
> rollback results in the restoration of the state prior to the
> exchange.  If you create a checkpoint after the exchange, any
> rollback results in the restoration of the state after the exchange
> occurred. You only run into problems when you try creating a
> checkpoint in the midst of an exchange - simply designing the server
> system so this condition cannot occur solves the problem.

Unless I've misunderstood, by creating checkpoints and rollbacks,
you've just manually implemented transactions. Or do we have a
different understanding of what they are?
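
To illustrate what I mean by a single transactional handoff, here is a
rough JDBC sketch. The table and column names (items, owner_id) are
invented for the example; the point is just that the ownership switch
either happens completely or not at all:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class ItemTransfer {
        // Move an item between two characters inside one transaction:
        // a crash or rollback leaves it owned by exactly one of them,
        // never both (a dupe) and never neither (item loss).
        static void switchObjOwnership(Connection con, long itemId,
                                       long fromChar, long toChar)
                throws SQLException {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE items SET owner_id = ? " +
                    "WHERE id = ? AND owner_id = ?")) {
                ps.setLong(1, toChar);
                ps.setLong(2, itemId);
                ps.setLong(3, fromChar);
                if (ps.executeUpdate() != 1) {
                    con.rollback();        // giver didn't own the item
                    throw new SQLException("item not owned by giver");
                }
                con.commit();
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }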

>> You previously said that infrequent reads were required. Thus I
>> don't see how the performance of an RDBMS is going to impact your
>> proposed solution.  Writes are generally fairly fast, it's the
>> queries that are slow.

> Writes are going to be slower in an RDBMS than an OODB/Persistent
> Object Store due to the large number of indices and tables typically
> required in an RDB model that is used to represent an object model.

That's definitely an assumption too. What kind of writes are we talking
about? If it's replacing a record, then finding the location in the
flat file will be slow unless you do some kind of indexing
yourself. These sweeping generalisations probably don't serve anyone :)
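
For concreteness, this is roughly the sort of indexing I mean - a toy
flat-file store with an in-memory id -> offset map. All names and the
fixed record size are invented for the example, and a real store would
also have to rebuild or persist the index across restarts:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.HashMap;
    import java.util.Map;

    class FlatStore {
        private static final int RECORD_SIZE = 512;
        private final RandomAccessFile file;
        // id -> byte offset; without this you end up scanning the file
        // to replace a record.
        private final Map<Long, Long> offsets = new HashMap<>();

        FlatStore(String path) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
        }

        void writeRecord(long id, byte[] data) throws IOException {
            if (data.length > RECORD_SIZE)
                throw new IllegalArgumentException("record too large");
            // Reuse the object's old slot if it has one, otherwise
            // append a new fixed-size slot at the end of the file.
            Long offset = offsets.get(id);
            if (offset == null) {
                offset = file.length();
                offsets.put(id, offset);
            }
            byte[] slot = new byte[RECORD_SIZE];   // zero-padded slot
            System.arraycopy(data, 0, slot, 0, data.length);
            file.seek(offset);
            file.write(slot);
        }
    }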

>> By not going for an RDBMS you have made any type of reporting
>> functionality many times more difficult to implement. If you have a
>> large game, then I would imagine functionality to measure how many
>> warriors have weapons of greater than 'x' effectiveness is
>> something you might want to find out infrequently enough to make
>> writing a bespoke tool a pain, but frequently enough that having
>> sql is a feature. The same for economy reports and such like. With
>> a bespoke object store, any type of data-mining is just hideous.

> In previous discussions elsewhere, the maintenance issue has been
> raised time and time again as a reason for choosing RDBMS over
> OODB/Persistent object store.  There is an alternative for OODB that
> is in many ways far more powerful: embedded script/interpreted
> language engines.  For C++ server implementations, Java or Python
> are natural choices.

So you are saying that writing your own data store access language is
better? Implementing the functionality and concise expressiveness of
SQL is non-trivial, and probably a project as complex as the MUD
itself. If you aren't going to provide that functionality, then you
have an inferior solution.  Going back to the flat file system I worked
on, it did indeed have an advanced suite of programmatic data access
functions. Compared to writing a query in SQL, it was still a dog.
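
For comparison, the warriors-with-good-weapons report above is a single
ad hoc query against an RDBMS. A rough sketch, assuming a purely
hypothetical characters/items schema with an effectiveness column:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class Reports {
        // How many warriors carry a weapon above a given effectiveness?
        // The whole report is one query plus a few lines of glue.
        static long wellArmedWarriors(Connection con, int minEffectiveness)
                throws SQLException {
            String sql =
                "SELECT COUNT(DISTINCT c.id) " +
                "FROM characters c JOIN items i ON i.owner_id = c.id " +
                "WHERE c.class = 'warrior' AND i.effectiveness > ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, minEffectiveness);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getLong(1);
                }
            }
        }
    }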

>> Anyway, a well tuned and designed database can be remarkably fast.

> Performance issues boil down to the number of file i/o operations
> required per equivalent action.  In the case considered in the
> original message, the OODB/Persistent object store approach results
> in far fewer file i/o operations than the equivalent RDBMS solution
> for the actions required.

Well, we've already realised that we are talking about different ideals
here. When you said you had an ideal method of handling these things,
I think most of us assumed you were talking about a large, scalable
system. If you are just writing a single-PC text MUD then frankly it
probably doesn't matter much how you implement this stuff.

As to the amount of file i/o, that's not something I feel you can
judge, as any decent RDBMS has highly evolved and effective caching
(assuming you understand how to design a DB). You would probably get
less disk i/o with an RDBMS.

>> and a persist method to get it to write itself to the db. None of
>> these are a great deal of work. If you were to go towards Java or
>> C#, you could make this even more trivial with the object
>> reflection.

> It seems you're speaking here of translating to/from the rdb model
> in each object's persistence interface.  For simple systems this
> would work fine, but with a large number of classes I'd think both
> implementation and maintenance would become a task of herculean
> proportions.

That's why I mentioned reflection: it does all the work for you. With
it, you can iterate through all the properties on an object, and in C#
objects can even be instructed to convert themselves into XML (I don't
know about Java...).
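
Having said that, plain Java reflection will get you most of the way.
A minimal sketch (deliberately simplified - it ignores statics, nested
objects and so on):

    import java.lang.reflect.Field;
    import java.util.LinkedHashMap;
    import java.util.Map;

    class ReflectivePersister {
        // Walk an object's declared fields and dump them to a
        // name -> value map; a persistence layer could turn this into
        // SQL or XML without each class writing its own save code.
        static Map<String, Object> describe(Object obj)
                throws IllegalAccessException {
            Map<String, Object> values = new LinkedHashMap<>();
            for (Field f : obj.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                values.put(f.getName(), f.get(obj));
            }
            return values;
        }
    }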

Do you not have to programmatically define how each object is
persisted using your system (as you would with the pattern I
described)?

>>>  I've heard all the arguments against OODBMS over the years and
>>>  all the arguments for RDBMS, and in this case at least, *none* of
>>>  them hold any water.

>> I disagree. I think an RDBMS with a bespoke in-memory cache would
>> be the optimal solution.

> The very need for a separate cache makes such a solution non-optimal
> in the stated case.

Surely your system needs a cache too, otherwise both will be disk i/o
bound for every object access. Of course, you get a free cache with an
RDBMS (but assuming a distributed system it's not local, which is why I
specified a custom local one).
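
By a 'custom local one' I don't mean anything exotic - roughly an
in-process LRU map sitting in front of the database, along these lines
(the loader is just a placeholder for whatever JDBC lookup you use;
capacity, eviction policy etc. are up to you):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.Function;

    class ObjectCache<K, V> {
        private final Function<K, V> loader;   // e.g. a JDBC lookup
        private final LinkedHashMap<K, V> map;

        ObjectCache(int capacity, Function<K, V> loader) {
            this.loader = loader;
            // access-order = true gives least-recently-used eviction
            this.map = new LinkedHashMap<>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> e) {
                    return size() > capacity;
                }
            };
        }

        V get(K key) {
            V value = map.get(key);            // hit: stays in memory
            if (value == null) {
                value = loader.apply(key);     // miss: go to the RDBMS
                map.put(key, value);
            }
            return value;
        }
    }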

>> What about failover? A proper RDBMS will facilitate this. I get ill
>> thinking about having to write one of these for some kind of
>> bespoke flat file object store.

> Failover is a non-issue because the persistent store is tightly
> integrated with and local to the MUD/MORPG server.  Even in the
> distributed case, depending upon system design, it may not be
> necessary.

If you are working on the basis that you are using one PC, sure. From
what I understand of commercial MMORPGs, disk failure is fairly common
due to the load on them, so I'm working from the assumption of a RAID
array anyway.  That doesn't change the fact, however, that a database
cluster backed by a RAID array is more robust, and if I were to write a
MUD, even a small one, I'd design it to be scalable.

>> It's interesting, because I have worked on two versions of a
>> large(ish) scale distributed fat-client system, one where we used
>> Sybase, and another where we did use a bespoke flat file system
>> with an in-memory cache for 'performance' reasons. The flat file
>> system, whilst initially fast, was in fact more trouble than it was
>> worth for the following reasons:

> I do not believe the application spheres are congruent.  We can
> compare apples and oranges all day in re RDB/ODB.

I feel that they are, which is why I used the example :)

In the end, I think the basis of the argument is that both Derek and I
are talking about large-scale systems, and that you probably
aren't. Having said that, even on a smaller system, I think a lot of
the arguments still hold.

A lot of people seem to make a lot of assumptions about database
performance without actually benchmarking. I'm not asserting that you
have, but I'm just refuting a lot of the arguments these people put
forward, as most are unfounded.

Dan
_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


