[MUD-Dev] RE: Randomness
David Kennerly
kennerly at sfsu.edu
Tue Feb 3 00:00:50 CET 2004
Hi Alex.
I apologize for leaping into this conversation midstream without
reading the prior messages. Your insight into one mechanical cause
of player frustration, and the well-thought-out algorithm behind
it, caught my attention.
Alex Chacha wrote:
> The player will get a lot more frustrated when they get 5 misses
> in a row rather than 7 misses out of 10 sparsely distributed (e.g.
> T F F F T F F F T F). There seems to be some feeling of "doing
> work" that players expect; if they are fighting a monster and they
> keep missing, they get frustrated, but if they get a hit once
> every 3 or 4 swings they accept it, since they are doing
> something.
I concur. It seems that one way to prevent this is to reduce
randomness. I've played on MMPs that reduced to-hit randomness and
reduced damage randomness. Reason? As you said, if the probability
of a string is above zero, then the probability of that string
occurring at least once over the lifespan is very high. And once it
does occur, it is a frustrating experience.
The level of thought you put into this algorithm caught my eye:
> First one is a pure white RNG, every missed trial "moves" the
> curve towards a success value, so initially rolling 10+ with a d20
> is 50%, if the user misses, then next time around the RNG
> coefficient is adapted so the chance to roll 10+ is 60%, then 78%,
> then 86% etc with 90% cap.
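Just to make sure I'm reading it right, here is roughly how I
picture that kind of adaptive roll, as a minimal Python sketch. The
flat +10% step and the reset-on-hit rule are my own placeholders,
since the quoted step sizes (60%, 78%, 86%) don't follow an obvious
formula:

    import random

    class AdaptiveRoll:
        """To-hit roll whose success chance creeps upward after each
        miss and resets after a hit.  The step size and the reset
        rule are assumptions for illustration only."""

        def __init__(self, base=0.50, step=0.10, cap=0.90):
            self.base = base        # chance at the start of a streak
            self.step = step        # added after every miss
            self.cap = cap          # ceiling on the adapted chance
            self.current = base

        def attack(self):
            hit = random.random() < self.current
            if hit:
                self.current = self.base    # reset on success
            else:
                self.current = min(self.cap, self.current + self.step)
            return hit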
Since I haven't been keeping up, may I ask: Have you considered
selection without replacement? And compared the two?
Basically, instead of "rolling a die," draw a card. Using the d20
system (3.5e) as shared vocabulary for the concept: instead of
rolling a d20, imagine 20 perfectly shuffled cards. If the player
has a melee attack bonus of +0, then he would hit AC 11 50% of the
time, so the deck would hold 10 hit cards and 10 miss cards. The
player could still get 10 misses in a row, but the probability is
much lower than with selection with replacement (rolling a die, or
replacing the card and reshuffling after each draw).
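In code, I picture the deck version as something like this rough
Python sketch (the class and parameter names are mine, purely for
illustration):

    import random

    class DeckRoll:
        """Selection without replacement: a small deck with a fixed
        number of hit cards, reshuffled only when it runs out."""

        def __init__(self, hit_cards=10, deck_size=20):
            self.hit_cards = hit_cards
            self.deck_size = deck_size
            self.deck = []

        def _reshuffle(self):
            self.deck = ([True] * self.hit_cards
                         + [False] * (self.deck_size - self.hit_cards))
            random.shuffle(self.deck)   # a Fisher-Yates shuffle of 20 cards

        def attack(self):
            if not self.deck:
                self._reshuffle()
            return self.deck.pop()      # draw the top card; True = hit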
Suppose "f" equals the number of consequtive misses. With
replacement
p[a](f=10) = 1/2^10 = 9.8 * 10 ^ -4
Without replacement
p[b](f=10) = C(10, 1) / C(20, 10) = 5.4 * 10 ^ -6
Without replacement is roughly two orders of magnitude less
probable. And without replacement, a single pass through the deck
can never produce more than 10 failures in a row. I'm curious how
the algorithm you mentioned compares for 10 failures in a row.
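For anyone who wants to double-check those two figures, they can be
computed exactly in a few lines of Python (3.8+ for math.comb):

    from math import comb

    # With replacement: ten independent 50% misses in a row.
    p_with = (1 / 2) ** 10          # ~9.8 * 10^-4

    # Without replacement: all ten miss cards land in the first ten
    # positions of a shuffled 10-hit/10-miss deck.
    p_without = 1 / comb(20, 10)    # 1/184756 ~= 5.4 * 10^-6

    print(p_with, p_without, p_with / p_without)   # ratio is about 180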
If one applied worst-case analysis and had predetermined 5
consecutive misses to be the maximum tolerance, then the deck could
be divided into two separate decks, each drawn without replacement,
so long as neither deck held more than 5 miss cards. For
consistency's sake, it might be divided evenly. E.g., a 75% success
rate = 7 hits out of 10 and 8 hits out of 10.
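Here is a rough sketch of that split (Python again; the function
name and the spread-the-hits-evenly rule are my own assumptions):

    import random

    def build_split_decks(hit_cards, deck_size=20, halves=2):
        """Split the deck into smaller decks, spreading the hit cards
        as evenly as possible.  E.g. 15 hits in 20 cards (75%) becomes
        one half-deck of 8/10 and one of 7/10, so neither half holds
        more than 3 miss cards."""
        per_deck = deck_size // halves
        base, extra = divmod(hit_cards, halves)
        decks = []
        for i in range(halves):
            hits = base + (1 if i < extra else 0)
            half = [True] * hits + [False] * (per_deck - hits)
            random.shuffle(half)
            decks.append(half)
        return decks

Each half-deck would then be drawn without replacement just as the
single deck above is.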
I'm not a computer scientist, so I don't know how efficient a
shuffling algorithm would be at run time; I mean just for 20
elements over a string of 20 events, not asymptotically. I would
guess no worse than 20 times the running time of selection with
replacement. Of course it requires more memory, but only an
additional 20 bits plus a constant overhead per user. Does that
sound right?
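For what it's worth, here is a sketch of how compact the per-user
state could be if the remaining cards were packed into a bitmask.
It is only an illustration of the memory estimate, not a benchmark:

    import random

    class PackedDeck:
        """Per-user deck state packed into one integer bitmask (one
        bit per card still in the deck, 1 = hit) plus a small counter:
        roughly the '20 bits + a constant overhead' estimated above."""

        def __init__(self, hit_cards=10, deck_size=20):
            self.hit_cards = hit_cards
            self.deck_size = deck_size
            self.bits = 0
            self.remaining = 0

        def _reshuffle(self):
            cards = ([1] * self.hit_cards
                     + [0] * (self.deck_size - self.hit_cards))
            random.shuffle(cards)
            self.bits = 0
            for card in cards:
                self.bits = (self.bits << 1) | card
            self.remaining = self.deck_size

        def attack(self):
            if self.remaining == 0:
                self._reshuffle()
            hit = bool(self.bits & 1)   # draw the low-order bit
            self.bits >>= 1
            self.remaining -= 1
            return hit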
Heh. I'm not a statistician either. :) Yet I believe even small
event probabilities, such as 9.8 * 10^-4, are worth considering in
an MMP, since, for a simple example, there could be an average of
1000 attacks per user per hour. To be crude (a run of 10
consecutive failures need not line up with a neat division into
blocks of 10), that could be treated as 100 strings of 10 attacks
per user per hour.
If (admittedly extremely oversimplified) there were, for a
reasonably small MMP, an average of 100 simultaneous users, with a
uniform distribution of attacks and independence between attacks,
then there would be 100 users * 100 strings/user/hour * 24
hours/day * 30 days/month = 7.2 * 10^6 strings per month. Since n
is very large and the probability p is very small, the number of
such strings is approximately Poisson distributed with a rate of
{lambda} = n * p, which is:

    {lambda}[a] = (7.2 * 10^6) * (9.8 * 10^-4) ~= 7000
So, given all these oversimplified preconditions, a low-traffic MMP
is going to see, on average, roughly 7000 strings of 10 consecutive
failures per month using a mechanic analogous to a simple d20 die
roll. Whereas, without replacement, with a mechanic analogous to a
20-card deck draw:

    {lambda}[b] = (7.2 * 10^6) * (5.4 * 10^-6) ~= 40
Or roughly 180 times fewer strings of 10 consecutive failures, and,
in the latter case, no single pass through the deck can ever
produce 11 or more consecutive failures. Does that sound right?
That is all contingent on my having done the arithmetic correctly,
which, given my lack of skill and the late hour, I rather suspect I
didn't.
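In case anyone wants to re-run the chain of arithmetic, it fits in
a few lines under the same oversimplified assumptions:

    # Expected strings of 10 consecutive misses per month, using the
    # same crude blocking of attacks into strings of 10.
    strings_per_user_per_hour = 1000 // 10            # 100
    concurrent_users = 100
    n = strings_per_user_per_hour * concurrent_users * 24 * 30  # 7.2e6

    p_with = (1 / 2) ** 10          # die roll, ~9.8 * 10^-4
    p_without = 1 / 184756          # 20-card deck, 1 / C(20, 10)

    print(n * p_with)               # ~7000 ten-miss strings per month
    print(n * p_without)            # ~40 ten-miss strings per month
    print(p_with / p_without)       # ratio is roughly 180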
Even if all this were true, computing the distribution of loss of
accounts due to player frustration from instances of 10 consecutive
failures is a tougher problem. :)
David