[DGD] Damage Weapons and Stuff (Math and Code)

Ben Chambers bjchamb at bellsouth.net
Mon Jun 2 03:10:57 CEST 2003


----- Original Message -----
From: "Noah Gibbs" <noah_gibbs at yahoo.com>
To: <dgd at list.imaginary.com>
Sent: Sunday, June 01, 2003 3:50 PM
Subject: Re: [DGD] Damage Weapons and Stuff (Math and Code)


>
> --- Ben Chambers <bjchamb at bellsouth.net> wrote:
> > Does this idea sound plausible?
>
>   Your basic idea is plausible.  Basically you're
> controlling the equivalent of whether your
> distribution "feels" like 3d6 or 5d4, or whether it
> "feels" like 1d20, but you know that.  A large
> standard deviation would feel more like a single die
> (more like a uniform random variable) than with many
> dice (more like a gaussian, which is what you're
> literally using here).

The reason I like this approach is that it's easier to manipulate and calculate
the standard deviation than to find the desired type of spread and then
convert it into a dice type.  As a matter of fact, there is an easy equation
for converting dice into a standard deviation.
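Roughly, it's something like the following quick C sketch (the function name is
just mine for illustration).  It relies on the fact that a single fair S-sided
die has mean (S + 1) / 2 and variance (S^2 - 1) / 12, and that summing N
independent dice multiplies both by N:

#include <math.h>
#include <stdio.h>

/* Convert an NdS dice expression into a mean and standard deviation.
   One fair S-sided die: mean (S+1)/2, variance (S^2-1)/12.
   N independent dice multiply both the mean and the variance by N. */
void dice_to_stats(int n, int s, double *mean, double *stddev)
{
    *mean = n * (s + 1) / 2.0;
    *stddev = sqrt(n * (s * s - 1) / 12.0);
}

int main(void)
{
    double mean, sd;

    dice_to_stats(3, 6, &mean, &sd);
    printf("3d6: mean %.2f, std dev %.2f\n", mean, sd);  /* 10.50, 2.96 */
    return 0;
}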

>   Your system can't actually mimic a uniform random
> distribution, but it can sorta feel like it.  Fair
> enough.  You can always add that back manually with
> getrand() if you feel like you've lost something.

The uniform random distribution comes from DGD's built-in random function.
getRand uses that to generate a random number in the range 0 - 10000 and then
divides by 10000.
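In C terms it's basically this (a rough sketch, with rand() standing in for
the DGD kfun):

#include <stdlib.h>

/* Uniform value in [0.0, 1.0], with a resolution of 1/10000. */
double get_rand(void)
{
    return (double)(rand() % 10001) / 10000.0;
}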

>   The system could certainly be workable overall.
> This seems like a lot of effort to go through to
> control these things, but it's certainly statistically
> rigorous.  In fact, it's practically statistically
> transparent -- any tactics sites that may spring up
> around your game will have an easy time of it since
> they can use all the same classical statistical
> analysis that you do.  You may consider that either
> good or bad.
>
> > An example of the standard
> > deviation code is presented
> > below
>
>   Overall, looks decent.  You *really* need more
> documentation.

Yep, I do.  Can't argue there.  As a matter of fact, you aren't the first one
to mention that.  I whipped this code up at 3 AM yesterday, so excuse me for
some omissions ;)

>   The division at the end of invNorm() seems really
> obfuscated here.  Are you just dividing the offset by
> 100, effectively?  An offset like 305 seems like it
> would still turn into 3.05, and that's still in excess
> of 3 (which you claim is the largest available number
> of standard deviations).  So I'm missing something, or
> you are.
>
> >    return ((double)((int)(n / 10)) / 10) +
> > ((double)(n % 10) / 100);

Yep, you're completely right.  It is obfuscated.  The reason is that the table
of inverse normal distributions I was using goes from 0 to 3.49 in increments
of .01.  I chopped it off at 0 to 3.09 for simplicity, but the range is in
fact -3.09 to 3.09 in my case.  The key is that technically the number of
standard deviations is unbounded; that is, there is theoretically a number
that lies infinitely many standard deviations away from the mean and is
infinitely unlikely to occur.  The problem is coding infinity ;)  The code is
obfuscated because the table is set up like this:
           .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
    0.0
    0.1
    0.2
    0.3
    0.4
    ...

That is what was translated into the program (which I'm adding to the
documentation as I speak).
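In other words, the division at the end of invNorm just decodes a flat table
index back into that row/column layout: the row holds the tenths of a standard
deviation and the column the hundredths, so the whole expression boils down to
n / 100.  A quick C restatement of the same thing:

/* Decode a flat table index n into a z value (standard deviations).
   Row n / 10 gives the tenths digit, column n % 10 the hundredths,
   so the result is simply n / 100.0 (e.g. n = 305 -> 3.05). */
double index_to_z(int n)
{
    int row = n / 10;   /* tenths of a standard deviation */
    int col = n % 10;   /* hundredths of a standard deviation */

    return row / 10.0 + col / 100.0;
}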

>  As far as a quick speed boost goes:  you're
> currently just using a binary search.  You could try
> making an estimate of how far off you are instead, and
> try to go that far between a and c for your next b
> value.  That would wind up taking fewer iterations in
> almost all cases.  Sort of a bargain-rate Newton's
> Method for this, since I think Newton's Method itself
> would work indifferently on a Gaussian.

As a matter of fact I was considering that, but as you can see the change
between numbers in the first part is very large, while at the end it makes only
minuscule changes.  The normal function relates standard deviations to
probability, and its density takes the form e^(-x^2 / 2).  The obvious problem
is that its integral has no elementary closed form, so evaluating it involves
Taylor polynomials with theoretically an infinite number of summation terms.
Of course an approximation can be generated using only a few of the terms of
the Taylor polynomial, which I'm going to look into tonight.
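For what it's worth, here's a rough C sketch of how that Taylor-series
approach might look (just my guess at it for now, not tested against the
table).  It expands e^(-t^2 / 2) term by term and integrates from 0 to x:

#include <math.h>

/* Standard normal CDF via its Taylor series:
   Phi(x) = 1/2 + (1/sqrt(2*pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (2^n n! (2n+1))
   A few dozen terms are plenty over the -3.09..3.09 range of the table. */
double norm_cdf(double x)
{
    double sqrt_2pi = sqrt(2.0 * 3.141592653589793);
    double a = x;       /* running factor (-1)^n x^(2n+1) / (2^n n!) */
    double sum = x;     /* n = 0 term */
    int n;

    for (n = 1; n < 40; n++) {
        a *= -x * x / (2.0 * n);
        sum += a / (2.0 * n + 1.0);
    }
    return 0.5 + sum / sqrt_2pi;
}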

_________________________________________________________________
List config page:  http://list.imaginary.com/mailman/listinfo/dgd


