[MUD-Dev] FEAR Animat style AI for Quake 2

Dave Rickey daver at mythicentertainment.com
Mon Dec 2 00:54:26 CET 2002


From: "Sasha Hart" <hart.s at attbi.com>
> [Dave Rickey]

>> How good they can get at it is mostly a matter of how much
>> processing power you can dedicate to GA selection or NN back-
>> propagation, and how many inputs you can wire up and process in
>> real time.

> Also your expertise in working with and picking parameters for the
> techniques, of course. Backprop is a bit of a black art and gets
> misapplied all the time.

> I don't know nearly enough about GAs, but I would have to convince
> myself that they weren't too slow to learn - because of the
> premise that you operate on the entire 'genome' rather than
> piecewise, which at least intuitively suggests to me that you get
> much slower sampling than you would with an online method. Then
> again, I generally play with problems that probably don't play to
> the strengths of GAs.

I'm not going to pretend to be up to date on the current state of
the field, but the essential difference between an NN and a GA is
that a neural network is a learning system ("doing this seemed to
work in those conditions, so we'll remember that the next time the
same conditions apply"), while GA's are at heart a search algorithm.
In this context the trick is setting up the tactical decision space
so that it is amenable to being searched semi-randomly, with the
searches improving incrementally.
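
To make the "search algorithm" framing concrete, here's a toy GA
sketch in Python; everything in it (the genome of tactical weights,
the fitness function, the parameter values) is illustrative rather
than taken from any actual bot:

  import random

  GENOME_LEN = 8       # number of tactical parameters per bot
  POP_SIZE = 30
  MUTATION_RATE = 0.1

  def random_genome():
      return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

  def evaluate_fitness(genome):
      # Stand-in: in a real bot this would be kills, survival time, etc.
      return -sum((g - 0.5) ** 2 for g in genome)

  def crossover(a, b):
      cut = random.randint(1, GENOME_LEN - 1)
      return a[:cut] + b[cut:]

  def mutate(genome):
      return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE
              else g for g in genome]

  def evolve(generations=50):
      population = [random_genome() for _ in range(POP_SIZE)]
      for _ in range(generations):
          ranked = sorted(population, key=evaluate_fitness, reverse=True)
          parents = ranked[:POP_SIZE // 2]      # truncation selection
          children = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE - len(parents))]
          population = parents + children
      return max(population, key=evaluate_fitness)

  best = evolve()

Nothing in there "remembers" which conditions an action worked under
the way a trained network does; it just keeps proposing and scoring
candidate parameter sets.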

There's been some good work on hybridizing neural networks with
GA's, as well as on setting up collective swarms of GA's that
essentially act like a very loose neural net (that's stretching the
definition considerably; see Swarm Intelligence [James Kennedy, et
al.]).
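
One common form of that hybridization, just as a sketch (the sizes
and names are mine, and the fitness stub is a placeholder): treat
the flattened weight vector of a small feed-forward net as the GA
genome, so the search above becomes a search over network
behaviours.

  import math, random

  N_IN, N_HID, N_OUT = 4, 6, 2

  def genome_size():
      return N_IN * N_HID + N_HID * N_OUT

  def forward(genome, inputs):
      # Unpack the genome into two weight layers and run a forward pass.
      w1 = [genome[i * N_IN:(i + 1) * N_IN] for i in range(N_HID)]
      off = N_IN * N_HID
      w2 = [genome[off + i * N_HID:off + (i + 1) * N_HID]
            for i in range(N_OUT)]
      hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
                for row in w1]
      return [math.tanh(sum(w * h for w, h in zip(row, hidden)))
              for row in w2]

  def fitness(genome):
      # Placeholder score; a real one would come from running the net
      # as a bot controller in the game.
      outputs = forward(genome, [0.1, 0.5, -0.3, 0.9])
      return -sum(o * o for o in outputs)

  genome = [random.uniform(-1, 1) for _ in range(genome_size())]
  print(fitness(genome))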

>> We're not far from being able to run this grade of AI for entire
>> worlds simultaneously (several thousand players interacting with
>> a like number of AI agents at any one time).  Maybe 5 years
>> before we see a game using this approach.

> Depends on the algorithms, of course - GAs and NNs are not cheap,
> but for some problems, even quite general ones, there are pretty
> cheap alternatives.

GA's are cheap to run; what isn't cheap is running *enough* of them
to sufficiently saturate the search space, then tracking their
results in order to run selection.  At the current state of the art,
we can run 50-input, 20-output, 1000-node NN's on a fairly modest
chunk of hardware, and these are capable of quite a bit more
adaptive and purposeful behaviour than what we've been able to
simulate with scripting (this is similar to the NN's used for
Creatures, which not only learned but displayed something that could
be mistaken for unique personalities).  You could run 10 such
networks on a modern "blade" format server module, and in 5 years
such blade systems will be truly commodity hardware at a couple
hundred dollars apiece.
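
For a rough sense of the per-tick cost, assuming (my assumption, not
a description of any particular system) that the 1000 nodes sit in a
single hidden layer between the 50 inputs and 20 outputs:

  import numpy as np

  n_in, n_hidden, n_out = 50, 1000, 20

  w1 = 0.1 * np.random.randn(n_hidden, n_in)
  w2 = 0.1 * np.random.randn(n_out, n_hidden)

  def forward(inputs):
      hidden = np.tanh(w1 @ inputs)
      return np.tanh(w2 @ hidden)

  # One evaluation is roughly 50*1000 + 1000*20 = 70,000 multiply-adds,
  # which is trivial per agent per tick; the real expense is running
  # back-prop or GA selection across many such agents at once.
  outputs = forward(np.random.randn(n_in))
  print(outputs.shape)    # (20,)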

Anyway, the whole system doesn't have to be running selection and
back-prop all the time to be good enough for our purposes.  We don't
want ideal solutions anyway; these guys are supposed to lose.
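
That split between acting and learning is cheap to arrange; a toy
version (the policy and the schedule are stand-ins) looks like this:

  import random

  TRAIN_EVERY = 500        # ticks between learning passes (arbitrary)
  experience = []

  def act(observation):
      # Cheap path, run every tick: a forward pass through the frozen
      # net.  Here it's just a stand-in policy.
      return random.choice(["advance", "flank", "retreat"])

  def learn(buffer):
      # Expensive path, run occasionally: back-prop or GA selection
      # would go here, working off the accumulated experience.
      buffer.clear()

  for tick in range(1, 2001):
      experience.append((tick, act(observation=None)))
      if tick % TRAIN_EVERY == 0:
          learn(experience)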

--Dave




