[MUD-Dev] FEAR Animat style AI for Quake 2

Ted L. Chen tedlchen at yahoo.com
Sun Dec 1 21:38:40 CET 2002


Sasha Hart
> [Ted L. Chen]

>> At a simple level, the stimuli-response of 'animat' can easily be
>> tricked, and actually produce funny results because they have no
>> overriding executive.  For example, assume we have a threat
>> detection organ that tells the rest of the brain that someone is
>> attacking when they have a rifle pointed towards us.  What
>> happens when the guy with the rifle turns rapidly toward and away
>> from us?

> That's not a problem with S-R, that's a problem with "when they
> have a rifle pointed towards us." Change the conditional to
> something sane and you solve the problem. For example, if the
> animat habituates to rifle-pointing, it has effectively
> established a new conditional - "if a rifle is pointed towards us
> AND he didn't just do it 3 times, be very afraid."

In a way, that's kinda my point.  You need some higher level that
provides more stringent conditionals on what the animat processes.
The most flexible, and IMHO, the least useful brain organs are so
general that they won't generate the "if a rifle is pointed towards
us AND he didn't just do it 3 times, be very afraid" condition on
their own.  That requires time-history, or non-sensory data.  Other
things hand-coded rule-based systems might take into account are
aggressiveness stances, own-resource management, etc.  These are
things that don't generally appear in animat organs.

With more conditions, there might be a point where your learning
cases get so extensive that it's nigh impossible to tune the animat.
It might learn "if a rifle is pointed towards us AND it's daytime" if
your learning cases are ill-posed.

> I'm somewhat interested as to what the executive would do here.

I use the term executive very broadly.  Something I would call an
executive normally has the power to override any usually automated
behavior.  Deciding to set a hot pot back down on the stove instead
of letting go of it is an executive decision.  Your natural reflex is
to drop the pot.  But you know that's not a good idea, because pots
obey the laws of gravity and fall towards your feet.  You've probably
never had a hot pot land on your feet, but you've generated that rule
through simulated modelling of 'what would happen'.  That's an
executive in action.  In terms of traditional bots, it usually comes
in the form of a priori high-level conditionals:

  S-R: If a rifle is pointed towards us ... propose action.
  Executive: Ignore proposed action when enemy has low chance to hit.
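
A minimal sketch of that arrangement (Python again; the names and the
hit-chance estimate are my own placeholders, not anything from FEAR
or Quake 2) might look like this:

  from dataclasses import dataclass

  @dataclass
  class Proposal:
      """An action suggested by a low-level S-R organ."""
      action: str    # e.g. "flee", "return_fire"
      source: str    # which organ proposed it
      threat: dict   # the stimulus that triggered it

  def sr_layer(percepts):
      """Stimulus-response layer: proposes actions straight from percepts."""
      proposals = []
      for enemy in percepts.get("enemies", []):
          if enemy.get("rifle_pointed_at_us"):
              proposals.append(Proposal("flee", "threat_organ", enemy))
      return proposals

  def executive(proposals):
      """Executive layer: hand-coded, a priori conditionals that can
      veto or reorder whatever the S-R layer proposes.  Here it drops
      'flee' proposals when the threat has a low estimated chance to
      hit us."""
      approved = []
      for p in proposals:
          if p.action == "flee" and p.threat.get("hit_chance", 1.0) < 0.2:
              continue  # override: not worth reacting to
          approved.append(p)
      return approved

  # Example tick: the rifle is pointed at us, but from far away.
  percepts = {"enemies": [{"rifle_pointed_at_us": True, "hit_chance": 0.05}]}
  print(executive(sr_layer(percepts)))   # -> [] : proposal vetoed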

It lacks learning, and reeks of hard-coded if-then statements.  But
in most cases it gives the semblance of high-level intelligence and
isn't as prone to generating really, really weird conditionals
(unless of course your AI programmer has had one too many beers).


TLC


_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev


