[MUD-Dev] Personality modelling

J C Lawrence claw at under.engr.sgi.com
Wed Apr 15 12:39:05 CEST 1998


Forwarded for Cynbe, who is currently unable to post for technical
reasons:

Date: Tue, 14 Apr 1998 21:42:03 -0500
Message-Id: <199804150242.VAA04958 at laurel.actlab.utexas.edu>
From: Cynbe ru Taren <cynbe at muq.org>
To: claw at under.engr.sgi.com
Subject: [MUD-Dev] Personality modelling 

| > http://www.erasmatazz.com/Library/JCGD_Volume_7/Personality_Modelling.html
|
| <nod>  I'm working my way thru the JCGD pages.  Lotta good stuff.
| I'll excerpt here as I get the chance.

A nice collection I wasn't aware of -- thanks!

Two quick notes:

1) The above link doesn't work for me with Netscape, at least -- it
needs a lower-case "library".

2) The personality design vector notion touches on some pondering I've
been doing re AI over the last few decades.  I'll predict that

 (A) You'll wind up wanting your values to be two-valued real vectors (or
     else complex numbers), so that you can distinguish value from
     intensity.  E.g., weakly suspecting that someone is very evil is
     not the same thing as being very confident that they are very
     mildly evil.  Having separate direction and magnitude available
     for a value makes this distinction easy.  (If you go with the
     two-valued real vector approach, the generalization from one-D
     to 2-D or 3-D directions for values is then straightforward.
     If you go with complex numbers, generalizing further means moving
     to quaternions, and you're at a dead end once you reach them.
     Then again, I grok neither of them...)  The first sketch after
     this list illustrates this two-valued representation.

 (B) You'll wind up wanting to compare these vectors of values using
     an inner-product distance metric rather than a Euclidean distance
     metric, because the Euclidean distance metric is dominated by
     differences, while the inner-product distance metric is dominated
     by similarities -- typically more to the point in these sorts
     of situations, in my ignorant impression.
       (If anyone knows of a promising, realistic alternative distance
     metric to those two, I'd be most interested in hearing about it.
     But I have a tantalizing impression one can come close to proving
     that there are no more.)
       In my impression, projection of long sparse vectors (i.e., inner-
     product metric) is a very promising way to do soft, human-seeming
     associations/comparisons that feel much more natural and robust
     than the classical AI boolean expression stuff.  Toss in a little
     information theory, and I expect you're really getting somewhere.
     But that's straying off-topic. :)  The second sketch after this
     list compares the two metrics.

 (C) You'll ignore the above two points initially and rediscover
     them the hard way. :)
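
A rough Python sketch of the two-valued representation from (A) -- the
names (Trait, valence, confidence) and the numbers are only
illustrative:

    # Each personality value carries both a direction (valence) and a
    # magnitude (confidence) instead of collapsing to a single scalar.
    from dataclasses import dataclass

    @dataclass
    class Trait:
        valence: float     # where on the axis: -1.0 (evil) .. +1.0 (good)
        confidence: float  # how strongly it is believed: 0.0 .. 1.0

    # "Weakly suspecting someone is very evil" ...
    weak_very_evil = Trait(valence=-0.9, confidence=0.2)

    # ... is not the same as "being very confident they are mildly evil".
    sure_mildly_evil = Trait(valence=-0.2, confidence=0.9)

    # Collapsing each to one scalar (valence * confidence) yields -0.18
    # for both -- exactly the distinction that gets lost.
    print(weak_very_evil.valence * weak_very_evil.confidence)
    print(sure_mildly_evil.valence * sure_mildly_evil.confidence)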

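And a minimal sketch of (B), comparing a Euclidean metric against a
normalized inner product (cosine) on sparse trait vectors -- the trait
names and weights are made up for illustration:

    import math

    def euclidean(a, b):
        # Dominated by traits the two vectors do NOT share.
        keys = set(a) | set(b)
        diffs = (a.get(k, 0.0) - b.get(k, 0.0) for k in keys)
        return math.sqrt(sum(d * d for d in diffs))

    def cosine(a, b):
        # Inner-product metric: only shared traits contribute.
        dot = sum(a[k] * b[k] for k in a if k in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Two characters who agree strongly on a few shared traits but each
    # carry several traits the other lacks entirely:
    alice = {"greedy": 0.9, "curious": 0.8, "loyal": 0.7, "vain": 0.6}
    bob   = {"greedy": 0.9, "curious": 0.8, "timid": 0.5, "pious": 0.4}

    print(euclidean(alice, bob))  # ~1.12, driven entirely by differences
    print(cosine(alice, bob))     # ~0.70, driven entirely by similarities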

 Cynbe

--
J C Lawrence                               Internet: claw at null.net
(Contractor)                               Internet: coder at ibm.net
---------(*)                     Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...


