[MUD-Dev] Neverwinter Nights
John Buehler
johnbue at msn.com
Thu Jun 7 22:28:22 CEST 2001
rayzam writes:
> From: "John Buehler" <johnbue at msn.com>
>> I'm not talking about modeling a foveal region that has a 10
>> degree field of view and then peripheral vision out to 180
>> degrees. That would be like having a flashlight at night and
>> having to wave it around in order to see anything. I'm talking
> about having an area of high quality vision and an area where the
> character must rely on auditory cues - along with a transition
>> between the two. I have every intention of assembling a
>> prototype and tuning it to be entertaining to play.
> Interesting. Though I would say to use the auditory cues in a 3d
> sound system, to localize sounds outside of the screen. That gives
> the effect of objects making sounds outside of the visual
> field, including behind the player.
Auditory cues would certainly want to utilize the computer's sound
system, but given that the monitor subtends such a small field of
view for the player, the spatial cues are pretty lousy. Sounds that
should be to the character's left and right are actually just a few
inches apart, right in front of the player. If I were to try to
present the sounds relative to
the character (i.e. as if the player were still in 1st person view),
I think that would just lead to confusion.
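To make the mismatch concrete, here's a minimal sketch (Python, flat
2D positions, all names made up) of character-relative panning. Note
that a sound panned hard left can sit an inch or two from the
character's head on a 3rd person screen:

    import math

    def character_relative_pan(char_pos, char_facing_deg, sound_pos):
        """Pan in [-1, 1]: -1 is hard left of the character, +1 hard right."""
        dx = sound_pos[0] - char_pos[0]
        dy = sound_pos[1] - char_pos[1]
        # Bearing of the sound in world space; +y is "forward" at 0 degrees.
        bearing = math.degrees(math.atan2(dx, dy))
        # Offset from the character's facing, normalized to [-180, 180).
        offset = (bearing - char_facing_deg + 180.0) % 360.0 - 180.0
        # Saturate: anything 90 degrees or more off-axis pans fully.
        return max(-1.0, min(1.0, offset / 90.0))

    # A sound directly to the character's left pans hard left, even though
    # on a 3rd person screen it may be drawn right next to the character.
    print(character_relative_pan((0.0, 0.0), 0.0, (-5.0, 0.0)))  # -1.0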
> The question remains with how you're going to handle the high
> quality and low quality vision with transition zone, on a standard
> monitor. Even a 21" monitor doesn't give enough real estate to
> pull it off easily. Unless you're planning to compress a 180deg
> field into the monitor size? I've not had a chance to use a
> compressed display like that. Has anyone, and is it easy to
> remap/become immersed in that frame of reference?
This is all being done in 3rd person. If you're assuming that, then
I don't understand this paragraph.
> Furthermore, my other point still stands. Something may be in the
> periphery but has an internal representation that's much better
> than what you're suggesting.
> Example: I walked into this room. I'm now in front of the
> computer, looking at the monitor. I can still see the tissue box
> that's about 85 degrees away from straight ahead. I 'see' its
> colors and shape, keeping my object recognition of it
> high. However, it is at the edge of my peripheral vision [which
> extends to about 95 deg away], and I'm not actively getting that
> information. Though I'm not actively processing its colors, I
> have saccaded to it a few times, I'm covertly attending to it, and
> thus, in my visual representation, it still has those colors. It
> sounds like you're suggesting that would be grayed out. I never
> thought you meant to only have 10 degrees of the visual field in
> high quality and the other 180 degrees in gray. Just that in our
> 'low quality' visual area, we persist the features we've gained by
> saccading around the visual field.
I'm not trying to duplicate the exact mechanisms of the eye. I'm
trying to provide a visual presentation that causes the player to
behave and react as if the character actually could see varying
amounts of information around it, plus things like identifying
nearby objects by the sounds they make.
I envision a 160 degree field of clear vision, with an additional 20
degrees of low-quality peripheral vision beyond that. The rest of
the field of view is grey and only shows grey lumps when a sound is
made in that area. Again, this is all being done in 3rd person. In
1st person, you can get the large field of conventional vision (by
knocking the drawn area aspect ratio to something very large - wide
and short), but lose peripheral and sound cues. Sound cues could be
managed in some way, but the spatial aspect of exactly where a sound
came from wouldn't be easily presented. Third person is for
situational awareness, while first person is for examination of the
world around the character - especially for seeing long distances,
as would be required for navigating in the wilderness.
In 3rd person view, the large field of view is overdone, but
necessary for playability.
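As a sketch of how that classification might run each frame (Python
again, 2D, the per-side half-angles derived from the figures above,
everything else assumed):

    import math

    # Half-angles per side: a 160 degree clear cone plus an
    # additional 20 degrees total of low-quality peripheral vision.
    CLEAR_HALF = 80.0
    PERIPHERAL_HALF = 90.0

    def vision_tier(char_pos, char_facing_deg, obj_pos, audible):
        """Classify an object as 'clear', 'peripheral', 'lump', or None."""
        dx = obj_pos[0] - char_pos[0]
        dy = obj_pos[1] - char_pos[1]
        bearing = math.degrees(math.atan2(dx, dy))
        offset = abs((bearing - char_facing_deg + 180.0) % 360.0 - 180.0)
        if offset <= CLEAR_HALF:
            return 'clear'        # drawn normally
        if offset <= PERIPHERAL_HALF:
            return 'peripheral'   # drawn desaturated / low detail
        if audible:
            return 'lump'         # grey lump: position only, no identity
        return None               # not drawn at all

    # Directly behind and silent: invisible. Behind but making noise: a lump.
    print(vision_tier((0, 0), 0.0, (0, -5), False))  # None
    print(vision_tier((0, 0), 0.0, (0, -5), True))   # 'lump'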
> What I see happening is this: I'm playing the game, a sound comes
> out of the left side of the screen, and there's this gray object
> moving there. Sounds become attached [to the point of
> mislocalization] to a nearby salient object. So I 'hear' the sound as
> coming from the gray object. Naturally, my eyes saccade to the
> object. But it's a low quality gray object. I then have to move
> the character's POV till the low quality gray object is in the
> high quality area.
> To emulate this natural process, you'd have to have the avatar's
> eyes saccade to that location. That is, have the POV rapidly
> change to that location for a few hundred milliseconds and then
> back to where the player actually had the view.
If the object is in the oversized field of view, it just shows up as
usual. If there is a sound in the grey area, the player must get
his character to turn its head to look in that direction, sweeping
the 160 degree cone toward the sound. If we do persistence of
vision, the area that the character was looking at could fade out
slowly. Features for tomorrow's games.
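The fade could be as simple as a per-object last-seen timestamp
driving a render alpha. A sketch, with the fade duration an
arbitrary tuning constant:

    import time

    class SeenMemory:
        """Track when each object last sat in the clear cone, for a slow fade."""
        FADE_SECONDS = 5.0  # assumed tuning constant, not from the design above

        def __init__(self):
            self.last_seen = {}  # object id -> timestamp of last clear sighting

        def mark_visible(self, obj_id):
            self.last_seen[obj_id] = time.monotonic()

        def alpha(self, obj_id):
            """1.0 just after sighting, fading linearly to 0.0 over FADE_SECONDS."""
            t = self.last_seen.get(obj_id)
            if t is None:
                return 0.0  # never seen: fully grey
            age = time.monotonic() - t
            return max(0.0, 1.0 - age / self.FADE_SECONDS)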
> I don't want to just criticize, so I'll make some alternate
> suggestions:
> 1) multiple monitor support! :) Actually, this was one of the
> few new MS features that I thought would be good for
> games. Primary monitor straight ahead.. pull out your old 14/15"
> monitors and place one on each side. Games could support
> this. High on the cool factor. High on the tiny niche factor.
> Probably not worth the development time and money.
I did this for a flight simulator that I worked on years ago and was
disappointed with the results. Focusing on the multiple monitors is
a chore, as is interpreting a large world field of view compressed
into a small user field of view. We found that turning our heads and
refocusing was more effort than just pushing a button to give a view
in a different direction. Food for thought.
> 2) Consider the area outside of the monitor your low quality
> area. Have sounds localize outside of the screen: if the objects
> moving on the screen have associated sounds, then even ones to
> the sides of the monitor, let alone in 3d space around it, will
> be mapped to 'outside' locations.
Naturally, this should be done regardless of the technique used.
There is always information beyond the screen boundaries.
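One cheap way to map those sounds to 'outside' locations is to clamp
each source's projected screen position to the screen border and draw
a marker there. A sketch (hypothetical helper; a plain rectangular
clamp, so sources behind the camera would need separate handling):

    def edge_marker(screen_w, screen_h, projected):
        """Clamp an off-screen projected position to the screen border.

        Returns None for on-screen positions, else where to draw a marker.
        """
        x, y = projected
        if 0.0 <= x <= screen_w and 0.0 <= y <= screen_h:
            return None  # visible on screen; no marker needed
        return (min(max(x, 0.0), screen_w), min(max(y, 0.0), screen_h))

    # A sound source projecting to (-120, 300) on a 1024x768 screen gets
    # its marker pinned to the left edge at (0, 300).
    print(edge_marker(1024.0, 768.0, (-120.0, 300.0)))  # (0.0, 300.0)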
> In general, we turn our heads and eyes to look at anything over 20
> degrees from the direction of gaze. People can get this 40 degree
> total range [20 degrees to each side] to be the size of the
> monitor, based on distance from it. In fact, a person generally
> won't sit so close that they have to turn their head to look
> around the screen. Thus, it becomes more natural to tie together a
> saccade [eye movement] with a bigger motor movement, i.e. the
> head turning.
And I'm suggesting larger foveal fields of view in order to
accommodate the player. The character shouldn't be able to see so
much, but the player has the entire screen in his foveal area (more
or less), and mucking up a lot of the screen probably isn't going to
be greeted with much enthusiasm.
> Then the player only needs to remap the combined eye-with-head
> movement to an eye-with-'mouse' movement. It becomes a natural
> combination.
That is my goal - let the character's gaze be moved about using the
mouse. Note that I'd like to see a game that permits just the eyes
to move for certain 'look' actions, then just the head, then just
the shoulders, etc. Turning and moving the body should be a
separate control.
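A sketch of that layered control, splitting a mouse-driven gaze
offset across the joints (the 20 degree eye range matches rayzam's
figure; the head and shoulder limits are made up):

    # Layered 'look' control: absorb a desired gaze offset with the eyes
    # first, then the head, then the shoulders.
    EYE_LIMIT = 20.0       # degrees each side before the head turns
    HEAD_LIMIT = 60.0      # assumed: additional degrees before the shoulders
    SHOULDER_LIMIT = 45.0  # assumed: additional degrees before the body moves

    def split_gaze(offset_deg):
        """Split a signed gaze offset into (eyes, head, shoulders) rotations."""
        sign = 1.0 if offset_deg >= 0.0 else -1.0
        remaining = abs(offset_deg)
        eyes = min(remaining, EYE_LIMIT)
        remaining -= eyes
        head = min(remaining, HEAD_LIMIT)
        remaining -= head
        shoulders = min(remaining, SHOULDER_LIMIT)
        return (sign * eyes, sign * head, sign * shoulders)

    print(split_gaze(30.0))    # (20.0, 10.0, 0.0): eyes saturate, head follows
    print(split_gaze(-100.0))  # (-20.0, -60.0, -20.0): shoulders join in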
Thanks for the thoughts.
JB