Net protocols for MUDing (was: Moore's Law sucks)
s001gmu at nova.wright.edu
Sat Feb 28 21:40:45 CET 1998
On Fri, 27 Feb 1998, Adam Wiggins wrote:
> [coder at ibm.net:]
> > On 20/02/98 at 02:35 AM, Jon Leonard <jleonard at divcom.umop-ap.com> said:
[...]
> > No, you misunderstand.
> > I am proposing that MUD clients move to a protocol and data model which is
> > tolerant of data loss. If packets get lost, or arrive far too late, the
> > client won't care and will continue to offer a decent representation of
> > what is happening in the game. The main problem with telnet lag is *not*
> > latency but dropped packets -- the whole damn client freezes while
> > awaiting the lost packet. Instead have the client be predictive and work
> > on a best-effort basis. It works with the data it gets, and ignores or
> > attempts to generate the data it never sees for whatever reason.
> > Raph has commented that UOL's client does this in some areas.
[...]
> On a slightly more interesting and possibly less practical line of thought,
> I've been considering using the timestamps on the packets in creative ways.
> As long as the server and all clients were synced, and all packets were
> timestamped with an extremely high granularity timer (say, 1ms or lower)
> then the server could actually "insert" the packets it receives from the
> client at that point in time, and then recompute all the events that happened
> since then. In order for this to work you'd need a very robust event
> system (like JC's) and predictable AI for the computer-controlled characters.
> You'd also need to clip the timestamps on incoming packets to a reasonably
> small interval - probably three or four seconds.
[connections at different times w/i the game...]
I make no comments on the above, rather odd, idea.
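That said, the mechanics are straightforward enough to sketch. One way to do
the timestamp insertion is to keep a short, sorted event history and
deterministically replay it whenever a late packet arrives. A rough Python
sketch -- every name, and the clip window, is invented for illustration, and
it assumes events are deterministic functions of game state:

```python
import bisect

CLIP_WINDOW = 4.0  # never let a packet rewrite more than ~4s of history

class EventTimeline:
    def __init__(self, initial_state):
        self.events = []    # sorted (timestamp, seq, event) triples
        self.seq = 0        # tie-breaker so sorting never compares events
        self.initial_state = initial_state

    def insert(self, now, timestamp, event):
        # Clip old timestamps so a laggy client can't rewrite distant history.
        timestamp = max(timestamp, now - CLIP_WINDOW)
        bisect.insort(self.events, (timestamp, self.seq, event))
        self.seq += 1
        return self.recompute()

    def recompute(self):
        # Deterministically replay every event in timestamp order.
        state = dict(self.initial_state)
        for _, _, event in self.events:
            event(state)
        return state

def drift(state):           # toy example events; real ones would be
    state["x"] += 1         # movement, combat, and so on

def thrust(state):
    state["altitude"] += 10

timeline = EventTimeline({"x": 0, "altitude": 0})
timeline.insert(now=10.0, timestamp=9.0, event=drift)
state = timeline.insert(now=10.5, timestamp=8.5, event=thrust)  # late packet
```

A full replay from the initial state is obviously too expensive per packet;
a real server would checkpoint state periodically and replay only from the
last checkpoint before the inserted timestamp.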
> Now, assuming that we're using both high-framerate clients and the timeline
> breaking I'm suggesting above, this is going to cause some incongruous stuff.
> A simple case would be the aforementioned spacecraft, piloted by a player,
> speeding towards a cliff wall. The player's connection is slow. On their
> machine, they hit the up key just in time to clear the cliff face and
> continue flying. Everyone else sees their ship fly into the wall, and then
> a moment later, as the server inserts the packets, they fly clear to safety.
> Depending on your world, you could approach this in different ways. You could
> have certain events (collisions being the main example) flagged as requiring
> server confirmation. Thus, everyone would see the player's ship disappear into
> the cliff face, but they would only see an explosion and a rain of ship parts
> once the server confirmed the crash. Otherwise they just see the ship pop
> out of the other side of the cliff, making it look more like a glitch in the
> clipping routines than a total server botch. I would also tend to, instead
> of making the ship "pop" to its new location, speed up by some large amount
> based on the distance from the old position to the new one, so that it appears
> to hurry to the new place. This will certainly look funny, but hopefully not
> like a 'break' - and if you have ships which speed up and slow down easily
> anyways, hopefully this should rarely cause anyone to think anything is
> strange. Sudden turns will have the effect of doing a larger, sweeping
> turn - a sudden 45 degree turn by the lagging player should look more like
> an overshot 60 degree turn, which then turns back to the new heading.
>
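The "hurry to the new place" correction above amounts to error-proportional
convergence: scale the ship's speed by the size of the positional error, so a
big correction looks like a burst of acceleration rather than a pop. A rough
sketch -- the function name and gain constant are mine, not from the post:

```python
def correct_toward(displayed, corrected, base_speed, dt, gain=4.0):
    """Move `displayed` toward `corrected`, faster when the gap is larger."""
    dx = corrected[0] - displayed[0]
    dy = corrected[1] - displayed[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 1e-6:
        return corrected
    # Speed grows with the error, so a large correction reads as a
    # burst of acceleration rather than a teleport.
    speed = base_speed + gain * dist
    step = min(dist, speed * dt)
    return (displayed[0] + dx / dist * step,
            displayed[1] + dy / dist * step)

# Each frame, nudge the displayed position toward the server's answer.
pos = (0.0, 0.0)
target = (10.0, 0.0)
for _ in range(50):
    pos = correct_toward(pos, target, base_speed=1.0, dt=0.05)
```

The sweeping-turn behavior falls out of the same idea applied to heading
instead of position: converge the displayed heading toward the corrected one
at a rate proportional to the angular error.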
> I haven't thought this through really thoroughly, nor attempted any complex
> or hairy scenarios. I'm sure once I do some problems will pop up, but at the
> moment it seems much more appealing than the usual unresponsive clients.
>
> I also used all sci-fi examples (ships) because client prediction would
> tend to be more accurate with them (I imagine). Thanks to Newton's first
> law, a missed second or two of packets won't be a big deal unless something
> unexpected (collision, sudden direction change) happens, and even then it
> can be made to look relatively seamless.
I hate to post so much with only a brief comment, but I thought it silly
to type basically the same statement here and there in response to
particular arguments.
I highly recommend you look at the IEEE DIS standard I mentioned a few
posts ago (I can get the RFC# again if you like). It deals with a lot of
these very same problems. It certainly won't solve everything, but it's
not a bad approach. Essentially, it maintains a state for each entity at
each location, using dead-reckoning algorithms to calculate the actual
motion of an entity when that entity is not sending updates very often.
It doesn't set out a method for handling the 'do then undo' situations
you mentioned above, but it provides a solid framework which should be
easily extendable to cover a lot of those types of situations. It is
heavily geared towards military simulations, and large portions of it
you'll probably not need to deal with, but the simulation management
protocols deal extensively with the ideas you've brought up.
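For anyone unfamiliar with it, the core of DIS-style dead reckoning is small:
each host extrapolates an entity from its last reported position and
velocity, and the owning host only sends a fresh update when the
extrapolation has drifted past a threshold. A rough first-order sketch in
Python -- the names and the threshold value are illustrative, not taken from
the standard:

```python
THRESHOLD = 1.0  # max tolerated drift before forcing an update

class DeadReckonedEntity:
    def __init__(self, pos, vel, t):
        self.pos, self.vel, self.t = pos, vel, t   # last reported state

    def extrapolate(self, t):
        # First-order (constant-velocity) dead reckoning.
        dt = t - self.t
        return tuple(p + v * dt for p, v in zip(self.pos, self.vel))

    def needs_update(self, true_pos, t):
        # Owning host compares its real state against what everyone
        # else is extrapolating, and stays quiet while they agree.
        est = self.extrapolate(t)
        err = sum((a - b) ** 2 for a, b in zip(est, true_pos)) ** 0.5
        return err > THRESHOLD

ent = DeadReckonedEntity(pos=(0.0, 0.0), vel=(2.0, 0.0), t=0.0)
est = ent.extrapolate(3.0)                   # coasting: no packets needed
quiet = ent.needs_update((6.5, 0.0), 3.0)    # small drift: stay silent
turn = ent.needs_update((6.0, 2.0), 3.0)     # sudden turn: send an update
```

This is why the ship examples work so well: under Newton's first law the
constant-velocity extrapolation is exactly right between maneuvers, so a
missed second or two of packets costs nothing.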
-Greg