[MUD-Dev] Re: MUD-Dev digest, Vol 1 #237 - 9 msgs
Dr. Cat
cat at realtime.net
Thu Dec 23 11:02:51 CET 1999
-- Start of included mail From: J C Lawrence <claw at kanga.nu>
> On Wed, 22 Dec 1999 22:14:44 -0500 (EST)
> Rahul Sinha <rsinha at glue.umd.edu> wrote:
>
> >> I implemented this because people can create their own worlds
> >> offline and when they start each becomes a shard. And two
> >> players with mutual agreeance can create links between their
> >> worlds and thus users can navigate over player created terrains.
>
> > so distribute the server binaries, and code in an easy way to link
> > servers.
>
> While I don't know the mechanics of how he does it, Dr Cat does
> something not entirely unrelated to this with Furcadia. I recall
> that he's discussed the high level features of these "dreams"
> (correct term?) on the list, but not the implementation.
Furcadia comes with a "dream editor" that players can use to create their
own maps, and that lets them test most of the features of the scripting
language (DragonSpeak) offline. The game server runs one process per
"dream"; when a player uploads a dream they've created, the server assigns
the new map to a process that currently has zero players in it, and puts a
teleporter to it next to the player who uploaded it.
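In rough terms, the dispatch works like the sketch below (illustrative
Python, not the actual Furcadia code; every name and structure here is
invented for the example):

    # Sketch: a pool of dream server processes, each serving at
    # most one uploaded map at a time. Invented names throughout.

    class DreamProcess:
        def __init__(self, pid):
            self.pid = pid
            self.map = None          # map currently being served
            self.player_count = 0    # players inside this dream

        def is_idle(self):
            return self.map is None and self.player_count == 0

    def place_teleporter(near_player, destination):
        # Stub: the real game would create a teleporter object on
        # the map next to the uploader, leading into the new dream.
        print("teleporter for", near_player, "->", destination.pid)

    def handle_upload(pool, player, uploaded_map):
        # Hand the new map to a process with zero players in it.
        for proc in pool:
            if proc.is_idle():
                proc.map = uploaded_map
                place_teleporter(player, proc)
                return proc
        raise RuntimeError("no idle dream process available")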
The model is designed specifically to allow the processes to be
distributed across multiple host machines for scalability. But
I don't plan to let players run dream server processes. If it
ever looks like it might make good business sense & wouldn't degrade
the quality of the game experience, I might consider it. Right now
I'm running the server at an exodus.net hosting site, which as far
as I know is the best way to ensure the lowest latency for the greatest
percentage of players. My experience suggests that most netlag comes from
the public peering points that data gets routed through when a player's ISP
and the MUD's connection are provided by two different backbone providers.
If your MUD host is on PSInet, then players using ISPs connected through
PSInet will tend to get low latency, and performance for users on any of
the other backbones will be highly variable and unreliable (though
sometimes it will be good). The solution is to have a site with "private
peering", meaning basically they bought high bandwidth connections to most
of the major backbones. Generally traffic from any user that's using any
of those backbones will come right in through that backbone with low
latency. Exodus connects to six of the top backbones with fiber optic
lines, which is pretty good if you ask me.
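If you want a rough check on where your own players' lag comes from,
timing TCP connects to a candidate host from machines on different
backbones gives a crude picture of round-trip latency (a sketch only;
the host name is a placeholder):

    # Crude latency probe: a TCP connect takes roughly one round
    # trip, so the best of several connects approximates RTT.
    import socket, time

    def connect_time_ms(host, port=80, samples=5):
        best = None
        for _ in range(samples):
            t0 = time.time()
            s = socket.create_connection((host, port), timeout=5)
            elapsed = (time.time() - t0) * 1000.0
            s.close()
            best = elapsed if best is None else min(best, elapsed)
        return best

    print(connect_time_ms("example.com"))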
For a less latency-sensitive game, or a hobby project, spreading the
servers all over and having highly variable performance might be fine
anyway. But for a commercial server, quality of service is essential,
and it would be bad to have people wander into a tavern and suddenly find
themselves being served by some user machine on an obscure backbone, or
through a satellite link that guarantees 500 milliseconds extra latency,
or sharing some guy's 56K modem with 70 other players.
I've also been kind of assuming that all the server machines will be
connected by a LAN. It's conceivable that some or all of the communication
between hosts could instead happen between multiple CPUs in a
shared-memory architecture. Either way, this allows more generous
assumptions about how much data can pass between the server processes and
with what latency it will arrive. If they were talking halfway around the
world to each other it'd take a lot less to choke the game and slow it
down. That's a theoretical issue at the moment, though. Right now the
architecture would work modestly well with widely separated hosts, and
after a major reworking I'm going to do next year to improve scalability,
it'd work even better. It's more that my design goals, performance
goals, and business goals aren't served by giving out the server
binaries than that the architecture isn't suited to it.
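The difference mostly shows up in how paranoid the messaging between
server processes has to be. On a LAN, simple length-prefixed framing
like the sketch below is plenty; over the open Internet the same design
would also want batching, timeouts, and retries (illustrative only):

    # Length-prefixed messages over a TCP socket between two
    # server processes. The 4-byte header survives TCP's
    # stream-oriented delivery.
    import struct

    def send_msg(sock, payload):
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock, n):
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            data += chunk
        return data

    def recv_msg(sock):
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)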
Of course there would be huge security issues with users hacking their
servers to do whatever they wanted, which they would do. Right now
security is dealt with in the game mainly by A) not having many
goal-oriented mechanics implemented yet, so there's little motivation to
cheat, and B) having all significant decisions made by the server, so when
I do finally add goals people will want to cheat at, we'll still
be in fairly good shape. Turning the server binaries over to the public
would make method B largely ineffective, and I'd have to do a
huge amount of work to implement some more difficult approach (which would
still be less effective after all that work, most likely). Of course
continuing to avoid having goal-oriented mechanics would be one way to
reduce the impact of passing out server software. But giving a player
godlike powers would still enable them to impersonate anybody, spy on
people invisibly to listen in on their private conversations or simply log
ALL of everyone's speech to browse through later, and trick and mess with
people in a variety of other ways. Again, such insecurity might be
acceptable for a hobby game with a little warning to players up front to
watch out for that kind of stuff. But for a commercial game it's a bad
idea.
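For the curious, method B boils down to something like this (a minimal
sketch with invented names, assuming tile-based movement; not Furcadia's
actual code):

    # The client only sends requests; the server decides what
    # actually happens. A hacked client can ask for anything,
    # but it can't make the server say yes.

    class World:
        def __init__(self, blocked=frozenset()):
            self.blocked = blocked   # set of impassable tiles

        def adjacent(self, a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

        def walkable(self, tile):
            return tile not in self.blocked

    def handle_move_request(world, player_pos, target):
        # Server-side validation: never trust the client's claims.
        if not world.adjacent(player_pos, target):
            return player_pos    # too far -- someone asked to teleport
        if not world.walkable(target):
            return player_pos    # blocked tile
        return target            # only the server mutates state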
When it comes right down to it, a lot of the motivation players have for
wanting to run their own machine is psychological. Having it right there
on their PC is "cool" and gives them a feeling of greater control over it.
But a game running on someone else's server machine can give them as much
control over their areas as the designer/programmers choose to give them,
and can potentially offer them (and the players that enter their areas)
better and more consistent performance and reliability. The amounts of
CPU, RAM and disk resources it takes to handle a few dozen players
visiting their dream/shard/map/whatever-you-call-it are pretty trivial and
inexpensive these days, when you can pick up a machine for 500 bucks that
would have been called a "supercomputer" not that many years ago. Even a
hobby project shouldn't have too much trouble getting enough computing
power together to smoothly handle a few hundred simultaneous players, if
the server is coded for efficiency rather than for "let's fulfill the
creator's desire to implement a bunch of complex cool stuff the players
won't even notice". If you get a thousand or two thousand people, ask
'em to pitch in a few bucks or sell some t-shirts, and go buy a second
server machine and network them. Or scrape together a thousand bucks from
the inflated salary professional programmers are supposed to get, decide your
hobby is now officially an Expensive Hobby, and buy the machine yourself.
(If you're not getting an inflated salary yet, try a headhunter. :X)
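A quick back-of-envelope (all figures assumed for illustration, not
measured from any real game) shows why a few hundred players is within
reach:

    # Rough capacity estimate with assumed numbers.
    players = 300
    msgs_per_sec = 5        # per player, each direction
    bytes_per_msg = 60      # small movement/chat packets
    bandwidth = players * msgs_per_sec * bytes_per_msg
    print(bandwidth / 1024.0, "KB/sec")   # ~88 KB/sec each way

That's comfortably within a single cheap box and a T1 line, provided the
per-message processing stays lean.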
Distributing a game server all over the net might also be fun because A)
it sounds cool and gets press and attention and people saying "wow that's
neat", and B) it's yet another technical challenge and a lot of programmers
are motivated primarily by the fun of figuring out how to solve interesting
new technical challenges. But to build the biggest and best online
environment, I think distribution is primarily a solution to the fact that
no single CPU system can handle more than N users (where N is a number
that rises each year, but is far smaller than the largest number of users
you'd like to have and won't pass that threshold any time soon, if ever).
Distributing the game processes over machines on a LAN (or multiple CPUs
in a box with shared memory) solves that problem quite decently in my
opinion. Spreading things out further can degrade performance and
introduce huge security issues. Not to mention bigger problems keeping
every machine running the latest update to the server code, etc. And I
think the gains from spreading it out are tenuous at best; it's a solution
in search of a problem that happens to "sound cool". It would have been an
appealing one back when machines were weaker - indeed, I did some thinking
about it back in 1990.
One place where there might seem to be some merit is in serving
different countries, if there's enough demand for this to be an issue & if
the game is latency-sensitive enough for it to matter. You're never going
to get latency halfway around the world faster than the speed of light
(unless we make some major new discoveries in physics), and right now
you're going to get performance a lot worse than that. But games like
Ultima Online solve this by putting a whole separate copy of the game in
those other countries that have sufficient demand to warrant it. Which
probably makes more sense anyway. Why build a "seamless" world where
walking into areas hosted in other countries degrades performance badly,
and then go to extra effort to encourage people to stay in the locally
hosted areas where the experience is better, when with less effort
you can just set things up so they only ever get the good performance?
*-------------------------------------------**-----------------------------*
Dr. Cat / Dragon's Eye Productions || Free alpha test:
*-------------------------------------------** http://www.bga.com/furcadia
Furcadia - a graphic mud for PCs! || Let your imagination soar!
*-------------------------------------------**-----------------------------*