[MUD-Dev] Preference for host OS
J C Lawrence
claw at 2wire.com
Mon Sep 10 20:19:39 CEST 2001
On Sat, 08 Sep 2001 23:35:54 -0700
Brian Hook <bwh at wksoftware.com> wrote:
I've re-ordered your text slightly in the following quotes, moving
the quoted sentence below from the end to the beginning.
> So if people are willing to chime in about their choice of host OS
> and, more importantly, why, I'd be very interested in hearing.
There are two parts to this decision: technical and business
requirements. In an ideal world the two are synonymous. Typically,
in the Real World, they're not even close to synonymous. This is
probably the largest single factor that makes OS choices messy.
Frequently the fact that business requirements will dictate
technical requirements is not acknowledged, and instead those
technical requirements are treated as primary. When this occurs it
is a critical flaw in the analysis process, one that fundamentally
distorts any findings and results.
The classical case of this is the engineers responsible for the
technical determinations specifying a system that they personally
prefer or are comfortable with (eg Unix people specifying a
Unix-like system, Windows people specifying a Windows-like system,
etc). It needs to be realised that this is not a technical
requirement but a business decision, and that the business decision
can by reflection dictate technical requirements. There are
significant benefits to using systems and tools (development, host,
design philosophy etc) that your current staff are familiar with.
Conversely, sometimes it's cheaper to get new staff or to train your
current staff on a new/unfamiliar area. As much as we engineers may
like to think that this is a simple and obvious decision, from a
business perspective it's NOT necessarily so trivial, especially once
the marketing and legal depts get involved.
I'm not aware of any general purpose OS out there that is
technically incapable of running a large MUD, and in that list I'm
explicitly including things like Plan 9, Inferno, assorted other
"research" OSes, a host of Unix flavours and Unix-like systems, the
range of Microsoft offerings, MacOS, a large range of embedded
platforms (tho most have poor TCP/IP stacks for the purpose) and so
forth.
The guiding question is not, "Can this OS do what we want?"
Instead it's a question of, "What would it cost us to have
<insert_platform> do what we want?" Answering that question fully
involves a large raft of business decisions, risk analysis,
cost/benefit analysis,
staffing questions, etc and can easily involve many more departments
than just HR and engineering.
Note:
Start out by realising that you don't actually have to have a
host OS at all. If you devolve the base definition and set of
activities of a MUD server (and I'm counting both classical text
MUDs and the newer crop of graphicals in here) you're
essentially looking at the requirements of an OS. There's
little/no fundamental difference between the two sets -- it's all
resource management and event processing.
Several early MUDs were written as replacement command
interpreters atop the core OS. (Think replacement for
COMMAND.COM for DOS or /bin/sh for *nix systems). As such you
logged into the host OS and were dropped directly into the game
with the game's internal command line actually being the system
command line for that account. This model has several
advantages in regard to authentication and billing requirements.
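On a *nix box the simplest form of that model is to point the
player account's login shell at the game binary. Roughly (the
mudsh path and the account name are made up for the example):

  # mudsh is a hypothetical MUD front-end binary
  echo /usr/local/bin/mudsh >> /etc/shells    # mark it a valid shell
  chsh -s /usr/local/bin/mudsh player1        # make it player1's shell
  # player1's /etc/passwd entry then ends in the game rather than a
  # general purpose shell:
  #   player1:x:1001:1001:MUD player:/home/player1:/usr/local/bin/mudsh

Logging in as player1 drops you straight into the game, and
quitting the game ends the session.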
A few (I can recall two) of the early MUDs hacked the base OS,
customising it to the point that the system booted directly into
the MUD, with no intervening general application layer or general
purpose OS left.
General commentary:
Start out by admitting your preferences/religion and the extents
and limits to which it can be justified and supported.
Use what you are familiar and comfortable with (ie go with your
preferences/religion -- you're going to anyway -- but understand
your choice).
Understand the costs of fixing/covering up weaknesses in your
choice.
Don't ignore administration or scaling costs.
Don't ignore long term maintenance and support costs.
Don't ignore the extent to which your choice will tie you into a
specific solution path, and the risks/benefits of that decision.
Make sure you have understood the costs and benefits of other
platforms, as regards becoming competent with them, and in any
on-going expenses, risks, or liabilities. ie do due diligence on
your decision.
Most of this really comes down to deliberately trying to be
intelligent about the decision: ensuring you understand the scope
and ramifications of the area, and making an educated, inclusive
decision.
> - Windows NT/2000 Server (expensive)
I am a fan of tool-builder oriented systems, which tends to align
me with Unix-like systems and gives me a general dislike of
Windows. This is reinforced by the fact that, with minor
exceptions, I've not used Windows for over 10 years now. As a
result I've become very used to a manner of working and using
computer systems which is explicitly antithetical to Windows (it's
simply not possible to work that way under Windows), and that gap
grows larger with every month I spend assuming a Unix-like
environment and with every Windows release.
While I do regular competitive/capability analysis on Windows and
its applications, I'm not in a good position to represent that side
well, so I largely won't.
My concerns on Windows would be:
-- Up-front license costs and future costs and risks both in terms
of licensing and maintenance
-- Administrative costs, both in ease and effectiveness
-- Platform and vendor tie-in (part of your risk analysis)
-- Technical transparency of the platform
-- Scripting/automation/integration transparency
-- Limited availability/expense of third party tools and sources
From an engineering vantage, depending on your project and location,
there may be benefits in staffing availability and in the
ease/ability to leverage platform supports (COM/DCOM/.Net/VB etc).
Take care on those aspects as they come with vendor and platform
locks, solution method limits, legacy and maintenance limitations
etc -- it's really a rather complex and multi-faceted evaluation for
those factors.
> - OpenBSD - FreeBSD - NetBSD
Odds are you're going to end up with your decision falling across
three splits:
1) Unix-like or not Unix-like?
2) Proprietary or OpenSource?
3) x86 platform or non-x86?
For me the value propositions for Unix-like systems are
overwhelming. I'm a tool builder and have a tendency to view
everything from that vantage:
Building a service (MUD or otherwise) involves building tools.
Some tools are used to build other tools, some tools are used to
conglomerate other tools, some tools are used to analyse or
re-present other tools, and the final tool is your service in
question.
Others are not comfortable with that approach, and so don't have the
same preferences. Unix-like systems tend to make that tool-based
approach and view easier than others. Further, given a
tool-building philosophy there are administrative benefits for
Unix-like systems, both at the simple level of script/tool
automation of administrative and operational systems, and at the
level of glue integration of disparate systems. If your admins are
of the "I'll just write a script to glue those together and do
that" bent, then Unix-like systems will likely be the default. If
your admins are instead of the "I'll have a tool/application to do
that, or I'll get a developer to write me one" bent, then Unix-like
systems will be less attractive.
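As a toy example of that glue-script bent (the log path, pattern,
and address below are all made up):

  #!/bin/sh
  # Watch a hypothetical MUD log for panics and mail the on-call
  # admin a snippet of context.  Everything here is illustrative.
  LOG=/var/log/mud/server.log
  if grep -q 'PANIC' "$LOG"; then
      tail -50 "$LOG" | mail -s "MUD server panic" oncall@example.com
  fi

Cron that every few minutes and you have a crude monitor.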
Again, in the end it's the people that are the determining factors,
not the technology or implementation.
The proprietary vs OpenSource choice is partially religious, but is
mostly dependent on the requirements of your application space and
its external integrations. If you're doing nasty things with Java
or have high thread or socket counts it's tough to argue with
Solaris. A closed source solution may be effectively mandated if
you're dependent on external applications such as Oracle (Oracle on
Linux is still raw), or Professional Services vendors to integrate
your system with third party systems, or have audit/liability
contractual requirements with financial institutions, or any of a
host of other business reasons.
As previously admitted I default to Open Source solutions as I like,
appreciate, and am used_to/most_comfortable in fully transparent
source bases. Specifically it bugs me when I have black boxes in my
development or runtime environment.
I'm currently facing a debugging case where a black box libc call
(open()) appears to either have an internal race condition or
simply not be re-entrant, and I can't prove it without the source
access I don't have. Not good.
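Short of source, about the best I can do is watch the call from
the outside. A rough sketch with strace (mudd below is a stand-in
for the real binary):

  # -f follows children/threads, -tt timestamps each call
  $ strace -f -tt -e trace=open -o open.trace ./mudd
  $ less open.trace

That shows what the wrapper actually asks the kernel for and when,
which narrows things down, but it still proves nothing about the
locking inside the black box itself.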
> - various forms of Linux (RH,Debian,Mandrake,SuSE, etc.)
I prefer Linux to the *BSDs. Why? I prefer SysV models at both the
syscall and administrative levels to *BSD models. It's not a big
thing. I've worked on both and can live in both environments. It's
a preference, likely based on the fact that I used SysV systems long
before I used BSD-like systems, no more.
The fact that Linux is currently receiving more developer
attention than the *BSDs encourages this view -- it means that
interesting tools and systems are more likely to be available and
ported for Linux than for the *BSDs. While I don't tend to crowd
the bleeding edge, there's a definite advantage to having an
effective guarantee that any interesting OpenSource *ix package
will already run under Linux if it wasn't initially developed
under Linux.
Within the realm of Linux distributions I prefer Debian, with a
second class preference for any of the .deb based distributions over
RPM-based distributions. As a packaging format .deb incorporates
more information on the package and its dependencies both for
runtime and building from sources than RPM does. This difference in
data sets allows the Debian packaging system to do automatic
dependency satisfaction, full dependency graph determination and
manipulation etc, in a manner and with a flexibility which
RPM-based systems cannot match. I like this -- as a SysAdm it helps
make my life
easy, and it makes reproducibility in systems and SysAdm easier to
accomplish (a non-trivial benefit).
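As a rough sketch of what that richer metadata buys you in day to
day use (<package> stands for any packaged piece of software; run
as root on a Debian box):

  apt-get install <package>    # fetch and install the package plus
                               # everything it depends on, in order
  apt-cache depends <package>  # list the package's declared dependencies
  apt-get -s dist-upgrade      # simulate a full upgrade, showing the
                               # dependency resolution without acting
  dpkg -s <package>            # report installed state and dependencies

All four are standard pieces of the dpkg/apt tool suite.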
Notes:
I particularly like the fact that given a Linux/x86 system running
VACM (needs an IPMI/EMP supporting motherboard) I can, quite
literally, fully maintain, administer, upgrade, reset,
power-cycle, BIOS reconfigure, and otherwise drive a remote box
with the same facility as if I had physical access. The only
difference at that point between remote and physical access is
that I can't fondle the wires or replace hardware components.
ObNote: I don't run VACM on Kanga.Nu (tho I do use some of the
EMP supports). Zocalo doesn't have the infrastructure for VACM
and I've been loath to push an extra box on him just to run the
VACM server.
I used to run RedHat, and with one exception now only use Debian.
The administrative benefits were compelling. If you want to
research the area, read up on Debian's dpkg and apt-get tools.
The only major feature which Debian's .deb package tools don't
support (RPM doesn't either) is two stage package commit
processing for installs. AIX in particular shines in this
area. (Note: There's some discussion of this being added to
Debian's packaging system.) Details:
Under AIX a package install is a two stage process.
Installing a package under AIX first backs up and checkpoints
every file that the package install will touch or overlay. Once
that is done the new package and its files are installed.
At any point the package may be uninstalled and the system
reverted to the exact condition it was in prior to the install
(all binaries, all config files, etc).
At any point the package may be committed. This removes the
back history for binary files and frees the requisite disk space
for all the duplicate binary file copies. After a package
commit you can no longer transparently revert back to the prior
condition. IIRC however config file histories will be
maintained across commits so that you can do diffs across
package install versions.
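From memory, the AIX side of that looks roughly like this (the
fileset name and install device are illustrative):

  installp -a -d /dev/cd0 some.fileset  # apply: install, keeping the
                                        # saved copies needed to back out
  installp -r some.fileset              # reject: undo an applied but
                                        # uncommitted install
  installp -c some.fileset              # commit: discard the saved
                                        # copies and reclaim the disk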
> Solaris (anyone bother?)
A great many people do, especially in the commercial arena. Solaris
scales very predictably and has very well known performance curves
and general behaviours.
Given that Sun are handing out Solaris/x86 for free (I've got a copy
I picked up at LinuxWorld if anyone would like to borrow it), and
are aggressively supporting it both in the market and with
applications, development toolchains, etc, it can be an interesting
option.
ObNote: I used to work for Sun. I'm not terribly fond of Solaris
as an admin (I despise pkgadd), but I have little bad to say about
it otherwise.
Aside: I've been particularly pleased with Oracle performance under
Solaris/SPARC (tho I prefer DB2 under AIX for large SQL databases).
> - OS X server (doubtful since hardware is very expensive)
As is regularly repeated to me by the local crowd: Macs make great
Linux/PPC boxes. Beyond that I can't comment much.
Past a very minimal point I really don't care much about the
underlying hardware (do I have hardware, can I afford it, does it
work, is it fast enough, does it have the required level of fault
tolerance, can I replace it if/as needed). Or, more simply, for me
hardware devolves to the following question:
Can I run GCC and will my stuff compile reasonably easily?
Which really translates to:
Has GCC been ported?
Is the base ABI not weird?
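In practice that check is a few seconds at a shell prompt:

  $ gcc -dumpmachine     # is there a working GCC, and for what target?
  $ ./configure && make  # does my stuff build without heroics?

If both behave, the hardware is interesting; if not, it probably
isn't worth the fight.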
It's worth noting that mid/low end SPARC and PPC boxes (Apple, Sun,
IBM/RS6K, etc) are currently very cheap on the second hand market,
as well as being readily available/cheap at managed_hosting/colo
facilities (this is particularly true of lower end (Ultra)SPARC
boxes). Netra T1s are cute boxes.
ObExample: E450s (a very nice mid-range Sun box) are getting
particularly cheap and I'm told have pushed the second hand market
beyond saturation point. Check eBay and .com auctions.
Remember to calculate the effective and required half-life of your
hardware platform, and then the effective cost should you be
required to change or replace your hardware platform ahead of
schedule. While the classical server companies do make excellent
examples of industrial design, frequently across the expected
half-life of the system the cost/risk ratios may not be compelling
versus a much shorter half-life commodity box.
Witness how extensively throw-away Celeron-based eMachines systems
are being used in infrastructure positions in the Valley. They
have a trivially short half-life, but for those cases scaling
across systems, implementing fault tolerance across nodes, and
then simply throwing the old box away and plugging in a new one in
the case of any fault is a compelling argument.
> I'm sure there are MUDs that still run on AmigaDOS or OS/2 or
> something, but I doubt they're mainstream =)
I originally moved to Linux as I needed a 64bit development
environment. I moved to Linux/Alpha purely as at the time that was
the most affordable and performant 64bit platform I could find.
With the recent death of Alpha, price/performance ratios have fallen
precipitously. If you can ensure that your application is portable
it may be interesting to go Linux/Alpha today and then look to
UltraSPARC, PPC, Hammer, or Itanium down the road.
Aside: Take note of the much lower code and data density on Alpha
and the resulting effects on RAM requirements.
> The factors that seem to be the crux of an OS choice are probably
> price, robustness, security, performance, and ease of
> installation/administration/development.
Uhh, yeah, them too.
> I've done some Linux installs and, overall, they seem to be okay
> (RH and Mandrake).
There's a big difference between installing an OS and maintaining
and running it across extended periods. There's also a big
difference between maintaining and running an OS across an extended
period, and building and running applications on that OS. Don't
conflate these problem sets.
--
J C Lawrence
---------(*) Satan, oscillate my metallic sonatas.
claw at kanga.nu He lived as a devil, eh?
http://www.kanga.nu/~claw/ Evil is a name of a foeman, as I live.