System Security (was: Re: [MUD-Dev] players who "take away from the game")

Cynbe ru Taren cynbe at muq.org
Thu Nov 11 17:01:39 CET 1999


On Wed, 10 Nov 1999 22:00:11 -0700 
cg  <cg at ami-cg.GraySage.Edmonton.AB.CA> wrote:

> [Eli Stevens:]
>> This got me wondering...  :)
>> 
>> What precautions should be taken when writing a MUD codebase from
>> scratch?  Are most security holes that a MUD box might have at
>> the OS level, or does having a program like a MUD running open up
>> opportunities that would not otherwise exist (assuming that the
>> ability to issue OS commands and such is not a feature)?

Issues I would (well, have :) look at:



*  Obviously, but apparently not obviously enough (since they keep
   hitting the news):  Buffer overflow bugs.

   You want to design all your I/O buffer handling, text/string
   manipulation &tc so that these are -structurally- almost impossible,
   instead of depending on line-by-line individual feature coding to
   do everything right in spite of the basic architecture.

   E.g., using C++-style string classes instead of by-hand malloc()/free()
   logic can make it a lot less likely that some string operation somewhere
   in midnight madness feepland is subject to buffer overflow.
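
   A minimal sketch of the difference, assuming a little greeting
   routine (the names here are purely illustrative):

      #include <cstdio>
      #include <string>

      // Risky: fixed-size buffer, no length check.  A long enough
      // 'name' walks right off the end of 'buf'.
      void greet_unsafe(const char *name) {
          char buf[32];
          std::sprintf(buf, "Welcome, %s!", name);
          std::puts(buf);
      }

      // Safer: std::string grows as needed, so there is no size
      // bookkeeping to get wrong during midnight madness.
      void greet(const std::string &name) {
          std::string msg = "Welcome, " + name + "!";
          std::puts(msg.c_str());
      }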



*  Next to specific buffer over-runs, array indexing errors and
   pointer arithmetic are likely the most fruitful source of exploitable
   bugs.

   If security matters, you may want to pick an implementation
   language or coding style which inherently minimizes the potential
   for these:  Avoid pointer arithmetic and use C++ array classes
   written to do bounds checking, say.  The added peace of mind
   can easily justify the extra runtime overhead.

   Many such bugs are ultimately caused by having to do manual
   storage allocation at a micro-level and screwing up.  You may
   wish to architecturally eliminate this class of bugs either
   by adopting static allocation (if your design is simple enough
   to allow it at acceptable cost in space), or else by supplying
   a system-wide automatic storage allocation mechanism, either
   an off-the-shelf refcounter or garbage collector or perhaps
   something custom, depending on the design.
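
   A sketch of the same idea using standard containers, assuming they
   are acceptable on your platform (names again illustrative):

      #include <cstddef>
      #include <vector>

      // Raw pointer arithmetic: nothing stops 'i' from wandering off
      // the end of the allocation.
      int stat_unsafe(int *stats, int i) {
          return *(stats + i);
      }

      // Bounds-checked access: vector::at() throws std::out_of_range
      // instead of silently reading foreign memory, and the vector
      // owns its own storage, so there is no free() to forget.
      int stat_checked(const std::vector<int> &stats, std::size_t i) {
          return stats.at(i);
      }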



*  Try to architecturally make the security kernel -- the subset of the
   program which -must- be correct for security to be maintained -- as
   small and stable as possible.

   The first stage here, obviously, is to -have- a security kernel, as
   opposed to having the security of the system depend on basically
   every line of code in it. :)

   The second stage is to make the functionality of that kernel as
   efficient, general-purpose and policy-free as possible:  This
   minimizes the need/temptation to go mangling it and introducing
   new security holes.

   An application language powerful and efficient enough that most
   feepage can be done safely in it rather than dangerously in the
   underlying C server can add a lot to long-term security.
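
   One way to read "small and policy-free" in code terms -- a sketch
   only, with made-up names -- is a kernel that answers exactly one
   mechanism question, while the table of who holds what is data fed
   in at startup rather than logic scattered through the server:

      #include <set>
      #include <string>
      #include <utility>

      class SecurityKernel {
      public:
          // Policy goes in as data ...
          void grant(const std::string &actor, const std::string &priv) {
              grants_.insert(std::make_pair(actor, priv));
          }
          // ... and every privileged operation in the server asks this,
          // and only this, before proceeding.
          bool may(const std::string &actor, const std::string &priv) const {
              return grants_.count(std::make_pair(actor, priv)) != 0;
          }
      private:
          std::set<std::pair<std::string, std::string> > grants_;
      };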



*  Segregate all host-OS access out carefully, grant such access
   extremely sparingly, and scrutinize the resulting code with
   imagination and rampant paranoia.

   E.g., are you logging outside-triggered events to a host
   file?

   Sounds simple and safe, but deliberately flooding the
   server with such events may cause the logfile to fill a critical
   partition:  Will all affected processes remain secure in that case?
   You may need to check all disk writes to see what happens upon
   disk-full errors, and think about it (a sketch follows at the
   end of this item).

   You can put a limit on how long the logfile is allowed to get --
   but now you've provided an attacker with a ready-made way of
   covering tracks, by just scrolling the critical episode off
   the log via lots of noise events afterwards.
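
   On the disk-write point above, a minimal sketch of a logging wrapper
   that at least notices write failures (names illustrative; what you do
   on failure -- refuse new connections, switch to an emergency channel,
   page someone -- is the real design decision):

      #include <cstdio>

      bool log_line(std::FILE *log, const char *line) {
          if (std::fputs(line, log) == EOF) return false;
          if (std::fputc('\n', log) == EOF) return false;
          // Buffered writes often fail only at flush time, which is
          // where a full partition (ENOSPC) tends to show up.
          if (std::fflush(log) != 0) return false;
          return true;
      }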



The typical security hole seems to trace back to some little
convenience feep done with the mind on things other than security: The
server design goal is to make the system secure even in the face of
utter idiocy on the part of someone doing such little feeps in a hurry
on caffeine at 5:57 AM before running to catch the bus.





The above all relates to just keeping the host OS account(s) secure
from folks using the server: Keeping the internal contents of the
server sane in the face of in-server meddling is an entirely separate
security front, one calling for some up-front model of how privileges
are handed out and enforced.

It is hard to say much general about that except that if you start
with no clear security design, you'll wind up with effectively no
security at this level -- pretty much the standard state of affairs.

Multi-user OSes, capability designs, file access control lists &tc can
provide some models on which to base your internal security mechanisms
at this level, if you wish to take the problem seriously.
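
A capability-flavored sketch of what that might look like inside the
server, with made-up names: permission is an unforgeable token the
privileged code is handed, not a flag looked up by name, so there is
no global "is_wizard" test to forget or get wrong:

    class ShutdownCap {
        friend ShutdownCap mint_shutdown_cap();   // only minted here
        ShutdownCap() {}
    };

    ShutdownCap mint_shutdown_cap() { return ShutdownCap(); }

    // Only code that has been handed a ShutdownCap can even make this
    // call; ordinary player-level code simply never receives one.
    void shutdown_server(const ShutdownCap &) {
        /* orderly shutdown ... */
    }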

It is much easier, of course, to shrug, provide nothing, and blame the
application programmer any time some simple coding slip brings the
whole house of cards down.

:)

My $0.02 worth.

  Cynbe





