[MUD-Dev] Security
J C Lawrence
claw at kanga.nu
Mon Feb 14 23:42:21 CET 2000
A wonderfully applicable Bruce Schneier quote:
http://slashdot.org/interviews/99/10/29/0832246.shtml
Especially given that game worlds and their controls, data
structures, and dependencies are inherently complex (which tends to
leak over into protocol and system complexity):
--<cut>--
The real problem with protocols, and the thing that is the hardest
to deal with, is all the non-cryptographic dressing around the core
protocols. This is where the real insecurities lie. Security's worst
enemy is complexity.
This might seem an odd statement, especially in the light of the
many simple systems that exhibit critical security failures. It is
true nonetheless. Simple failures are simple to avoid, and often
simple to fix. The problem in these cases is not a lack of knowledge
of how to do it right, but a refusal (or inability) to apply this
knowledge. Complexity, however, is a different beast; we do not
really know how to handle it. Complex systems exhibit more failures
as well as more complex failures. These failures are harder to fix
because the systems are more complex, and before you know it the
system has become unmanageable.
Designing any software system is always a matter of weighing and
reconciling different requirements: functionality, efficiency,
political acceptability, security, backward compatibility,
deadlines, flexibility, ease of use, and many more. The unspoken
requirement is often simplicity. If the system gets too complex, it
becomes too difficult and too expensive to make and
maintain. Because fulfilling more of the other requirements usually
involves a more complex design, many systems end up with a design
that is as complex as the designers and implementers can reasonably
handle. (Other systems end up with a design that is too complex to
handle, and the project fails accordingly.)
Virtually all software is developed using a try-and-fix
methodology. Small pieces are implemented, tested, fixed, and tested
again. Several of these small pieces are combined into a larger
module, and this module is tested, fixed, and tested again. The end
result is software that more or less functions as expected, although
we are all familiar with the high frequency of functional failures
of software systems.
This process of making fairly complex systems and implementing them
with a try-and-fix methodology has a devastating effect on
security. The central reason is that you cannot easily test for
security; security is not a functional aspect of the
system. Therefore, security bugs are not detected and fixed during
the development process in the same way that functional bugs
are. Suppose a reasonably sized program is developed without any
testing at all during development and quality control. We feel
confident in stating that the result will be a completely useless
program; most likely it will not perform any of the desired
functions correctly. Yet this is exactly what we get from the
try-and-fix methodology with respect to security.
The only reasonable way to "test" the security of a system is to
perform security reviews on it. A security review is a manual
process; it is very expensive in terms of time and effort. And just
as functional testing cannot prove the absence of bugs, a security
review cannot show that the product is in fact secure. The more
complex the system is, the harder a security evaluation becomes. A
more complex system will have more security-related errors in the
specification, design, and implementation. We claim that the number
of errors and difficulty of the evaluation are not linear functions
of the complexity, but in fact grow much faster.
For the sake of simplicity, let us assume the system has n different
options, each with two possible choices. Then, there are about n^2/2
different pairs of options that could interact in unexpected ways,
and 2^n different configurations altogether. Each possible
interaction can lead to a security weakness, and the number of
possible complex interactions that involve several options is
huge. We therefore expect that the number of actual security
weaknesses grows very rapidly with increasing complexity.
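To make that growth concrete, here is a minimal Python sketch (purely
illustrative; it simply evaluates the n^2/2 pair count and the 2^n
configuration count for a few values of n):

    from math import comb

    # Pairwise interactions and total configurations for a system
    # with n binary options, per the formulas above.
    for n in (10, 20, 50):
        pairs = comb(n, 2)   # exactly n*(n-1)/2, roughly n^2/2
        configs = 2 ** n     # every combination of option settings
        print(n, pairs, configs)

    # n=10:    45 pairs,                  1024 configurations
    # n=20:   190 pairs,             1,048,576 configurations
    # n=50: 1,225 pairs, 1,125,899,906,842,624 configurations

Even at fifty options the pair count is still reviewable by hand, but
exhaustively checking every configuration is already out of reach.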
The increased number of possible interactions creates more work
during the security evaluation. For a system with a moderate number
of options, checking all the two-option interactions becomes a huge
amount of work. Checking every possible configuration is effectively
impossible. Thus the difficulty of performing security evaluations
also grows very rapidly with increasing complexity. The combination
of additional (potential) weaknesses and a more difficult security
analysis unavoidably results in insecure systems.
In actual systems, the situation is not quite so bad; there are
often options that are "orthogonal" in that they have no relation or
interaction with each other. This occurs, for example, if the
options are on different layers in the communication system, and the
layers are separated by a well-defined interface that does not
"show" the options on either side. For this very reason, such a
separation of a system into relatively independent modules with
clearly defined interfaces is a hallmark of good design. Good
modularization can dramatically reduce the effective complexity of a
system without the need to eliminate important features. Options
within a single module can of course still have interactions that
need to be analyzed, so the number of options per module should be
minimized. Modularization works well when used properly, but most
actual systems still include cross-dependencies where options in
different modules do affect each other.
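As a rough illustration of the payoff, the sketch below compares the
pairwise-interaction count for one monolithic pool of options against
the same options split into isolated modules. It is a toy model that
assumes perfect isolation, which, as noted above, real systems rarely
achieve:

    from math import comb

    def pairwise(n):
        # Two-option interactions among n options: n*(n-1)/2.
        return comb(n, 2)

    n = 40
    monolithic = pairwise(n)          # 780 pairs to analyze
    modular = 4 * pairwise(n // 4)    # four modules of 10: 4 * 45 = 180
    print(monolithic, modular)

Cross-module dependencies add pairs back, so the actual gain depends on
how clean the interfaces really are.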
A more complex system loses on all fronts. It contains more
weaknesses to start with, it is much harder to analyze, and it is
much harder to implement without introducing security-critical
errors in the implementation.
This increase in the number of security weaknesses interacts
destructively with the weakest-link property of security: the
security of the overall system is limited by the security of its
weakest link. Any single weakness can destroy the security of the
entire system.
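As a toy numeric model of the weakest-link property (the ratings here
are an assumed illustration, not a real metric): give each component a
security level, and the system as a whole rates no better than the
minimum, so every added component is another chance for that minimum to
drop.

    def system_security(levels):
        # Weakest-link model: the system is only as strong as its
        # least secure component.
        return min(levels)

    print(system_security([9, 8, 7]))     # -> 7
    print(system_security([9, 8, 7, 2]))  # -> 2; one weak link dominates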
Complexity not only makes it virtually impossible to create a secure
system, it also makes the system extremely hard to manage. The
people running the actual system typically do not have a thorough
understanding of the system and the security issues
involved. Configuration options should therefore be kept to a
minimum, and the options should provide a very simple model to the
user. Complex combinations of options are very likely to be
configured erroneously, resulting in a loss of security. There are
many stories throughout history that illustrate how management of
complex systems is often the weakest link.
I repeat: security's worst enemy is complexity. The most serious
protocol problem is how to deal with complex protocols (or how to
strip them down to the bone).
--<cut>--
--
J C Lawrence Home: claw at kanga.nu
----------(*) Other: coder at kanga.nu
--=| A man is as sane as he is dangerous to his environment |=--