[MUD-Dev] Object Models

John Buehler johnbue at msn.com
Thu Nov 30 10:30:29 CET 2000


Bruce writes:

> It is a notational technique that is integrated into the design
> and implementation of the compiler, language, and the runtime
> libraries to provide for a way to re-use libraries in the face of
> many obstacles. (Like not having source, bugs present within the
> libraries, etc.)

And on these topics I'll largely have no comment.  The whole thing started
out on design, and now we've headed over to system maintenance and
deployment.  You'll get no argument from me that the ability to update a
running system is critically valuable for 24x7 operation.

> But, if at some point, we decide to remove an interface, all that
> we need to do is remove one of the files that implements our
> extension to the rest of the system from the build system and
> re-build the relevant library, and we've now removed that
> entirely from use.  We didn't have to go crawling around lots of
> places and remove bits and pieces from here and there.  This is
> somewhat similar to some of the tools that exist for Aspect
> Oriented Programming and Subject Oriented Programming which allow
> you to isolate chunks of code that cut across a number of classes
> into a separate set of files, which are then put together by the
> tools (like Aspect/J).  Of course, I can't speak to how this may
> or may not be similar to the tools that you are using.  Maybe
> your tools do all of this for you as well?

They did not.  The tool set we were working on started at the absolute
basics of components: what is a component, what is a type, how do you
compose components from subpieces, etc.  The idea of constructing some kind
of algebra for bringing them together and then separating them again was
beyond the scope of the work in the time available.

> If all tools were so perfectly safe, we'd have no recourse when
> we actually needed to do something.  What would happen if we
> should need to binary patch a component out in the field?  Say,
> we have a running server, but we don't want to have downtime.  We
> need to quickly replace a buggy piece of code.  Without the
> ability to perform runtime loading of code, how would you propose
> that the code be replaced without incurring downtime?  Try to
> avoid the temptation to respond that downtime would be preferable
> to taking the chance that one might hang themselves by having
> this type of freedom. :)  It is the job of a software engineer to
> know what their options are, what the problems are and to come to
> an appropriate choice.

In the languages that you and I use there are scads of things that we are
disallowed from doing because they make no sense in the paradigm of the
language.  You can't have two variables of the same name in the same scope.
You don't crank out invalid machine instructions.  And so on.  These are
things that you just plain don't want to do.  It works out that there are
lots of type-related things that engineers don't want to do as well because
they have a significant potential for introducing bugs.  And instead of
permitting engineers to do such things, we were working on a process in
which engineers wouldn't even have the option of making such mistakes.
Properly supported by tools, it would have been as effortless as any other
development process.

> > With what we were doing, the lion's share of
> > documentation lay in the interfaces that components
> > implemented.  So whenever a component was used, the
> > existing interface documentation could be referenced.
> > When a component brought together multiple interfaces
> > (perhaps from multiple subcomponents) such that they
> > relate to each other, a new 'type' contract needs to be
> > written to express that interaction.
>
> So, how exactly are you composing these interfaces?  What
> mechanics are you using?  Is this C++?  Are you using some custom
> tools?  How are you writing contracts?  What are contracts
> written in?  Can you provide an example?  Is this tool something
> that we can look at?

The interfaces were composed by hand using a meta-language that expressed
pre- and post-conditions for each method as well as an abstract data model.
The data model was what the pre- and post-conditions used to express the
contractual obligations of methods.  All tools were custom.  The interfaces
were the bottom-level contracts, and when a component implemented multiple
contracts that related to each other (e.g. get/set the same abstract value),
then a correlation contract was constructed.

Sorry, but I'm afraid I can't offer an example of an actual contract.  Not
that I took any with me when I left Microsoft  ;)
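
The closest I can do is sketch the general flavor in ordinary C++, with an
assertion standing in for the contract check.  Every name below is invented
for the example; none of it is the actual meta-language, which was
declarative rather than code:

    #include <cassert>

    // Two bottom-level interface contracts that happen to touch the same
    // abstract value (the widget's colour).  All names are hypothetical.
    struct IColourSource {
        virtual int GetColour() const = 0;      // post: returns the current colour
        virtual ~IColourSource() {}
    };

    struct IColourSink {
        virtual void SetColour(int colour) = 0; // post: current colour == colour
        virtual ~IColourSink() {}
    };

    // A component implementing both.  The correlation contract is the extra,
    // cross-interface obligation: after SetColour(c), GetColour() yields c.
    class Widget : public IColourSource, public IColourSink {
    public:
        Widget() : colour_(0) {}
        virtual int  GetColour() const   { return colour_; }
        virtual void SetColour(int colour) { colour_ = colour; }
    private:
        int colour_;
    };

    int main() {
        Widget w;
        w.SetColour(42);
        assert(w.GetColour() == 42);   // the correlation obligation, checked
        return 0;
    }

That's the idea of a correlation contract in miniature: the two interface
contracts stand on their own, and the extra obligation only exists because
one component implements both.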

> > In our component approach, the development manager
> > liked to use the phrase "throw away the sources" for
> > what to do when a component ships.  He wanted components
> > to be absolutely opaque.  There's no need to see into a
> > component for integration purposes because you integrate
> > at the level of public interfaces (a runtime
>
> Runtime?

You ask a component for its interface using the COM
IUnknown::QueryInterface mechanism.  That mechanism never fails at runtime,
because interface support is declared and checked statically.  That's
assuming, of course, that you declare the types of your components
correctly.
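
For what it's worth, the call itself is just the standard COM negotiation.
In C++ it looks roughly like this; IWidget and IID_IWidget are stand-ins
for whatever interface a real component would publish (and depending on the
toolchain you may need <windows.h> as well):

    #include <unknwn.h>   // IUnknown, on Windows

    // Hypothetical interface and IID, invented for the example.
    struct IWidget : public IUnknown {
        virtual HRESULT STDMETHODCALLTYPE Poke() = 0;
    };

    static const IID IID_IWidget =
        { 0x0, 0x0, 0x0, { 0, 0, 0, 0, 0, 0, 0, 1 } };  // placeholder GUID

    void UseComponent(IUnknown* unknown)
    {
        IWidget* widget = 0;
        HRESULT hr = unknown->QueryInterface(IID_IWidget,
                                             reinterpret_cast<void**>(&widget));
        if (SUCCEEDED(hr)) {
            widget->Poke();        // call through the negotiated interface
            widget->Release();     // QueryInterface took a reference for us
        }
        // With component types declared and checked statically, the failure
        // branch shouldn't be reachable in practice.
    }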

> > consideration), and there's no need to see into a
> > component for debugging purposes because there are no
> > bugs (you wouldn't believe the verification process).
>
> Well, explain!  I -know- that I won't believe "no bugs".  After
> all, if the tests claim there are no bugs, I don't think I'd
> still trust it as I don't know if:

Yes, the 'no bugs' claim is fairly outrageous and there's really no such
thing.  However, developing small, intensely (and intelligently) tested
components permits a process that actually approaches zero bugs.

>  * Tests were incomplete
>  * Underlying system problems (can't do much about this)
>  * Tests were bug-free
>  * Testing harness was bug-free
>  * Did the tests cover data input? Did they
>    account for a malicious user? Did they cover
>    all of the relevant cases there? (Can they
>    cover all of the relevant cases?)

To begin with, we were going with extensive design, development and review
cycles on the contracts themselves.  So we didn't design an entire
application and then have folks look over it at a certain level, ignoring
this chunk and intensively examining that chunk.  Instead, each contract
that was designed for a component set was inspected on its own merits.
That's because we wanted to design reusable contracts.  If a contract is so
bizarre and so oddly twisted into the overall application that it can never
be used again, it's not an interesting contract and will have to be
redesigned.

Certainly the tests cover input data.  Constraints on inputs are among the
preconditions for the correct operation of any method and are taken very
seriously.

As an example, I wrote a fairly hairy chunk of code, about 2500 lines long
(an exceptionally large component), that has had one bug uncovered so far:
when the insert method was invoked with zero values, the component would
complain if NULL were supplied as the values to insert.  The contract
claimed that NULL was okay, but the component complained anyway (even
though it was coded to deal with the trivial case).  The bug slipped
through because I wrote the contract, the tests and the component myself,
so the same blind spot ran through all three.
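
The shape of the bug, sketched in C++ with invented names (the real
component was far larger), was roughly this:

    #include <cstddef>

    // Contract: 'values' may be NULL when 'count' is zero.
    bool Insert(const int* values, std::size_t count)
    {
        if (values == 0) {         // BUG: should be (count != 0 && values == 0)
            return false;          // complains about NULL even for the trivial case
        }
        for (std::size_t i = 0; i < count; ++i) {
            // ... store values[i] ...
        }
        return true;
    }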

As I've mentioned earlier, the fact that we have to exercise every arc and
every block of code ensures that we do exhaustive testing.  That exhaustive
testing is one of the reasons that I decided to part company with the group.
It was simply too arduous to do by hand.  More tools were needed and I
wasn't prepared to wait for them to be developed.  But I sure do look
forward to building software AFTER they exist.
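
To give a sense of what "every arc" means, even a trivial invented function
with a single branch already needs a test for each path out of the 'if',
and that requirement compounds quickly across a real component:

    #include <cassert>

    // Hypothetical function with one branch, giving two arcs out of the 'if'.
    int Clamp(int value, int limit)
    {
        if (value > limit) {       // arc A: condition true; arc B: condition false
            return limit;
        }
        return value;
    }

    int main()
    {
        assert(Clamp(10, 5) == 5); // exercises arc A
        assert(Clamp(3, 5) == 3);  // exercises arc B
        return 0;
    }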

> (But, I do believe that testing and automated test suites are the
> only way to be assured that something does indeed work.  But, I
> still assume that it has bugs.)
>
> As an aside, it often isn't clear that you've seen what you're
> saying actually work:  "He wanted components to be absolutely
> opaque."  Were they?  If not, why not?  If so, how did the people
> using those components fare with them?  Did they run into
> unexpected behaviors?  Was everything completely clear to them
> from the documentation and the contract?  Did that include the
> memory usage patterns and performance profile of the component?

Components were opaque.  Users of a component relied on its documented
contracts to understand its behavior.  We didn't get lots and lots of
experience with throwing components over the wall at other teams, because
so much of our time went into implementing the absolute basics of
components.  Bootstrapping the whole mess is really expensive.

Everything was in the contracts, but they weren't necessarily perfectly
human-readable.  More tools were needed to present the information in the
contracts in a more reader-friendly fashion.

Memory usage and performance are part of a component's documentation, but
producing that information fell at the less-well-structured end of the
overall process.

We were simultaneously tackling some 200 significant architectural issues.
Some, like error handling, I've mentioned.  Others, like composition, were
HUGE considerations in their own right.  There was a huge chart on
the wall in the hallway that listed the responsibilities of our four
architects.  It was the sort of chart that wasn't going to get filled out in
a couple years' time.  Why mention this?  Because the team was about really
figuring out the basics of software construction.  No cheating, no hacking,
no fakery.  Following strict rules from bottom to top, how does software get
built?  I'm a big fan of that approach because it means that when all is
said and done, you actually have the tools and process to build software the
best way possible.  From THERE, we can actually begin to advance the state
of the art.

In short, somebody really needs to stop being so practical about software
construction and take a good hard look at how it ought to be done.  That's
my impression from having built software both ways: practically and
correctly.

JB


