[MUD-Dev] Re: Why threading? (Was: Output Classification Notes)
J C Lawrence
claw at under.engr.sgi.com
Fri Aug 7 10:22:25 CEST 1998
On Wed, 05 Aug 1998 11:07:10 +0200
Ola Fosheim Grøstad <Ola> wrote:
> J C Lawrence wrote:
>> As those of you that know my basic design may know, from a
>> process-level vantage my design fills the following description:
>>
>> A highly threaded application (~50 threads at rest, potentially
>> into the hundreds under heavy use) with a very large number of
>> potential contention points, a very high rate of contention checks,
>> but a very low rate of contention combined with contention points
>> being held for relatively long periods (often based on physical
>> IO).
>>
>> Translation: I need a synchronisation/locking mechanism that has
>> one of the following two characteristics (cheap contention checking
>> is a given as is cheap block/unblock semantics):
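For concreteness, here is a minimal sketch of the cheap-check/rare-block
pattern I have in mind, expressed in POSIX threads terms (the structure
and function names are hypothetical, not lifted from my code):

  /* One contention point per object: a plain mutex.  The common case
     is an uncontended trylock (cheap); the rare contended case blocks
     until the (possibly IO-bound) holder releases. */
  #include <pthread.h>

  struct game_object {
      pthread_mutex_t lock;               /* the contention point       */
      /* ... object state ... */
  };

  static int claim_object(struct game_object *obj)
  {
      if (pthread_mutex_trylock(&obj->lock) == 0)
          return 0;                         /* cheap check: no contention */
      return pthread_mutex_lock(&obj->lock);/* rare case: block and wait  */
  }

  static void release_object(struct game_object *obj)
  {
      pthread_mutex_unlock(&obj->lock);
  }

Whether a stock mutex is actually cheap enough at the check rate I'm
after is exactly the open question.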
> Why did you choose a threaded (asynchronous) approach? Because you
> wanted it to scale well? Or because it's fun?
My original reason for choosing a threaded approach was that the
server design grew out of my ideas on parallel processing and process
distribution, and I wanted to teach myself threads. (The original
design, back when I was infatuated with Inmos Transputers (still kinda
am), had every main functional module (BD, IO, Scheduler (now split
into the Dispatchor and Executor), etc.) running as a standalone
cooperative parallel process.)
The main reason for sticking with the threaded approach is that I've
found it a very natural and fluid method of compartmentalising the
problem space; I tend to think in threaded approaches to problems by
default now. Scalability, both across larger (MP) machines and under
load, is also an interest. If I stay here at SGI, one of these days
load is also an interest. If I stay here at SGI, one of these days
I'm going to wrangle time on a 128 processor Origin2000 and see what I
can do with, say, 100,000 simultaneous (simulated) users all sitting
evenly distributed across a score or so directly-connected 100Mbps
NICs. Theoretically my IO handling and general process model *should*
scale fairly smoothly. I'd like to see if it does.
> What are the exact advantages over a singlethreaded more iterative
> approach? I can basically see one advantage, localization (in the
> code) is easier to achieve.
For me the main advantage is in the base design. With a
single-threaded approach, in essence, the functional modules within the design
are always competing for processor time. Who gets to run next? Loop
too fast and you're wasting more time on context shifts than you are
on processing. Loop too slowly and the system is unresponsive or even
loses data due to unhandled system buffer overflows. Attempt to drive
your internal context shifts via blocking mechanisms and the design
rapidly becomes schizophrenic in its handling of the interrupt
returns and cascaded interrupts.
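To make the loop-rate dilemma concrete, here's the classic
single-threaded select() main loop, sketched with hypothetical module
names:

  /* The tick length is a guess either way: too short and you burn
     cycles on empty passes, too long and you lag or overflow system
     buffers.  Every module must also hand control back quickly. */
  #include <sys/select.h>
  #include <sys/time.h>

  extern void service_network_io(fd_set *ready);
  extern void run_scheduler_slice(void);
  extern void run_db_slice(void);

  void main_loop(int maxfd, fd_set *watched)
  {
      for (;;) {
          fd_set ready = *watched;
          struct timeval tick = { 0, 10000 };   /* 10ms -- but why 10ms? */

          if (select(maxfd + 1, &ready, NULL, NULL, &tick) > 0)
              service_network_io(&ready);

          run_scheduler_slice();
          run_db_slice();
      }
  }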
Seemingly the only sane approach is to use simulated threads within
the process, at which point you have to define and resolve your own
scheduling, contention, and yield semantics, which is both a
non-trivial task and one fraught with exceptions. Going MT carefully
sidesteps most of that mess while gaining few new problems (note that
the one above is not a design level problem but an implementation
level problem).
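By contrast, this is the rough shape the MT version takes: each
functional module is a thread parked on its own work queue, and the
scheduling and yield semantics come free from the kernel. Again a
sketch only, with hypothetical names:

  /* Each module (IO, DB, Executor, ...) runs one of these.  It blocks
     on a condition variable when idle -- no polling, no hand-rolled
     scheduler, no explicit yield points. */
  #include <pthread.h>
  #include <stdlib.h>

  struct work_item {
      struct work_item *next;
      void (*run)(void *);
      void *arg;
  };

  struct work_queue {
      pthread_mutex_t   lock;
      pthread_cond_t    nonempty;
      struct work_item *head;
  };

  void *module_thread(void *arg)
  {
      struct work_queue *q = arg;

      for (;;) {
          struct work_item *item;

          pthread_mutex_lock(&q->lock);
          while (q->head == NULL)
              pthread_cond_wait(&q->nonempty, &q->lock);
          item    = q->head;
          q->head = item->next;
          pthread_mutex_unlock(&q->lock);

          item->run(item->arg);     /* do the work outside the lock */
          free(item);
      }
      return NULL;
  }

The producer side is just lock, append, pthread_cond_signal(), unlock,
and that's the entire "scheduler".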
--
J C Lawrence Internet: claw at null.net
(Contractor) Internet: coder at ibm.net
---------(*) Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...