Disk v. Mem

Cynbe ru Taren cynbe at laurel.actlab.utexas.edu
Tue May 13 21:09:23 CEST 1997


ashen at pixi.com:

| erh, at the risk of getting bounced out of
| here for asking a fourth-grade question... ;)
| 
| what are the advantages of swapping out objects
| to disk as opposed to keeping them in memory?
                                               
Excellent question -- and the follow-on excellent question
is "Ok, so why not just let the host virtual memory hardware
do it for you?" :)

| do you really need to save that much memory?
| do you need it for something else?

At today's ram prices, depending on your budget, you may well not need
to.  A large mud can run to 100 Meg or more of ram, though, much
of it infrequently used, and you might not want to devote ~$500
worth of ram to the db if you can avoid it with a simple software
fix...

| are objects so large that 40,000 of them fill mem quickly?

On some servers, at least, objects take a minimum of 100 bytes
each, and the sky is the limit after that as properties are
added:  There may be hundreds of thousands of strings averaging
about sixteen bytes each.
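As a rough back-of-the-envelope (the 300,000 string count is an assumed
figure standing in for "hundreds of thousands"; real per-string overhead
for headers and pointers would push the total considerably higher):

```python
objects      = 40_000    # db object count from the question above
obj_overhead = 100       # bytes minimum per object
strings      = 300_000   # assumed stand-in for "hundreds of thousands"
str_bytes    = 16        # average string payload size

total = objects * obj_overhead + strings * str_bytes
print(total)             # 8_800_000 bytes of bare payload
```

Bare payload alone is under 10 Meg; allocator and property-table
overhead, plus simply having more of everything, is what pushes a big
db toward the 100 Meg range.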

| does too full a mem size inhibit the driver speed?

Not for most practical purposes, as long as it all fits in
physical memory.  If it gets big enough to start paging out
to disk via the host virtual memory system, performance can
drop off -dramatically-.  (I speak as someone who has on
occasion worked with hundreds of megabytes of volume MRI
datasets on machines with only dozens of megabytes of
physical ram *wrygrin*...)

| isn't it more costly in speed to write to disk?

Absolutely -- a factor of a million or so.  (Milliseconds
to access disk vs nanoseconds to access ram.)  Meaning that
if your server suddenly starts running at disk speeds
instead of ram speeds, it may suddenly look a million times
slower.

Humans are amazingly sensitive to a slowdown of just
a constant factor of a million. :)

| what kind of savings do you get versus this extra cost?
                                                        
Depending on your situation, you may buy nothing at all;
you may be able to support a bigger db than you could
otherwise afford; you may be able to do a better job of
virtual memory than the host OS/hardware would otherwise
do, thus reducing the time spent waiting for disk; or
you may be able to save money on hardware by not having
to buy an extra gigabyte of ram.

If you really need to access all of your db every second
or so, then diskbasing just ain't gonna work for you.  This
may be true of some combat-style muds with simulation going
on in every room on every cycle, say.

If much of your db goes untouched for minutes, hours or days
at a time, you may be able to save lots of ram by keeping the
unused parts on disk, and only the frequently touched stuff in
ram. (Which still isn't free, no?  Else I'd have a lot more of
it floating around the house than I do. :)
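The hot-set-in-ram, cold-set-on-disk idea can be sketched as a tiny
LRU object cache (the names and the dict-standing-in-for-the-disk-db
are mine, not any particular server's):

```python
from collections import OrderedDict

class ObjectCache:
    """Keep the most recently touched objects in ram; evict the
    least recently used ones back to the (stand-in) disk db."""

    def __init__(self, capacity, disk_db):
        self.capacity = capacity
        self.disk = disk_db          # stand-in for the on-disk db
        self.ram = OrderedDict()     # object id -> object, LRU order

    def fetch(self, oid):
        if oid in self.ram:
            self.ram.move_to_end(oid)            # mark recently used
        else:
            self.ram[oid] = self.disk[oid]       # "page in" from disk
            if len(self.ram) > self.capacity:
                victim, vobj = self.ram.popitem(last=False)
                self.disk[victim] = vobj         # "page out" to disk
        return self.ram[oid]
```

With a capacity of 2, fetching objects 1, 2, then 3 pushes object 1
back out to disk while 2 and 3 stay resident.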

"Why not just let the host OS page the unused stuff out to disk?"

The basic problem is that modern hardware swaps out units of 4K,
whereas objects in muds tend to be 16-32 bytes long (given a
reasonable server design - 100 bytes or so if it's spendthrift).

So if a 4K page contains only one 40-byte object in use,
it still has to stay in ram, even though 99% of it is not
in use.  If you're willing to buy 100x more ram than is
logically needed, this isn't a problem:  Otherwise, a software
solution that swaps out smaller units of ram can be a big win.
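The waste is easy to quantify, using the 4K page and single 40-byte
live object from the paragraph above:

```python
PAGE_SIZE = 4096   # bytes per hardware page
LIVE      = 40     # one live object pinning the whole page in ram

wasted = (PAGE_SIZE - LIVE) / PAGE_SIZE
print(f"{wasted:.0%} of the page is dead weight")  # → 99%
```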

A viable alternative is to have your server move all the objects
in frequent use into one spot in ram:  This leaves lots of pages
which are 100% unused instead of 99% unused, which the host
OS/hardware can then swap to disk for you.  It's an approach
which for some reason I don't seem to see anyone using...
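A toy illustration of that repacking (the object layout and the
hot_window threshold are hypothetical; a real server would copy the
hot objects into one contiguous arena rather than just listing them):

```python
def repack(objects, now, hot_window):
    """Split objects into a frequently-touched set (to be packed
    together in ram) and a cold set (left on pages the host VM
    can then evict wholesale)."""
    hot, cold = [], []
    for obj in objects:
        if now - obj["last_touch"] < hot_window:
            hot.append(obj)
        else:
            cold.append(obj)
    return hot, cold
```

For example, with now=100 and a hot_window of 10 ticks, an object last
touched at tick 95 lands in the hot set and one last touched at tick 10
lands in the cold set.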

 Cynbe


