[MUD-Dev] Re: [CODE] [LANGUAGE/PLATFORM SPECIFIC] My Event Engine
J C Lawrence
claw at under.engr.sgi.com
Wed Jul 22 10:52:43 CEST 1998
On Wed, 22 Jul 1998 10:20:41 +0800
Joel Kelso <joel at ee.uwa.edu.au> wrote:
> s001gmu at nova.wright.edu wrote:
>> so, yes, it is 'tick-based', but only inasmuch as the computer is
>> tick based. I don't impose any artificial ticks on top of the
>> system ticker. :) I do allow for the definition of a GRAIN, but
>> that is only for shorthand. It gets a bit cumbersome to schedule
>> an event to go off in one week of real time if you have to measure
>> it down to the nanosecond. The Grain is just an int multiplied to
>> the offset before it is added to the current time:
>>
>> t += offset * GRAIN;
> Hey, this might be obvious, but how do you get a system clock in
> milliseconds or microseconds (nanoseconds?) in Unix? Is there a
> POSIX standard function for this?
Note that high resolution timers under Unix are ___NOT___ guaranteed
to be accurate, and I can state with utter confidence that under
HP-UX, Solaris, Irix and AIX they are in fact very inaccurate at the
nanosecond level (think about the ratio of nanoseconds to clock
speeds, for one).
That said, the API set you are interested in is documented under the
getitimer() man pages. It is derived from BSD and has yet to be
defined by ANSI, but it is extremely common under Unix (I haven't
found a *nix platform without it).
--
J C Lawrence Internet: claw at null.net
(Contractor) Internet: coder at ibm.net
---------(*) Internet: claw at under.engr.sgi.com
...Honourary Member of Clan McFud -- Teamer's Avenging Monolith...