[MUD-Dev] Event Scheduling

Hans-Henrik Staerfeldt hhs at cbs.dtu.dk
Mon Feb 14 13:10:25 CET 2000


On Sat, 12 Feb 2000, Cynbe ru Taren wrote:

> > Hans-Henrik Staerfeldt <hhs at cbs.dtu.dk> wrote:
>
> | Now, making log_2(N) single-bit operations takes O(1) time on a computer,
>
> *chuckle*
>
> One of the great things about math is that you can prove
> anything you want, merely by making the appropriate
> assumptions.
>
> Yes, if you posit a machine which gets faster as the problems get
> bigger, beating the O( N * log( N )) bound isn't hard.

Well, if we cannot assume that log_2(N) single-bit operations take
O(1) time on a computer, it would be dead wrong to assume that things
like addition do either. After all, you need O(log_2(N)) bit-level
additions to add two numbers of size O(N). Yet addition is considered
O(1) by all other algorithm designers! Why not treat bitwise OR the
same way, and use it?
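
To put that assumption in concrete terms, here is a minimal C sketch
(my own illustration, not from any paper): a single machine word already
spans far more than log_2(N) bits for any N you can actually index, so
one ADD or one OR instruction covers all of those bit positions at once.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the word-RAM assumption: one machine instruction operates
     * on a whole w-bit word (w >= log_2(N)), so addition and bitwise OR
     * each cover log_2(N) bit positions in O(1) time.  The values are
     * just illustrative. */
    int main(void)
    {
        uint64_t a = 0x00000000FFFFFFFFULL;
        uint64_t b = 0x0F0F0F0F0F0F0F0FULL;

        uint64_t sum  = a + b;   /* one ADD: 64 bit positions at once */
        uint64_t ored = a | b;   /* one OR:  64 bit positions at once */

        printf("sum = %016llx, or = %016llx\n",
               (unsigned long long)sum, (unsigned long long)ored);
        return 0;
    }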

> They in fact explicitly assume the size of the numbers being sorted
> stays bounded as N increases, which as I pointed out in my previous
> letter immediately trivializes the problem -- reduces it to O(N) in
> the asymptotic limit merely by treating it as a counting or binning
> problem.
>
> So by only going to O( N * log( log( N ) ) ) they are
> missing the boat by a fair fraction! :)

They would have, if the algorithm ran in O(2^N) space, yes. But the size
of the largest integer/float your computer can hold in a register is not
equal to the size of the memory you have for bins, which is why they
underline that the algorithm runs in O(N) space as well :-)
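
For contrast, the counting/binning reduction you allude to looks roughly
like this in C (a sketch with made-up names, assuming every key is an
unsigned value strictly below some fixed bound U): it runs in O(N + U)
time, but the bins cost O(U) space, i.e. space proportional to the key
universe rather than to N, which is exactly what the O(N)-space claim
avoids.

    #include <stdlib.h>

    /* Sketch of the counting/binning reduction: if every key is an
     * unsigned value strictly below a fixed bound U, sorting is just
     * filling U bins and reading them back out.  Time O(N + U), but the
     * bins take O(U) space, i.e. space proportional to the key universe
     * rather than to N.  Names are my own illustration. */
    static void counting_sort(unsigned *keys, size_t n, unsigned U)
    {
        size_t *bins = calloc(U, sizeof *bins);   /* O(U) space */
        if (bins == NULL)
            return;

        for (size_t i = 0; i < n; i++)            /* O(N): count keys  */
            bins[keys[i]]++;

        size_t out = 0;
        for (unsigned k = 0; k < U; k++)          /* O(U): emit sorted */
            for (size_t c = 0; c < bins[k]; c++)
                keys[out++] = k;

        free(bins);
    }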

> If one were to be somewhat honest about the entire hack they are
> working on -- which is not without intrinsic interest -- one would
> concede that in any practical situation, the size of the values being
> sorted rises as log(N), and that the bit-parallel capabilities of a
> particular machine are fixed at the time it is made, rather than
> conveniently varying as N varies: You would then be able to do bit
> parallel hacks on (say) quadruples of values at a time when they were
> small, pairs of values later on, and finally only individual values:
> The natural O(N*log(N)) value would then re-appear, knocked down by a
> constant factor due to the fixed additional parallelism being
> employed.

Well, as in many other implementations, N is limited by the register that
holds it. Your argument applies just as well to any algorithm that uses
addition in O(1) time. If you implement any algorithm using addition and
scale it beyond the register size, you run into exactly the same problem,
which is why this 'hack' is defensible. Or rather, just as defensible as
any algorithm that uses addition in O(1) time.
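
For what it is worth, the fixed bit-parallelism you describe is easy to
picture: with a 64-bit register you can pack, say, four 16-bit keys and
touch all four with one instruction, and that factor of four is fixed
when the machine is built, not something that grows with N. A small C
sketch (layout and names are mine, purely illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of fixed bit-parallelism: four 16-bit keys packed into one
     * 64-bit word, all incremented by a single 64-bit addition (assuming
     * no field overflows into its neighbour).  The factor of four comes
     * from the hardware word size, not from N. */
    int main(void)
    {
        uint64_t packed = ((uint64_t)7 << 48) | ((uint64_t)5 << 32)
                        | ((uint64_t)3 << 16) |  (uint64_t)1;

        uint64_t ones = 0x0001000100010001ULL;  /* a 1 in each 16-bit field */
        packed += ones;                         /* one ADD updates all four */

        for (int f = 0; f < 4; f++)
            printf("field %d = %llu\n", f,
                   (unsigned long long)((packed >> (16 * f)) & 0xFFFF));
        return 0;
    }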

I'm not at all sure what you mean by the values rising as log(N) in
'any practical situation'. <tease> Judging from your very theoretical
point of view, I would guess you would be against practical situations
</tease>

> In honest analytical terms, what they are doing is solving small
> problems less efficiently than larger problems, and then conveniently
> cutting off the scaling (at considerable violence to the entire
> Knuthian notion of asymptotic analysis -- it didn't really originally
> mean "ending at the first inconvenient value"!) in order to get an
> artificially flat "asymptotic" formula.

I would say that they solve the largest problems that can be solved on
any present-day computer more efficiently than the problems that cannot
be solved on any present-day computer at all.

I agree that bounding N to fit in a register is very much against the
Knuthian notion of asymptotic analysis, but so is constant-time addition,
and disallowing that would rather screw up big-O analysis as we know it.

> Still, anything that gets you a PhD, I suppose!
>
> It was this sort of BS that eliminated in me any
> temptation to go through the PhD circus... :(

To my knowledge this is not his PhD thesis. Being on the committee of both
STOC and ICALP is no small feat either.


Hans Henrik Stærfeldt   |    bombman at diku.dk    | work:  hhs at cbs.dtu.dk      |
address:                |___  +45 40383492    __|__       +45 45252425     __|
 Dybendalsvej 74 2. th, | Scientific programmer at Center for Biological     |
 2720 Vanløse, Danmark. |  Sequence Analysis, Technical University of Denmark|




_______________________________________________
MUD-Dev maillist  -  MUD-Dev at kanga.nu
http://www.kanga.nu/lists/listinfo/mud-dev


