[MUD-Dev] TECH: UDP Revisited
Bobby Martin
bobbymartin at hotmail.com
Fri Oct 26 12:47:09 CEST 2001
> From: Bruce Mitchener <bruce at puremagic.com>
> Bobby Martin wrote:
>>> From: Bruce Mitchener <bruce at puremagic.com>
>>> I was curious as to what sorts of advanced features you had in
>>> ARMI and the documentation didn't really go into much detail
>>> beyond some usage notes.
>>> Do you have any support for pipelining of requests?
>> No, although TCP (which I currently use) buffers packets for up
>> to 200 ms before it sends the buffer.
> Mmmm, this isn't quite what I was thinking about when I wrote
> that.
I was hoping it wasn't, but I couldn't figure out where you were
going so I just took a shot in the dark ;)
> If I fire off 3 message sends in a row, and TCP is the transport
> in use, what happens? Do all 3 get sent to the remote end right
> after each other and then responses are returned, tagged with some
> request identifier as they happen? Is there an ordering imposed
> on the return of requests? (Say that you send off messages 1 and 3
> to object A, and message 2 to object B. Object B takes a long
> time in responding. Does that delay the return of the result for
> #3?)
There is no 'order of reply' imposed. To impose such an ordering
would throw away a lot of the gains you make by using asynchronous
method calls. You get a response as fast as the little packet can
make its way back to you. :)
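Roughly, the matching works like this sketch (hypothetical names, not
the actual ARMI code): each outgoing call gets a request id, and a
reply completes whichever pending call it names, in whatever order
replies happen to arrive.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: replies are tagged with a request id and may arrive in any order.
public class PendingCalls {
    private final AtomicInteger nextId = new AtomicInteger();
    private final Map<Integer, CompletableFuture<Object>> pending = new ConcurrentHashMap<>();

    // Called when a request is sent; returns the id to tag the packet with.
    public int register(CompletableFuture<Object> future) {
        int id = nextId.incrementAndGet();
        pending.put(id, future);
        return id;
    }

    // Called when a reply packet arrives, whatever its order.
    public void complete(int id, Object result) {
        CompletableFuture<Object> f = pending.remove(id);
        if (f != null) f.complete(result);
    }

    public static void main(String[] args) throws Exception {
        PendingCalls calls = new PendingCalls();
        CompletableFuture<Object> a = new CompletableFuture<>();
        CompletableFuture<Object> b = new CompletableFuture<>();
        int idA = calls.register(a);
        int idB = calls.register(b);
        // The reply to the second request arrives first; it doesn't block the first.
        calls.complete(idB, "reply B");
        calls.complete(idA, "reply A");
        if (!a.get().equals("reply A") || !b.get().equals("reply B"))
            throw new AssertionError("out-of-order completion failed");
    }
}
```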
> Or, similar to that, what happens if one method call returns a
> very large bunch of data. Will that hold up following method
> sends, or do you break data into chunks similar to what BEEP and
> HTTP/1.1 can do to multiplex a single channel?
I'm not sure how to answer this one. If the socket's input stream
is feeding you a large block of return data on one thread while you
try to send a block of message data to the output stream on another
thread, it just gets handled however Java Sockets handle it. I
haven't done anything special.
For UDP, once again, currently we don't do anything special. A
block of return data that big was probably dropped anyway, since
UDP handles large blocks of data badly: a big datagram gets
fragmented, and losing any one fragment loses the whole datagram.
Currently our UDP implementation is _very_ naive; it just sends
whatever blocks you send it and assumes they get there. I suggest
only using it for small packets for which guaranteed delivery is
not required, like object position updates.
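The naive UDP path amounts to something like this (illustrative
only, not the ARMI source): fire the datagram and assume it arrives,
which is fine for small, loss-tolerant packets like position
updates.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch of fire-and-forget UDP: no retransmission, no ordering,
// no handling of oversized datagrams.
public class NaiveUdp {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);   // ephemeral local port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] update = "pos:12,34".getBytes("US-ASCII");   // small position update
            sender.send(new DatagramPacket(update, update.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[512];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(2000);
            receiver.receive(in);   // on loopback this should arrive
            String got = new String(in.getData(), 0, in.getLength(), "US-ASCII");
            if (!got.equals("pos:12,34")) throw new AssertionError(got);
        }
    }
}
```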
> Finally (for now :)), do you share the transport connection, or do
> you establish new ones frequently (like per-object, per-method
> send, etc)?
For UDP, all comms happen over one port. In TCP, there is one
connection per client, and that one connection is re-used for all
objects & method calls on that client. Later, the guaranteed
delivery & ordering calls will also all happen over one port.
> But, on the point that you brought up, do you have control over
> that 200 ms buffering from Java? (To turn off nageling or whatever
> you might want to change.) Is the 200ms delay proving troublesome
> at all? That seems like a fairly high penalty to pay in terms of
> latency for any distributed communication.
Java Sockets don't allow changing the delay, but they do allow
turning off Nagle's algorithm. I currently don't do that, and the
Cosm clients & server seem to communicate in quite a timely
fashion. It is easy for you to manipulate the Sockets if you like
(although right now it would require changes to the ARMI source,
which is open).
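For the record, the knob Java does expose is Socket.setTcpNoDelay;
you can't tune the delay timer itself, but you can disable Nagle
per socket:

```java
import java.net.Socket;

// Disabling Nagle's algorithm on a Java socket. The option can be set
// before the socket is ever connected; Java exposes no control over
// the ~200 ms delay timer itself.
public class NagleOff {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {   // unconnected socket
            s.setTcpNoDelay(true);        // send small writes immediately
            if (!s.getTcpNoDelay()) throw new AssertionError("TCP_NODELAY not set");
        }
    }
}
```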
>>> How about for monitoring message traffic or requests for
>>> debugging, checking load balancing, etc?
>> Currently in the works, but not there right now.
> What sort of stuff is in the works? :)
Number of bytes per second per client, number of bytes total per
client, ping time. The number of bytes is also separated by input &
output. Henke, the programmer working on the monitoring, tells me
that it's done. I haven't seen the code yet.
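Counting bytes per client, separated by input and output, can be
done by wrapping the socket streams; a minimal sketch (not Henke's
actual code, which I haven't seen):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: wrap a stream so every byte written is tallied. The same
// idea applies to InputStream for the inbound direction.
public class CountingOutputStream extends FilterOutputStream {
    private long count;

    public CountingOutputStream(OutputStream out) { super(out); }

    @Override public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }

    public long getCount() { return count; }

    public static void main(String[] args) throws Exception {
        CountingOutputStream cos = new CountingOutputStream(new ByteArrayOutputStream());
        cos.write("hello".getBytes("US-ASCII"));
        cos.write('!');
        if (cos.getCount() != 6) throw new AssertionError(cos.getCount());
    }
}
```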
>>> What sorts of security or authentication do you support?
>> Currently, none :( It is fairly trivial to add an encrypting
>> message filter, though. You can filter messages that you send
>> any way you like.
> What sort of model are you thinking about though? Is this mainly
> for communication between trusted peers? Or might it be used in a
> more hostile environment with foreign (or user-written) code
> involved?
I intend to use it in Cosm which is a hostile environment since I
expect the users to try to crack the server.
My plan for security was actually on top of ARMI rather than part of
it, though. To log in, the user's client requests a token from the
server. The server remembers what token it sent. The client
encrypts the token with the user's password, and the encrypted
token is sent back with the login request. The server encrypts the
token it sent with the user's password and compares that with what
it got back; if they match, the login is successful.
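As a sketch of that handshake (using an HMAC keyed with the
password where I said 'encrypt'; that substitution is my
assumption, and a real scheme might use an actual cipher):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Challenge-response login sketch: the server issues a token, the client
// "encrypts" it with the password (here: HMAC-SHA-256 keyed with the
// password, an assumption), and the server does the same and compares.
// The password itself never crosses the wire.
public class TokenLogin {
    static byte[] proof(byte[] token, String password) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(password.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return mac.doFinal(token);
    }

    public static void main(String[] args) throws Exception {
        byte[] token = "random-server-token".getBytes(StandardCharsets.UTF_8);
        String password = "hunter2";
        byte[] fromClient = proof(token, password);        // sent with the login request
        byte[] expected = proof(token, password);          // server's own computation
        if (!MessageDigest.isEqual(expected, fromClient))  // constant-time compare
            throw new AssertionError("login should succeed");
        if (MessageDigest.isEqual(expected, proof(token, "wrong")))
            throw new AssertionError("wrong password should fail");
    }
}
```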
I hadn't thought too much about hijacked connections, but we could
periodically require a reply to a challenge message and break the
connection if there is consistently no response, or a response
indicating that the TCP originator didn't really send the message.
If you like, you could tunnel all the ARMI messages over an SSL
connection pretty easily. I will test that at some point but I
expect it to use up enough CPU that I won't go that route. I will
probably leave the filter in CVS for others to use, though.
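Tunnelling over SSL from Java is mostly a matter of swapping the
socket factory; a sketch (no actual handshake is attempted here):

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Sketch: the main change needed to tunnel a TCP protocol over SSL in
// Java is creating sockets from an SSLSocketFactory instead of plain
// new Socket(). The handshake (and its CPU cost) happens at connect
// and startHandshake time.
public class SslTunnel {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket s = (SSLSocket) factory.createSocket()) {  // unconnected socket
            if (s.getSupportedCipherSuites().length == 0)
                throw new AssertionError("no cipher suites available");
        }
    }
}
```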
>>> Can you compare your packing overhead to that of RMI or other
>>> systems? Do you compress your datastreams?
>> We don't currently formally compress data streams. I have done
>> some sampling of RMI throughput versus ARMI throughput; ARMI was
>> about 10 times smaller for the method calls I tested. This
>> definitely needs more formal testing.
> and later in your post, but moved here for ease in replying ...
>> I have implemented production compression systems before, and I
>> can tell you that you will almost certainly need tailor made
>> compression (i.e. just write clever serialization code) for any
>> significant gains. Lots of small information packets are hard to
>> compress, unless you take large production runs and analyze them,
>> then compress assuming the usage patterns don't change too much.
>> Most compression mechanisms require that every block of data you
>> compress has a table describing how to decompress it, and for
>> small blocks the table takes up more space than the amount you
>> gain by compressing. If you analyze large production runs of
>> data, the tables can be static and just sit on either end of the
>> connection.
> With respect to serialization strategy being more important than a
> traditional compression algorithm, I definitely agree. Brad
> Roberts and I made some changes to Cold about 2-3 months ago that
> changed how we serialized some bits of data and objects for
> storage into the DB. Those changes managed to take a DB that was
> roughly 2G and drop it to about 1.6G in size, which was quite
> nice. (It also had the interesting impact of reducing our
> fragmentation of free space on disk, as more objects were now small
> enough to fit in the minimum block size, consuming more of the bits
> of free space floating about that previously just contributed to
> wasted space. Those gains aren't accounted for in the numbers
> that I gave above.)
> Regarding throughput, you say that messages were smaller. Were
> the round-trip times also less? Or roughly similar?
I'm sorry, I don't have data on that. That's a darn fine thing to
add to our performance monitoring tool, though, and we will make it
so.
I intend to gather all this data in some simple RMI vs ARMI
comparisons and put them up as documents in the ARMI project.
One thing to note is that since ARMI uses ids instead of fully
qualified names to refer to class and method names, the first call
to a method will be much slower than later calls. This is because
the client must ask the server what those ids mean the first time it
gets them, and the answer is the fully qualified name. This little
exchange makes ARMI slower than RMI for the initial method call, I'm
sure.
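The first-call cost comes from a lookup like this sketch
(hypothetical names, not the ARMI source): an unknown id triggers a
round trip to the server, after which the mapping is cached.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: method ids are resolved to fully qualified names lazily. The
// first use of an id costs an extra round trip; later uses hit the cache.
public class IdNameCache {
    private final Map<Integer, String> cache = new HashMap<>();
    int serverRoundTrips = 0;                    // for illustration only

    // Stand-in for the "ask the server what this id means" exchange.
    private String askServer(int id) {
        serverRoundTrips++;
        return "com.example.Avatar#setPosition"; // hypothetical answer
    }

    public String resolve(int id) {
        return cache.computeIfAbsent(id, this::askServer);
    }

    public static void main(String[] args) {
        IdNameCache c = new IdNameCache();
        c.resolve(7);        // slow: one round trip
        c.resolve(7);        // fast: cached
        c.resolve(7);
        if (c.serverRoundTrips != 1) throw new AssertionError(c.serverRoundTrips);
    }
}
```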
Also, we have ourselves slated to write an armic. Right now, all of
the method call translation happens dynamically, which is very
flexible but not the speediest thing in the world (although,
frankly, we don't see the slowdown in Cosm). We will write an armic
that examines each class to be shared and writes explicit proxy &
skeleton code at compile time instead of at run time.
>>> Any more information to explain in more detail how/why ARMI is
>>> great is welcome. :)
>> Offhand I don't know what I can tell you about why ARMI is great
>> that isn't listed on the web site. Main attributes are:
>> 1) Your method calls can be asynchronous (your app doesn't stop
>> and wait for the return value from a method call until you need
>> the return value.)
> Will you provide some higher level support to be used by a caller
> of an asynchronous method to help deal with the failure cases that
> might result, or any sort of syntactic sugar/glue to help deal
> with the complexities that arise from a more asynchronous model?
Yes, we will be changing from a 'use the message id to request
waiting for the return value' model to a 'just ask for the return
value from the returned ReturnValue object' model soon. We will
also probably support exceptions 'returned' and timeout returns when
the message is lost, but that is not planned soon. The ReturnValue
should be put in within the next couple of weeks, I think.
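The planned ReturnValue would behave roughly like a future; a
sketch of the model (hypothetical class, not the committed code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of the planned ReturnValue model: the caller gets this object
// back immediately from an asynchronous call and blocks only when it
// actually needs the result. The timeout stands in for lost-message
// handling.
public class ReturnValue<T> {
    private T value;
    private boolean done;

    public synchronized void set(T v) {
        value = v;
        done = true;
        notifyAll();
    }

    public synchronized T get(long timeoutMillis)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (!done) {
            long remaining = TimeUnit.NANOSECONDS.toMillis(deadline - System.nanoTime());
            if (remaining <= 0) throw new TimeoutException("message probably lost");
            wait(remaining);
        }
        return value;
    }

    public static void main(String[] args) throws Exception {
        ReturnValue<String> rv = new ReturnValue<>();
        new Thread(() -> rv.set("42")).start();  // reply arrives on another thread
        if (!rv.get(2000).equals("42")) throw new AssertionError();
        ReturnValue<String> lost = new ReturnValue<>();
        try {
            lost.get(50);                        // no reply ever comes
            throw new AssertionError("expected timeout");
        } catch (TimeoutException expected) { /* message lost */ }
    }
}
```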
> Some cool work in this that has been done is in the E programming
> language. (Disclaimer: that work probably is from predecessors to
> E, but that was my first exposure to this and I can't credibly
> cite which predecessor it might have been.)
[SNIP a bunch of cool E stuff]
I hadn't seen this before; superficially it looks very similar to
what I'm doing with ReturnValue. I will examine it further and see
if there's more that I can steal from them.
>> 2) I use ids for the class and method instead of the huge
>> descriptors that RMI uses.
> Does that change the security profile of your protocol at all?
> (Is there a good reason that RMI uses large descriptors?)
I don't see how it would. And I can't imagine why RMI uses large
descriptors, except maybe to cut the cost of those first few method
calls. Oh yeah, for Cosm, the methods called won't change very
often. I will put those in a text file so both sides can know about
the class/method <-> id mapping and thus bypass the exchange on the
first call. This will be optional for other ARMI users.
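The mapping file could be as simple as id=name lines; a parsing
sketch (the "id=fully.qualified.Name#method" format is my
assumption, not the actual ARMI file format):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a static class/method <-> id mapping, shared by both sides so
// the first-call name exchange can be skipped.
public class StaticIdMap {
    static Map<Integer, String> parse(String text) {
        Map<Integer, String> map = new HashMap<>();
        for (String line : text.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;  // allow comments
            int eq = line.indexOf('=');
            map.put(Integer.parseInt(line.substring(0, eq)), line.substring(eq + 1));
        }
        return map;
    }

    public static void main(String[] args) {
        String file = "# shared id mapping\n"
                    + "1=com.example.Avatar#setPosition\n"
                    + "2=com.example.Avatar#say\n";
        Map<Integer, String> map = parse(file);
        if (!"com.example.Avatar#say".equals(map.get(2))) throw new AssertionError();
        if (map.size() != 2) throw new AssertionError();
    }
}
```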
>> 4) <not implemented yet> You can choose whether a method call
>> is guaranteed or not. Currently you either use UDP for
>> transport and all method calls are non-guaranteed but fast, or
>> you use TCP for transport and all method calls are guaranteed
>> but slower (but still much faster than RMI).
> How will you do this? Will it be something that is decided at a
> method call site? Or will it be per-method (and globally
> decided)? If the latter, will you have some sort of IDL that
> provides that info, a registration process, or something else?
It will be per-method, with an optional XML configuration file that
tells which methods should be non-guaranteed. There may also be a
per method-call method that makes the next method called on that
thread non-guaranteed, but I don't personally have a need for that.
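A sketch of the per-method dispatch (the configuration is
simplified to a plain set here; the XML file is still to be
designed):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: the transport for each call is chosen from per-method
// configuration. Which methods count as non-guaranteed would come from
// the planned XML file; a plain set stands in for it here.
public class TransportChooser {
    private final Set<String> nonGuaranteed = new HashSet<>();

    public void markNonGuaranteed(String method) { nonGuaranteed.add(method); }

    // Returns the transport a call on this method should use.
    public String transportFor(String method) {
        return nonGuaranteed.contains(method) ? "UDP" : "TCP";
    }

    public static void main(String[] args) {
        TransportChooser chooser = new TransportChooser();
        chooser.markNonGuaranteed("Avatar#setPosition");   // loss-tolerant update
        if (!chooser.transportFor("Avatar#setPosition").equals("UDP"))
            throw new AssertionError();
        if (!chooser.transportFor("Bank#transfer").equals("TCP"))
            throw new AssertionError();
    }
}
```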
_______________________________________________
MUD-Dev mailing list
MUD-Dev at kanga.nu
https://www.kanga.nu/lists/listinfo/mud-dev