[MUD-Dev] Storing tokens with flex & bison
Jon A. Lambert
jlsysinc at ix.netcom.com
Wed Jan 19 04:32:32 CET 2000
Chris Gray wrote:
>[Jon A. Lambert:]
>
>{My, we're certainly getting into some low level technical stuff here.
>Those not interested, please skip all this! Marian, there will be a
>test on the details tomorrow! :-)}
Warning: This post contains even more graphic and explicit implementation
details that some might find offensive (or hilarious). ;-)
>> BTW, this
>> technique might have been useful in the DevMUD project for dynamically
>> loading and registering new function libraries (DLLs).
>
>Nod. I've done that when I needed the database to be stable, and didn't
>want the key values for the builtins changing. A new major release lets
>me clean 'em up again, however.
I wonder whether there might be commercial possibilities for mud library writers in
distributing obfuscated portable byte code. Gotta have a stable VM
like Java. Sorry... I'm always thinking about exploitation. ;-)
>What's your stack size? Mine is currently 8K. That allows 2048 local
>variables, parameters and return addresses - I figure it'll do. Doesn't
>take too long to hit that with deep recursion. Recursive Towers of Hanoi
>has no trouble, however.
>
It's 1K and will grow in 1K increments. Each variable occupies 12 bytes
on the stack regardless of type; more on this below. I'm going to have to
add TOH to my test scripts.
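Just to make those numbers concrete, the shape I have in mind is roughly this
(a sketch, not a paste of the actual TychoMUD code; the names are made up):

#include <string.h>             // memcpy

// Fixed 12-byte slots (type tag plus the 8-byte value pool, more on that
// below), growing in 1K chunks.
class TMStack {
    enum { CHUNK = 1024, SLOT = 12 };
    unsigned char *base;        // start of the stack memory
    unsigned int   size;        // bytes allocated
    unsigned int   top;         // bytes currently in use
public:
    TMStack() : size(CHUNK), top(0) { base = new unsigned char[size]; }
    ~TMStack() { delete[] base; }

    void* push() {              // reserve one 12-byte slot
        if (top + SLOT > size) {
            size += CHUNK;          // grow by another 1K
            unsigned char *p = new unsigned char[size];
            memcpy(p, base, top);   // a bitwise move is safe here only
            delete[] base;          // because the stored values are just a
            base = p;               // type tag and a pointer-sized payload
        }
        void *slot = base + top;
        top += SLOT;
        return slot;
    }
    void pop() { top -= SLOT; }
};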
>Me too on the locals. However, I was aiming for maximum speed here, so was
>happy to relax my style to get it. Here's some snippets of my main byte-
>code interpreter, to show the gcc dynamic goto stuff, along with the
>alternative for non-gcc compilers.
[snip]
>#define BREAK goto *(codeTable[*pc++]) <== funny gcc stuff
OK, I have seen this, or rather an equivalent of it. I can only do this with my
compiler by using the inline assembler.
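For the archives, here's roughly how the two flavors hang together (an
illustration in the style of Chris's snippets, with made-up opcode names and a
trivial opcode set; not code lifted from either of our servers):

enum { bc_add, bc_halt };                  // opcode values

#ifdef __GNUC__
#define CASE(op)  lbl_##op:                /* computed-goto label      */
#define BREAK     goto *(codeTable[*pc++]) /* the "funny gcc stuff"    */
#else
#define CASE(op)  case op:                 /* plain switch fallback    */
#define BREAK     break
#endif

void interp(unsigned char *pc, long *sp)
{
#ifdef __GNUC__
    static void *codeTable[] = { &&lbl_bc_add, &&lbl_bc_halt };
    BREAK;                                 // dispatch the first opcode
#else
    for (;;) switch (*pc++) {
#endif

    CASE(bc_add) {
        long ul = *sp++;                   // pop one operand,
        *sp += ul;                         // add it into the new top of stack
        BREAK;
    }

    CASE(bc_halt)
        return;

#ifndef __GNUC__
    }                                      // close the switch and the for
#endif
}

The win with the label table is that every handler ends in its own indirect
jump instead of bouncing back to the top of a loop and re-entering a switch.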
>I did the add1/sub1 experiment. It would have taken less than 10 minutes
>if I hadn't left out a couple of 'break's in a code-generation switch. :-(
>
>Unfortunately, there was no noticeable change in run speed with or without
>them. However, with this version of gcc/egcs and RedHat Linux, the tests
>all run slower than on the previous version (same machine). Grrrrrr!
Hmm... on its face, you would think it would be an operation that occurs
frequently in loop blocks. A few instructions saved on each iteration should,
in theory, add up fast.
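To spell out the arithmetic, in the same CASE/BREAK style as the sketch above
(bc_add1 is an invented opcode name):

    CASE(bc_add1)          // "x + 1" as a single opcode:
        *sp += 1;          // no constant push, no extra pop, and one
        BREAK;             // fewer dispatch than push-1-then-bc_add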
>Sure, I'll buy some of those. The nature of my parse-tree interpreter is
>such that those weren't issues with it, however. I just have a file-static
>variable that says when to get out of any loops, and let the recursive
>interpretation just dribble its way back to the top. 'interpreter.c' in
>my ToyMUD sources is sort-of like that - it uses the return value in the
>recursive calls to say whether or not to quit. As for multiple input
>languages - well, I might have to add one or two new parse-tree node
>types, but that would be about it.
Nod.
>> Ok, I'll pull back the covers. And your guess is pretty accurate. ;)
>
>Do I win a cookie? :-)
>
Sure, why not. I'm going to get long-winded here and maybe you'll toss
that cookie. And anyone else who hasn't fallen asleep yet, feel free to attack
viciously. ;-)
For all my talk about readability, my TMVar class is probably one of
the most obtuse ones, and it does do violence to the OO model.
If I were really OOey, I'd probably implement a class hierarchy like so:
Var
  |
  \--- Integer
  \--- Float
  \--- String
  \--- List
  \--- Array
  \--- Object
  \--- Error
  \--- Buffer
  \--- Queue
  \--- Method
  \--- KA
The Var base class would wrap all the possible data types, each as a subclass.
In early versions I did this and wound up with lots of overhead: all
that stuff you read about in "1000 reasons why I don't use C++".
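For comparison, that hierarchy would have looked something like this (a rough
reconstruction with made-up method names, not my actual old code):

enum TMErr { E_TYPE_MISMATCH };            // stand-in error type for the sketch

class Var {
public:
    virtual ~Var() {}
    virtual Var* add(const Var& rhs) const = 0;   // every operator is virtual
};

class Integer : public Var {
    int i;
public:
    Integer(int v = 0) : i(v) {}
    Var* add(const Var& rhs) const {
        const Integer* p = dynamic_cast<const Integer*>(&rhs);
        if (!p) throw E_TYPE_MISMATCH;            // run-time type check
        return new Integer(i + p->i);             // fresh heap allocation per result
    }
};

// ... Float, String, List, Array, Object, Error, Buffer, Queue, Method, KA ...

Every value ends up as a heap allocation behind a Var pointer, every operation
is a virtual call, and every temporary is a new/delete pair. That's where the
overhead was.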
So this came to mind:
class TMVar {
    Type type;
    union {
        int      i;
        float    f;
        TMError  e;
        TMString s;
        TMBuffer b;
        TMList   l;
        TMQueue  q;
        TMArray  a;
        TMObject o;
        TMKA     k;
        TMMethod m;
    } value;
};
Note: I use TM as a prefix in front of most classes to avoid namespace
collisions. Most of these are self-explanatory types, though I bet a few might
make you curious. TMMethod is a wrapper object for an online version control
system, and TMKA is a special data structure, short for "Knowledge Area", which
is intended to be attached to objects and used by a procedural reasoning system.
It's bytecode that has been compiled by a PRS compiler.
The big union is reminiscent of Bison/Yacc tokens (and CoolMUD too).
Such an abomination is of course illegal in C++. One solution was to use
pointers to the classes and invoke their respective allocation and construction
routines. However, I didn't like this idea since it played havoc with my reference
counting schemes.
Anyway, I came across this notion of split-phase construction and
split-phase destruction, documented by James Coplien, I believe.
class TMVar {
    Type type;
    // Allocation pool - should hold just enough room
    // for the class vtable ptr + class member variables.
    // All classes have one member variable, a pointer,
    // so two ints should be more than enough.
    unsigned char value[8];
};
All my data type classes are 4-bytes in size, 8-bytes if they
have a vtable. Float, int and ObjectId's fit nicely into 8 bytes
too.
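One bit of cheap insurance with this trick is a startup assert on the sizes,
since nothing else will ever catch a class quietly outgrowing the pool.
Something like:

#include <assert.h>

void check_pool_sizes()                    // call once at startup
{
    assert(sizeof(TMString) <= 8);         // 8 == sizeof(TMVar::value)
    assert(sizeof(TMList)   <= 8);
    assert(sizeof(TMObject) <= 8);
    // ... and so on for the rest of the wrapped classes ...
}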
So the new operator is overloaded (short circuited) to return a
pointer to "value". It does nothing, ergo no allocation is done
at all, yet the constructors are called. :-)
Here it is in my String class:
inline void* TMString::operator new(size_t, void* p)
{ return p; }
And here is the TMVar default constructor:
inline TMVar::TMVar(Type t = NUM) : type(t) {
    switch(t) {
    case NUM:
        *(int*)value = 0;
        break;
    case STRING:
        new(value) TMString;
        break;
    case LIST:
        new(value) TMList;
        break;
    // ... etc ...
    }
}
And split-phase destruction is not done with 'delete'; it's handled by
calling the destructors directly.

TMVar::~TMVar() {
    switch(type) {
    case STRING:
        ((TMString*)value)->~TMString();
        break;
    case LIST:
        ((TMList*)value)->~TMList();
        break;
    // ... etc ...
    }
}
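For completeness, copying a TMVar in this scheme goes through the same kind of
switch (this is a sketch along those lines, not a verbatim paste):

inline TMVar::TMVar(const TMVar& r) : type(r.type) {
    switch(type) {
    case NUM:
        *(int*)value = *(int*)r.value;
        break;
    case STRING:
        new(value) TMString(*(TMString*)r.value);  // placement-new a copy;
        break;                                     // it just bumps the ref count
    case LIST:
        new(value) TMList(*(TMList*)r.value);
        break;
    // ... etc ...
    }
}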
And finally all my data type classes are just pointers to the real
information. I implement copy-on-write reference counting semantics
in all my classes.
class TMString {
    struct Strdata {
        int  len;      // length of string
        int  mem;      // memory allocated
        int  ref;      // reference count
        char *str;     // the string itself

        Strdata(int _mem = STRING_INITIAL_SIZE) : len(0), ref(1)
        {
            mem = MAX(_mem, STRING_INITIAL_SIZE);
            str = new char[mem];
            str[0] = '\0';
        }
        ~Strdata()
        {
            delete[] str;
            str = 0;
        }
    };
    Strdata *mpValue;
};
The default constructor is cheap; it just nulls the pointer. All
the other String functions recognize the null pointer as
equivalent to a null String.
TMString::TMString() : mpValue(0) {}
And copy constructors are cheap too. I just increment the
reference counter and share the data.
TMString::TMString(const TMString& r_source_str)
{
    if (r_source_str.mpValue) {
        r_source_str.mpValue->ref++;
        mpValue = r_source_str.mpValue;
    } else {
        mpValue = 0;
    }
}
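The other half of copy-on-write is what runs before any mutating operation,
plus a destructor that only frees the shared data when the last reference
drops. Roughly this (the helper name here is invented):

#include <string.h>      // memcpy

void TMString::cow_detach()                        // call before any mutation
{
    if (mpValue && mpValue->ref > 1) {             // data is shared, so
        Strdata *d = new Strdata(mpValue->mem);    // make a private copy
        d->len = mpValue->len;
        memcpy(d->str, mpValue->str, mpValue->len + 1);
        mpValue->ref--;                            // drop our share of the old
        mpValue = d;
    }
}

TMString::~TMString()
{
    if (mpValue && --mpValue->ref == 0)            // last one out
        delete mpValue;                            // frees Strdata and its char[]
}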
So there's my explanation for all the fugly casting that's going on below
and the unusual use of the new() operator.
>> TMVar operator+(const TMVar& v1,const TMVar& v2) throw(Error)
>> {
>> if ((v1.type != v2.type) && (v1.type != LIST)) {
>> throw (E_TYPE);
>> }
>> if ((v1.type == ARRAY) || (v1.type == ERROR) || (v1.type == OBJECT)) {
>> throw (E_TYPE);
>> }
>>
>> TMVar ret(v1.type);
>
>As I've mentioned, my C++ is lacking, but I think I'm OK with this. It's
>semantically the same as
>
> TMVar ret = v1.type;
>
>but without any possibility of a copy constructor call?
>
Aye. It's that same constructor, with an explicit type overriding the default
argument.
TMVar ret;   // by default creates an integer type variable loaded with 0
In the case of other types, the wrapped classes' default constructors create
them with null pointers (see TMString above).
>> And heres the native code. I have register optimizations on, but I turned
>> off constructor inlines so it could be followed. Pardon the line breaks.
>
>OK. Ouch, those exceptions are expensive!
>
Oh yes. But currently when we hit them we are done for anyway. I've yet
to implement ignoring and/or catching exceptions in the programming
language, though it is in my spec and at the top of the todo list. And when
I do, functions written to expect exceptions will incur that expense. <waah!> :-(
[much snippage]
>> c:\tychomud\tmvar.cpp, 484: return ret;
>> 00403B44 E871F1FFFF call TMVar::TMVar(const TMVar &)
>> 00403B5C E896F2FFFF call TMVar::~TMVar()
>
>I think this bit is what I was getting at with my worry about temporaries
>with reference variables. But again, I'm far from an expert. This pair
>of constructor/destructor calls on the 'return' seems to be undesireable.
Yep. Your criticism is spot on. Hrrm... That's one of those BIG ones, eh?
Lots of instructions that can be cut out there.
>If you rewrote the operator routines to take a 3rd reference parameter
>that is where to store the result, would not this extra stuff go away?
>This isn't a reference case here, but perhaps something similar.
That's a good idea. I can't use the "operator+()" signature since it
won't take a 3rd parameter unless I've missed some C++ trivia.
It really isn't a necessity, just syntactic sugar. I'd use something like
"add(left,right,ret)" instead. <bops head> Thanks!
>> Passing by reference is the same as passing by pointer as far as the
>> machine code that is generated by the compiler. Of course, as a general rule,
>> I only use const reference parameters for safety.
>
>Well, from a non C++ expert, my understanding is that it is fairly easy
>to accidentally pass a non-lvalue to a routine that requires a reference
>parameter. The compiler will silently proceed to create the needed lvalue
>by creating a new temporary value of the required type, and doing an
>assigment of the expression to the new temporary. It then passes the
>address of the new temporary to the routine. This is a bug if in this
>particular call, a value stored into that reference parameter matters.
>It is a performance hit because of possible constructor/destructor calls
>on the temporary variable, in addition to the value copy.
>
I made a few mental mistakes early on with casting that are very similar to
the above.
Writing things like:
((TMList)ret.value).rSetAdd(v2);
Instead of:
((TMList*)ret.value)->rSetAdd(v2);
The former creates a brand new copy of the TMList, performs the operation on
it, and the result promptly vanishes into oblivion, leading to much head
scratching and gnashing of teeth.
Another danger from early on is returning references to variables
created on the stack. It's pretty much the same as the C error of returning
pointers to locals. I would _never_ ever do that. <grin>
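For the record, the C++ flavor of that one:

int& bad_counter()          // don't do this
{
    int n = 0;
    return n;               // reference to a local: gone when the function returns
}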
>Just for geek fun, here is the code gcc generated for my addition case.
>This includes the final 'goto' thingy.
>
> CASE(bc_add) {
> ULONG_T ul;
>
> ul = *sp++;
>0xc72 <bcRun+1298>: movl (%esi),%eax
>0xc74 <bcRun+1300>: addl $0x4,%esi
> *sp += ul;
>0xc77 <bcRun+1303>: addl %eax,(%esi)
> BREAK;
>0xc79 <bcRun+1305>: movzbl (%edi),%eax
>0xc7c <bcRun+1308>: incl %edi
>0xc7d <bcRun+1309>: jmp *0x0(,%eax,4)
> }
>
>(This is an unlinked .o file - there would normally be a non-zero offset
>in that last 'jmp' instruction.)
>
Oh, that is slick. Can't beat that. All that's left for you to do is start
writing code that will execute in parallel. :-)
--
--* Jon A. Lambert - TychoMUD Email: jlsysinc at nospam.ix.netcom.com *--
--* Mud Server Developer's Page <http://jlsysinc.home.netcom.com> *--
--* "No Free man shall ever be debarred the use of arms." Thomas Jefferson *--
_______________________________________________
MUD-Dev maillist - MUD-Dev at kanga.nu
http://www.kanga.nu/lists/listinfo/mud-dev