[MUD-Dev] Player run reputation system
Trump
trump at vividvideo.com
Fri Jun 29 11:21:51 CEST 2001
On Thursday 28 June 2001 10:22 am, J Todd Coleman wrote:
> ----- Original Message -----
> From: "J C Lawrence" <claw at 2wire.com>
> To: <mud-dev at kanga.nu>
> Sent: Wednesday, June 27, 2001 9:25 PM
> Subject: Re: [MUD-Dev] Player run reputation system
>> The computational load is actually fairly trivial. At the SQL
>> end it's 3-4 joins per entry, and you mostly need to do that
>> during rendering, not during entry.
> That would depend entirely on how the tables were organized --
> typically it's best to avoid joins at all cost, even with indexed
> tables. When you're going for read optimization, throw all that
> "normalize to the highest degree possible" crap out the window.
> Denormalize as much as possible, and you eliminate the need for
> table joins. The downside, of course, is that you waste a lot of
> space with repeated data, but that's often a much more trivial
> concern than performance when you have a huge number of concurrent
> users.
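To make the trade-off concrete before testing it, here is a rough
sketch of both approaches. The column lists beyond userid, siteid and
amount are made up for illustration, not the schema tested below.

-- Normalized: site data is stored once and referenced by userid,
-- so a read that needs both tables does a join.
create table site (
    siteid   integer primary key,
    userid   integer not null,
    sitename text              -- hypothetical extra column
);
create table transactions (
    transid  integer primary key,
    userid   integer not null,
    amount   numeric
);

-- Denormalized: every transaction row repeats the site columns, so
-- reads need no join but the same data is copied onto every row.
create table transactions_denorm (
    transid  integer primary key,
    userid   integer not null,
    amount   numeric,
    siteid   integer,
    sitename text              -- repeated on each transaction
);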
Let's test a live database running PostgreSQL.
Two tables, about 2 million rows total:
ng=# select count(*) from transactions;
count
--------
604254
ng=# select count(*) from site;
count
---------
1351705
A typical join:
ng=# explain select site.siteid, transactions.amount from site,
transactions where site.userid = transactions.userid and
site.userid=1;
NOTICE: QUERY PLAN:
Nested Loop (cost=0.00..142.40 rows=6 width=24)
-> Index Scan using site_userid_idx on site (cost=0.00..16.70
rows=4 width=8)
-> Index Scan using transactions_userid_idx on transactions
(cost=0.00..31.33 rows=8 width=16)
The single-table selects, for reference:
ng=# explain select siteid from site where userid=1;
NOTICE: QUERY PLAN:
Aggregate (cost=16.71..16.71 rows=1 width=4)
-> Index Scan using site_userid_idx on site (cost=0.00..16.70
rows=4 width=4)
ng=# explain select amount from transactions where userid=1;
NOTICE: QUERY PLAN:
Aggregate (cost=31.35..31.35 rows=1 width=4)
-> Index Scan using transactions_userid_idx on transactions
(cost=0.00..31.33 rows=8 width=4)
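If I'm reading the planner output correctly, the join's estimate is
just the site scan plus one probe of transactions for each matching
site row:

  nested loop ~ site scan + (site rows) * (transactions scan)
  142.40      ~ 16.70     +     4       *  31.33    = 142.02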
A cost of 150 is in the planner's abstract cost units, not
milliseconds, and is still cheap enough to be unnoticeable to the
user. A hundred such queries can be executed in the time it takes
your average web page to load on a broadband connection.
This database has about 20 different tables, most holding around 15
fields each. To carry all of that information on every row, a fully
denormalized table would need roughly 20 x 15 = 300 fields per row.
The conclusion I draw from this is that the join is estimated at
roughly three times the cost of the individual selects (142 versus
17 + 31, about 48). However, by keeping redundant data to a minimum
(just the userid in this case), the database is about 95% smaller
than a fully denormalized equivalent: roughly 15 fields per row
instead of 300.
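For comparison, the fully denormalized lookup would be a single index
scan over a wide table (the hypothetical transactions_denorm sketched
above, not a real table in this database):

select siteid, amount
from transactions_denorm   -- hypothetical wide table
where userid = 1;

-- One index scan instead of two, but every row drags along the
-- repeated site columns, which is where the extra storage goes.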
Adding a second server to double your CPU power is going to run about
$3k. Adding another 500 gigs of RAID storage to hold a database ten
times larger is going to run about $70k.
Your server load should never be so severe that your CPU can't
handle it. Databases start producing errors when too many users hit
them at once - the software just gets confused. More zones, more
servers.
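If load does become an issue, the split can be as simple as routing
each zone's players to their own database server. A made-up routing
table, purely to illustrate (server names are invented):

-- Hypothetical: which database server owns which zone.
create table zone_server (
    zone   integer primary key,
    dbhost text not null
);
insert into zone_server values (1, 'db1.example.com');
insert into zone_server values (2, 'db2.example.com');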
Avoiding joins at all costs is a bad idea.
-Azeraab