Performance - 04/26/2001 8:50 PM
Well, recently I've been dealing with MySQL and w3t performance issues a lot.
I've got a w3t Perl installation on a Linux box with Apache and MySQL.
The forum has about 250 000 posts and about 4000 users.
Until last weekend it ran on 2 servers:
- httpd/mod_perl on a Pentium II 333 (160MB RAM, SCSI disk) - load about 0.5 to 1
- mysql on a dual PIII 850 (256MB RAM, SCSI disk) - load about 1 to 3
On Monday I moved httpd and mysql to a single server with one Pentium III 1GHz, 256MB RAM and an IDE disk (ATA66, 7200rpm).
I've also upgraded from 5.3perl to 5.4b1.
On the old servers the forum ran quite well - it could have been better, but it was acceptable.
On the new server it runs pretty fast... except during the day hours, when there are 50-100 concurrent users.
The load then climbs to 20-50 (!), there are lots of mysql and httpd processes, and the forum hangs.
The problem is that lots of SQL queries are locked.
As you know, the standard MySQL table type (MyISAM) does table locking, which means that when you run an update, the rest of the queries have to wait for the update to complete (they are blocked by the lock on the table).
With small tables this isn't a problem, because updates and selects are fast then.
The problem starts showing with large tables, where queries take longer.
From the MySQL manual:
"Table locking is, however, not very good under the following scenario:
A client issues a SELECT that takes a long time to run.
Another client then issues an UPDATE on a used table. This client will wait until the SELECT is finished.
Another client issues another SELECT statement on the same table. As UPDATE has higher priority than SELECT, this SELECT will wait for the UPDATE to finish. It will also wait for the first SELECT to finish! "
This is a common scenario in w3t.
Some clients issue selects against w3t_Posts to read or find posts (or, e.g., to count unread posts in all boards), while others have just read some post and are updating the read counter for it, which lives in the same table as the post.
"Some possible solutions to this problem are:
1) Try to get the SELECT statements to run faster. You may have to create some summary tables to do this.
2) Start mysqld with --low-priority-updates. This will give all statements that update (modify) a table lower priority than a SELECT statement. In this case the last SELECT statement in the previous scenario would execute before the INSERT statement.
3) You can give a specific INSERT, UPDATE, or DELETE statement lower priority with the LOW_PRIORITY attribute.
4) Start mysqld with a low value for max_write_lock_count to give READ locks after a certain number of WRITE locks.
5) You can specify that all updates from a specific thread should be done with low priority by using the SQL command: SET SQL_LOW_PRIORITY_UPDATES=1. See section 7.33 SET Syntax.
6) You can specify that a specific SELECT is very important with the HIGH_PRIORITY attribute. See section 7.19 SELECT Syntax."
I've tried --low-priority-updates. It doesn't help: the updates of b_counter get locked instead of the selects, and the situation looks the same.
I've tried max_write_lock_count=1. Better, but it doesn't solve the problem either.
I've changed the updates of b_counter in showthreaded.pl and showflat.pl to low priority - the updates lock again.
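For the record, the LOW_PRIORITY change looks roughly like this (a sketch - the exact statement in the scripts may differ, and the WHERE value is just an example):

```sql
-- MyISAM only: the LOW_PRIORITY modifier makes this write wait for
-- pending SELECTs instead of jumping ahead of them in the lock queue.
UPDATE LOW_PRIORITY w3t_Posts
SET b_counter = b_counter + 1
WHERE B_Main = 12345;
```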
I wonder what could be done to make w3t more scalable.
Split boards into separate tables to make queries faster?
Good old times :-) But searching across all forums would be harder.
I've noticed that the main problem is lots of concurrent 'select count(*) from w3t_Posts' and 'update w3t_Posts set b_counter=b_counter+1 where B_Main=x' queries.
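The count(*) side of this could perhaps go through a summary table, as in solution 1 from the manual. This is only a sketch of the idea - the table and column names here are made up:

```sql
-- Hypothetical summary table: one row per board with a cached post count,
-- so the hot count query becomes a single primary-key lookup on a tiny
-- table instead of touching the big, write-contended w3t_Posts.
CREATE TABLE w3t_BoardCounts (
    Board     VARCHAR(100) NOT NULL,
    PostCount INT NOT NULL DEFAULT 0,
    PRIMARY KEY (Board)
) TYPE=MyISAM;

-- Kept up to date when a post is added:
UPDATE w3t_BoardCounts SET PostCount = PostCount + 1 WHERE Board = 'general';

-- Readers then ask the summary table instead of counting w3t_Posts:
SELECT PostCount FROM w3t_BoardCounts WHERE Board = 'general';
```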
Every hit on a post issues an update of b_counter in w3t_Posts, which isn't good (even though B_Main is a key).
I wonder if b_counter could be moved to a new table (along with b_number and b_main) and whether that would help.
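Splitting the counter out might look something like this (just a sketch - w3t_PostCounters is a name I made up, and I'm assuming b_number identifies the post):

```sql
-- Hypothetical side table holding only the per-post read counters, so
-- the per-hit counter update takes a write lock on this small table
-- instead of on w3t_Posts, where all the big SELECTs run.
CREATE TABLE w3t_PostCounters (
    B_Number  INT NOT NULL,           -- post id, same as in w3t_Posts
    B_Main    INT NOT NULL,           -- thread id
    B_Counter INT NOT NULL DEFAULT 0,
    PRIMARY KEY (B_Number),
    KEY (B_Main)
) TYPE=MyISAM;

-- Seed it once from the existing data:
INSERT INTO w3t_PostCounters (B_Number, B_Main, B_Counter)
SELECT B_Number, B_Main, B_Counter FROM w3t_Posts;

-- The per-hit update then never blocks reads of w3t_Posts:
UPDATE w3t_PostCounters SET B_Counter = B_Counter + 1 WHERE B_Main = 42;
```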
Again from the MySQL docs:
"Concurrent users are not a problem if one doesn't mix updates with selects that need to examine many rows in the same table."
"If you get speed problems with the table locks in MySQL, you may be able to solve these by converting some of your tables to BDB tables."
Well, I've tried BDB.
First, it was unstable - mysql (3.23.37-max) crashed often.
Second, select count(*) is SLOW with Berkeley DB, because it doesn't maintain a count of the number of rows in the table (a typical count took about 4 minutes, versus <1 sec with MyISAM).
Third, w3t_Posts grew from 170MB to 400MB with BDB.
I've also tried the new InnoDB tables, but I didn't manage to convert w3t_Posts - 'alter table w3t_Posts type=innodb' fails with 'row too big - can't allocate memory'.
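If the in-place ALTER keeps failing, one thing that might be worth trying (untested, and it assumes enough free disk for a second copy of the table) is building the InnoDB table separately and copying the rows over, instead of converting in place:

```sql
-- Create an empty table with the same column definitions as w3t_Posts
-- (column list omitted here), but as InnoDB from the start:
CREATE TABLE w3t_Posts_inno ( /* same columns as w3t_Posts */ ) TYPE=InnoDB;

-- Copy the rows over in one pass:
INSERT INTO w3t_Posts_inno SELECT * FROM w3t_Posts;

-- Swap the tables; keep the old one around until the copy is verified:
RENAME TABLE w3t_Posts TO w3t_Posts_old, w3t_Posts_inno TO w3t_Posts;
```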
So, do I need to switch to PostgreSQL to get row-level locking, or can something be done with the w3t database design, or is there maybe some option in MySQL that could help?
I could also buy another 1GHz Pentium, but for how long would that help?
Deleting old messages is not really an option.
Greetings,
Piotr