Pleasantly Surprised
With 30 clients pounding the test machine, Postgres chugged along at 3.76
pages per second. That sounded really bad at first, until we ran the same test
on MySQL. MySQL did so poorly that we eventually cancelled the test and reduced
the concurrency to just 5 users, and even then it managed only 0.77 pages per second.
To be clear, this is an unusually intense page: it runs 16 queries and
joins a dozen tables together in interesting ways. If you’re wise, you won’t
make your entire web application this complex.
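For a sense of what one of those queries might look like, here is a purely hypothetical fragment in the same spirit; the connection string, tables, and columns are invented for illustration, and it shows only a few of the dozen joins, not the actual page.

    <?php
    // Hypothetical sketch only: invented schema, a handful of the many joins
    // the real "My Personal Page" performs.  Uses PHP's pg_connect()/pg_query().
    $conn = pg_connect("dbname=benchmark");   // placeholder connection string

    $result = pg_query($conn,
        "SELECT u.username, g.group_name, m.subject
           FROM users u
           JOIN user_groups ug ON ug.user_id = u.user_id
           JOIN groups g       ON g.group_id = ug.group_id
           JOIN messages m     ON m.group_id = g.group_id
          WHERE u.user_id = 1
          ORDER BY m.posted DESC");

    while ($row = pg_fetch_assoc($result)) {
        // render one row of the page ...
    }
    ?>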
To spice up this test, we created a second PHP page that performed inserts,
updates, and deletes inside transactions. We had 30 clients hit this
new page while another 30 clients hit the “My Personal Page” simultaneously. Since
Postgres supports transactions, we decided to have 25% of the transactions
roll back, to see whether that caused any performance problems.
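A minimal sketch of what that write page could look like follows; the connection string, table names, and exact statements are assumptions, and only the BEGIN/COMMIT/ROLLBACK structure and the roughly 25% rollback rate reflect the test described above.

    <?php
    // Sketch of the write-heavy test page: one insert, one update, and one
    // delete per request, all inside a single transaction.  Table/column
    // names and the connection string are hypothetical.
    $conn = pg_connect("dbname=benchmark");

    pg_query($conn, "BEGIN");
    pg_query($conn, "INSERT INTO items (owner_id, body) VALUES (1, 'benchmark row')");
    pg_query($conn, "UPDATE groups SET member_count = member_count + 1 WHERE group_id = 42");
    pg_query($conn, "DELETE FROM items WHERE owner_id = 1 AND body = 'stale'");

    // Roll back roughly a quarter of the transactions, commit the rest.
    if (mt_rand(1, 100) <= 25) {
        pg_query($conn, "ROLLBACK");
    } else {
        pg_query($conn, "COMMIT");
    }
    ?>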
To be fair, this is a test where we fully expected MySQL to fail, because
of its table-level locking. The “My Personal Page” joins several times against
our “Groups” table, which was being updated frequently in this test. While the table
was being updated, MySQL’s readers had to wait for a table-level lock,
while PostgreSQL simply moved along using its “better than row level” MVCC locking.
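The difference is easy to picture with two connections. The sketch below is hypothetical (invented table names and connection strings) and only illustrates the behaviour described above: under PostgreSQL’s MVCC the reader is not blocked by the in-flight update, whereas under table-level locking an equivalent reader would have to queue behind the writer.

    <?php
    // Two hypothetical connections: one writer holding an open transaction
    // on "groups", one reader joining against the same table.
    $writer = pg_connect("dbname=benchmark");
    $reader = pg_connect("dbname=benchmark");

    pg_query($writer, "BEGIN");
    pg_query($writer, "UPDATE groups SET member_count = member_count + 1 WHERE group_id = 42");

    // With MVCC this SELECT proceeds immediately, reading the row versions
    // that existed before the uncommitted UPDATE.  Under table-level locking
    // the same read would wait for the write lock on "groups" to clear.
    $result = pg_query($reader,
        "SELECT u.username, g.group_name
           FROM users u
           JOIN user_groups ug ON ug.user_id = u.user_id
           JOIN groups g       ON g.group_id = ug.group_id
          WHERE u.user_id = 1");

    pg_query($writer, "COMMIT");
    ?>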
Postgres chugged along at 2.05 pages per second, and MySQL simply failed the
test and locked itself up (again, table-level locking is the major pitfall
of using MySQL). MySQL didn’t crash, but our benchmarking software (Apache’s “ab” utility)
timed out when it never got a response back from MySQL.
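For reference, the load in these tests was generated with Apache’s ab; an invocation along the following lines reproduces the 30-client setup, though the hostname, page name, and total request count here are placeholders rather than the original test parameters.

    ab -n 1000 -c 30 http://testbox/personal_page.php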
The numbers here are telling. MySQL was very slow even at 5 concurrent users,
and failed at 15 concurrent users (the graph actually flatters MySQL’s performance
here).