MySQL and PostgreSQL Compared

By Tim Perdue
on July 30, 2000

PostgreSQL

The results for Postgres might surprise a few people, as Postgres has something
of a negative reputation among some web developers (early releases
of Postgres were widely rumored to have reliability issues on top of laggard
performance). In my experience, and in these benchmarks, most of that reputation
is unfounded. In fact, it appears that PostgreSQL withstands up to three times
the load that MySQL can before throwing any errors, on the same hardware/OS
combination.
Postgres happily chugs along at roughly 10 pages/second, enough to serve about
400,000 pages/day, assuming a regular traffic curve with the peak at 2x the
bottom. That’s an awful lot of pages and is far beyond what most people will
see on their websites. In addition, most of the pages on your site will
not be as complex as the one in this test. As with MySQL, you’ll be happy
to pay for a hardware upgrade if you surpass this ceiling. Because of Postgres’
architecture, it should continue to scale as you give it more processors and
RAM.

Wins

Well, Postgres has some extremely advanced features when set next to MySQL.
While I don’t use most of them myself, they are there for the truly hardcore
developers out there. Many developers don’t even realize what they’re missing
by not having some of these features available.
An example of where you should be using a transaction is when you do more than
one update/insert/delete in sequence. For instance, say your script inserts a
new user into your user table, then inserts a row in another table, and then
updates a flag somewhere else. If the first insert succeeds but the second
fails, what do you do? With Postgres, you can roll back the entire operation
and show an appropriate error. With MySQL, you wind up in an invalid state
unless you program in a bunch of logic to handle the situation. In real-world
use, most queries don’t fail unless you’re a lousy programmer, and if the
second query did fail, the results may not be dire (unless we’re talking about
an accounting/banking/critical application where there can be no risk of
incorrect data).
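A minimal sketch of that scenario in SQL (the table and column names here are
hypothetical, not from the benchmark):

    BEGIN;

    -- Create the new user; if this fails, nothing below takes effect.
    INSERT INTO users (username) VALUES ('tim');

    -- Second insert in the same unit of work (hypothetical table).
    INSERT INTO user_prefs (username, theme) VALUES ('tim', 'default');

    -- Flag update that must succeed together with the inserts above.
    UPDATE site_stats SET user_count = user_count + 1;

    -- If any statement failed, issue ROLLBACK instead and none of the
    -- changes stick.
    COMMIT;
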
Foreign-key support arrived in Postgres 7.0, which means that when you insert
a row, the database can do some fairly impressive validation checks. The same
goes for deletes: it just plain won’t let you delete a row if another table
depends on it. I love this idea and can envision rewriting entire websites
just to take advantage of this feature.
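For instance, a constraint declared like the following (on made-up tables)
keeps you from deleting a user who still owns messages:

    CREATE TABLE users (
        username VARCHAR(32) PRIMARY KEY
    );

    CREATE TABLE messages (
        message_id INTEGER PRIMARY KEY,
        -- Every message must point at an existing user.
        username VARCHAR(32) NOT NULL REFERENCES users (username)
    );

    -- Fails with a referential-integrity error while any messages
    -- still reference 'tim'.
    DELETE FROM users WHERE username = 'tim';
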
Triggers and views are interesting and powerful tools that are available in
Postgres, but not in MySQL. I haven’t used either one, but I can think of a
hundred uses for views if I were to redesign SourceForge from the ground up on
Postgres.
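One sketch of what a view buys you, using invented SourceForge-flavored names
(not the real schema): it packages a common join behind a simple virtual table.

    CREATE VIEW project_summary AS
        SELECT p.project_id, p.project_name, COUNT(b.bug_id) AS open_bugs
        FROM projects p, bugs b
        WHERE b.project_id = p.project_id
          AND b.status = 'open'
        GROUP BY p.project_id, p.project_name;

    -- Pages can then query the view as if it were a plain table.
    SELECT * FROM project_summary WHERE open_bugs > 10;
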

Limitations

The primary limitation of Postgres is not its performance (most web sites
will never run into that barrier), but hard-coded limits like the
8k row size limit (which probably dates back to its earliest days). When I
designed Geocrawler.com on Postgres, I had to segment large emails into 8k
chunks to work around this lame limitation. Also, by default, Postgres is
compiled to support only 32 connections, which is not enough for a high-traffic
web site, especially when you consider that Postgres delivers each page much
more slowly than MySQL.
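A rough sketch of that chunking workaround (a hypothetical table, not
Geocrawler’s actual schema):

    -- Each email body is split into numbered pieces that each fit
    -- under the 8k row limit.
    CREATE TABLE email_chunks (
        email_id INTEGER NOT NULL,
        chunk_no INTEGER NOT NULL,
        body     TEXT NOT NULL,
        PRIMARY KEY (email_id, chunk_no)
    );

    -- Reassemble a message by reading its chunks back in order.
    SELECT body FROM email_chunks WHERE email_id = 42 ORDER BY chunk_no;
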
One other limitation may bug a lot of PHP users: Postgres has no equivalent to
MySQL’s mysql_insert_id() function call. That is, if you insert a row into a
MySQL database, MySQL will hand you back the primary key ID for that row. There
is a rather roundabout way of doing this in Postgres, but it’s a headache
and is probably slow if used a lot.
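The roundabout route goes through the sequence behind a SERIAL column: after
the insert, you ask that sequence what value it just handed out (the table and
sequence names here are hypothetical):

    -- SERIAL creates an integer column backed by a sequence that
    -- Postgres names users_user_id_seq.
    CREATE TABLE users (
        user_id  SERIAL PRIMARY KEY,
        username VARCHAR(32)
    );

    INSERT INTO users (username) VALUES ('tim');

    -- currval() returns the ID generated in this session, so it is
    -- safe even with other clients inserting at the same time.
    SELECT currval('users_user_id_seq');
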

Stability

Postgres will run smoothly for extended periods of time without trouble. My
Postgres 6.5.3 install has run for 90 days without blinking on my tired old
PowerMac 8500, while serving about 50,000-100,000 pages per day. And when
Postgres gets overloaded, it just bogs down; it doesn’t quit and give up the
ghost under stress.
The problem with Postgres is that when you do have a problem with it,
it’s usually really bad: a fubar database file or, more commonly, a
corrupted index (which can frequently be dropped and rebuilt). I have
encountered other serious problems with older versions of Postgres (6.4.x)
where duplicate values were inserted into a primary key column (something
that should be impossible under any circumstance).
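Rebuilding a corrupted index amounts to dropping it and recreating it from the
intact table rows (the index and table names here are made up):

    -- Throw away the damaged index structure; the table data is untouched.
    DROP INDEX idx_messages_username;

    -- Build a fresh index from the rows in the table.
    CREATE INDEX idx_messages_username ON messages (username);
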
There have also been problems with Postgres where you can wind up with
“half-baked” indexes, tables, etc. that you cannot drop or otherwise get rid
of. I have not seen these yet on Postgres 7, but I haven’t used it enough to
know for sure.