postgres tuning – fixing a production problem

After a new feature was implemented (not by me), coderanch started crashing almost every day in the middle of the night. In a few days, I'll be blogging about the troubleshooting process and how timezones helped. This post focuses on the end game – once we knew the problem was being caused by a large query – which is when I got involved.

The tuning process

As with all good SQL tuning, I ran explain and iterated. For background, the new feature added a join table with half a million rows.

Explain cost: 210,184
What changed: n/a
Observations: No wonder the site is crashing. For a web page (the forum list), this is forever! While the query plan is using an index, it is using the index to join a table with half a million rows to a table with millions of rows.

Explain cost: 40,590
What changed: Removed an unnecessary subquery. (It was unnecessary because the column it populates isn't used.)
Observations: The problem is that the query isn't using the index for a where clause, which means it joins very large tables just to get a small amount of data. Another problem is that the query limits the number of rows returned to one page's worth, but does so at the end, preventing the database from saving itself any work.

Explain cost: 1,807
What changed: Hack – we really want to query the post time from the join table. Since it wasn't on there and it was too much work to add during the week, I introduced a hack: I sorted by post creation (post id) and limited the rest of the query to the most recent 100 records (sketched after the table).
Observations: While this is much faster, it is functionally incorrect. If an older post is still under discussion, it doesn't appear in the post list. So broken, but fast enough to get us to the weekend.

Explain cost: 288
What changed: Added the latest post time as a stored field on the join table.
Observations: Ah. Done.
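
To make the hack in the third row concrete, the query shape was roughly the following. This is my reconstruction for illustration only; the table and column names are not the exact JForum SQL.

-- illustrative sketch: limit to the 100 newest posts first so the join
-- sees 100 rows instead of half a million
select f.forum_id, max(recent.post_time) as last_post_time
  from (select topic_id, post_time
          from jforum_posts
         order by post_id desc
         limit 100) recent
  join jforum_topics_forums f on f.topic_id = recent.topic_id
 group by f.forum_id;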

Learnings about postgres – locks

I ran a really simple statement to add a column to a table:

alter table jforum_topics_forums add column last_post_time TIMESTAMP without time zone not null default now();

Luckily I was on the test server because I had to kill it after 5 minutes.  At first, I thought the problem was setting the field to a value since it had to go through all the records.  That wasn’t the problem though.  The problem was that postgres was waiting on a lock.

SELECT * FROM pg_stat_activity;

select * from pg_locks where pid = 4503 and granted = 'f';

Running the above SQL showed me that postgres was waiting on an exclusive lock. After I shut down the forum, the alter statement ran almost instantaneously. The actual stored procedure to populate the new field (based on another table) took a few minutes. But that makes sense as it was a stored procedure doing half a million queries.
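
The two statements above were enough here, but for reference, a query along these lines joins pg_locks to pg_stat_activity to show which session holds the lock you are waiting on. This is my sketch rather than part of the original troubleshooting, and it assumes postgres 9.2+ column names (older versions use procpid and current_query instead of pid and query).

-- sketch: find who is blocking whom, simplified to relation-level locks
-- (which is what an alter table waits on)
select blocked.pid    as blocked_pid,
       blocking.pid   as blocking_pid,
       activity.query as blocking_query
  from pg_locks blocked
  join pg_locks blocking
    on blocking.relation = blocked.relation
   and blocking.granted
   and blocking.pid <> blocked.pid
  join pg_stat_activity activity on activity.pid = blocking.pid
 where not blocked.granted;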

Testing with lots of data

Everything went fine on my machine. On the test server (which does have lots of data), I realized that I forgot to add the index that uses the new last post time column.  That was the entire point of this exercise!  And it goes to show how important it is to have production volumes of test data.
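
The missing piece was just an index on the new column, along these lines (the index name here is mine, and the real index may include more columns):

create index jforum_topics_forums_last_post_time_idx
    on jforum_topics_forums (last_post_time);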

more postgres tuning in jforum

A teammate installed a new feature on CodeRanch JForum that uses a 4,515,409 row table. When dealing with over a million rows, scans become a huge performance drain. One query on its own was merely slow, but real usage with many running at the same time brought down the app. The reason the query was slow was interesting, so I asked him if I could blog about it.

The original query

select countryCode, countryName, region, city
     from ip2location
     where 540815125 >= low and 540815125 <= high;

Running this through explain shows that it uses the index, with a cost of:

Bitmap Heap Scan on ip2location (cost=5949.66..54170.71 rows=219870 width=32)

That’s a really high cost explain plan.

My first thought was to change it to:

explain select countryCode, countryName, region, city
     from ip2location
     where low >= 540815125 - 1000 and 540815125 >= low and 540815125 <= high;

Which has a much better explain plan of

Index Scan using ip2l_low_asc_idx on ip2location (cost=0.00..8.77 rows=1 width=32)

The reason is that in the first query, postgres needs to scan the large index from the beginning until it hits the low value. In the second, I gave it permission to start really close to the target row. I subtracted 1000, but that was arbitrary. It just needs to be high enough to land in the vicinity of the row without missing out on any data.

My approach also makes the lookup time consistent. It is always looking through 1000 worth of index. (Which is always fewer than 1000 rows, given the bunching of low to high.) The original could take anywhere from an immediate lookup to a full index scan, depending on where the value fell in the index.

Then the teammate who wrote the original switched it to:

select countryCode, countryName, region, city
     from ip2location
     where  low =
           (select max(low)
            from   ip2location
            where 540815125 >= low)

This has the same explain cost as my hacky version and is clearer. It uses the same logic, but it doesn't require scanning or returning extra results.

Where’s your database’s ER Diagram?

I was recently training a new software developer, explaining the joys of three-tier architecture and the importance of the proper black-box encapsulation, when the subject switched to database design and ER diagrams. For those unfamiliar with the subject, entity-relationship diagrams, or ER diagrams for short, are a visual technique for modelling entities, aka tables in relational databases, and the relationships between the entities, such as foreign key constraints, 1-to-many relationships, etc. Below is a sample of such a diagram.


I. The Theory

Like many enterprise technologies, ER diagrams can be a bit of overkill in single-developer projects, but they come in handy as soon as you need to explain your design decisions to a room full of people. Since a software application is only as flexible as its underlying database, ER diagrams help define the initial set of business rules for how people will be able to interact with the system. As a software development practice, they are often encouraged, but in medium to large companies, they may be absolutely required. There are plenty of tools now to create ER diagrams quickly and easily, many of which will generate SQL database creation statements for a variety of platforms directly.

II. The Practice

Due to tight time constraints and ever-expanding scope creep, I find most developers skip creating or maintaining ER diagrams whenever they can. One telling example is a developer who creates a diagram, starts building the application, and realizes their initial diagram was completely flawed. Given that they are now behind schedule because they made mistakes in modelling the data, they do not have time to go back and update the model, and their ER diagram becomes a distant memory compared to the final database. Most managers would rather see finished software products than accurate diagrams, although they will take both if offered.

ER diagrams are incredibly useful in the early stages of designing a new application, but as an experienced software developer, I spend more time enhancing and maintaining databases than I do creating them from scratch. Furthermore, it can be difficult to create an ER diagram for an existing database, especially if you were not involved in its creation. Even when companies do maintain ER diagrams, they tend to be months, often years, out of date, as it can be difficult to motivate each and every software developer to update the database documentation after making a change.

III. Where’s your database’s ER Diagram?

What about your software application? Is there now or was there ever an ER diagram for your company’s database? Is it 100% accurate to the current production database? I’d like to hear from other developers to find out if people are diligently maintaining ER diagrams, or if it is really a common practice to let them fall by the wayside after a database is established.