Why JDBC + JSP = Bad

Over years of moderating at The JavaRanch, I’ve seen one type of question spring up on a weekly basis: the one asked by people who need help with JDBC code inside of JavaServer Pages (JSPs). As much as we may want to help these individuals fix their particular problem, the overriding thought of “STOP WHAT YOU’RE DOING” often prevents us from doing so. The purpose of this post is to explain why putting JDBC code inside a JSP file is akin to shooting yourself in the foot. With a shotgun. While not wearing shoes.

Don't use JDBC inside of JSP pages

1. You cannot reuse the code
First and foremost is the issue of code reusability. While importing Java classes is quite common, importing code from a JSP is not. You can write JSP functions, although I never recommend doing so for reasons I won’t get into now, but you’re basically writing code that cannot be used anywhere else, particularly in non-JSP Java classes. The most common counter-response to this is “Well, I don’t need to use it anywhere else”. Yes, you do. Whether it’s just the code for making the connection to the database or the code for performing a query and reading the results, it will be used again at some point in the future in a way you have not thought of yet. Unless you are being paid by the line and prefer this sort of thing, it’s a bad move, and I guarantee your code base will be much larger than that of someone who put all their JDBC code into normal Java classes. A larger code base means more difficulty maintaining it and more headaches for you down the road.
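
To make the contrast concrete, here is a minimal sketch of the kind of reusable class I mean (the class name, table, and connection details are made-up placeholders). Any servlet, JSP, or plain Java program can call it; a scriptlet buried in a JSP can be called from nowhere else.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical reusable data-access class; the URL, user, password, and table are placeholders.
public class CustomerDao {

    private static final String URL = "jdbc:mysql://localhost:3306/mydb";
    private static final String USER = "appuser";
    private static final String PASSWORD = "secret";

    // One place to change if the connection details ever move.
    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection(URL, USER, PASSWORD);
    }

    // One place that knows how to run this query and read the results.
    public List<String> findCustomerNames() throws SQLException {
        List<String> names = new ArrayList<String>();
        Connection conn = getConnection();
        try {
            PreparedStatement ps = conn.prepareStatement("SELECT name FROM customer");
            try {
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
            } finally {
                ps.close();
            }
        } finally {
            conn.close();
        }
        return names;
    }
}

A JSP that needs customer names is now down to a single method call, and when the connection details change there is exactly one place to update.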

2. You are mixing business logic with the presentation layer
Probably the most overlooked issue for inexperienced developers is the fact that you’re mixing the business/data layers with the presentation layer. To put it another way: if your boss comes in one morning and says we’re throwing out the JSP front end and replacing it with a web service, Java Swing, Flash, or some other interface, there is virtually no way for you to reuse the database code without going through every line of every JSP file by hand. If the database code had been placed in plain Java classes, you would have a path for packaging the JDBC code into a single JAR and making it available as a service to a different front-end client such as a web service, a Flash client, etc.

In enterprise development, the presentation (JSP) layer and the database are often separated by multiple layers of indirection, such as those described by the commonly used three-tier architecture pattern. Those who are just starting out programming often do not know why mixing these layers is bad, but I promise you that if you stay with software development you’ll understand one day.
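
Here is a rough sketch of what that separation buys you; the interface and class names are invented for illustration, and the JDBC-backed implementation reuses the hypothetical CustomerDao from the sketch above. The JSP, the web service, and the Swing client all talk to the same interface, and none of them knows or cares that JDBC sits behind it.

import java.util.List;

// Hypothetical service interface: this is all the presentation layer ever sees.
public interface CustomerService {
    List<String> findCustomerNames();
}

// The JDBC-backed implementation lives with the other data-access classes,
// packaged in its own JAR, and never imports anything from javax.servlet.
class JdbcCustomerService implements CustomerService {

    private final CustomerDao dao = new CustomerDao();

    public List<String> findCustomerNames() {
        try {
            return dao.findCustomerNames();
        } catch (java.sql.SQLException e) {
            throw new RuntimeException("Could not load customer names", e);
        }
    }
}

Swap the JSP front end for something else tomorrow and that JAR ships unchanged.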

3. But it’s just this once!
Oftentimes, JDBC code enters JSPs because developers lie to themselves, saying “Well, it’s just this once” or “I just need to test something”. Instead of being removed when the developer is done ‘testing’, the code often persists for a long time afterward. Furthermore, putting your JDBC code inside of reusable Java classes makes testing go faster! Spending 10 minutes setting up a single reusable Java JDBC class will save you hours down the road. Then, if you want to test more than one JSP page with JDBC logic, you already have your Java class file to start with. Proponents of test-driven development tend to understand this better than anyone.
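
As a sketch of what that buys you, here is the sort of quick test you can write once the JDBC code lives in a plain class (JUnit 4 shown here, against the hypothetical CustomerDao from the earlier sketch). No servlet container, no browser, no redeploy cycle.

import static org.junit.Assert.assertNotNull;

import java.util.List;

import org.junit.Test;

// A quick smoke test against the hypothetical CustomerDao from the earlier sketch.
public class CustomerDaoTest {

    @Test
    public void findCustomerNamesReturnsAList() throws Exception {
        CustomerDao dao = new CustomerDao();
        List<String> names = dao.findCustomerNames();
        assertNotNull(names);
    }
}

Try doing the same when the query lives in the middle of a JSP: you have to redeploy the application and click through the page just to find out whether the SQL even parses.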

4. It’s really hard to maintain
Code maintenance is another topic that new developers do not fully appreciate, since they have not spent years maintaining the same code base. Unless you write the most beautiful JDBC code imaginable, it’s very difficult to read through huge JSP files looking for bugs and/or making enhancements. It’s a lot easier if all the JDBC access is restricted to a set of files much smaller in size than the JSP code base.

5. It’s a really bad practice
If after reading this article you still do not fully understand why you should not put JDBC code inside of JSPs, let me simplify the issue by saying “Just Don’t Do It”. Whether or not developers understand the reasons against doing so is not as important as stopping them from doing so in the first place. In short, you create code that someone else (possibly your future self) will have the misfortune of maintaining down the road.

Making MySQL Use More Memory

Unlike a lot of database servers, MySQL is strangely conservative (by default) about how much memory it will allocate. If you’re not careful, you can have 16GB of RAM on your machine with MySQL using only 50MB of it, leading to extremely poor performance under heavy load. I know firsthand that navigating MySQL configuration guides can be a daunting task, so I’ve prepared this post for those looking for a ‘quick fix’ to encourage MySQL to use a healthier amount of memory.

Which database storage engine do you use primarily?

Many beginner users may not realize this, but with MySQL you have a choice of which storage engine implementation your database runs on. This is where performance tuning begins to get complicated, as you have to set the configuration variables that correspond to the storage engine you are using! You can see which engine you are relying on by opening MySQL Administrator and viewing your schema under the Catalogs tab. Each table should have an engine associated with it, which likely says either MyISAM or InnoDB.
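
If you prefer the command line to the GUI, a query along these lines (the schema name is a placeholder) will report the engine for each of your tables:

SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'your_schema_name';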

In this post, I’ll cover how to increase general memory usage for InnoDB:

Set “innodb_buffer_pool_size” to be up to 80% of RAM, or at least a few hundred megabytes. The default is 8MB, and I imagine anyone running a MySQL server these days can at least spare 200 megabytes of memory, so it should look at least like this:

innodb_buffer_pool_size=200M

Again, the default is 8M! So, if you’re not setting this variable, you’re choking your database. Feel free to give it 1-2GB if you have the memory available, but the biggest gain comes from simply going above the default.
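
For reference, the setting belongs under the [mysqld] section of your my.cnf (my.ini on Windows), and the server typically needs a restart before the change takes effect. The 1G value here is just an example for a machine with memory to spare:

[mysqld]
innodb_buffer_pool_size=1G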

There are more InnoDB settings you could tune, but their benefits pale in comparison with the value you’ll gain by increasing this one from its default of 8M.

PostgreSQL Explain

I had an opportunity to do some tuning on PostgreSQL and was pleasantly surprised at how smoothly it went.

The first thing I did was try to run an “explain” on the query under discussion.  (Explain is a tabular or graphical view of the detailed steps the database uses to execute your query.  By knowing what path it will take and what tables/indexes it will look at, you can tune your query appropriately.) Knowing this works differently in different databases, I looked up what to do.  Here are the steps:

To run explain at the command line:
1) Type “explain” followed by your query.  For example “explain select * from table”.

That’s right – one step!
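
One closely related variant, in case you also want measured timings rather than just the planner’s estimates: “explain analyze” runs the query for real and reports actual execution times alongside the plan. It is the same one-step idea:

explain analyze select * from table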

To run explain graphically:
1) Install pgAdmin if you haven’t already
2) Type your query into the editor
3) Choose Query –> Explain
This shows the graphical view of the query. Clicking on the Data Output tab shows the text view generated by the command line.

Now, it may have changed since then, but the last time I ran an explain in Oracle I needed to create a separate table first. Those were extra steps that I had to look up each time. DB2 had a good graphical explain built into the tool.

What surprised me here was that I figured out PostgreSQL’s explain much faster than Oracle’s, mainly because it was so simple! For the command line version, there is only one step – and it’s not one I am likely to forget.

It’s always nice when software works in such an intuitive manner.