J2EE: Why EJB2 CMP was Doomed to Failure

For those in the J2EE world, one of the hottest, most contentious topics to arise is what went wrong with EJB, version 2. For those not familiar with the subject, I’ll try to provide a brief description. In 2001, Sun released a specification for connecting a Java object, called an Enterprise Java Bean (hereafter referred to as an EJB), to a database table through a construct called an Entity Bean. They referred to the technique as Container-Managed Persistence, or CMP for short. Through the clever use of XML files, one could define Entity Bean relationships between Java code and a database that could change on the fly, without having to recompile the code or rewrite the application.

Initially, Entity Beans were well received as ‘the wave of the future’, but the problem was that the specification had mostly been worked out only in theory. It was written with such detailed requirements for the developer that anyone who tried to implement Entity Beans in EJB2 ran into immediate code maintenance problems. There were just too many rules, and maintaining large code bases was extremely difficult. In 2006, Sun released the Java Persistence API (JPA for short) in version 3 of the EJB specification, which, for all intents and purposes, was a complete rewrite of EJB2 Entity Beans. They streamlined many of the interactions required to set up EJBs and borrowed heavily from more grassroots technologies like Hibernate and JDO. In essence, they threw out EJB2, copied Hibernate, then renamed the whole thing ‘the next version of EJB’. Ironically, they may have been a little too late, as many organizations had already switched to Hibernate by the time implementations of the new specification were released.

These days most developers prefer Hibernate to EJB, although given the pervasive nature of Sun’s EJB/J2EE terminology I wouldn’t be surprised if JPA/EJB3 (the name, at least) makes a comeback. As I look back on these recent events, it makes me wonder: if EJB2 was such a good idea in theory, why did it fail so miserably in practice? What was it about Entity Beans that people failed to consider? I hope to use this post to address, albeit in hindsight, some of the issues that the original planners of EJB2 did not consider.

  • Issue #1: Not all Servers are Created Equal

One of the most prominent features of Entity Beans in the EJB2 specification was their ability to work with any database system. A developer could, again in theory, write an application that could seamlessly deploy on a variety of database systems (MySQL, Oracle, SQL Server, etc.) and application servers (JBoss, WebLogic, WebSphere, etc.). The mechanism behind this fluid deployment was that all relationships between the database and the application were stored in a single XML file. One just needed to open this XML file in a text editor, rename a few fields, and the application would work on that system. Need to deploy on a variety of systems? Just create a new XML file to map the application to each one.

Sounds wonderful in theory, but practically speaking? This never worked; the servers were just too different. For example, while all databases have standardized on a common language called SQL for 90% of database communication, it would be a near miracle to find a code base that did not use any database-specific features. Most systems of any size rely on stored procedures, indexes, foreign key relationships, and key generation that, while similar from database to database, cannot simply be dragged and dropped from one system to another. Porting code from one database system to another often requires editing thousands of files line by line looking for needed changes, and is quite a difficult, often impossible task.
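Key generation alone illustrates the point. Here is a minimal sketch, in plain JDBC, of how the same ‘insert a row and get its new id’ operation might differ between Oracle and MySQL (the ORDERS table and ORDER_SEQ sequence are hypothetical names, and resource cleanup is elided for brevity):

import java.sql.*;

public class KeyGenerationExample {

     // Oracle-style: the new id is pulled from a named sequence before the insert.
     static long insertOrderOracle(Connection con) throws SQLException {
          Statement stmt = con.createStatement();
          ResultSet rs = stmt.executeQuery("SELECT ORDER_SEQ.NEXTVAL FROM DUAL");
          rs.next();
          long id = rs.getLong(1);
          PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO ORDERS (ID, STATUS) VALUES (?, 'NEW')");
          ps.setLong(1, id);
          ps.executeUpdate();
          return id;
     }

     // MySQL-style: the ID column is AUTO_INCREMENT, so the key is read back
     // from the driver after the insert.
     static long insertOrderMySql(Connection con) throws SQLException {
          PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO ORDERS (STATUS) VALUES ('NEW')",
                    Statement.RETURN_GENERATED_KEYS);
          ps.executeUpdate();
          ResultSet keys = ps.getGeneratedKeys();
          keys.next();
          return keys.getLong(1);
     }
}

Neither version runs unmodified on the other database, and this is one of the simplest cases; stored procedures and vendor SQL dialects widen the gap further.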

And that’s just database servers; application servers require their own server-tailored XML to perform the data mappings for each and every field in each relationship. Most databases have on the order of a hundred tables with a couple dozen fields per table, so you’d have to maintain a server-specific XML file covering thousands of fields for each application server you wanted to deploy to. When a developer says porting an EJB2 CMP application to a new application server is hard, this is a large part of what he’s referring to.

Part of the problem with the variation among application servers was Sun’s own fault. While they defined the vast majority of the spec themselves, they, for whatever reason, allowed each application server vendor to define its own extensions for features such as database-specific fields. This wiggle room allowed application server developers to create schemas as different as night and day.

  • Issue #2: Maintaining large XML files in J2EE is a contradiction

Going back to a single-server application, let’s say you do implement an EJB2 CMP-based system. At the very least you have an entity layer composed of a hundred Java files, a single XML file (ejb-jar.xml) that defines the entity beans, and a single server-specific XML file (jbosscmp-jdbc.xml, openejb-jar.xml, etc.) that maps the entity beans to the database. What happens if you make a change to the Java code or the database? Those files must be updated, of course! That means a developer needs exclusive access to edit these two quite large XML files. In large enough groups, developers will be fighting to make changes to these files, and contention for them will be high in the version control system.
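To make the scale concrete, here is a minimal sketch of what one of those hundred Java files looked like under EJB2 CMP (the Customer bean and its fields are hypothetical, and the matching entries in ejb-jar.xml and the server-specific descriptor are omitted):

import javax.ejb.CreateException;
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;

// EJB2 CMP entity bean: the class is abstract, and the container generates
// the persistence code behind the abstract getters/setters at deploy time.
public abstract class CustomerBean implements EntityBean {

     private EntityContext context;

     // CMP fields: declared abstractly here, mapped to columns in the XML descriptors
     public abstract Integer getId();
     public abstract void setId(Integer id);
     public abstract String getName();
     public abstract void setName(String name);

     public Integer ejbCreate(Integer id, String name) throws CreateException {
          setId(id);
          setName(name);
          return null; // for CMP, the container supplies the primary key
     }

     public void ejbPostCreate(Integer id, String name) { }

     // Boilerplate lifecycle callbacks required by the EntityBean interface
     public void setEntityContext(EntityContext ctx) { this.context = ctx; }
     public void unsetEntityContext() { this.context = null; }
     public void ejbActivate() { }
     public void ejbPassivate() { }
     public void ejbLoad() { }
     public void ejbStore() { }
     public void ejbRemove() { }
}

Multiply this by a hundred tables, keep every field in sync with two XML files, and the maintenance problem becomes obvious.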

The history of J2EE is that it was designed as a component-oriented platform for large businesses. Have a project with only a single developer? There’s no reason to use J2EE, since one developer can build it a lot faster without it. On the other hand, have a large project that requires dozens, perhaps hundreds, of developers working together? That’s what you ‘need’ J2EE for. But now add the fact that hundreds of developers are going to be writing and rewriting two very large XML files, and you come to the realization that the technique is fundamentally flawed. You need J2EE for large development teams, but maintaining large single-point files tends to break down fastest in large development groups.

  • Issue #3: Databases change far less frequently than software

We’ve previously discussed how EJB2 defined all of those XML mappings in a way that let you change the database and application server without ever having to recompile code. The idea (I guess) was that database and application server changes were far more likely than code changes. Well, in practice that idea is completely wrong. The vast majority of J2EE applications run on one specific database type and one specific application server (often a specific version of the software). Furthermore, database changes are extremely rare. Often, once the database structure is defined, it is rarely, if ever, changed. Java code, on the other hand, is often rebuilt on a nightly basis. In practice, you would never find yourself needing to change an existing database system before the code was ready to support it. Add to that the fact that most developers go out of their way to write solutions to software problems that do not require changes to the database! Most are fully aware of the problems associated with changing the database (modifying all related Java and XML files, notifying related teams, writing database patches, etc.), and will only change the database as a last resort.

In short, EJB2 defined a powerful spec for an ever-changing database persistence layer, and no one bothers to use it in practice because of the maintenance issues involved. I’m sure I’m not the only developer who has seen poorly named database tables and fields like ‘u_Address_Line3’ (instead of ‘City’) but refrained from changing them, knowing the havoc such changes would likely bring. Since the user is never supposed to view the database directly, why should it matter what things are named?

EJB2 Post Mortem and Lessons Learned
Not all of these issues went unaddressed during the life of EJB2. For example, XDoclet was a tool that provided a way to generate the large XML files for each system after a code change was made. The problem? It was never developed to support all systems, and developers still fought to check their version of the generated file into the version control system. Ultimately, though, afterthoughts like XDoclet were too little, too late; people had seen how much more useful object-relational mapping tools such as Hibernate had become, and jumped ship early on. When JPA finally did come out in EJB3, the audience was far from excited: the most dedicated of the crowd, who had adopted EJB2 early on, were left with no way to upgrade their systems other than a near-complete rewrite of their CMP layer, and those who would probably have most enjoyed the new features of JPA had already switched to Hibernate, with no reason to switch back.

When JPA/EJB3 was released, it did away with virtually all of the XML files, instead choosing to put the database mapping information directly in the Java code in the form of annotations. In this manner, a developer only needs to look in one place to make changes to the entity bean and the database. In the end, EJB2 entity beans are yet another example of why systems designed in theory should be tested more in practice before being trumpeted as ‘the wave of the future’. It’s not that JPA is superior to EJB2; in fact, far from it. As I’ve mentioned, EJB2 had amazing data control and was capable of many powerful features. The problem lies in the inherent contradiction of creating J2EE as a component-oriented platform while enforcing centralized interactions and overly complicated management schemes. In the end, it was hard to find one developer, let alone a team of 30, capable of managing an EJB2 code base properly.
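For contrast, here is a minimal sketch of the same kind of entity under JPA (again with a hypothetical Customer); the mapping lives right next to the code it describes:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// JPA entity: a plain Java object whose mapping is declared inline,
// so no external XML descriptor is needed for the common cases.
@Entity
@Table(name = "CUSTOMER")
public class Customer {

     @Id
     @Column(name = "ID")
     private Integer id;

     @Column(name = "NAME")
     private String name;

     public Integer getId() { return id; }
     public void setId(Integer id) { this.id = id; }
     public String getName() { return name; }
     public void setName(String name) { this.name = name; }
}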

Finally Closing JDBC Resources

I love reading Alex’s The Daily WTF, and I noticed the recent Finally WTF is relevant to JDBC in an important way. All *good* JDBC developers already know you should close your result sets, statements, and connections (in that order) in a finally block when you are done with them, but do you all know how they should be closed? In particular, while a finally block will be entered under most circumstances, there’s no guarantee every line of code inside it will be executed. Consider the following code, which I often come across:

Connection con = null;
Statement stmt = null;
ResultSet rs = null;
try {
     // Do stuff
     ...
} catch (Exception e) {
     // Do exception recovery stuff
     ...
} finally {
     try {
          rs.close();
          stmt.close();
          con.close();
     } catch (Exception e) {  }
}

Now, can you figure out what’s wrong with this code? Imagine the result set rs is never populated and stays null. The first line of the finally block will throw a NullPointerException, and stmt.close() and con.close() will never be executed. In other words, a failure to close a result set leads to a connection that is never closed, even though the close was in a finally block! Sure, the code is guaranteed to enter the finally block, but if it fails on the first line, the rest of the code will be skipped. Next, compare this other common but still incorrect solution:

...
} finally {
     try {
          if(rs!=null) {rs.close();}
          if(stmt!=null) {stmt.close();}
          if(con!=null) {con.close();}
     } catch (Exception e) {  }
}

This solution is a little safer in that it avoids NullPointerExceptions, but it’s just as useless as the first solution: there are a number of reasons why the first line of code could still fail, for example if the result set is already closed. This solution actually worries me the most, because clearly the developer went to the trouble of setting up the finally block and the null checks, but failed to fully understand how a finally block works. Now, I present a superior solution:

...
} finally {
     try {rs.close();} catch (Exception e) {}
     try {stmt.close();} catch (Exception e) {}
     try {con.close();} catch (Exception e) {}
}

Now, is this solution safe? Notice that if rs or stmt fail to close, the call to con.close() will still be executed. Granted, you could get fancy by adding logic to handle or log the exceptions, or even catch Throwable (although catching Throwable is never a good practice), but that’s a bit overkill. You could also nest finally blocks (add a finally block to rs’s try/catch and put the rest inside it, and so on), as sketched below, although I tend to prefer this solution since it’s more readable.
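For completeness, here is what that nested-finally variant looks like; it gives the same guarantee as the version above, just with more ceremony:

...
} finally {
     // Each close is attempted in its own try; the nested finally blocks
     // ensure the later closes run no matter what the earlier ones do.
     try {
          if (rs != null) { rs.close(); }
     } catch (Exception e) {
     } finally {
          try {
               if (stmt != null) { stmt.close(); }
          } catch (Exception e) {
          } finally {
               try {
                    if (con != null) { con.close(); }
               } catch (Exception e) { }
          }
     }
}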

Lastly, you could write helper methods for closing the objects to make the code easier to work with, such as:

...
} finally {
     DBUtil.close(rs);
     DBUtil.close(stmt);
     DBUtil.close(con);
}

Here the try/catch lives inside the helper methods. Keep in mind this last solution isn’t really different from the previous one; it’s just a code management improvement.
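The helper class itself is only a few lines. A minimal sketch (DBUtil is the hypothetical name used above):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public final class DBUtil {

     private DBUtil() { }

     // Each helper ignores nulls and swallows close() failures, so callers
     // can list the three closes unconditionally in a finally block.
     public static void close(ResultSet rs) {
          try { if (rs != null) { rs.close(); } } catch (Exception e) { }
     }

     public static void close(Statement stmt) {
          try { if (stmt != null) { stmt.close(); } } catch (Exception e) { }
     }

     public static void close(Connection con) {
          try { if (con != null) { con.close(); } } catch (Exception e) { }
     }
}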