Last night during the panel session at the Lone Star Software Symposium, nobody was surprised that an audience member brought up the infamous Pet Store benchmark. None of the panelists thought much of it as a useful or accurate benchmark, for a lot of good reasons.

One part of the discussion centered on the decision to build each version of the Pet Store according to the vendor’s recommended “best practices”. The result is that neither version of the app has an architecture I would recommend to clients, but more importantly, the .NET application is optimized for raw performance, while the J2EE application is aimed more at flexibility, maintainability, and robustness.

Jason Hunter had a really nice suggestion for the next round: instead of measuring the performance of the two applications, let’s set up two development teams, one for each version, neither of them having seen the code before. Give them new features to implement, and see how long it takes. Given the existing code quality (poor) and some of the choices made in the J2EE version, it still wouldn’t be a truly representative test. But it would at least be interesting, which is more than you can say for the original.