I enjoy reading Bill Venners’ interviews with software development luminaries. Bill himself is (from what I’ve seen) a talented and tasteful developer, and he picks some of the best to interview. Plus, he makes sure he’s familiar with each person’s work, and asks intelligent questions. This week I read the final part of his interview with Martin Fowler, and it really resonated with some lessons I’ve learned over the past few years.

Last year I gave a talk at JavaOne (and later for two other audiences) called Stalking Your Shadow: Adventures in Garbage Collection Optimization. (Although it sounds like an arcane optimization talk, in reality it’s sort of a “stealth agile” talk: the firmest recommendation in it is to do tightly iterative development with performance testing beginning very early in the process, so you can catch poor decisions early, while they’re easy to change.) In that talk, I point out that the right optimization strategies depend strongly on your choice of platform, that different optimization strategies may either conflict with or reinforce one another, and that you must measure the effect of your changes to see whether they help or hurt performance.

The implication there, of course, is that if you are considering multiple optimizations in your system, you must have ways to mix and match those optimizations as you measure, so you can see which combination produces an acceptable result; otherwise you end up in what Mike Clark calls “thrash tuning.” One trick I recommend is to implement each optimization as an AspectJ aspect, as in the sketch below. AspectJ makes it very easy to choose, from build to build, which aspects are woven into your system.
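Here’s a minimal sketch of the idea (Catalog, Result, and find are names I’ve invented for illustration): the clean, obvious lookup stays in place, and the aspect layers a cache over it from the outside.

```java
// CachingOptimization.aj -- a hypothetical optimization aspect.
// Catalog, Result, and find() are invented names for this sketch.
import java.util.HashMap;
import java.util.Map;

public aspect CachingOptimization {

    // A simple memoization cache (eviction and thread safety are
    // ignored to keep the sketch short).
    private final Map<String, Result> cache = new HashMap<String, Result>();

    // Match calls to the clean, unoptimized lookup.
    pointcut lookup(String key):
        call(Result Catalog.find(String)) && args(key);

    // Around advice: answer from the cache when possible; otherwise
    // proceed to the original, clean implementation and remember the result.
    Result around(String key): lookup(key) {
        Result cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        Result computed = proceed(key);
        cache.put(key, computed);
        return computed;
    }
}
```

Because the aspect lives in its own source file, leaving it out of a build gives you back the unoptimized system, completely unchanged.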

I was focusing on optimizing at a particular point in time, but Martin talked to Bill Venners about the ongoing lifecycle of software. He discusses how advances in platform implementation technology can turn today’s optimizations into performance drags (he’s talking specifically about VMs, but the same things apply to the OS and compiler). He recommends that optimizations be revisited with each platform upgrade. To do that, of course, you need to keep the design simple and the optimizations well encapsulated.

My strategy of using AspectJ to insert the optimizations from the outside could make this a breeze. Start with the clean design; after profiling, leave the clean code in place but override it with an aspect that implements the optimization. Later, you can easily build with the optimizations included or excluded and run your performance tests again.
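Concretely, toggling an optimization in and out can be as simple as changing which source roots the AspectJ compiler sees. (The directory names below are hypothetical; the ajc flags are real.)

```
# Plain build: compile only the clean sources.
ajc -sourceroots src -d classes

# Optimized build: also weave in the optimization aspects.
# (The separator is ':' on Unix, ';' on Windows.)
ajc -sourceroots src:aspects/optimizations -d classes
```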

(This also reminds me of a story I’ve heard several times about the development of Microsoft Excel. They have a very simple evaluator that they are certain is correct, but it’s very slow. They also have a completely separate, highly optimized and fiendishly complex evaluator. During development and testing they run it with both evaluators turned on, with the simple, slow evaluator checking the work of the fast one. Then they turn the slow evaluator off for the production build. This strategy seems to work well: I’ve certainly encountered numerous bugs in Excel, but none have involved the evaluator, and evaluation is fast.)
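In code, that checking pattern looks something like this hypothetical sketch (the interface and all the names are mine, not Microsoft’s):

```java
// Hypothetical sketch of the dual-evaluator pattern described above.
// Every name and type here is invented; none of it comes from Excel.
interface Evaluator {
    Object evaluate(String expression);
}

public class CheckedEvaluator implements Evaluator {

    private final Evaluator fast;      // complex, highly optimized
    private final Evaluator reference; // simple, slow, trusted

    public CheckedEvaluator(Evaluator fast, Evaluator reference) {
        this.fast = fast;
        this.reference = reference;
    }

    public Object evaluate(String expression) {
        Object result = fast.evaluate(expression);
        // During development and testing the reference evaluator checks
        // the fast one's work; in production the check disappears unless
        // the JVM is run with assertions enabled (-ea).
        assert result.equals(reference.evaluate(expression))
            : "optimized evaluator disagrees on: " + expression;
        return result;
    }
}
```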