(Originally published on Mike Clark’s pragmaticautomation.com)

In 1999 I was consulting for a company that builds telecom equipment. They were developing a new variety of … well, let’s just call it a “local area telecom network.” It included multiple nodes for traffic switching, call control and setup, and administration. It was to be a full-fledged product, but (as so often happens) there was an artificially imposed deadline: they wanted to demonstrate the product at a trade show in just a few months. Those kinds of things can be death to a project, but this company was attacking it rationally—for the demo, they were quickly pulling together “version 1” using existing internal simulation, testing, and prototyping tools, while a different team was working on building “version 1.5” from scratch, and there was a reasonable amount of communication and feedback between the two teams.

I was working on the Java-based operations and management system for the version 1 team, and was asked to design and build a system that could start and configure the various separate processes on a node, hooking them together so that they would be ready to join the network and start doing work. The testing group had an existing tool that did a similar job, and although it wasn’t suitable for use even in the demo product, the team asked that I keep the command structure of that testing tool. I don’t remember the details, but that tool could respond to three or four very simple commands: “connect”, “send”, “wait”, for example.

At that time I was a fan of the Tcl language, which was designed to be a neutral, extensible, embedded command language for applications. Tcl’s inventor, John Ousterhout, designed the language after noticing a pattern in application development: nearly every application needs a command language of some sort, but the developers are focused on the application itself, so they cut corners with the command language, thinking they can get by with less. The command languages end up having “insufficient power and clumsy syntax,” in Ousterhout’s words … and then, contrary to the developers’ expectations, users find that they need to use that command language to do complex things.

I saw this pattern developing in our telecom system. The command language I was asked to develop was almost hopelessly simple, but the people asking for it were adamant that it was all they needed. And yet I noticed that the command syntax was completely compatible with Tcl syntax. Plus, an implementation of Tcl atop Java, called Jacl, was available for me to use. Because Jacl, like Tcl, was designed to be extended with application-specific commands (and had a very nice API for doing so) I made the case that it would be no harder — probably easier, in fact — for me to implement the desired commands as Jacl extensions than to write a custom parser. Then they would have a real programming language at their disposal “in the unlikely event that they ever needed it.” My clients reluctantly agreed to this plan, but only because it would not involve extra cost. They were sure they knew their needs.

Implementing the commands as Jacl extensions was easy, and the payoff was huge. During preparations for the first deployments (for the trade show demo and a couple of early customers) the team wrote hundreds of lines of Tcl scripts to automate the setup and configuration of the different pieces of software that made up the network. If that power hadn’t been available to them, they would’ve had to make requests to the development team for added features in the system, which would’ve been a huge bottleneck. It was generally acknowledged that the use of Tcl for the configuration language was one of several decisions that helped the deployment teams meet the project deadlines. It also made the system more robust: the configuration scripts were originally intended just for starting the system, but they turned out to be very useful for dynamic error detection and recovery as well. Tcl has fallen out of favor (for reasons both good and bad) and today I’d probably use Ruby or Python. But that’s an implementation detail.
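The same pattern is easy to sketch without Jacl. Here’s a minimal illustration in Python (not the actual Jacl API, and the command bodies are placeholders I invented): the application registers its few domain commands inside a full language’s interpreter, and scripts immediately get loops and conditionals for free.

```python
# Sketch of the "embed a real language" pattern described above.
# The command names ("connect", "send", "wait") come from the article;
# their implementations here are hypothetical stand-ins that just log.

log = []

def connect(node, port):
    """Hypothetical application command: attach to a node."""
    log.append(f"connect {node}:{port}")

def send(node, message):
    """Hypothetical application command: deliver a message to a node."""
    log.append(f"send {node} {message}")

def wait(seconds):
    """Hypothetical application command: pause the script."""
    log.append(f"wait {seconds}")

def run_script(source):
    # Expose only the application's commands to the script, but let the
    # host language supply control flow -- the payoff described above.
    env = {"connect": connect, "send": send, "wait": wait}
    exec(source, env)

run_script("""
for node in ("switch-a", "switch-b"):
    connect(node, 5000)
    send(node, "init")
wait(2)
""")
```

The deployment-team scripts described above were exactly this kind of thing: what started as a flat list of “connect/send/wait” lines could grow loops over node lists without any change to the application.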

I’m a big believer in the YAGNI principle: if you don’t have a real, demonstrated need for some architectural or design feature today, assume that “you ain’t gonna need it” and keep your design simple and focused on the needs of the day. But YAGNI is a principle, not a law, and there are some places where I’ve learned that it doesn’t apply. This is one: if you know you need a scripting, configuration, or extension language — something that allows your system to be automated — you can safely assume that you need a complete programming language in that role. YIGNI: you is gonna need it.

Six of One, a Half Dozen of the Other

Chad Fowler (via del.icio.us) pointed me to a delightful post on the ll1-discuss mailing list. The discussion had turned to closures and objects, and which could be considered “richer” or “more powerful” or “more fundamental.” Anton van Straaten chimed in with a post that I think summarizes the issue perfectly. It’s worthwhile to go read the whole thing, but the core of it is this:

Given this tension between opposites, I maintain that the question of closures vs. objects should really be a koan. […] Here goes:

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing—is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil: objects are merely a poor man’s closures.”

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire “Lambda: The Ultimate …” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.

Why did I enjoy that so much? If you read this blog, you know that I’m fond of dynamically typed languages. I frequently encounter blogs, articles, or mailing list postings from people who believe dynamic typing is “the one true way” (I’ve been guilty of such things myself, in fact). And then I turn around and see people claiming that strong static typing has undeniable benefits, and it’s just too hard to build reliable software in dynamically typed languages (ignoring the numerous counterexamples), etc., etc. And the thing is … I don’t think either side is entirely right or entirely wrong.

There are, I believe, many things in our field that exhibit that same kind of duality—what Anton called “tension between opposites.” Here are just a few:

  • Functional vs. OO (Anton’s koan is just one example of this).
  • Compiled vs. interpreted languages.
  • File-based source code vs. live images.
  • Relational vs. OO databases.
  • Tests as design tools vs. tests as quality tools.
  • Browser-based apps vs. rich clients.
  • Code as data vs. rich syntax.

You could write a koan about each of those.

(I’ll note in passing that although the Zen koan is a delightful form for capturing and expressing such tensions, Zen doesn’t have a monopoly on such ideas. For just one example, compare Galatians and James. They seem at first glance to be in contradiction, and yet the core of each can be found in the other.)

I titled this blog entry “Six of One, a Half-Dozen of the Other.” For most of these things, I don’t think it’s really an even split … but the alternatives are weighted differently, and how you value them depends on your preferences, needs, fears, and experiences. To a great degree, I think it comes down to our predisposition toward another set of opposed concepts: directing vs. enabling.

As much as I enjoy arguing my side of these things with all the force I can muster, studying the history of our field has led me to conclude that there are inherent trade-offs in every one of the choices listed above. Neither side solves every problem; rather, each side has some strengths and some weaknesses relative to the other. While it may be fun to take sides (and I’m sure I’ll continue to do so), at some point the most productive course is to try to really understand the relative merits.

Once we do that, it will become clear that the choice boils down to what we value most … and thinking about those values would be much more fruitful.

(If you have other examples of such dualities in software development, I’d love to hear about them. Please let me know.)

I Think Not

I’m sitting in on a “webinar” today (how I loathe that word!) hosted by The Conference Depot using some software called On-Site Pro. They recommend that you test your browser compatibility, etc. before the meeting starts, which I did. I was greeted by this:

You are running the Mac OS X Operating System. On-Site Pro no longer officially supports this Operating System.

The application may work for you with limitations. However, for a better meeting experience, please upgrade to a Windows 98/2000/NT4/XP operating system.

Even if I don’t like it, I can understand them not supporting OS X. (Although I suspect that “no longer” is stretching the truth … I doubt they ever did support it.) But calling a switch to Windows an “upgrade” is going too far. (And never mind that this “upgrade” would require buying new hardware, which they fail to mention.)

I have an XP box here at work that I could use, but I want to see how serious the limitations actually are. More later.

Update: The limitations are pretty serious: the software didn’t work at all. Good thing I had a Windows box handy … but here’s to the day when I won’t have to.

I hate to say I told you so …

… but I have to, ’cause it feels so good. ;-)

I told you so!

(And heartfelt congratulations, as well.)

Near-Time Flow

Here’s another cat on the loose, leaving an empty bag behind.

For several months now, my friend Stuart Halloway has been talking to me about his new job, and the project he’s been working on. I noticed a couple of weeks ago that the website is finally showing some details; the company is out of stealth mode, and I can finally start talking about it.

Stuart’s job is with a company called Near-Time, and the product is called Flow. It is, quite frankly, the collaboration tool I’ve wanted for a long time. I saw an early version in October, and even in a rough state it was breathtaking. Flow includes the best of Wikis, blogs, browsers, bookmark managers, outliners, and email clients, all in one program. Flow gains a lot of power from having all of those things integrated into one interface (and it doesn’t hurt that it’s a good interface).

Although Flow is useful for individuals, it’s designed for collaboration. It seems to me to be an ideal tool for collaborative research, planning, and development work of various kinds, especially (but not only) if you can’t be face-to-face. Best of all, Flow is a collaboration tool that doesn’t require constant connectivity. The assumption of intermittent connectivity is baked into Flow and the protocols it uses for information sharing.

Near-Time is preparing for an early-access release of Flow in the coming weeks. If you and some others in your group use Macs, I urge you to register and try it out. I think you’ll be impressed.
