Bridges and Software

(I sent this to the NFJS speakers’ mailing list last week, and Ben Galbraith suggested I repost it here.)

Every now and then I hear someone compare software development to bridge building. (Bridge building, of course, is just a placeholder here for “real engineering,” which in the speaker’s mind is much cleaner and more manageable than the current messy state of software development.) Sometimes it’s “software development isn’t like building bridges,” while on other occasions it’s “software development should be more like building bridges.” In either case, though, the implication is clear: bridge building is predictable, rote, unexciting, very manageable work, and software development is not. The only difference is whether the speaker likes software development the way it is, or wishes it could be different.

I think both positions are misinformed. And no, I’m not about to pull out my magic prescription for how to solve all the software industry’s problems by making it more like bridge building. In my experience, software developers tend to have an idealized, unrealistic view of what “real engineering” is like. Sure, some kinds of bridges are so well understood by now that there’s very little risk involved; freeway overpasses and the like are churned out regularly and routinely (in much the same way that simple CRUD applications, whether web- or desktop-based, are usually safe bets for even inexperienced development teams). But from what I’ve learned, bridge building in general is a lot more like modern software development than most people realize. And I think the software industry can learn some lessons from the history of bridge building.

Take, for example, the bridges of Swiss engineer Robert Maillart. His best known bridge, Salginatobel, was just featured in a really nice piece about some of the best man-made structures.

Maillart was seeking new designs that would take advantage of the properties of a new material: reinforced concrete. It had been in use for some time, and builders had figured out how to work with it, but Maillart realized that reinforced concrete had unique properties that would permit the use of new forms, resulting in significant savings (due to reduced material costs).

The formal methods used by civil engineers at the time weren’t up to the challenge of analyzing these structures (known today as “hollow box arches” and “deck-stiffened arches”). Maillart verified the designs empirically, by building models, rolling barrels full of concrete over them, and so on. The civil engineering establishment of the day vilified him as a charlatan who was endangering lives and cheating his customers by building bridges that would fall down. But he got customers anyway, because his designs were much, much cheaper to build. (The fact that they were strikingly beautiful didn’t hurt.)

Another engineer of the time was Leon Moisseiff, a strong proponent of formal methods and the developer of “deflection theory,” at the time the state of the art in mathematical analysis of suspension bridges. Moisseiff designed a bridge intended to be a showpiece for the power of deflection theory. It was the Tacoma Narrows bridge. After its famous collapse, other bridges that had been designed with Moisseiff’s assistance (such as the Golden Gate) were retrofitted with stiffening trusses. It turned out that deflection theory was deeply flawed in a way that nobody had yet realized.

One of Maillart’s bridges did fall down … after being buried under an avalanche. One was demolished because more capacity was required. The rest are still in use, and the forms he pioneered are now standard designs taught to civil engineers. The math eventually caught up with Maillart’s methods. As the story I linked to above notes, Maillart is an inspiration to the current superstar of bridge design, Santiago Calatrava.

I think there are some important lessons here for the software profession. The lesson is definitely not that “real engineering” is a mechanistic, purely construction-oriented process, although that’s usually what’s assumed when software is compared to bridges.

Note: I have at best an interested layman’s knowledge of the history of bridge engineering. Sources include Henry Petroski’s wonderful Engineers of Dreams: Great Bridge Builders and the Spanning of America for information about Moisseiff, and David P. Billington’s article “The Revolutionary Bridges of Robert Maillart” (from the July 2000 edition of Scientific American). For what I believe to be the best description of the true relationship between software development and other engineering disciplines, I encourage you to read “What is Software Design?”, Jack Reeves’ brilliant essay.

Ajaxian on Tamarin

My friends at Ajaxian invited me along as a podcast guest again, and I’m so pleased with the result. Audible Ajax, Episode 20 focuses on Project Tamarin, the high-performance JavaScript VM donated by Adobe to the Mozilla project.

This episode is a bit of a change for the Ajaxians; rather than being an interview with one person or team, it’s a more formally produced analysis piece, with comments from several JavaScript and Ajax experts, only some of whom are really associated with the project. I lead off with a discussion about why current implementations of JavaScript are rather slow, but then Brendan Eich and Kevin Lynch (from Mozilla.org and Adobe, respectively) talk about the project, along with Alex Russell and representatives from the IE team and Zimbra.

Ben and Dion are planning to use a similar format for upcoming episodes, and I’m thrilled; I think it works really well, providing a continuity and depth of analysis that you just don’t get from a single interview. Bravo, guys!

Every American Should See This Video

This video from the Center for Information Technology Policy at Princeton University is a crystal-clear demonstration of why purely electronic voting machines are a terrible idea for our country. The video deals with a particular kind of machine, made by Diebold. But the strong likelihood is that any electronic, software-controlled voting machine without a voter-verifiable paper trail is vulnerable to at least some of the kinds of attacks shown in the video.

I’m not the least surprised by this, and I don’t know many serious programmers who will be. Programmers and computer scientists have been raising a stink about electronic voting machines for several years, but it’s been difficult to explain to non-programmers the full extent of the danger. It’s nice to have a video that shows the complete cycle: how the machine can be subverted, how it can steal votes, and how the rogue software can cover its tracks. (The one thing about the video that did surprise me, by the way, was how quickly and easily the physical act of subverting the machine can be accomplished.)

Beyond this one example, though, are more dangers. I don’t believe that any such machine (any machine without a voter-verifiable paper trail) could be sufficiently secure for the purpose, even in principle. And that’s not just a hunch. I have good reasons for believing that it’s not possible to make such a machine secure enough to be entrusted with our votes.

The paper that’s available on the page with the video describes in detail the research that was performed, and the findings. It unavoidably contains some technical jargon, from the fields of software and security. Overall, though, it’s quite accessible, and I don’t think you need to be either a computer or a security expert to understand the issues. There’s also an executive summary that hits all the highlights.

Two years ago, when I went to vote, I was not amused to find myself having to vote on one of these very machines. In light of that, though, I most definitely was amused by the “My Vote Counted” sticker I was given as I left, and I felt compelled to augment the sticker’s message. I’ll probably have to vote on the same machines again in a couple months. But I hope we’ll turn toward more secure, reliable equipment for future elections.

Ruby and Strongtalk (or, What He Said)

Because I was pressed for time yesterday, I ended my blog on Ruby VMs with a little teaser about other possibilities: “And there is still room for serious creativity there. I’ll write more about that soon.” About two hours later, Avi Bryant posted essentially the same thing I was going to say.

Avi has blogged before about the idea of implementing Ruby on an existing, fast Smalltalk VM (the object models of the two languages are very, very close; the biggest hurdle would be Ruby’s richer method argument handling).
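To make that hurdle concrete, here’s a small sketch of argument features Ruby supports in a single method definition, where a Smalltalk selector has a fixed arity baked into its name (the `greet` method is purely illustrative):

```ruby
# Default values, a splat parameter, and an explicit block parameter,
# all in one signature -- the "richer method argument handling" that
# a Ruby-on-Smalltalk-VM implementation would need to map somehow.
def greet(name, greeting = "Hello", *rest, &block)
  line = "#{greeting}, #{name}!"
  line += " (#{rest.join(', ')})" unless rest.empty?
  block ? block.call(line) : line
end

greet("Avi")                  # => "Hello, Avi!"
greet("Avi", "Hi")            # => "Hi, Avi!"
greet("Avi", "Hi", "a", "b")  # => "Hi, Avi! (a, b)"
greet("Avi") { |s| s.upcase } # => "HELLO, AVI!"
```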

But, as Avi points out, the open-source availability of Strongtalk, including the VM implementation, is a big development. Although it’s now ten-year-old technology, Strongtalk nevertheless represents the state of the art in dynamic language implementation. Strongtalk’s basic principles of operation have been widely known for years (although apparently not by Joel), but actual implementations of those ideas have all been in proprietary products. (The Hotspot source is available, but not as widely as a true open-source product.) For OSS developers who want to learn, the closest they could get to a cutting-edge dynamic language implementation has been Self. But although the techniques in Strongtalk originated in Self, the Strongtalk team took them a lot further.

Sun’s HotSpot VM for Java already incorporates these techniques, so JRuby is already on track to take advantage of them. I don’t know that much about the CLR, but it wouldn’t surprise me to learn that it uses similar ideas, which bodes well for IronPython and an eventual Ruby implementation for the CLR. But there’s still a performance limitation imposed by the mismatch between object models, and the unavoidable mapping layer that implements one atop the other.
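The techniques in question, such as inline caching, can be sketched in miniature. Here’s a hypothetical toy version written in Ruby itself purely for illustration (the `CallSite` class is my own invention; real VMs implement this in generated machine code, not in the language being optimized):

```ruby
# A toy illustration of inline caching, one of the Self/Strongtalk
# lineage of techniques mentioned above. Each call site remembers
# the method it resolved for each receiver class, so repeated calls
# with the same class skip the (comparatively slow) method lookup.
class CallSite
  def initialize(name)
    @name = name
    @cache = {} # receiver class => resolved UnboundMethod
  end

  # Dispatch through the cache: a hit reuses the cached method;
  # a miss performs the lookup and caches the result.
  def call(receiver, *args)
    meth = @cache[receiver.class] ||= receiver.class.instance_method(@name)
    meth.bind(receiver).call(*args)
  end
end

site = CallSite.new(:to_s)
site.call(42)   # miss: looks up Integer#to_s and caches it
site.call(7)    # hit: same receiver class, no lookup needed
site.call("hi") # miss for String; cached separately (polymorphic)
```

In a real VM the cached entry is patched directly into the compiled call instruction, and the class check is a single compare, which is where the dramatic speedups come from.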

Mr. Malsky, I predict that in three years, Ruby will have performance rivaling Strongtalk’s, whether by someone adapting Strongtalk itself to run Ruby, or by mining it for techniques that can be rolled into YARV or some other VM project.

(Well, maybe three years is a bit optimistic. But I can hope.)

Ruby VMs

Last year at FOSCON (and the next day at OSCON), _why showed the first animated installments of his “Least Surprised” cartoons, to the delight of all present. In one of them, Time.now.is_a? MagicTime, Malsky asked his audience to imagine what Ruby will be like in three years. The first suggestion? “Maybe we’ll have our own virtual machine by then.” Malsky was appalled. “No, no! Come on, guys! Three years? Ruby will have ten virtual machines built inside every metaclass by then. Be creative!”

For several of us sitting in the back, “ten virtual machines built inside every metaclass” was one of the biggest laugh lines of the evening.

Of course, it’s still absurd, but maybe only by two or three orders of magnitude instead of four. I’m amazed at what’s happening in the world of Ruby and high-performance virtual machines. I’m personally aware of seven (!) projects to either build a Ruby VM or implement Ruby on an existing VM:

  • Of course, there’s YARV.
  • JRuby, which has been around for a long time as a Ruby interpreter, is starting to become a true bytecode compiler in the Jython style.
  • There are no fewer than three projects to implement Ruby on the .NET CLR, building on the lessons of IronPython.
  • The Cardinal project, to implement Ruby on Parrot, has been restarted by Kevin Tew. (And Parrot itself is making serious progress again, after some difficulties.)
  • Finally, there’s a project underway to implement Ruby on Nicolas Cannasse’s Neko VM.

Naturally, there’ll be some winnowing of these options over time. But it seems clear that the Ruby community will end up with at least three solid VM options: YARV, JRuby, and some variety of Ruby on the CLR. The core Ruby developers are strongly committed to YARV. The CLR version is too important not to do (and to my mind, last week’s announcement of IronPython 1.0, released as an open-source project, makes a mature Ruby implementation on the CLR even more likely). And of course, Sun has now hired the two main JRuby developers, throwing at least some of its weight behind that project.

Come on, guys! Three years? Ruby will have three virtual machines that’ll run in every kind of IT environment by then. Be creative!

(And there is still room for serious creativity there. I’ll write more about that soon.)
