(This continues thoughts from part 1.)
Brian really got me thinking about the nature of communication.
He lists several examples of how the “conduit metaphor” shows up in the way we speak about communication, and I thought of a really telling one from our field: Alistair Cockburn’s convection currents of information. As I thought about it some more, though, it led me in some interesting directions.
We tend to tackle communication issues head-on. If we recognize that our communication isn’t working for some reason, we tend to address specific failings: try to be more precise, more thorough, or more organized. Make sure things get disseminated to the right people. Hold reviews to verify understanding and point out weak areas. And so on.
Part of the strength of XP and many other agile methods is that they don’t address the communication per se; instead, they address the context in which it occurs. They strive to make communication less formal, more frequent, more concrete, more serendipitous, and (tellingly, I think) more redundant. The key is understanding that people want to communicate, and we’re good at it, if the barriers are low enough.
So, back to Alistair’s metaphor. Although “convection currents of information” is squarely in line with the conduit metaphor, it’s interesting that so much of what Alistair talks about is implicit, serendipitous communication. He talks about information radiators and other explicit channels, but the emphasis is on building a context where information simply flows, implicitly and effortlessly.
Update: Brian wrote a thoughtful response.
While I was downloading 1.4.1 yesterday, Greg Vaughn was reading the release notes and IMing me about them:
Known issues: no incremental gc :-(
What about concurrent?
It doesn’t say
That’s OK, really … incremental GC had serious problems in 1.3, and they weren’t fixed for 1.4; instead, nearly all of the GC work went into the concurrent collector, which arrived with Sun’s 1.4.1.
But I started wondering about the concurrent collector. It occurred to me that I could probably tell whether the concurrent collector was included by turning on verbose GC output and watching.
First, I ran the SwingSet2 demo with the default collector. I saw what I expected: lots of young generation collections, with full heap collections occurring just under 10% of the time. Then I tried -Xincgc, and saw the same behavior; the release notes were telling the truth about the incremental collector.
With the -Xconcgc option, I saw very different behavior. In some cases I’ve seen up to 500 young collections before any attempt is made at a full sweep. Looks like the concurrent collector is there.
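The classification I was doing by eye can be sketched as a tiny filter over the -verbose:gc output. Note that the line prefixes below are just what I observed from this particular VM, not a documented format, so treat them as an assumption:

```java
// A rough sketch of tallying young vs. full collections in -verbose:gc
// output. The line prefixes ("[GC" and "[Full GC") match what I saw from
// HotSpot 1.4.1; they are an observed convention, not a documented spec.
public class GcLogTally {

    /** True for full-heap collections, e.g. "[Full GC 13975K->13975K(58944K), ...]". */
    public static boolean isFullGc(String line) {
        return line.startsWith("[Full GC");
    }

    /** True for young-generation collections, e.g. "[GC 14014K->14001K(58108K), ...]". */
    public static boolean isYoungGc(String line) {
        return line.startsWith("[GC");
    }

    public static void main(String[] args) throws java.io.IOException {
        int young = 0, full = 0;
        java.io.BufferedReader in =
            new java.io.BufferedReader(new java.io.InputStreamReader(System.in));
        for (String line; (line = in.readLine()) != null; ) {
            if (isFullGc(line)) full++;          // check the longer prefix first
            else if (isYoungGc(line)) young++;
        }
        System.out.println(young + " young, " + full + " full collections");
    }
}
```

Piping the VM’s output through something like this (run once with the default collector, once with -Xincgc, once with -Xconcgc) makes the ratio of young to full collections easy to compare across runs.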
But there’s a race condition. On roughly 3 out of 4 runs, when it finally attempted a full GC, I saw this:
[GC 14014K->14001K(58108K), 0.0069902 secs]
[GC 13975K->13975K(58944K), 0.0093674 secs]
[Full GC[Unloading class sun.reflect.GeneratedMethodAccessor4]
[Unloading class sun.reflect.GeneratedMethodAccessor3]
#
# HotSpot Virtual Machine Error, Internal Error
# Please report this error at
# http://bugreport.apple.com/
#
# Java VM: Java HotSpot(TM) Client VM (1.4.1_01-14 mixed mode)
#
# ShouldNotReachHere()
#
# Error happened during: generation collection for allocation
#
# Error ID: /SourceCache/HotSpot14/HotSpot14-14/src/share/vm/memory/generation.hpp, 346
#
# Problematic Thread: prio=3 tid=0x0fd9ecf0 nid=0xedc8c70 waiting on condition
#
Abort trap
Brian Marick writes wonderfully (as usual) about how requirements documents and tests communicate the same thing in different ways. The interesting point is that, although it’s difficult to see how tests communicate the necessary information, for various reasons they do so more effectively than a requirements document. It shouldn’t be surprising that we have a hard time convincing some people of that.
I’ve been thinking about a related issue for a couple of years now. In The Mythical Man-Month, Fred Brooks writes about “the project notebook”: the authoritative source for information about the project. And it sounds like such a good idea. It’s easy to see how that would work. It records important information precisely and directly.
It’s also—as you begin to understand when you read Brooks’s ideas for how to manage the project notebook—incredibly unwieldy and inefficient. And while the notebook might (if you spend a lot of time and money on it) contain all of the important information, it’s highly unlikely that it would ever be completely understood, by the right people at the right times in the right ways.
It works much better to let the code—the system itself, plus executable tests—serve as the project notebook. This is why iterative development is so important. The code is functional, and it gives us feedback about our understanding. (More importantly, it gives us feedback about our misunderstanding.) If we think we know something about how the system works, we try to write code that depends on that knowledge, and the system will tell us if we were correct. With the project notebook, on the other hand, we might not learn about our error until weeks or months have passed, by which time the ripple effects from our mistaken assumption will be tremendous.
Another common objection to agile styles of development goes something like this: “You mean you don’t document or model design decisions that come out of a meeting? Haven’t you ever been to a meeting where everyone thought they had reached an agreement, only to find out that each person understood the decisions differently?” Sure I have. But the problem there, I believe, is largely that people have a hard time communicating about abstract things. Handwaving, and even boxes and lines, aren’t really very expressive about what software is. I’ve been in a lot of those discussions, and the problem is that there’s almost no feedback about whether you really understand what’s being said. I’ve been completely confused and at the same time absolutely convinced that I understood everything.
When people have those same discussions in the context of code—working code—communication changes. It may seem strange to say this about software, but code is concrete. It’s unambiguous. It grounds our discussions in reality, and it’s a terrific aid to effective communication and understanding. It provides a feedback mechanism for the act of communication. If you don’t understand what’s being said, then the proposal (more often than not) won’t make any sense when held up against the code.
It’s no wonder that, in many test-first teams, design debates are frequently carried out through unit tests. The “conduit” and “program” metaphors of communication each have their strengths, and the best policy is to let them support and reinforce each other.
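Here’s a hypothetical sketch of what stating a requirement as an executable test can look like. The domain, the class names, and the discount rule itself are all invented for illustration; the point is how precisely a test pins down the boundary case that prose usually leaves vague:

```java
// Hypothetical example: the prose requirement "orders of $100 or more get
// a 10% discount" restated as executable checks. The domain and names are
// invented; only the technique is the point.
public class DiscountPolicyTest {

    // The "system under test": a minimal discount policy, in integer cents
    // so the arithmetic stays exact.
    public static long discountedCents(long cents) {
        return cents >= 10_000 ? cents * 90 / 100 : cents;
    }

    public static void main(String[] args) {
        // Just under the threshold: full price.
        check(discountedCents(9_999) == 9_999);
        // The boundary case is stated exactly -- the detail a requirements
        // document most often leaves ambiguous.
        check(discountedCents(10_000) == 9_000);
        // Well above the threshold, the discount still applies.
        check(discountedCents(20_000) == 18_000);
        System.out.println("all requirements hold");
    }

    private static void check(boolean condition) {
        if (!condition) throw new AssertionError("requirement violated");
    }
}
```

A reader who disagrees about what the requirement means can express the disagreement directly: change one of the checks and watch it fail.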
Most of my compatriots on the No Fluff, Just Stuff tour have blogged about the first symposium of the year, so I guess it’s my turn.
I’m excited that things are rolling again. These are excellent events for the attendees, and also for the speakers. I’m always energized and enriched by spending a weekend talking to the other speakers and the audience members about software topics. (You’ll notice that both the quantity and depth of my blogging have decreased during the break since the Atlanta symposium at the end of November. I expect it to begin picking up again now.)
This weekend in Austin I gave five talks. The two older talks (Introduction to XPath and Java Web Start and JNLP: The Return of the Rich Client) were both well received, and the three new ones went much better than I expected:
- Concurrent Programming Utilities—I had a nice crowd for this one, and they were excited to learn about utility classes that can help them build better concurrent systems. More than one person said that they wished they had been to my talk a year ago, because it would have saved them a lot of grief.
- Introduction to Aspect-Oriented Programming and AspectJ—I need to work on a better demo and a few more diagrams to help illustrate some tough concepts, but folks liked the talk anyway, and I don’t think I lost anyone!
- Project Infrastructure Values, Principles, and Practices—This is my favorite talk of the bunch, and it also went really well. It ended up being about 20 minutes too short, which was unfortunate this time, but it also means I’ll have room for some demos and more in-depth information next time around.
I’ll be doing the same slate of talks at the Northern Virginia Software Symposium the last weekend in March. I can’t wait!
Doin’ the blog roundup today, I found two items that really had me shaking my head in amazement.
First was the news (via LtU and lemonodor) that Yahoo! has finally succeeded in moving Yahoo! Stores (originally Viaweb) from its Common Lisp roots to a new C++ implementation. But only sort of—in Paul Graham’s message about the switch, he reveals that the new implementation contains a new Lisp interpreter. (This is no surprise, really, since the store interface requires a runtime Lisp interpreter.)
In combination with that, what’s really puzzling is this comment: “The reason they rewrote it was entirely that the current engineers didn’t understand Lisp and were too afraid to learn it.”
Just after reading about Yahoo! Stores, I read Bill Venners’ account of some excellent and well known programmers discussing programmer interview tactics. There’s a lot of good stuff in there, including this from Dave Thomas:
Hire for talent. […] The world changes, so you need to hire folks who change with it. Look for people who know computing, not necessarily particular narrow niches. Not only will they adapt better in the future, they’re also more likely to be innovative in the present.
and from Chris Sells:
To identify how good the candidates are technically, I let them choose an area in which they feel they have expertise. I need them to know something well, and I ask them about that. […] I’m not necessarily after an expert in the area I need. If they learned why in the past, I have confidence they’ll learn why in the future.
Now let’s return to the original story about Yahoo! Stores. On LtU, Ehud Lamm quite reasonably wonders “whether maintaining a Lisp interpreter written in C++ is cheaper than sending new engineers to study Lisp.”
I’d like to emphasize the phrase “new engineers” in that sentence. If you have engineers who are afraid to go learn Lisp in order to maintain an extremely successful existing application written in Lisp (but who paradoxically think it’s fine to write their own Lisp implementation to support the customer base), then you need new engineers.