Still a few bugs in the system …

(via my O’Reilly blog)

I’ve been amazed at how well Google’s news service works. But no matter how good the technology is, occasional mistakes are inevitable when computer programs try to compile and correlate news headlines from lots of different sources. Today, some of the seams really showed.

In the “Top Stories” section at the top of the page, there’s always a subsection called “In the News” that contains a short list of topical links: topics that seem to be getting a lot of coverage, but haven’t made it to the prestige positions that include headlines and pictures. The links in “In the News” aren’t headlines. Instead, they’re topic keywords that have been extracted from the headlines by the Google software. Things like “Tel Aviv,” “Harry Potter,” and “NATO Summit.”

Today I noticed three topics in particular: “Our Man Flint,” “Magnificent Seven,” and “Academy Award.” It’s clear what’s going on there: news outlets are writing about the death of James Coburn, and Google is picking up on references to his achievements and most famous films in the headlines. But when I clicked on the topics, things got even more interesting.

The page for Our Man Flint was 10-for-10. All the stories were about Coburn. Academy Award was somewhat mixed, since there are other Oscar winners in the news this week.

But I was really surprised when I clicked the link for Magnificent Seven. Just looking at the first ten hits, I learned:

I don’t mean to take anything away from what Google has achieved. All things considered, it works amazingly well. And quite frankly, occasional strange juxtapositions like this can be good: they add an element of the serendipity that’s present in a real newspaper, where you can occasionally run across a fascinating article that you never would have looked for.

Think about it. I’ll probably watch at least one of those Coburn movies on TCM Sunday night. The story about the horses was interesting, and I was surprised to learn that five years have gone by since the McCaughey septuplets were born. And it’s interesting that the producer of the film and one of its stars died during the same week.

I learned one more thing, too. All of those stories included the words “Magnificent Seven”—most of them in the headline. The name of the film has entered our language. That, in itself, says something about the legacies of James Coburn and Marvin Mirisch.

Agility for students, revisited

Since writing about teaching agility to computer science students, I’ve had a few more ideas, plus excellent suggestions from Patrick Linskey and Michael McCracken.

Patrick’s suggestion (which I can’t believe I didn’t think of myself) is to teach them more about the history of our field. We don’t have to make computer science students scholars of computer history, but we should give them some idea of the rich scope of the topic. Introductory computer courses tend to cover Babbage, Turing, and von Neumann, and then skip straight to Bill Gates. And for most CS students, that’s the end of it.

Michael’s suggestion is this:

… professors often understand the distinction between teaching skills for their own sake and teaching them as applications of more general principles. It’s important to convey that to the students as well. Giving an idea of trends and perspective helps breed the kind of healthy skepticism about tools and paradigms that Glenn mentions.

That’s absolutely right. All too frequently I meet young programmers who are confused about that: they think the particular techniques and tools they were taught are the fundamental things.

Here are the other things I’ve thought of:

  • Computer science students should learn, even as undergraduates, to read technical papers, and how to find them. (I have suggestions for anyone who’s looking for seminal papers that are well written and approachable.)
  • Along the same lines as Patrick’s suggestion: teach them about the great personalities of the field. Knuth, Hoare, Dijkstra, Engelbart, Kay, Moore, Cray, Cocke, Hopper, Dahl, Thompson, Ritchie, Kernighan, McIlroy, Bentley, McCarthy, Minsky, Wirth, Steele, and many, many others are important not just for their technical contributions, but also for the color and vibrance they’ve brought to our field and its history. Learning about them and their relationships helps bring our past and present to life.
  • Help students to understand that there are communities of developers that they will be joining, and that those communities are where much of the advancement comes from. User groups, mailing lists, wikis, and blogspace are just some examples. Show students how they can be contributors, not just users.
  • Somewhere, years ago, I heard that one of the characteristics of “the professions” (medicine, law, architecture, engineering, etc.) was that professionals were expected to keep their education current. I have some friends who are doctors, and their homes and offices are always full of current medical journals. I often hear people complain that as a field we are not more “professional,” and yet few programmers take that responsibility seriously. It’s amazing how many programmers with years of experience have never read a book about programming (as opposed to just an API reference or tutorial) since they left school. That ethic of continued self-education should be taught to CS students, if they are to call themselves professionals.
  • Students need to leave school with some knowledge of ideas that are out of the mainstream. ZUIs, magic lenses, pie menus. Plan 9, BeOS. AOP, intentional programming, multiparadigm programming. Mob software. And many, many more.

I think I’ve worked there!

Saturday at the Atlanta Java Software Symposium, I overheard this as I walked past a small group talking in the hall:

“Our architecture is Big Ball of Mud, and we use Bitter Java as our coding standard.”

But the guy talking didn’t look familiar. He must’ve started working there after I left. :-)

Further evaluation of Objective-C

I am learning to be very fond of Cocoa, and Objective-C has been fun to program in, at least for the small things I’ve done so far. But I’m still trying to decide how to develop the personal project I’ve written about in previous entries, and for that purpose I’m noticing some weaknesses that Objective-C has, at least as compared to Java. (It certainly has strengths relative to Java, too. But I already knew about those. I’m just realizing the weaknesses now.)

The large amount of metadata that is stored in Java bytecode files is a huge advantage for Java. One of the things that annoys me about Objective-C is the fact that a class definition is split between a header (interface) file and an implementation file. My assumption is that header files are necessary because the object files don’t have sufficient metadata; you need the header in order to compile code that uses the class. But this solution is less than ideal. It forces a lot of duplication of information (method signatures, for example). And one of the terrific things about Java is that you can use any third-party libraries you have access to, even if you only have the object code. (Documentation is helpful, of course, but frequently not necessary, and the class files contain all the information that the compiler needs.)
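As an illustration of that last point, here is a small sketch (using `java.lang.String` only as a convenient example class) of how a Java tool can recover complete method signatures from nothing but object code, via reflection:

```java
import java.lang.reflect.Method;

public class Introspect {
    public static void main(String[] args) throws Exception {
        // Load a class for which we have only compiled object code —
        // no source file and no header — and recover its signatures.
        Class<?> c = Class.forName("java.lang.String");
        for (Method m : c.getDeclaredMethods()) {
            if (m.getName().equals("valueOf")) {
                // Prints the full signature: modifiers, return type, parameters.
                System.out.println(m);
            }
        }
    }
}
```

Everything a compiler (or an IDE) needs to call into a third-party library is sitting right there in the .class file; there is nothing comparable in an Objective-C .o file, which is why the header has to travel with it.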

Memory management in Objective-C is much better and easier than in C or C++, and it does have a few advantages over Java. But overall, I’ve concluded that the Java mechanism is much better for application robustness. It’s certainly easier to program with.

Finally: Objective-C, by including all of C as a subset, suffers from C’s lack of rigor and discipline at the level of the type system. That is, the Objective-C type system can always be subverted by C code. Why is this important? Apart from the obvious safety and robustness issues, there’s another point that I’ve just become aware of. I’ve just been playing with IntelliJ IDEA as a development environment, and I’ve been very impressed by its thorough and easy refactoring support. Not only does nothing like that exist (so far as I know) for Objective-C, I believe it would be very difficult, if not impossible, to do a similarly thorough job for Objective-C.

I may well be wrong, and I’d be delighted to be wrong. But for the moment, I’m leaning strongly toward buying an IDEA license and developing my application in Java.

Cocoa preferences

One of the nicest things about Cocoa is the user preferences architecture. It provides a clean way to store explicit user preferences, as well as implicit things like window placement. And what I discovered today is that Cocoa has hooks that make it easy. For example, to make a window remember its last position, you just have to add one line to the awakeFromNib method:

[self setWindowFrameAutosaveName: @"LinkWindow"];

It’s so simple that there’s no reason not to do it. It took just a couple of minutes to add that to Blapp.
