Sun’s finally ready to support a free and open community of Java developers in a real way (or so it seems). Check out java.net.
“Oh. That’s not your baby … that’s the Stevie Ray Vaughan boxed set.”
One more thing about the typing debate in today’s expert panel. Dave Thomas said, at one point, “Java and C++ have equated an object’s type with its class, and that’s wrong. The type of an object isn’t its class; it’s what the object can do.”
I agree with that, and I think it’ll be interesting to revive a four-year-old piece I wrote as part of a WikiWikiWeb debate on the merits of multiple inheritance.
I think multiple inheritance is relatively unimportant and rarely useful, and I’m happy to be working in a language (Java) that does not support it. In part, I’ve come to believe that inheritance is given far too much importance by most OO languages, designs, developers, and pundits.
My thoughts about inheritance have been evolving since I learned Java. The revelations I’ve had may not seem like much to a Smalltalk programmer, but they represent a complete shift in my thinking.
Nearly every introduction to OO concepts I’ve ever read or seen has dealt with inheritance very early, and then moved on to a cursory discussion of “polymorphism” as a sort of nice side-effect of inheritance. Partly as a result of this (and partly because of the underlying misunderstanding), the word “inheritance” is usually used to refer to some combination of behavior inheritance and subtyping.
Java is the first language I have used that mostly separates those two concepts. Extending or implementing an interface represents a subtyping relationship, whereas extending a class represents the more traditional combination of subtyping and behavior inheritance. The process of using and designing with interfaces has brought subtyping out of the shadows and into the foreground of my thinking.
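To make that distinction concrete, here’s a minimal sketch (the Stack names are invented for illustration, not taken from any library): implementing the interface establishes only a subtype relationship, while extending the class establishes the subtype relationship and inherits behavior.

```java
// A pure subtyping relationship: the interface promises push/pop
// but carries no behavior for a subtype to inherit.
interface Stack<T> {
    void push(T item);
    T pop();
}

// Implementing the interface makes ArrayStack a subtype of Stack,
// and nothing more; all behavior is its own.
class ArrayStack<T> implements Stack<T> {
    private final java.util.ArrayList<T> items = new java.util.ArrayList<>();
    public void push(T item) { items.add(item); }
    public T pop() { return items.remove(items.size() - 1); }
}

// Extending a class gives the traditional combination: CountingStack
// is a subtype of ArrayStack AND inherits its storage behavior.
class CountingStack<T> extends ArrayStack<T> {
    private int pushes = 0;
    public void push(T item) { pushes++; super.push(item); }
    public int totalPushes() { return pushes; }
}
```

A caller that declares `Stack<String> s` cares only about the subtype relationship; whether the behavior behind it was written by hand or inherited is invisible at the call site.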
I’ve begun teaching the concepts of inheritance and polymorphism the other way ‘round: polymorphism and interfaces come first, as the fundamental issue, and behavior inheritance comes later, as a nice facility (more, but not much more, than a convenience) when two subtypes share significant portions of their behavior or implementation. It seems to work well. Many of the common inappropriate uses of inheritance never occur to programmers who learn it this way. I’ve found that the method also helps in the explanation of abstract classes and when to use them.
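Here’s roughly how that teaching order looks in code (the Shape names are my own, invented for the example): the interface comes first and defines the type, and the abstract class appears only afterward, as a convenience for the behavior the two subtypes share.

```java
// Step one: the type is what a shape can do.
interface Shape {
    double area();
    String describe();
}

// Step two, later: the subtypes share describe(), so behavior
// inheritance enters as a convenience via an abstract class.
abstract class AbstractShape implements Shape {
    public String describe() {
        return getClass().getSimpleName() + " with area " + area();
    }
}

class Square extends AbstractShape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Circle extends AbstractShape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}
```

Taught this way, the abstract class answers an obvious question (“where does the shared code go?”) rather than being the starting point of the design.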
Consider a language that makes the separation even more clear: subtyping uses one mechanism, and behavior inheritance can be specified without also producing a subtype relationship. (I’m speaking hypothetically, but I won’t be surprised to learn that such a language actually exists.) Behavior inheritance becomes a kind of specialized composition or delegation facility. Some of the traditional objections and implementation complexities of MI disappear.
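Java can at least approximate that hypothetical separation with composition. In this sketch (the class names are invented), PoliteServer reuses Greeter’s behavior by delegation without becoming a subtype of Greeter:

```java
// The reusable behavior lives in one class...
class Greeter {
    String greeting(String name) { return "Hello, " + name + "!"; }
}

// ...and another class acquires that behavior by holding a delegate,
// without any subtype relationship: a PoliteServer is not a Greeter.
class PoliteServer {
    private final Greeter greeter = new Greeter(); // composition, not extends
    String welcome(String name) {
        return greeter.greeting(name) + " Welcome aboard.";
    }
}
```

The hypothetical language would make this pattern a first-class construct instead of hand-written boilerplate; the point is that behavior reuse and subtyping are separable even in Java, just not conveniently.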
My conclusion, then, is that the strongly-typed OO community may have been going down a side path for most of its history, conflating two concepts that are actually separate. Perhaps, once we’ve corrected that misdirection, the time will come for MI to move back to center stage. For now, though, I’m pleased that Java avoids MI, if only because having to do without it has helped me to understand the fundamental issues more clearly. (From talking to colleagues, I believe it is having a similar effect on others, too.)
Almost immediately after posting that, I realized that Smalltalk and its ilk (including, for example, Ruby) have essentially the characteristic I was talking about, where subtyping and inheritance are separate concepts. With those languages in particular, that distinction is present because there is no subtyping at all … the notion really doesn’t exist in those languages, because typing as Java and C++ folks think of it doesn’t exist.
The other assumption Ted Neward said he was questioning these days (at least, the other one I want to comment on) is that strongly typed languages are likely to be more efficient, because they give the compiler or VM more information that can be used to optimize things.
That may well be true. But I think the gap between the two is surely narrowing, to the point where (combined with increases in hardware speed) it’s usually a non-issue.
Untyped languages can be really fast. Check out all the cool Smalltalk things in Alan Kay’s keynote at ETech. I’ve been trying to find the time to start serious work on an app I want to write. As part of trying to decide what language to write it in, I’ve written essentially the same program in ObC/Cocoa, Java, and RubyCocoa. The ObC version is fastest, but not by much, and the Java and RubyCocoa versions perform almost identically.
It’s also interesting to reflect that the technology in Hotspot that makes modern Java pretty fast originated in attempts to make Self run fast, and Self is as dynamically typed as a language can be.
The history of programming language optimization is mostly the same story repeating. Languages include features and constructs designed to help the runtime system be efficient. And then implementation technology advances, and those same things that once helped the compiler generate fast code are suddenly inhibiting that process. I wonder whether the same thing will happen with static typing?
It’s expert panel time at the Rocky Mountain Software Symposium, and one of the first questions was “What do you guys think about the whole static/dynamic typing debate?” (I suspect he was a plant, because before the panel started the panelists had decided that they wanted to talk about that issue if they could.)
Ted Neward said a couple of interesting things. I don’t want to misrepresent him, because he ultimately said that he is now questioning all of the assumptions that he’s always believed about static typing. But I was interested in what he said those assumptions were, and I guess the purpose of this blog entry is to question those assumptions on Ted’s behalf.
One of the assumptions was that static typing really helps security analysis on platforms (like Java) with mobile (and therefore possibly untrustworthy) code. The VM (or interpreter, or what have you) is able to use the type safety to help enforce the security model.
When Java first hit the streets, mobile code was a hot topic. General Magic was promoting their Magic Cap environment, featuring mobile code (“agents”) heavily, and powered by a language called Telescript. Nathaniel Borenstein was researching “active mail”, sending active invitations and the like via email using a dialect of Tcl called Safe-Tcl. Someone (I can’t remember who at the moment) was developing roughly equivalent functionality in Perl (the Safe.pm module). Luca Cardelli at DEC was developing a beautiful and novel little language called Obliq.
All of those languages supported secure mobile code, and all of them used very different security models. My memory of Telescript is fuzzy, but of the rest, I know for a fact that Java is the only one that is statically typed. And I remember from my evaluation at the time that Java and Telescript had the two most complex security models (and complexity is not a good thing in a security model).
Static typing is one of the tools you can use to build a security model, but there are many others.