(via my java.net blog)
I remember being at JavaOne in 1999 (I think) when I first heard the terms “J2SE”, “J2EE”, and “J2ME”. I understood the reasoning for such a move, but at the same time I hoped they wouldn’t go too far with the distinction.
It was both amusing and refreshing to hear Jonathan Schwartz acknowledge in this morning’s keynote that Sun has been guilty of pushing multiple, separate platforms rather than emphasizing Java as a single platform. He promised that they would do better.
Of course, they aren’t in a full retreat from the multiple editions, and such a retreat wouldn’t make sense anyway. There are real distinctions between those environments, and the facilities available on them need to reflect that. But I do hope they spend more time focusing on what all the editions have in common.
Unfortunately, for those of us who like to stay informed about what’s coming in future releases, it’s necessary to pick an edition. This afternoon at 3:30, the “Overview and Roadmap” sessions for J2SE and J2EE are scheduled opposite one another.
Sun finally seems ready to support a free and open community of Java developers in a real way. Check out java.net.
“Oh. That’s not your baby … that’s the Stevie Ray Vaughan boxed set.”
One more thing about the typing debate in today’s expert panel. Dave Thomas said, at one point, “Java and C++ have equated an object’s type with its class, and that’s wrong. The type of an object isn’t its class; it’s what the object can do.”
I agree with that, and I think it’ll be interesting to revive a four-year-old piece I wrote as part of a WikiWikiWeb debate on the merits of multiple inheritance.
I think multiple inheritance is relatively unimportant and rarely useful, and I’m happy to be working in a language (Java) that does not support it. In part, I’ve come to believe that inheritance is given far too much importance by most OO languages, designs, developers, and pundits.
My thoughts about inheritance have been evolving since I learned Java. The revelations I’ve had may not seem like much to a Smalltalk programmer, but they represent a complete shift in my thinking.
Nearly every introduction to OO concepts I’ve ever read or seen has dealt with inheritance very early, and then moved on to a cursory discussion of “polymorphism” as a sort of nice side-effect of inheritance. Partly as a result of this (and partly because of the underlying misunderstanding) the word “inheritance” is usually used to refer to some combination of behavior inheritance and subtyping.
Java is the first language I have used that mostly separates those two concepts. Extending or implementing an interface represents a subtyping relationship, whereas extending a class represents the more traditional combination of subtyping and behavior inheritance. The process of using and designing with interfaces has brought subtyping out of the shadows and into the foreground of my thinking.
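That distinction can be sketched in a few lines of Java. (All of the names here are invented purely for illustration.) Implementing an interface creates only a subtyping relationship; extending a class creates a subtype that also inherits behavior:

```java
// Implementing an interface: Circle becomes a Shape (subtyping only).
// It promises the Shape operations but inherits no behavior.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

// Extending a class: Square is a Rectangle (subtyping) AND reuses
// Rectangle's area() implementation (behavior inheritance).
class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }
    public double area() { return width * height; }
}

class Square extends Rectangle {
    Square(double side) { super(side, side); }  // inherits area() unchanged
}

public class Demo {
    public static void main(String[] args) {
        // Polymorphism depends only on the subtype relationships above,
        // not on whether any behavior was inherited.
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());
        }
    }
}
```

Circle and Square are interchangeable wherever a Shape is expected, even though only one of them inherited any code.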
I’ve begun teaching the concepts of inheritance and polymorphism the other way ’round: polymorphism and interfaces come first, as the fundamental issue, and behavior inheritance comes later, as a nice facility (more, but not much more, than a convenience) when two subtypes share significant portions of their behavior or implementation. It seems to work well. Many of the common inappropriate uses of inheritance never occur to programmers who learn it this way. I’ve found that the method also helps in the explanation of abstract classes and when to use them.
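In that interfaces-first ordering, an abstract class falls out naturally as “a named subtype plus partial shared behavior.” A minimal sketch, with invented names:

```java
// An abstract class as "type plus partial behavior": subtypes share
// the render() logic but must each supply their own title().
abstract class Report {
    // Shared behavior, inherited by every subtype.
    String render() { return "== " + title() + " =="; }

    // The typing part: the operation every Report must provide.
    abstract String title();
}

class SalesReport extends Report {
    String title() { return "Sales"; }
}
```

Taught this way, the abstract class isn’t a special case to memorize; it’s just the point on the spectrum where a type and some shared implementation happen to coincide.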
Consider a language that makes the separation even more clear: subtyping uses one mechanism, and behavior inheritance can be specified without also producing a subtype relationship. (I’m speaking hypothetically, but I won’t be surprised to learn that such a language actually exists.) Behavior inheritance becomes a kind of specialized composition or delegation facility. Some of the traditional objections and implementation complexities of MI disappear.
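In Java today, the closest approximation to that hypothetical facility is delegation through composition: one class borrows another’s behavior without becoming its subtype. A rough sketch, with invented names:

```java
import java.util.ArrayList;
import java.util.List;

// The behavior we want to reuse.
class Logger {
    final List<String> lines = new ArrayList<>();
    void log(String msg) { lines.add(msg); }
}

// TimestampedLogger reuses Logger's behavior by delegation, but it is
// deliberately NOT a subtype: you cannot pass one where a Logger is expected.
class TimestampedLogger {
    private final Logger delegate = new Logger();

    void log(String msg) {
        delegate.log("[t] " + msg);  // specialize, then reuse
    }

    List<String> lines() { return delegate.lines; }
}
```

A language with the separation described above would, in effect, make this pattern a first-class construct, and would let you opt into the subtype relationship separately when you actually wanted it.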
My conclusion, then, is that the strongly typed OO community may have been going down a side path for most of its history, conflating two concepts that are actually separate. Perhaps, once we’ve corrected that misdirection, the time will come for MI to move back to center stage. For now, though, I’m pleased that Java avoids MI, if only because having to do without it has helped me to understand the fundamental issues more clearly. (From talking to colleagues, I believe it is having a similar effect on others, too.)
I realized almost immediately after posting the above that Smalltalk and its ilk (including, for example, Ruby) have essentially the characteristic I was talking about, where subtyping and inheritance are separate concepts. In those languages the distinction is present because there is no subtyping at all … the notion really doesn’t exist, because typing as Java and C++ folks think of it doesn’t exist.
The other assumption Ted Neward said he was questioning these days (at least, the other one I want to comment on) is that strongly typed languages are likely to be more efficient, because they give the compiler or VM more information that can be used to optimize things.
That may well be true. But I think the performance gap between statically and dynamically typed languages is surely narrowing, to the point where (combined with increases in hardware speed) it’s usually a non-issue.
Untyped languages can be really fast. Check out all the cool Smalltalk things in Alan Kay’s keynote at ETech. I’ve been trying to find the time to start serious work on an app I want to write. As part of trying to decide what language to write it in, I’ve written essentially the same program in Objective-C/Cocoa, Java, and RubyCocoa. The Objective-C version is fastest, but not by much, and the Java and RubyCocoa versions perform almost identically.
It’s also interesting to reflect that the technology in HotSpot that makes modern Java pretty fast originated in attempts to make Self run fast, and Self is as dynamically typed as a language can be.
The history of programming language optimization is mostly the same story repeating. Languages include features and constructs designed to help the runtime system be efficient. And then implementation technology advances, and those same things that once helped the compiler generate fast code are suddenly inhibiting that process. I wonder whether the same thing will happen with static typing.