Monday, October 3, 2011

Yes, I Have No Class

OK, I think I do kind of get what's going on with Beta. Yes, function activations are objects, and function bodies are object initializers. (I can find references to this being a concept that Beta inherited from Simula, but what I haven't found is any actual Simula documentation that says Simula does things this way. As far as I can see, Simula has rather conventional-looking, distinct syntaxes for declaring classes and procedures. But Beta has just one syntax for both.)

(There's some other stuff going on in Beta, like the ability to keep a function activation around and rerun it in the same activation frame. This is excusable as an optimization of tail recursion, but letting people do it by hand at arbitrary points in the program is a perfect example of the principle that just because you can think of something, you are not obliged to go off and implement it.)

Not Your Grandpappy's this

To illustrate the activation-as-object idea (without resorting to Beta syntax, which is baffling to the uninitiated), here's a bowdlerized version that I hope won't be too offensive to modern functional sensibilities. Consider a representation of binary expressions:

def Binary (
  public left: Expr,
  public op: BinOp,
  public right: Expr) =
using the usual ill-defined mongrel syntax. In this case, this is understood not to mean what it would in Java or Scala (the object that owns the function where this appears), but rather the current invocation (or call or activation) of the Binary function itself. So Binary is the name of a class. It's also the name of a function that constructs an instance of the class. We can tell Binary is a class because it returns this. (We could also make the return type of Binary explicit, as in:

def Binary (
  public left: Expr,
  public op: BinOp,
  public right: Expr): Binary =
but my imaginary compiler for this non-language is somehow smart enough to infer this.)

A class with no features is not very useful. Binary has three features, all functions (because the universe contains nothing except functions and activations of functions. Of course, the functions themselves must be activations of the function constructor function. Does my head hurt enough yet?). I've marked these functions (left, op, and right) with public, not the more Scalaish val, to emphasize that I'm specifying only the visibility of these features, not that they are variables (even constant variables) in the usual sense—because there are no variables in the usual sense, only functions that always return the same value. Things that look like variable bindings are just functions defined always to return the argument (a function activation) passed to them.

A fatter example:

def Binary (
  public left: Expr,
  public op: BinOp,
  public right: Expr) = {
  def eval: Int =
    left.eval mumble mumble right.eval
  def show: String =
    "(" + left.show + " " + op.show + " " + right.show + ")"
}
where I suppose the defs should actually be publics for consistency, but this train of thought is still way too half-baked to merit consistency (or unmixed metaphors). Since eval doesn't return an eval, and show doesn't return a show, we know these aren't constructors; they're just plain old functions.
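For readers who want something that actually compiles, the activation-as-object idea can be roughly approximated in plain Java (all names here are mine, not Beta's): the anonymous class below has no fields of its own, so its state is exactly the captured parameters of the `binary` call—morally, that call's activation record.

```java
// Sketch only: approximating activation-as-object in plain Java.
interface Expr {
    int eval();
    String show();
}

class Exprs {
    static Expr lit(int n) {
        return new Expr() {
            public int eval() { return n; }
            public String show() { return Integer.toString(n); }
        };
    }

    // the "constructor function": calling it is what creates the object,
    // and the object's state is the activation (the captured parameters)
    static Expr binary(Expr left, char op, Expr right) {
        return new Expr() {
            public int eval() {
                int l = left.eval(), r = right.eval();
                return op == '+' ? l + r : l * r; // the "mumble mumble"
            }
            public String show() {
                return "(" + left.show() + " " + op + " " + right.show() + ")";
            }
        };
    }
}
```

Here `Exprs.binary(Exprs.lit(1), '+', Exprs.lit(2))` yields an object whose `show` returns `(1 + 2)` and whose `eval` returns 3, without ever declaring an instance variable.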

Calling a plain old function, like calling a constructor function, allocates an activation record for the function (at least conceptually). But if the activation never escapes, it becomes garbage as soon as the function returns. An execution environment with a lick of sense would put such a thing on a stack instead of a heap, but let's loftily dismiss that as a mere implementation detail.

Found: One Grail (Slightly Holy)

Anyway, yes, activation-as-object does offer a solution to the puzzle of constructors.

To recap, that puzzle is: what is an object before its constructor is called? Bytecode fans will recall that the JVM constructs an object in two phases, first allocating it with the new instruction, and then invokespecialing its class's <init> method. The JVM verifier ensures that you can't get a hold of the object in between new and <init>, because it's not valid then: it's not a proper instance of its class yet, but (because new had to refer to the class to allocate it), it's clearly not an instance of anything else. It's just a nonthing from which you must avert your eyes.
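In javap-style pseudocode (a sketch; the constant-pool indices are invented), `new T(42)` for a T with an int-taking constructor compiles to roughly:

```
new           #2  // class T            allocate; fields get default values
dup               //                    keep a reference for the result
bipush        42  //                    push the constructor argument
invokespecial #3  // Method T."<init>":(I)V   run the constructor
```

The verifier's job is to guarantee that the value produced by `new` is consumed only by that `invokespecial`.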

But function invocation, at least as far as we let ourselves know, is atomic. Once the arguments are prepared, boom!, you're executing the function, with its activation record all nicely laid out for you, and with not so much as an eyeblink in between (assembly language programmers may dissent, but our fingers are firmly in our ears and we can't hear a word they're saying). That means that when objects are activations, there is no object to wonder about before its constructor is called: the object is the call to the constructor.

Further Adventures of a Vague Idea

In this non-language that I'm not describing, every identifier that appears in an expression (as opposed to a type declaration) is the name of a function. If that identifier is followed by an open paren or nothing, it's a function invocation. So left.eval above invokes left, which returns the first argument passed to Binary, and then invokes the eval function that is a feature of left (and presumably of every Expr). An identifier followed by an underscore is a reference to the function itself, as in Scala.

Since classes are functions, Expr _ is properly the name of the type of instances of (activations of, calls to) Expr. But after a colon, the trailing underscore is implied, so we can say left: Expr instead of left: Expr _. You might need the underscore in other contexts, like if (foo.class == Expr _) ....

Another random observation is that in Scala, val and var have different meanings in classes and functions. Declarations in a class can refer to each other (so that functions can recurse, and so on); but variable definitions in a function can refer only to local identifiers declared earlier in the function, not later. Since Non-Language functions are both classes and functions, you'll need some kind of syntax to distinguish between letrec-like declarations and sequential declarations, unless you can persuade yourself that you can do without the latter (can you? I never thought about this).

Hey, Look at the Ducks!

Please distract yourself immediately, lest you come up with questions about how this scheme might relate to inheritance, or parametric types, or anything else complicated. If you must, consider these an exercise for the reader (or for Gilad Bracha, whose Newspeak also borrows ideas, but thankfully not syntax, from Beta).

Saturday, September 17, 2011

A Funny Type of Contract

Alex Cruise posted the link on the Scala language mailing list to these slides by Benjamin Pierce (along with Nate Foster and Michael Greenberg). Pierce is the author of Types and Programming Languages (which I confess I'm still working my way through).

Even if you have absolutely no interest in type systems, these slides are worth a look for their punchy visuals. Perhaps Pierce should also teach a course on the effective use of PowerPoint.

What Pierce is getting at (if I understand right) is that type systems can get too heavyweight to be worth the trouble. He's looking at a particularly complex case, that of the Boomerang programming language, so perhaps the typechecking in your favorite language (if it's not Boomerang) may still be lightweight enough not to be under suspicion.

Pierce's conclusion seems to be that contracts (in the form of runtime assertion checks) may have to supplement compiletime typechecking to make the overall problem of verifying program correctness tractable.

I've been thinking for a while that it would be nice if there were some way that types and assertions could be unified. Types constitute a form of contract that is verified at compiletime (and/or loadtime, in languages that do loadtime verification), while assertions are verified at runtime. Runtime assertion checking is easy enough, so the difficulty in bridging the two is coming up with a general way to specify how contracts can be checked at compiletime: this requires giving your compiler and/or loader enough ammunition to do an automated proof, which is extremely hard except for the simplest cases (something like Java's nominal type system, before generics, constitutes a pretty simple case).

Eiffel is the only language I've seen that makes a big deal of method pre- and postconditions in a way that is analogous to typechecking on methods: that is, when you override a method m with a method m′, m′'s postconditions must be more specific than m's (covariantly), but its preconditions must be less specific (contravariantly).
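In Java terms (example mine, with plain assert statements standing in for real contract support), the variance rule looks something like this:

```java
// Illustration of Eiffel-style contract variance: an override may
// weaken the precondition and strengthen the postcondition, mirroring
// argument contravariance and result covariance.
class Account {
    int balance = 0;
    int deposit(int amount) {
        assert amount > 0 : "precondition: positive deposit";
        int old = balance;
        balance += amount;
        assert balance > old : "postcondition: balance increased";
        return balance;
    }
}

class TolerantAccount extends Account {
    @Override
    int deposit(int amount) {
        // weaker precondition: zero-amount deposits are now allowed
        assert amount >= 0 : "precondition: nonnegative deposit";
        int old = balance;
        balance += amount;
        // stronger postcondition: the exact increase is guaranteed
        assert balance == old + amount : "postcondition: exact sum";
        return balance;
    }
}
```

Any caller that satisfies `Account`'s contract also satisfies `TolerantAccount`'s, and anything `Account` promises, `TolerantAccount` promises at least as strongly.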

It might be cool if a language had a sort of Third Way with respect to types and assertions, where you could tell the compiler to check a contract at compiletime if the compiler can figure out how (and it doesn't take too long), or at runtime otherwise. Runtime checking is generally inferior to compiletime checking, but most of us would settle for it if the compiletime check would take forever.

Always in Beta

My quest to understand What Programming Languages Are Made Of has led me to start reading the Beta Programming Language book. I've gotten only as far as Chapter 3, but I've already seen enough quirks in the language to have repaid the time.

If I see the implications properly, it looks as if Beta unifies method calls and object construction. That is, the language has no functions or methods per se; rather, all code lives in what Java or Scala would consider constructors or instance initialization. Instance variables play the role of local variables.

It's easy to observe that stack frames are like objects, in that both are usually laid out as structs accompanied (in languages with managed memory) by a pointer to some sort of descriptor. But it's a bold leap to say that stack frames are objects.

I haven't read far enough to be sure that my understanding is correct. It would also be interesting to see whether Beta has any notion of alternate constructors—which, when viewing Beta objects as functions, would have an effect something like default function arguments.

Is the Beta approach part of the answer to the puzzle of What Are Constructors Made Of? I haven't figured that out yet.

I've seen Beta mentioned a few times, but never heard of anyone using it for anything. I wonder whether it failed to gain traction because people always assumed it was still in beta.

Wednesday, July 27, 2011

Sticking to Mutability

In our last episode, we left Minty Fresh Construction lying tied to the tracks with the Locomotive of Hopeless Complexity bearing down on her at full steam. Well, Minty, you're just going to have to hang on for a bit, because we have more characters to introduce to the story. Maybe we'll see you again after the plot has thickened enough.

Immutability's Evil Stepsiblings

We all know that Mutability Is Bad. Bad to the bone. The language construct that dare not speak its name (at least in some languages). Or at any rate, those of us with a properly strict functional upbringing know this; there are still heathens out there who write Java (or even Scala or C++) code containing single equal signs without the least hint of shame.


Not talked about in the Best Families is the fact that the languages we all know and love are inclined, in the depths of night with the shades drawn, to treat memory as a big mutable array whose contents they can toy with as they please. An even darker secret is that the designers of libraries for ostensibly immutable data structures sometimes hide mutable variables in their cellars (at least temporarily, like Scala Builders).

Of course, there have been attempts to dress mutability up in civilized clothing and bring it into Polite Society. We've seen enough of Haskell's by-need evaluation and Scala's lazy vals that they rarely elicit anything more than mild titters.

Dare we ask ourselves now whether there are any more forms of mutability out there, longing to discard their black hats and shave their twirly mustaches, ready to submit themselves to the conventions of decent society and become upstanding, productive citizens?

We dare.

Lazy References

Lazy vals and by-need parameters can be understood as the combination of a mutable reference cell, a flag, and a nullary function. The flag tells whether the cell has been initialized; if set, the cell can be examined. If not set, any attempt to examine the cell invokes the function, puts the resulting value in the cell, sets the flag, and returns the value:

class LazyRef[T] (private var f: () => T) {
  private var initialized = false
  private var value: T = null.asInstanceOf[T]
  def get: T =
    synchronized {
      if (! initialized) {
        value = f()
        initialized = true
        f = null // allow f's closure to be collected
      }
      value
    }
}

There's mutability here, all right, but it's a very disciplined sort of mutability that can mingle with the inhabitants of Immutableland without causing the least alarm. value can be changed, but only once, and you can never observe it before it gets changed, so it's as good as immutable if you overlook the peculiar circumstances of its birth—er, conventions of its initialization.

Compulsive optimizers may observe that a straightforward implementation of LazyRef can take more space than necessary. At any given moment, either value or f is of interest, but never both at the same time. You might reimagine LazyRef as:

class LazyRef[T] (private var funOrVal: Either[() => T, T]) {
  def get: T =
    synchronized {
      funOrVal match {
        case Right(v) => v
        case Left(f) =>
          val v = f()
          funOrVal = Right(v)
          v
      }
    }
}

Whether this version of LazyRef is any faster or more compact than the previous version depends on your execution environment. The point is, though, that because this construct is so useful, you're likely to make heavy use of it, and attention (from the language or VM) to make it more efficient might be well repaid.

Sticky References

Lazy Val has a more rakish cousin I call a sticky reference, or stickref for short. Stickref is mostly safe around Respectable Society, but you'll want to ensure that he's properly chaperoned whenever he's in the presence of impressionable young algorithms.

Stickref differs from Lazy Val in that he doesn't start life with an initialization function. Instead, he remains brazenly uninitialized until given a value, which he then holds steadfastly ever after—he's immutable once initialized:

class UninitializedException extends RuntimeException
class AlreadyInitializedException extends RuntimeException

class StickRef[T] {
  private var value: Option[T] = None
  def get: T =
    synchronized {
      value match {
        case Some(v) => v
        case None => throw new UninitializedException
      }
    }
  def freeze (newValue: T): T =
    synchronized {
      value match {
        case Some(_) =>
          throw new AlreadyInitializedException
        case None =>
          value = Some(newValue)
          newValue
      }
    }
}

Stickref clearly feels less safe than Lazy Val; using a stickref can result in an exception, whereas using a lazy val cannot (unless its initialization function throws an exception).

On the other hand, a stickref can keep mutability from getting out of hand even when there's no readily available initialization function suitable for constructing a lazy val. This might bring to mind, for example, the setting of a final instance variable in a constructor.

Minty, are you still holding on?

Construction Revisited

The whistle blows continuously and sparks fly from the great driving wheels as the engineer brakes, having spotted our heroine in her peril. It seems the heavy express train will never stop in time—but wait, who rides over the hill on a white horse? Could it be that wild fellow Stickref? It is! And quick as a flash he leaps down, undoes the ropes, and pulls Minty to him, averting disaster by a hair's breadth!

To revisit the last example from the previous post:

class T {
  private final int i;
  private final String s;
  public T (int i, String s) {
    this.i = i;
    this.s = s;
  }
  public int i () { return this.i; }
  public String s () { return this.s; }
}

Can translating this into something with a stickref prevent access to uninitialized final instance variables?

It can. In not-quite-Java:

class T {
  private final StickRef<int> i = new StickRef<>();
  private final StickRef<String> s = new StickRef<>();
  public T (int i, String s) {
    this.i.freeze(i);
    this.s.freeze(s);
  }
  public int i () { return this.i.get(); }
  public String s () { return this.s.get(); }
}

Unlike the solutions of the previous post, I've made no attempt to catch invalid accesses at compiletime. Instead, I've wrapped the final instance variables in something that gives a runtime error for such accesses. It does, at least, catch the accesses as soon as they occur—unlike the regular Java semantics, which don't immediately throw an exception unless the default null or zero value for the final variables triggers such an exception.

You'll notice that this solution is quite a bit simpler than my previous ideas; I don't need to generate any synthetic partially-constructed classes.

Minty-fresh constructor semantics look feasible after all, provided that you're patient enough to wait until runtime to enforce them.

But Can I Afford It?

Readers on a budget will have noticed that the indirection involved in a stickref has a cost. freezeing isn't free, as they say. I haven't tried to benchmark the difference between access to a straight instance variable and access to one wrapped in a StickRef, but I'll bet you that any VM you try it on will show that the stickref is quite a bit slower. (Let me know if there's some VM for which that's not true!)

So how can I be pushing this stickref thing as a practical solution? What kind of Scala library author spendthrift would act as if allocating additional wrapper objects all over the place had no adverse consequences?

Well, I'm not really envisioning StickRef as a standalone thing. Existing VMs probably aren't tuned to detect this particular pattern (or LazyRef, for that matter). But if you built StickRef into a VM that knew to look for it, it doesn't take very sophisticated flow analysis to see that in most real-world constructors, the stickrefs are all frozen (the final variables are all set) before the constructor returns. Seeing that, a smart VM would just discard the stickrefs and set the variables directly, which should eliminate the cost except when there's genuine uncertainty about whether an illegal access will occur.

Multitrick Pony

OK, fine, so Stickref rescues Minty. Big deal. I mean, what kind of Itanium instruction set designer madman would come up with such an elaborate Sarah Winchester architecture of quasi-mutable references just to get one little lousy effect, an error message for something hardly anyone (besides me) is actually stupid enough to do?

As it happens, stickref semantics are a perfect fit for binding variables in Scheme's letrec*. And since letrec* is more or less the archetype for initialization of mutually referential expressions, I can fairly claim that, in a multilingual environment, stickrefs are going to show up in quite a few places. And if you optimize stickrefs for minty-fresh construction, you optimize them for all those places, too.
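For instance (a Java sketch, with a minimal write-once cell standing in for StickRef; all names here are mine), letrec*-style mutually recursive definitions fall out naturally: each binding captures the other's cell and dereferences it only after both have been frozen.

```java
import java.util.function.IntPredicate;

// A minimal write-once cell playing the role of StickRef.
class Cell<T> {
    private T value;
    private boolean set = false;
    synchronized T get() {
        if (!set) throw new IllegalStateException("uninitialized");
        return value;
    }
    synchronized T freeze(T v) {
        if (set) throw new IllegalStateException("already initialized");
        value = v;
        set = true;
        return v;
    }
}

class Letrec {
    // letrec*-style mutual recursion: the lambdas mention each other's
    // cells, but read them only at call time, after both freezes.
    static final Cell<IntPredicate> even = new Cell<>();
    static final Cell<IntPredicate> odd = new Cell<>();
    static {
        even.freeze(n -> n == 0 || odd.get().test(n - 1));
        odd.freeze(n -> n != 0 && even.get().test(n - 1));
    }
}
```

Reading a cell before its freeze throws, which is exactly the error letrec* semantics call for when a binding is referenced too early.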

Bring the Whole Family

Incarnations of the idea of controlled mutability start popping out of the woodwork as soon as you start looking for them. Imagine, for example, a variation on the Java semantics that guards against changing final variables at runtime rather than compiletime. The regular Java semantics set finals exactly twice: the first time to the default value, and the second time to the constructed value. To achieve this effect you need something a little more flexible than a sticky reference: a freezable reference, which can be examined and set as many times as you like before you freeze it into immutability:

class AlreadyInitializedException extends RuntimeException

class FreezaRef[T] (private var value: T) {
  private var frozen = false
  def get: T = synchronized(value)
  def set (newValue: T): T =
    synchronized {
      if (frozen)
        throw new AlreadyInitializedException
      else {
        value = newValue
        newValue
      }
    }
  def freeze (newValue: T): T =
    synchronized {
      if (frozen)
        throw new AlreadyInitializedException
      else {
        value = newValue
        frozen = true
        newValue
      }
    }
}

Beyond FreezaRef we can spot the outlines of some more exotic members of the family, such as meltable references. These references are also freezable, but they can also be unfrozen and become mutable again. Unlike ordinary mutable variables, meltable references expect to be unfrozen only infrequently—that is, the VM can assume they will be unfrozen infrequently in choosing how to optimize them. You could use something like meltable references as building blocks in a data structure holding code that can be optimized and deoptimized: when the reference to a piece of code is frozen (perhaps it's a method call whose target has been resolved), the code becomes eligible for optimization. When you melt the reference (perhaps another method has been loaded so that the original call site is no longer monomorphic), you'll need to deoptimize the code (perhaps optimizing it again later if it can be refrozen).
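A meltable reference might look something like this (a Java sketch; the class name and API are mine, following the description above):

```java
// Freezable like FreezaRef, but it can also be unfrozen -- with the
// understanding that melting should be rare, so a VM could optimize
// for the frozen state and treat melting as a deoptimization point.
class MeltaRef<T> {
    private T value;
    private boolean frozen = false;
    MeltaRef(T initial) { value = initial; }
    synchronized T get() { return value; }
    synchronized T set(T newValue) {
        if (frozen) throw new IllegalStateException("frozen");
        value = newValue;
        return newValue;
    }
    synchronized void freeze() { frozen = true; }
    synchronized void melt() { frozen = false; } // expected to be rare
}
```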

The common theme with all these forms of mutable references is that they're only mutable sometimes. Over large and well-defined portions of a program's execution, they're immutable. That makes it easier for people to understand how they'll behave, and easier for compilers and VMs to figure out how to optimize them by eliding the synchronization and flag checks implied by the sort of code I've shown above.

So maybe we don't have to be so afraid of mutability. Give it a bath, teach it some table manners, keep it on a short leash, and we can welcome it into our home like a member of the family. Only VM and language designers need lie awake at night in a cold sweat over what might happen if something goes wrong.

Meanwhile, Back at the Construction Site

Controlled immutability is all very well, but it's distracted me from my original mission. What ever happened to unifying constructors and factory methods? I still have some half-baked ideas on that, but my blogging muscles are getting kind of sore, so I'll have to save it for another time.

Monday, July 25, 2011

Deconstructing Construction

Construction Semantics

Construction in many OO languages, including Java (and Scala), still mystifies me. First you have nothing, then you have something... but in between, you have neither nothing nor something, or at least not a properly initialized something. “Properly initialized”, in this context, means that all the immutable parts of the object you're creating (its final variables, in Javaspeak) have been given their final values. I know I'm not the only one foolish enough to have inadvertently written a Java constructor that passed the object under construction to code that observed the object's final variables in their default state—the null or zero to which Java sets final variables before they're assigned. It would be nice if there were some mechanism that could prevent me from sabotaging myself like this.

By contrast with Java, in Haskell (if my limited knowledge is accurate), there is no in-between when you invoke a data constructor; the arguments to the constructor are instantaneously wrapped in the result, and the constructor does nothing else. The price for this simplicity is that the constructed value exposes just the values used in its creation, neither more nor less; OO languages are more flexible in that they let you hide (or forget) the values passed to the constructor, and instead expose other values derived from the constructor values—the constructed object has complete control over how it appears to the outside world.

For convenience, let's name the types of constructor semantics I've mentioned above:

  • Regular semantics are those of Java: construction is nonatomic, and it's possible to observe uninitialized final variables as nulls or zeros.
  • Diet semantics are the simpler semantics of Haskell: construction is atomic, and does nothing but wrap the constructor arguments in the result.
  • New minty-fresh semantics are like those of Java, except that you can never observe a final variable in its uninitialized state.
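The regular-semantics loophole is easy to demonstrate in plain Java (example mine): the constructor lets `this` escape, here just by calling a method, before the final field is assigned.

```java
// A final field observed in its default (null) state, perfectly
// legally, under regular Java construction semantics.
class Leaky {
    static String observed;
    final String s;
    Leaky() {
        observed = peek(); // runs before s is assigned: sees null
        s = "ready";
    }
    String peek() { return s; }
}
```

After `new Leaky()`, the object's `s` is `"ready"`, but `Leaky.observed` is null. Minty-fresh semantics would make the `peek` call an error instead.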

Bonus Track: Factory Methods

While we're here, it would be nice to have a better handle on the relationship between constructors and factory methods. Java enforces the view that constructors and factory methods are entirely different by requiring different syntax to invoke them. You can wrap a constructor in a factory method, but not vice versa (see my previous whine). Can we imagine a world without this constraint, or if not, at least clarify why the constraint is necessary?

A Brief and Maybe Totally Wrong History of Constructors

As far as I know, earlier OO languages didn't worry much about the consistency of object construction. Smalltalk doesn't have (except by convention) constructors as distinct from ordinary methods. The default built-in class method new allocates an uninitialized object, and you're free to wrap new in any method you like, or not. Smalltalk also lacks built-in immutability, although it gives you the information-hiding tools to hide variables behind accessors, if you like.

The first language I can recall with detailed rules for construction was C++, whose constructors are distinct from regular methods. C++ requires that you initialize an object with a constructor, and that you call superclass constructors before subclass constructors. These rules avoid some conceptual inconsistencies, but since C++ is not a particularly safe language in general, are hardly sufficient to keep you out of trouble.

Java adds rules for constructing final variables to constructor rules that are similar to C++'s. In Java, you call new followed by the name of the class you're allocating, along with the arguments from which the correct constructor can be deduced. At the JVM level, this does two things: executes a new instruction (which is invisible at the language level), and then passes the result to the constructor proper, which looks to the VM like a method named <init>.

I'm too ignorant and lazy to figure out whether I've omitted important details about constructors here or gotten my history wrong, but as usual, Wikipedia will set you straight if I've steered you wrong.

The Sun Always Shines in My Imaginary Little World

So where are we going with all this?

I'm trying to imagine the details of an execution model that accommodates all three flavors of construction: regular, diet, and minty-fresh. I'm not saying any existing system currently does or doesn't support all three varieties, and I'm not saying I'm going to build one that does. I'd just like to figure out what such a thing would look like if it existed.

And of course I'd like it all for free. That is, construction should be as simple as possible, in terms of the numbers of concepts needed to support it, and as cheap as possible, in terms of runtime overhead. At least in my imaginary little world.

Oh Yeah, and Type Safety Too

And I'd like to be able to phrase my solution in a way that is consistent in terms of types. That is, I don't want programs to have to use casts or reflection or otherwise take heroic measures to get the various forms of constructor to work properly or typecheck correctly.

The argument to a Java constructor for a type T is nominally a T, but under the minty-fresh rules, it's not a fully valid T until the constructor completes, because you can observe uninitialized finals. The regular-flavor rules do make a halfhearted attempt to ensure that you only deal with fully constructed objects, in that you have to call superclass constructors before accessing any subclass features, but those rules aren't airtight enough to preserve the minty-fresh flavor I'm after.

One reason for leaving loopholes in the regular rules is that you'd like to be able to create immutable cyclic data structures: a partially constructed object A can pass itself to the constructor for an object B in such a way that A and B wind up with final references to each other.

class A {
  final B b;
  A () { this.b = new B(this); }
}
class B {
  final A a;
  B (A a) { this.a = a; }
}

You can imagine rules that forbid a partially constructed object from ever escaping its constructor (like the diet rules), but then there's nothing else (at least in the Java model) that would let you create a cycle of final variables. I'd like the minty-fresh rules to let you create cycles, too.

Dieting is Easy

Implementing the diet constructor rules is easy in terms of JVM semantics: any constructor that consists only of assignments from constructor arguments to final variables implements the diet rules. Because such constructors don't allow partially constructed objects to escape, the Java memory model rules regarding final instance variables ensure that no thread can ever see a partially constructed object.
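In plain Java, a diet constructor is simply one like this (example mine): nothing but assignments from arguments to finals, with no chance for `this` to escape.

```java
// A "diet" constructor: atomic for all practical purposes, since the
// JMM's final-field guarantees cover it once it completes normally.
final class Pair {
    final int first;
    final int second;
    Pair(int first, int second) {
        this.first = first;
        this.second = second;
    }
}
```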

So for the rest of this post, I can concentrate on the minty-fresh rules.

Construction in Terms of Types

One way to understand construction of a value of type T with arguments args might be to treat it as a function of type (T0,args) => T (apologies to anyone offended by mixing Scala and Java conventions), where T0 is a synthetic supertype of T that represents an uninitialized T—that is, it doesn't provide the features of T that depend on the constructor arguments to T.

Factory Work

Treating a constructor as an ordinary function opens the door to unifying factory methods and constructors. If you can declare a factory method that has the same signature as a constructor, then maybe there's a way to do it the other way around, and redirect construction to a factory method that returns a subtype of the type you're nominally constructing.

For Example

As an example of the above proposal, imagine treating:

class T {
  private final int i;
  public T (int i) { this.i = i; }
  public int i () { return this.i; }
}

as (ignoring illegalities):

interface T {
  T T (T0 this, int i);
  int i ();
}
synthetic class T0 {
  public TImpl T (/* T0 this, */ int i) {
    TImpl t = (TImpl) this;
    t.i = i;
    return t;
  }
}
synthetic class TImpl extends T0 with T {
  private final int i;
  public int i () { return this.i; }
}

so that new T(n) translates to (new T0).T(n).

What this accomplishes is that T0, the uninitialized version of T, doesn't declare the accessor i(). i doesn't exist yet, so you can't get in trouble by referring to it, if you somehow get hold of a T0 that has escaped its constructor.

Assumptions made include:

  • new still allocates an uninitialized version of a class.
  • A cast from the uninitialized version of a class to the full version is allowed, at least within the constructor.

The rules need more elaboration. What if T has a method that doesn't depend on its own state, like this?:

public void sayHello () { System.out.println("Hello"); }

(Functional purists, please avert your eyes from the side effect.) And what about methods that are not themselves simple accessors, but depend on them?:

public int iPlusOne () { return this.i + 1; }

Is either sayHello or iPlusOne declared in T0? And if the constructor allows a T0 t0 to escape:

  • Does a call to t0.sayHello, if declared, actually succeed?
  • What's to prevent (T) t0 outside the constructor?

But wait, there's more! What if T contains more than one final variable that needs to be initialized?:

class T {
  private final int i;
  private final String s;
  public T (int i, String s) {
    this.i = i;
    escapeFromHere(this);
    this.s = s;
  }
  public int i () { return this.i; }
  public String s () { return this.s; }
}

All Is Not So Clear and Simple

At the call to escapeFromHere, T is partially constructed—i is valid, but s isn't. So do there now have to be multiple stages of uninitialized classes corresponding to each order in which T's final variables could be initialized, something like this?:

interface T {
  T T (T0 this, int i, String s);
  int i ();
  String s ();
}
synthetic class T0 {
  public TImpl T (/* T0 this, */ int i, String s) {
    T1 t = (T1) this;
    t.i = i;
    return t.T1(i, s);
  }
}
synthetic class T1 extends T0 {
  public TImpl T1 (/* T1 this, */ int i, String s) {
    TImpl t = (TImpl) this;
    t.s = s;
    return t;
  }
}
synthetic class TImpl extends T1 with T {
  private final int i;
  private final String s;
  public int i () { return this.i; }
  public String s () { return this.s; }
}

I suppose this kind of thing is feasible, but it's starting to look unattractive.

The requirement that a cast to the full T fail on an incompletely initialized T outside the constructor is particularly vexing, because although you can munge the class pointer for the object under construction (so that the object never claims to be more initialized than it actually is), you need to ensure that the object makes this claim in every thread that can observe it—that is, you have to introduce synchronization guarantees for partial initialization of final instance variables, as well as the guarantees for full initialization provided by the JVM.

This naive typesafe approach to construction appears to be collapsing under its own weight. Can minty-fresh construction be rescued from death by a thousand implementation details? Are we doomed to stew forever in a conceptual muddle over constructors? Stay tuned.

Saturday, July 23, 2011

Modularly More Generally

I finally got around to reading Gilad Bracha et al's paper about Newspeak modules. Newspeak's modularity is based on the idea that the only thing an object can see initially is what is passed in at construction time. There's no global class loader, no global state—so that when you construct an object, you have total control over the universe it lives in.

At first glance, this sounds very different from the way Java views the world. But I think there's a simple way to view Java modularity in terms of Newspeak modularity: just act as if each object's constructor were implicitly passed a reference to its class loader (along with its other arguments). Unless I've missed something, this gives you access to the same universe as the standard Java semantics.
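To make that reading concrete, here is a toy Java sketch (Module and create are my own illustrative names, not Newspeak or JDK API) in which an object's visible universe is exactly the class loader its constructor receives:

```java
// Sketch: Java modularity viewed Newspeak-style, with the ambient class
// loader turned into an explicit constructor argument. "Module" and
// "create" are invented names, not a real API.
class Module {
    private final ClassLoader universe;

    Module(ClassLoader universe) {
        this.universe = universe;
    }

    // Everything this module can name must come through the loader it was given.
    Object create(String className) throws ReflectiveOperationException {
        return universe.loadClass(className).getDeclaredConstructor().newInstance();
    }
}
```

Hand the constructor a different loader and the same code inhabits a different universe, which is roughly the total control over the universe that Newspeak modules promise.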

So you could implement a JVM (and a bunch of other stuff, presumably) on top of a system with Newspeak-like modularity, although not the other way around. Doing so might offend the sensibilities of the Newspeakers, but seeing that one thing is a more general case of another always gives me the warm fuzzies.

Wednesday, July 20, 2011

Kotlin, the Diamond Inheritance Problem, and a Provisional Solution

Yesterday afternoon at the JVM Language Summit brought a pleasant surprise in the form of the Kotlin language. The language itself is remarkably ambitious, if largely unfinished (one of the presenters informed us that the largest Kotlin codebase in existence was 250 lines). The language is charmingly expressive and contains some sophisticated concepts, including many of those found in Scala (Paul Phillips of the Scala team said it looked as if the Kotliners had gone through the Scala manual line by line, checking off the features they wanted to keep or reject. It looks to me, though, as if Kotlin has borrowed from more than one language).

Kotlin makes nullability annotations mandatory and easy to express: a type that can be null ends in a question mark. A Kotlin Int is a Java primitive int, never null; a Kotlin Int? is a java.lang.Integer that can be null.

Kotlin's pattern matching is prettier than Scala's; I've often resented having to waste four whole characters on the case keyword at every pattern.

Like Scala, Kotlin supports variance annotations at the declaration site of a class. But it also has use-site variance, and the compiler enforces the latter to ensure that you can't abuse the use-site type to violate the promises made in the class's declaration. This idea was clever enough to prompt Paul Phillips to burst out that he was going to steal it for Scala (although folks on the scala-debate mailing list have pointed out that there are already other ways to do this in Scala).

The Kotlin team took the most heat when they described their inheritance rules, which allow generalized multiple inheritance. They solve the diamond inheritance problem with duplicated superclasses, like C++. Multiple inheritance is obviously not a popular choice among language designers these days. There's no doubt that a number of languages have gotten multiple inheritance wrong, and that even where the language itself hasn't made some hideous blunder, an API designer can use multiple inheritance to make a bad design much worse.

But but but but but. All that said, when not abused, multiple inheritance is quite expressive. Java programmers use multiple inheritance of interfaces all the time, and nobody makes them wear scarlet letters “MI”. Purists will object that of course this is not inheritance; inheritance means inheritance of implementation, in the form of code or data, not just method signatures.

Well, whatever.

To be sure, Java-style multiple inheritance is virtual inheritance: if a class C inherits a method m from two interfaces A and B, C gets only one implementation of m; there is not a distinct A.m and B.m within C.

C++, by contrast, allows nonvirtual multiple inheritance that can pick up multiple distinct versions of inherited features (as well as virtual inheritance, in case you are insufficiently confused). Nonvirtual multiple inheritance may occasionally allow for some clever implementation tricks, but for those of us who are trying to cling to the shreds of our object-oriented faith that an object has sole control over how it responds to messages, nonvirtual inheritance (which allows a message sender to choose which inherited response it wants for a given message) is a heresy, an abomination, and by Turing, an affront to all that is good and decent.

Scala, of course, allows multiple (virtual, of course) inheritance of traits. Traits can, and often do, contain both code and data. Purists will object that of course this is not true multiple inheritance because, uh, well, the things being inherited are called traits, not classes. And multiple inheritance refers only to classes.

Well, whatever.

Scala traits may be well down the slippery slope of multiple inheritance, but they do differ from classes in that they can't have constructors. Or more precisely, they can't have constructors with arguments—but they can contain initialization code, which gives the effect of no-argument constructors.

Here's a question: ignoring for the moment the broadly agreed inadvisability of generalized multiple inheritance (I realize that asking you to ignore this may, in some cases, require a superhuman act of forbearance), what would such inheritance look like, if inheritance were virtual and constructor inheritance were allowed?

Consider the classic diamond:

      Animal
      /    \
   Bird   Cooing
      \    /
       Dove

Dove inherits directly from Bird and Cooing, each of which in turn inherits directly from Animal. Dove's constructor therefore has to invoke constructors for Bird and Cooing, and Bird's and Cooing's constructors each invoke an Animal constructor.

Remember now that we're aiming for inheritance from a single copy of Animal, lest we fall into the mortal sin of nonvirtual inheritance.

First, an easy case. What if Animal takes no constructor arguments (or more precisely, neither Bird nor Cooing invokes an Animal constructor with arguments, in the case that Animal has more than one constructor)? There's an obvious course of action: the first invocation of the Animal constructor is performed normally, and the second is intercepted and not done. This seems plausible on the basis that one Animal constructed with no arguments is likely to be more or less indistinguishable from another constructed with no arguments, so we fulfill our intention that Bird and Cooing are the same Animal—as long as Animal is constructed just once, our immortal souls are not imperiled.

The above scheme works even if the Bird and Cooing constructors take arguments. Constructors of immediate supertypes are performed in some order (presumably, by default, in reverse linearization order, on the principle that the type with the highest priority gets last say; in this case, this would mean Bird is constructed before Cooing). Dove just passes the respective arguments to each of the constructors in order (so that Bird's constructor constructs Animal) before embarking on its own constructor code.

The poop hits the scoop when Bird and Cooing invoke Animal constructors with arguments. How do we preserve the property that they construct the same Animal? One idea is, again, just to elide the second construction. However, if Cooing (whose construction of Animal would be skipped) wanted to specify an Animal with properties different from those specified by Bird, wouldn't that mess up Cooing? Cooing, after all, is well within its rights to depend on the property values that it sets in Animal.

The solution I've come up with is so obvious it's stupid: just check the constructor arguments. That is, firstly, if Bird and Cooing invoke different constructors of Animal from their constructors invoked by Dove, then Dove is illegal to begin with, and the compiler will complain (this has the possibly nasty side effect of requiring Bird and Cooing to expose to Dove precisely how they construct Animal; feel free to lie awake all night over this on my behalf). If Bird and Cooing invoke the same constructor, though, you're not yet free and clear; at runtime, you perform the first construction, but you redirect the second construction to a check that the arguments you pass to the second construction are the same as those you passed to the first construction. If the check fails, throw an exception so that the construction of Dove does not complete.

There is no shortage of hassles with this strategy, including actually writing the code to preserve the arguments passed to the first invocation of the Animal constructor long enough to be checked with those passed to the second, and choosing the appropriate definition of “same constructor arguments”. (As for efficiency, in practice, I would hope that a smart enough VM would usually observe statically that the same values are passed to the second constructor as to the first and thereby deduce that it could elide the check, since many constructors pass their own arguments unaltered to their superclass constructors.) Anyhow, this is the best solution to the generalized diamond inheritance problem I've managed to come up with. Let the hole-poking begin.
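To make the runtime half of the scheme concrete, here is a hedged Java sketch of the bookkeeping a compiler might emit for the shared base class (SharedBaseGuard and its method are invented names; this is the mechanism, not any real compiler's output):

```java
import java.util.Arrays;

// Sketch of the runtime check: the first construction of the shared base
// (Animal, in the example) runs for real and records its arguments; the
// second is elided, but its arguments must match, or construction fails.
class SharedBaseGuard {
    private Object[] firstArgs;
    private boolean constructed;

    // Returns true if the caller should actually run the base constructor.
    boolean beginConstruction(Object... args) {
        if (!constructed) {
            constructed = true;
            firstArgs = args.clone();
            return true;                       // first construction: perform it
        }
        if (!Arrays.equals(firstArgs, args)) { // "same constructor arguments"?
            throw new IllegalStateException(
                "diamond base constructed twice with conflicting arguments: "
                + Arrays.toString(firstArgs) + " vs " + Arrays.toString(args));
        }
        return false;                          // second construction: elide it
    }
}
```

Here Arrays.equals (element-by-element equals) stands in for whatever definition of "same constructor arguments" the language would actually pick, which is exactly one of the hassles noted above.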

Tuesday, July 19, 2011

Erasure Wars: No End in Sight

Among the other excellent sessions at the JVM Language Summit yesterday, Mads Torgersen talked about .NET and its CLR multilanguage VM. The CLR runs C#, among other languages (including Java, if you get Jeroen Frijters's IKVM—he treated us to the astonishing spectacle of Eclipse running on .NET in a later session yesterday).

For those of us who spend most of our time with the JVM, the CLR is a fascinating alternate universe. Originally inspired by the Java VM, most of its details are familiar, but enough are different that contemplating the CLR gives me that hint of vertigo you feel when you've been looking at the world through one eye and then suddenly switch to stereo vision.

Perhaps the most conspicuous difference between the two VMs is that the CLR does not erase type parameters, a choice made to support the semantics of C#'s parameterized types. By not throwing away the information about how a parameterized class instance is allocated, the CLR enables things like specialization based on the type arguments, as well as the ability to answer obvious runtime questions like, what is that List a list of?
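The JVM side of that contrast is easy to demonstrate: after erasure, a List<String> and a List<Integer> are instances of the very same class at runtime, so there is nobody left to answer the question.

```java
import java.util.ArrayList;
import java.util.List;

// Under erasure, type arguments are gone at runtime: both lists are plain
// ArrayList instances and share a single Class object.
class ErasureDemo {
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        return strings.getClass() == ints.getClass();  // true on the JVM
    }
}
```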

To be sure, unerased types have a cost; you actually have to construct and store the runtime representation of the types in question. Furthermore, introducing unerased types to the JVM at this late date would be extremely disruptive, and the Java folks discourage speculation about it, although I got the impression yesterday that they're a bit envious of what the CLR can do with them. But if there are other negative consequences of unerased types, I'm not aware of them—although someone said yesterday that Martin Odersky thinks type erasure is a Good Thing, which is enough to give one pause.

Not erasing types does raise some exotic possibilities. If type arguments are real arguments actually passed when an object is constructed, and the constructed object's type depends on those arguments, what if the object's type depended on arguments that weren't types? This is a peek into the world of dependent types, which should be enough to keep the mathematically minded awake at night.

Cooler heads may have decided that I don't get my unerased type arguments, but I still can't help looking over the fence and drooling. There may be workarounds in theory to the lack of runtime type arguments on the JVM, but in practice, it seems there's never a manifest around when you need one.

Wednesday, July 6, 2011

A Human Condition

A while ago I was talking with a friend of mine about some science fiction I'd read. I don't remember the story's premise, but it was something different from the everyday reality we moderns are familiar with. My friend's reaction was to dismiss the story as absurd: “But that would change the human condition!”

People often use the term “human condition” as if it referred to something immutable and eternal. Although I suspect that some aspects of our existence are forced upon us by the fact that we are tool-using social animals, a lot of the assumptions we and everyone we know make about our lives seem more likely to be specific to our time and place. So I've wondered from time to time: what does constitute a change to the human condition? (And is the human condition shared by all humans at all times? Could some nonhumans also experience the human condition?)

I'll take a shot at answering my question, even though other people have answered it often and with more thought. A human condition, I'll say, is a broad cultural outlook constrained by human institutions, and includes the possibilities people perceive for their own lives when they live under those institutions. A human condition is not specific to a particular culture, in the usual sense of the word “culture”, but belongs instead to a broad complex of cultures at roughly the same level of development. A change in the human condition is a change in the shape of the psychological space of the people who share that condition, or at any rate in the consensus of those people on what psychological spaces exist.

I think agriculture was probably a technology big enough to have changed the human condition. It turned the landscape from a natural place to a made place. It created property and wealth in the modern sense—you could own more than you could carry. Agriculture required longer-term planning and longer persistence at a single activity than hunting and gathering, and so created the beginnings of the modern sense of time.

Agriculture enabled cities, another human-condition disruption. Cities meant that, for the first time, most people you encountered on a daily basis were strangers. That meant rules for how to behave in public, and so there was now a distinction between public and private spaces.

Cities required larger and more anonymous systems of government than agricultural villages, and so begat the state. States, and the elaborate religions that accompanied them, created the possibility of loyalties to things and people other than your family and friends.

And perhaps my favorite of all, there's writing. Writing, for the literate, turns memory from a short-term feat shared with few to enduring knowledge shared with many—with the world and with the ages, if you're lucky and you write skillfully enough. Knowledge has always been a form of power, but writing let you acquire a whole heap of knowledge and literally lock it up.

For the five millennia from the invention of writing and the rise of the first city-states until the industrial revolution, I would say that the human condition remained largely unchanged in the most densely populated parts of the world. States came and went; religions came and went; languages and customs came and went; technologies came and went. But the shape of psychological possibilities in most urban and agricultural societies was similar: if you were a member of the elite, you had some measure of freedom and some scope for ambition, and often a reasonably broad education even by modern standards. And if you were not one of the elite, you usually had little or none of those. And it was obvious to everyone that there had to be an elite who had the freedom to obey their desires, and a much larger mass of people who had to obey the elite.

The harnessing of fossil fuels and the invention of mass production again changed the shape of society and of human ambitions. Even relatively poor people could afford more than the minimum needed to keep them alive. Societies could afford to educate everyone. Knowledge was still power, but there was a lot more power around, both in terms of the physical expenditure of energy and the degree of control people had over their lives. The assumption that an elite is inevitable became open to question. Democracy became widely viewed not as a dangerous experiment but as the default way to organize government.

The latest thing to change the human condition is the internet, or more precisely, inexhaustible storage and retrieval of information and ubiquitous connectivity to that information. Remember conversations before Google? Traveling before GPS? Those experiences are shrinking in the rear view mirror, off to join the telegraph and the steam locomotive. Now everyone knows everything, or at least everything for which they can come up with decent search terms. Although there are types of power other than knowledge, the power that is knowledge belongs to everyone in the world who can afford a phone or a computer.

I don't think I've fully internalized what it means to be human now that our condition has changed again. Have you? What do we do with ourselves now?

Saturday, July 2, 2011

Pushing Back

Streams are Scala's flagship lazy data structure, and they let you do all the good things that lazy evaluation enables. But although laziness can have performance benefits (by not evaluating something that will never be used, or delaying a computation that would entail deep recursion if performed eagerly), it can also incur a performance cost by unnecessarily wrapping an already-evaluated value in a thunk.

If every item in a sequence is already evaluated, you'll want to use a List, not a Stream. But sometimes you have a sequence that is a mix of evaluated and unevaluated items, often because you look ahead in a stream, evaluate some items, and then push back those that are not processed immediately. This can occur when compiling programming languages, or when normalizing the order of a prettyprinting token stream, as in S. Doaitse Swierstra and Olaf Chitil's 2009 paper Linear, Bounded, Functional Pretty-Printing.

It occurred to me that you can write a Scala-style stream that avoids the thunk-wrapping and unwrapping that occurs when you push an item onto the head of an already-evaluated stream (in Scala, only the tail of the stream is ever passed by name; the head of a stream is passed by value when the stream is created, and so is always preevaluated). Every function that constructs a stream knows whether its tail is passed by name, and could theoretically create different types of cons cell for evaluated and unevaluated streams:

abstract class Cons [+A] extends Stream[A] {
  // Stream plumbing common to both cell types elided
}

class LazyCons [+A] (val head: A, after: => Stream[A])
    extends Cons[A] {
  lazy val tail = after
}

case class EagerCons [+A] (head: A, tail: Stream[A])
    extends Cons[A]

In SI-4698 I submitted a timing test that lets you observe the cost of LazyCons vs. EagerCons in the case where the tail has already been evaluated. On my machine it takes over 50% longer to construct or examine a lazy cell than an eager cell.
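For the curious, here is the two-shapes idea transliterated into Java, away from Scala's Stream machinery (all names here are mine; a real implementation would want a proper empty-stream representation rather than null):

```java
import java.util.function.Supplier;

// Two cell shapes for one stream type: a lazy cell thunks its tail,
// an eager cell stores an already-evaluated tail with no wrapper.
abstract class Cell<A> {
    final A head;
    Cell(A head) { this.head = head; }
    abstract Cell<A> tail();  // null marks the empty stream in this sketch
}

final class LazyCell<A> extends Cell<A> {
    private Supplier<Cell<A>> thunk;  // dropped after the first force
    private Cell<A> tail;
    LazyCell(A head, Supplier<Cell<A>> thunk) { super(head); this.thunk = thunk; }
    Cell<A> tail() {
        if (thunk != null) { tail = thunk.get(); thunk = null; }
        return tail;
    }
}

final class EagerCell<A> extends Cell<A> {
    private final Cell<A> tail;  // pushing onto an evaluated stream costs no thunk
    EagerCell(A head, Cell<A> tail) { super(head); this.tail = tail; }
    Cell<A> tail() { return tail; }
}
```

Pushing an item back onto an already-evaluated stream allocates an EagerCell, paying nothing for the thunk creation and forcing that a lazy cell would require.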

I don't know whether Scala streams users add items to the head of already-evaluated streams often enough to make it worth complicating the standard streams implementation with a change like this just for the speedup. But it's an idea.

Monday, June 27, 2011

Exceptionally Lazy Messages

In a perfecter world, wouldn't Java exceptions have been better defined with their messages as by-name arguments?:

class Throwable (message: => String, cause: Option[Throwable] = None)

Not that that's Java syntax, but you know what I mean.

Any number of times, I've gone to throw an exception with a nice detailed message, but confused myself by getting an exception during the construction of the message instead. The latter exception obscures the occurrence of the problem that would have thrown the former exception.

Maybe I'm the only one who writes message-building code complicated enough to have bugs of its own, but maybe not. I'm just sayin'.
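You can get most of the effect in today's Java by hand, at the cost of some ceremony (LazyMessageException is my own name, not a JDK class):

```java
import java.util.function.Supplier;

// An exception that defers building its message until someone asks for it.
// If the message-building code is itself buggy, the original throw still
// happens; the secondary failure surfaces only when the message is printed.
class LazyMessageException extends RuntimeException {
    private final Supplier<String> message;

    LazyMessageException(Supplier<String> message) {
        this.message = message;
    }

    @Override
    public String getMessage() {
        return message.get();  // evaluated on demand, not at throw time
    }
}
```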

Sunday, May 22, 2011

Too Lazy to Recurse

If a tree doesn't fall in the forest, but instead you record some instructions (which you might or might not carry out later) on how to make the tree fall, does the Scala compiler make a sound?

The answer appears to be yes, at least when you're trying to use the @tailrec annotation (which complains about recursive calls when they don't appear in tail position).

For example, consider this completely useless function, which computes a value equal to its (nonnegative) argument by recursing in nontail position:

def foo (i: Int): Int = if (i == 0) 0 else 1 + foo(i - 1)

If you mark this declaration @tailrec, the Scala compiler complains, as expected; the call to foo(i - 1) is not the last thing evaluated in foo, because the 1 + remains to be done after foo returns.

Now consider:

case class Later1 (deferredI: () => Int) {
  lazy val i = deferredI()
  override def toString: String = i.toString
}

def foo1 (i: Int): Later1 =
if (i == 0) Later1(() => 0) else Later1(() => 1 + foo1(i - 1).i)

Like foo, foo1 contains (textually) a call to itself. But foo1 doesn't actually call itself during an invocation of itself; it just computes and stores a function that would call foo1 later, if anyone bothered to invoke it. It's clear that such a call to foo1 isn't tail-recursive—you could never replace it with a goto—but then you would never expect it to be, since it's not called from foo1.

So is the Scala compiler being too picky about @tailrec? To be fair, you might want it to complain if instead you did something like:

def foo1a (i: Int): Int =
if (i == 0) 0 else Later1(() => 1 + foo1a(i - 1)).i

foo1a defers the call to itself only long enough to wrap the call in a Later1, then goes ahead and forces the call to complete anyway before returning. Even if you didn't want a warning for the foo1 case, you might want one here.

And even in the previous case, although foo1 returns a Later1 without ever reinvoking foo1, the computation captured in the Later1 will recurse, if it's ever carried out. Later1.i calls the deferredI created by foo1, and that deferredI calls Later1.i (where the latter Later1 results from calling foo1 again, although there are never two calls to foo1 on the stack at the same time).
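The same shape is easy to reproduce in Java, where no @tailrec question arises (Later and foo1 here are my own transliterations of the post's Scala):

```java
import java.util.function.IntSupplier;

// A memoized deferred int, like the post's Later1: the computation runs
// at most once, the first time get() is called.
class Later {
    private IntSupplier deferred;  // dropped after evaluation
    private int value;

    Later(IntSupplier deferred) { this.deferred = deferred; }

    int get() {
        if (deferred != null) { value = deferred.getAsInt(); deferred = null; }
        return value;
    }

    // Textually recursive, but each invocation only builds a thunk. Forcing
    // the result nests get() calls i deep, yet no two foo1 frames are ever
    // on the stack at once, just as the post describes.
    static Later foo1(int i) {
        return i == 0 ? new Later(() -> 0)
                      : new Later(() -> 1 + foo1(i - 1).get());
    }
}
```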

As an aside, Scala does provide a more succinct way to express a deferred call to a function with no arguments. By-name vals are illegal, so this won't compile:

// BAD: case class parameter i is a val:
case class Later2 (i: => Int) {
  override def toString: String = i.toString
}

you can trade some added boilerplate at the declaration of Later2 for some reduced boilerplate when you construct a Later2:

class Later2 (deferredI: => Int) {
  lazy val i = deferredI
  override def toString: String = i.toString
}

object Later2 {
  def apply (i: => Int) = new Later2(i)
}

def foo2 (i: Int): Later2 =
if (i == 0) Later2(0) else Later2(1 + foo2(i - 1).i)

Note that there's no need for “() =>” in the arguments to Later2.

The version using Later2 works as expected, but foo2 still provokes the compiler's wrath if you annotate it with @tailrec.

Thursday, March 31, 2011

Scala Internal DSL for Lambda Calculus

I've been playing with the untyped lambda calculus recently, and come up with this notation for it in Scala:

'x ^: 'y ^: 'x ! 'y
'x ^: 'y ^: 'x ! ('x ! 'y)
'x ^: 'y ^: 'x ! ('x ! ('x ! 'y))

which translates to:

λx. λy. x y
λx. λy. x (x y)
λx. λy. x (x (x y))
(You might recognize these as the Church numerals one through three.)

To digest this, you first have to imagine abstraction as an infix operator rather than a prefix operator. If you squint a little, you can see a caret as a lambda (the colon is necessary to get Scala to make the operator right-associative, as is conventional).

The exclamation point is the application operator by analogy with the notation used in Scala Actors. It takes only a wee bit of mental squinting to see passing an argument to a function as sending a message to an actor.

The single quote prefix in Scala turns an identifier into a Symbol, which is like a String but with only half the quoting. Scala Symbols are intended for purposes that are, uh, symbolic—I think a lambda calculus variable qualifies.

As cute as dancing kittens? You be the judge.

Sunday, February 27, 2011

Almost Literate Syntax Trees

Suppose you're translating a lambda calculus expression like:

λx. λy. λz. (x y) z

into a Lisp-like syntax tree:
  (fun 'x
    (fun 'y
      (fun 'z
        (apply (apply (sym 'x) (sym 'y))
               (sym 'z)))))
where sym expresses a reference to a symbol that is (or should be) bound in the current context.

If you replace fun with givenA and sym with the, the result looks like:
  (givenA 'x
    (givenA 'y
      (givenA 'z
        (apply (apply (the 'x) (the 'y))
               (the 'z)))))
This exploits an English speaker's understanding that “a” introduces a term, and “the” refers to an occurrence of that term in the same context.

The second example reads better to me, but I haven't been able to come up with an everyday-English equivalent for apply. Maybe it's just not an everyday concept.

Saturday, February 19, 2011

God Lizard

When designing something new, nothing seems to matter more than getting the terminology right. If you don't name things precisely—one concept, one word—you're likely to confuse yourself. And if you don't name them concisely, people won't bother to use your terms, and they'll confuse themselves when they try to talk about what you've built.

The API designer's most important tool is the thesaurus. Which, contrary to what you might think, does not come from the Greek words for “god lizard”, but from the word for “treasure”.