Wednesday, December 24, 2014

Five for '15

2014 was a pretty good year for me in terms of learning new things. My meetup group held a bunch of meetings at which we explored type-theoretic topics and worked our way through some computer-assisted theorem proving.

But next week is another year, and I'm trying to figure out what I should be trying to figure out next. I've come up with five things I'd like to learn about in 2015:

  1. How concurrent can logic programming get, and what techniques are used to parallelize operations in a language like Prolog? This is a separate question from how to create Prolog predicates that perform conventional sorts of thread management operations, which is mostly what you find if you search for “concurrent Prolog”. It's also separate from the question of how concurrent logic programming should get—the consensus seems to be that Japan's fifth generation computer project yielded the answer “not very” (people found it hard to write programs in a way that took full advantage of concurrent logical operations). But I'm still curious to figure out just how you'd adapt a logic programming implementation (like the one described in Structure and Interpretation of Computer Programs) if you pretended that you could run arbitrarily many processes at once with no communication costs between them.

  2. On a related note, how might you compile a dependently typed programming language to Prolog? There seems to be a straightforward translation of dependent algebraic data types to Prolog clauses, but I haven't sat down to figure out how you'd integrate that with an interpreter for the dependently typed lambda calculus. As a subquestion, is it possible to generate the Prolog such that it does not use any sequential operations like cuts, so as to take advantage of my infinitely parallel Prolog implementation from the previous question?

  3. What type systems have been developed for process calculi? Are any as expressive as the dependently typed lambda calculus? What sort of mathematics do such type systems lead to? (These are some topics we may explore in the meetup group this year.)

    I realize that a fair amount has been written on this topic; I just haven't read it yet. But straight off, it seems as if a type system for something like the pi calculus couldn't look much like one for the lambda calculus. A function type in the lambda calculus assumes that control returns to the caller of the function; the function type describes the relationship between the inputs and outputs to the function. In the pi calculus, control could go off in any (or no) direction—the closest analog to a function type would be a channel type that describes what types of values get sent to the channels that are inputs to the channel in question (I think). This sounds a lot wilder than the tidy walled garden of the lambda calculus.

  4. Is there an elegant way to create a sort of dependently typed Lisp? By which I don't mean that I just want more parentheses so that I can stuff complex type declarations into them—I'm not that fond of traditional Lisp syntax, and I'm even a fan of infix operators. What I'm looking for is a way to create a language with a sophisticated type system (like the dependently typed lambda calculus) that is homoiconic—that is, not only is there a straightforward correspondence between a source program and the resulting syntax tree, but syntax trees themselves are easy to understand and manipulate (at least that's what I mean by “homoiconic”; there seems to be plenty of controversy out there about how the term should be understood). More specifically, I'd like a language where equivalents of the standard Lisp operations quote and eval are easy to define and use.

  5. Looking in a slightly different direction, how can one develop neural networks that implement functions of arbitrary (dependent) type? I've read a little about how recurrent neural networks can handle variable-sized data as input to an artificial neural network, but how does that work for output?
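Item 1, at least, lends itself to a toy. Here's a sketch (entirely my own construction—a hardcoded fact table standing in for a Prolog database, not anything from an actual Prolog implementation) of OR-parallelism in Scala: every clause alternative that matches a goal gets explored in its own Future, as if processes were free:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Facts standing in for Prolog clauses: parent(a, b). parent(a, c). etc.
val parent = Map("a" -> List("b", "c"), "b" -> List("d"), "c" -> List("e"))

// ancestor(X, Y) :- parent(X, Y).
// ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
// OR-parallelism: each matching clause alternative runs in its own Future,
// as if we could spawn arbitrarily many processes at no cost.
def ancestors(x: String): Future[Set[String]] = {
  val alternatives = parent.getOrElse(x, Nil)
  Future.traverse(alternatives)(z => ancestors(z).map(_ + z)).map(_.flatten.toSet)
}

val result = Await.result(ancestors("a"), Duration.Inf)
// result is Set("b", "c", "d", "e")
```

A real OR-parallel Prolog would also have to keep each branch's variable bindings separate as it unifies, which is where the hard part lives; this sketch dodges that entirely.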

If 2014 for me was the Year of the Lambda Calculus, it looks as if 2015 will be the Year of the Pi Calculus, given how frequently I've mentioned concurrency above.

I hesitate to formulate my interest in these topics as any sort of New Year's resolution, because we all know what happens to New Year's resolutions. But I have agreed to give a short presentation on opportunities for concurrency in Prolog implementations on the 4th of January. I've found that promising a talk on a topic one knows nothing about is an excellent motivator for learning about said topic.

Sunday, December 14, 2014

Poker with Tarot Cards

So recently I was learning to play Poker along with an inquisitive 10-year-old named Eli, and the question came up: how would you play Poker with Tarot cards?

(The comedian Steven Wright is known for saying, “Last night I stayed up late playing Poker with Tarot cards. I got a full house and four people died.”—but in fact, the Tarot deck has been used for playing games since it appeared in Europe in the 15th century, presumably with few fatalities. According to the Wikipedia article on Tarot cards, the earliest record we have for the practice we now consider the primary purpose of the Tarot—telling fortunes, where each card has a particular significance—dates from the 18th century. So if historical precedence is any guarantee, you are most likely not courting spiritual disaster by sneaking in a few hands of Tarot Klondike.)

Of course, making the rules is always more fun than playing by the rules, so I'm hardly the first person to suggest alternate ways of playing Poker. Variations include:

The knobs that can be twiddled in designing a new Poker variant (ignoring the betting, which is a whole nother can of worms) include:

  • The deck:
    • The number of suits
    • The number of ranks
    • Whether all suits have the same ranks (is the deck a rectangular grid?)
  • The number of cards that count as a hand (could be more or less than the usual 5)
  • The types of card combinations that are scored:
    • Straights (card ranks are sequential)
    • Flushes (all cards are of the same suit)
    • n of a kind (that is, groups of n cards of the same rank)
    • Or something else, like the super mentioned above. You could also eliminate or modify one of the standard combinations—maybe flushes don't count for anything, or you allow a “small straight” (all but one of the cards' ranks are sequential)

Let the twiddling begin!

The deck of interest here is the 78-card Tarot deck of 5 suits, where the trumps have 22 cards and the other suits (the standard 4 from the 52-card deck) 14 cards each. The trumps are numbered 0 through 21; the other suits 1 through 10 and then Jack, Knight, Queen, King (or Valet, Cavalier, Dame, Roi in the French Tarot deck I have).

A 5-card hand seems a bit small for a deck that is half again as large as the standard one, so I'd prefer 6. Eli agrees, which settles it.

How to score the hands is a bit of a puzzle. What is a straight? Do we just do a sequential mapping of the trump ranks to the ranks in other suits, and decree that a Jack is an 11, a Knight a 12, a Queen a 13, and a King a 14? Somehow that doesn't seem very satisfactory. I gnawed on this problem for a while and finally decided that straights are based on the nominal ranks of the cards, and the permissible straights are defined by how those ranks are ordered in each suit. That is, the ranks in a straight must be a subsequence of either:

    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 0

or

    1 2 3 4 5 6 7 8 9 10 J N Q K 1

(where the lowest card in each suit may also be the highest, for purposes of a straight, as is conventional).

For example, a 6-card straight 3-4-5-6-7-8 may consist of a mix of trump and nontrump suits, but a straight that begins with 8-9-10 must continue with either the 11-12-13 of trumps or the J-N-Q of some other suit(s), so as to preserve a sequence of ranks that actually appears in some suit.
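That rule is easy to check mechanically. Here's a sketch in Scala (the representation is my own: ranks as strings, with trumps "0" through "21" and the other suits "1" through "10" plus J, N, Q, K):

```scala
// Each allowed ordering, with the low card repeated at the end so it can
// also serve as the high card in a straight.
val trumpOrder = (0 to 21).map(_.toString) :+ "0"
val suitOrder  = ((1 to 10).map(_.toString) ++ Seq("J", "N", "Q", "K")) :+ "1"

// All contiguous runs of length n from a given ordering.
def runsOf(order: Seq[String], n: Int): Set[Seq[String]] =
  order.sliding(n).map(_.toSeq).toSet

// A hand's ranks form a straight if, taken as a multiset, they match some run.
def isStraight(ranks: Seq[String]): Boolean = {
  val runs = runsOf(trumpOrder, ranks.length) ++ runsOf(suitOrder, ranks.length)
  runs.exists(_.sorted == ranks.sorted)
}

// isStraight(Seq("8", "9", "10", "J", "N", "Q")) is true
// isStraight(Seq("9", "10", "11", "J", "N", "Q")) is false
```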

Increasing the number of suits means that flushes become rarer, and increasing the number of cards in the hand means that straights become rarer. These both seem like aesthetic flaws, but I've decided to allow small straights (that is, sequences of 5 cards within a 6-card hand; big straights of 6 cards are also allowed), while ignoring the flush problem (you still need all 6 cards to be of the same suit for a flush. Mixing small straights and small flushes just makes the names of the hands too long).

Here's the ranking of hands (with frequency), based on the above choices:

     big straight flush 0.000023%
     small straight flush 0.000235%
     5 of a kind 0.000284%
     4 of a kind + pair 0.002403%
     3 of a kind + 3 of a kind 0.002412%
     flush 0.033468%
     big straight 0.049992%
     4 of a kind 0.052961%
     pair + pair + pair 0.097513%
     small straight pair 0.106750%
     3 of a kind + pair 0.353574%
     small straight 0.504444%
     3 of a kind 2.466884%
     pair + pair 6.093654%
     pair 40.184611%
     high card 50.050791%

A “small straight flush” consists of 6 cards of the same suit, but only 5 of them are in sequence. A “small straight pair” has 5 cards in sequence, and the rank of one member of the sequence is repeated in the sixth card (as in 4-5-6-7-8-5).

If the whole big straight/small straight thing gives you the willies, an alternative ranking that uses only 6-card straights is:

     straight flush 0.000023%
     5 of a kind 0.000284%
     4 of a kind + pair 0.002403%
     3 of a kind + 3 of a kind 0.002412%
     flush 0.033703%
     straight 0.049992%
     4 of a kind 0.052961%
     pair + pair + pair 0.097513%
     3 of a kind + pair 0.353574%
     3 of a kind 2.466884%
     pair + pair 6.093654%
     pair 40.291361%
     high card 50.555235%

And in case anyone is curious what happens in a standard 52-card deck with 5-card hands where you allow small (4-card) straights, the rankings are:

     big straight flush 0.001539%
     small straight flush 0.012159%
     4 of a kind 0.024010%
     3 of a kind + pair 0.144058%
     flush 0.184381%
     big straight 0.392465%
     small straight pair 0.650106%
     3 of a kind 2.112845%
     small straight 3.100471%
     pair + pair 4.753902%
     pair 41.606797%
     high card 47.017268%

For reference, the standard scoring in the standard deck is:

     straight flush 0.001539%
     4 of a kind 0.024010%
     3 of a kind + pair (full house) 0.144058%
     flush 0.196540%
     straight 0.392465%
     3 of a kind 2.112845%
     pair + pair 4.753902%
     pair 42.256903%
     high card 50.117739%

If you're interested only in standard scoring of 5-card hands in alternate decks, Travis Hydzik has a calculator that quickly figures out hand values for any deck with at least 4 suits and at least 5 ranks, where every suit has the same ranks. If you play with this tool a little bit, you'll see that 13 ranks is something of a sweet spot for decks with a moderate number of suits (between 5 and 9)—with fewer ranks, pairs become even more common than hands with nothing in particular (that is, hands ordered by high card), which strikes me as perverse.

So, I've answered the Tarot Poker question to my own satisfaction. If you want to play Poker with Tarot cards and make different rules, by all means go ahead. Discovering the analogies that work for you between the standard deck and a different deck is more than half the fun.

I thought I would be done by now, but I'm not. Traipsing around the Web in search of alternate card decks led me to something wonderful that you can actually buy: a Fanucci deck. If you are sufficiently Old School, you might recognize this as belonging to the Zork subgame Double Fanucci. What's wonderful about the Fanucci deck? Well, it's got 15 suits named Books, Bugs, Ears, Faces, Fromps, Hives, Inkblots, Lamps, Mazes, Plungers, Rain, Scythes, Time, Tops, and Zurfs—11 cards each, numbered 0 through 9, then ∞—along with 9 unranked trumps called Beauty, Death, Granola, Grue, Jester, Light, Lobster, Snail, and Time (that's right, there's a Time trump that is not part of the Time suit).

Thank you, Internet!

If you don't immediately want to play with something like that, then I just don't know what's wrong with you. Unfortunately, this is just a blog, so I can't actually play a game with the Fanucci cards with you here, but I can tell you what happens when you try to apply the standard Poker scoring rules to the Fanucci deck (5-card hands—I'm treating only the cards ranked 0 through 9 as able to participate in straights; the trumps have no numeric rank and so cannot participate in straights or n of a kind):

     straight flush 0.000007%
     flush 0.000555%
     5 of a kind 0.002634%
     4 of a kind 0.190342%
     straight 0.363255%
     3 of a kind + pair 0.418992%
     3 of a kind 4.593350%
     pair + pair 6.961714%
     high card 41.277900%
     pair 46.191251%

Hmm, a pair is more common than high card. A bit disappointing, but not unexpected, given how many more suits there are than ranks.

One way to try to fix Fanucci Poker is by not letting pairs count for anything, which gets you:

     straight flush 0.000007%
     flush 0.000555%
     5 of a kind 0.002634%
     4 of a kind 0.190342%
     straight 0.363255%
     3 of a kind 5.012342%
     high card 94.430865%

but this means that 94% of hands are scored by high card, which is way off the 50% for standard Poker. That can't be right.

More in the spirit of things is probably to reverse the role of ranks and suits: ranks count only when all 5 of your cards are the same (a flush-like 5 of a kind), whereas any 2 or more of the same suit count (an n-of-a-kind-like flush). So, for example, a straight 3-flush is a straight where 3 of the cards have the same suit:

     straight 5-flush 0.000007%
     straight 4-flush 0.000502%
     5-flush 0.000555%
     straight 3-flush + 2-flush 0.001005%
     5 of a kind 0.002634%
     straight 3-flush 0.013059%
     straight 2-flush + 2-flush 0.019589%
     4-flush 0.065484%
     straight 2-flush 0.156714%
     3-flush + 2-flush 0.163567%
     straight 0.172385%
     3-flush 2.518297%
     2-flush + 2-flush 4.194274%
     2-flush 39.967163%
     high card 52.724764%

This treatment makes 53% of hands scored by high card, which is satisfyingly close to conventional.
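Classifying a hand under this reversed scheme mostly comes down to computing the multiset of suit-group sizes. A sketch (the function name is mine; the suit names are from the Fanucci deck above):

```scala
// Group the hand by suit and keep the group sizes of 2 or more, largest first;
// e.g. List(3, 2) means "3-flush + 2-flush".
def flushSignature(suits: Seq[String]): List[Int] =
  suits.groupBy(identity).values.map(_.size).filter(_ >= 2).toList.sorted.reverse

val sig = flushSignature(Seq("Bugs", "Bugs", "Ears", "Ears", "Bugs"))
// sig is List(3, 2), i.e. a 3-flush + 2-flush
```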

I've calculated the odds above with the help of a Scala program I wrote. Because I didn't trust myself to do the combinatorial math for quirky decks where the suits don't all have the same length, the program actually enumerates all possible hands (ignoring order) and checks each of them—this takes many hours on my laptop for 6-card Tarot hands or 5-card Fanucci hands. You're welcome to try the program out for yourself, but if you want to play 7-card Fanucci, you might want to take a different approach.
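To give a flavor of the brute-force approach (this is an illustration, not my actual program): here's the enumeration for flushes in the standard 52-card deck, which checks all 2,598,960 five-card hands:

```scala
// Build the deck as (suit, rank) pairs, then count hands that are all one suit.
val suits = Seq('S', 'H', 'D', 'C')
val ranks = 1 to 13
val deck  = for (s <- suits; r <- ranks) yield (s, r)

val flushes = deck.combinations(5).count(hand => hand.forall(_._1 == hand.head._1))
// flushes is 5148, i.e. 4 * C(13, 5), straight flushes included
```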

Anyway, I hope all this encourages you to get some unfamiliar cards and try something new. All decks on hand!

Saturday, September 6, 2014

Code Dependent More

The more I learn about the dependently typed lambda calculus, the cooler it seems. You can use it to describe refinement types (aka subset types), which are about as expressive as a type system can get; you can prove theorems (see chapter 8 of the Idris tutorial); you can even, if the Homotopy Type Theory people have their way, use it to build an entirely new foundation for mathematics.

However, I think dependent types need a bit of a marketing makeover. They have a reputation for being abstruse, to say the least. I mean, have you even read the dependent types chapters in Benjamin Pierce's Types and Programming Languages, long the bible of the aspiring type theorist? No, of course you haven't, because those chapters weren't written (although dependent types do merit a whole section in Chapter 30). The dependently typed calculus is apparently so arcane that even the place you go to find out everything about types won't say much about them.

(Not that Pierce is in any way a slacker; his more recent Software Foundations is all about the use of dependent types in the automated proof assistant Coq.)

Although following all the mathematics to which dependent types lead takes a good deal of sophistication (a lot more than I've got), and the implementation choices facing a programming language designer who wants to use dependent types are considerable (let's sweep those under a Very Large Rug), a description of what the dependently typed lambda calculus is is not particularly difficult to grasp for anyone who has used a statically typed programming language.

But then, of course, there's the word “dependent”, which pop psychology has not endowed with the most favorable of connotations. (I'd rather be addicted to coding than addicted to codeine, but still.) Surely more programmers would satisfy their curiosity about more expressive type systems if the dependently typed lambda calculus were just called something else (and while we're at it, preferably something shorter).

So What Is It?

If you haven't seen the dependently typed lambda calculus before, then you might want to start at the beginning, with the untyped lambda calculus (which, if you've used any kind of conventional programming language, you already sort of know, even if you don't know you know it). In the concise notation favored by programming language geeks, the grammar for the untyped calculus looks like:

     e  :=  v                 a variable
         |  λv. e             a function with parameter v and body e
         |  e1 e2             the function e1 applied to the argument e2
         |  built-in value    whatever you want: true, 42, etc.

This may look too succinct to make anything out of, but it just means that the untyped lambda calculus is a language that contains variables, functions, function calls, and built-in values. Actually, you can leave out the built-in values (think of them as the pricey add-ons to the base model)—you'll still have a Turing-complete language—but the built-in values (along with parentheses, which you can use to group things) make it feel more like a Lisp dialect, maybe one where curried functions are the norm.
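In fact the whole grammar fits comfortably into a data type. Here's a sketch of an environment-based evaluator in Scala (all names are my own choices, with Int standing in for the built-in values):

```scala
sealed trait Expr
case class Var(name: String)              extends Expr // a variable
case class Lam(param: String, body: Expr) extends Expr // λv. e
case class App(fn: Expr, arg: Expr)       extends Expr // e1 e2
case class Lit(value: Int)                extends Expr // a built-in value

sealed trait Value
case class IntV(n: Int) extends Value
case class Closure(param: String, body: Expr, env: Map[String, Value]) extends Value

def eval(e: Expr, env: Map[String, Value]): Value = e match {
  case Var(x)    => env(x)
  case Lit(n)    => IntV(n)
  case Lam(p, b) => Closure(p, b, env)
  case App(f, a) => eval(f, env) match {
    case Closure(p, b, cenv) => eval(b, cenv + (p -> eval(a, env)))
    case v                   => sys.error(s"cannot apply $v")
  }
}

val fortyTwo = eval(App(Lam("x", Var("x")), Lit(42)), Map.empty)
// fortyTwo is IntV(42)
```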

If you're a fan of static types you'd probably prefer a variant of the lambda calculus that has those, and the simplest way to do that is named (surprise!) the simply typed lambda calculus. It has a more complicated grammar than the untyped calculus, because it divides expressions into two categories: terms, which evaluate to a value at runtime, and for which I'll use the letter e; and types, which constrain terms in the usual statically typed way, and which get the letter t:

     e  :=  v                 a variable
         |  λv: t. e          a function with parameter v of type t and body e
         |  e1 e2             the function e1 applied to the argument e2
         |  built-in value    whatever you want: true, 42, etc.

     t  :=  t1 → t2           a function type, where t1 is the input type and t2 is the output
         |  built-in type     whatever you want: Bool, Int, etc.

You'll notice that the only change to the term syntax from the untyped lambda calculus is the addition of a type annotation (: t) on function parameters. Types can be either function types or built-in types.
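The typing rules this grammar implies are equally compact. A sketch of a checker (again, the names and the Int base type are my own choices):

```scala
sealed trait Ty
case object IntT extends Ty                   // a built-in type
case class Arrow(in: Ty, out: Ty) extends Ty  // t1 → t2

sealed trait Tm
case class V(name: String)                      extends Tm // a variable
case class Fun(param: String, ty: Ty, body: Tm) extends Tm // λv: t. e
case class Ap(fn: Tm, arg: Tm)                  extends Tm // e1 e2
case class IntLit(n: Int)                       extends Tm // a built-in value

def typeOf(t: Tm, ctx: Map[String, Ty]): Ty = t match {
  case V(x)          => ctx(x)
  case IntLit(_)     => IntT
  case Fun(p, ty, b) => Arrow(ty, typeOf(b, ctx + (p -> ty)))
  case Ap(f, a) => typeOf(f, ctx) match {
    case Arrow(in, out) if in == typeOf(a, ctx) => out
    case other => sys.error(s"cannot apply a $other")
  }
}

val idType = typeOf(Fun("x", IntT, V("x")), Map.empty)
// idType is Arrow(IntT, IntT), i.e. Int → Int
```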

This isn't a very sophisticated type system, and you'd hate trying to write any real code in it. Without more add-ons, we can't get correct types out of it for things like product types (think tuples) or sum types (disjoint unions).

Historically, and in most textbooks, the simply typed lambda calculus is augmented with a bunch of special-purpose gadgetry (like those missing product and sum types), and then followed by several progressively more powerful type systems, like the parametric polymorphism that is at the core of languages like Haskell and Scala.


Instead, let's just skip to the good bits. The dependently typed lambda calculus is not only more powerful than everything we're skipping over; it's also got a shorter grammar:

     e  :=  v                           a variable
         |  λv: e1. e2                  a function with parameter v of type e1 and body e2
         |  e1 e2                       the function e1 applied to the argument e2
         |  e1 → e2                     a nondependent function type, where e1 is the input type and e2 is the output
         |  (v: e1) → e2                a dependent function type, where v, the input, has type e1, and e2 is the output type (which can mention v)
         |  *                           the type of types
         |  built-in values and types   whatever you want: true, 42, Bool, Int, etc.

But whoa, what happened to the types? Unlike the simply typed calculus, there's no t in this grammar! Well, the big thing about the dependently typed lambda calculus (aside from the dependent function types) is that types are themselves values (the OO folks would say they are reified), and type expressions may involve arbitrary computation. In short, types are just terms!

Everyone's Favorite Example

So what can you actually do with the dependently typed lambda calculus? Well, the traditional (practically mandatory) example is one of a list (let's follow convention and call it a Vec) that has its length encoded in its type.

(This is not really the most exciting thing you can do with dependent types. It's actually possible to encode natural numbers in the type system of Scala or Haskell, so you can already implement Vec in either of those languages. But the two languages' type systems are oriented towards doing typical type-level stuff, so numbers-as-types are pretty awkward—it turns out that regular numbers are still the most convenient way of representing numbers.)

You can specify the type of Vec in the dependently typed lambda calculus as:

    Vec: * → Nat → *

where Nat is the type of natural numbers. That is, a Vec is parameterized by the type of its elements (remember that * is the type of types) and the number of its elements—Vec is actually a function that, given those arguments, returns a type.

The constructor for the empty Vec, Nil, has type:

    Nil: (A: *) → Vec A 0

That is, given a type A, Nil gives you back a Vec of that type with zero elements. You'll remember that we said Vec itself is a function that takes two arguments and returns a type—so the type of Nil is indeed a type, even though it looks like a (plain old, value-level) function call. (If this is the first time you're seeing this, I hope that's as much of a rush for you as it was for me.)

It's clearly not fair for me to leave with you with just Vecs of zero elements, so here's the type of Cons, which constructs a nonempty Vec:

    Cons: (A: *) → (n: Nat) → A → (Vec A n) → (Vec A (n + 1))

That is, you feed Cons the element type A and the size n of the old list you are prepending to, along with an A and the old list, and you get back a new list with size n + 1.
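For comparison, the three signatures above transcribe almost directly into Lean 4 (where * is written Type, and the element type becomes a parameter of the inductive definition rather than an explicit constructor argument):

```lean
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0                                   -- Nil:  (A: *) → Vec A 0
  | cons : (n : Nat) → A → Vec A n → Vec A (n + 1)   -- Cons: prepend to get size n + 1
```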

Back to the Whinging

All this technical exposition is fine (although I've left out a whole lot of stuff, like how you actually execute programs written in these calculi), but I really came here to complain, not to explain.

When talking about the polysyllabically-named dependently typed lambda calculus, I have to keep using the terms “dependent function type” and “nondependent function type” (hardly anyone says “independent”), which further pushes up the syllable count. By the time I name what I'm talking about, I've forgotten what I was going to say.

I like to think of nondependent function types (say, A → B) as cake types, because the arrow (like frosting) separates two layers (A and B) with distinct, relatively homogeneous structure, like cake layers. By extension, dependent function types (say, (x: A) → b x) are pie types: the variable x gets mixed from the left side of the arrow into the right, making the whole type chunky with xs, kind of like the bits of apple in an apple pie. So how about calling the whole calculus the pie calculus?

(Now, it may have occurred to you that you don't really need to have two different function types. Wouldn't a cake type be a degenerate version of a pie type, one that didn't happen to mention its variable on the right side of the arrow? Well, yes, you're right—the cake is a lie, albeit a little white lie. But the upshot is that the pie calculus is even simpler than I've made it out to be, because you can drop the production for the cake type from the grammar.)

As it happens, there are a couple of alternate notations for dependent function types. Instead of:

    (x: A) → B

you can say:

    ∀x: A. B

or

    Πx: A. B

Hey, that last convention makes it look as if the pie calculus is actually the Π calculus! But no, that would make life too easy. It turns out that the pi calculus is already something else, namely a formalism for describing concurrent computations. It's related to the lambda calculus, so no end of confusion would ensue if you tried to hijack the name.

So I guess I'll have to keep calling my favorite calculus by its mind-numbingly long name. Life is unfair, etc., etc.

And So Can You!

You can mitigate some of life's unfairness by mastering the dependently typed lambda calculus yourself. Check out Coq (the aforementioned Coq-based Software Foundations is probably the easiest way to get into dependent types), or Idris, or Agda, or even F* (or, for a quite different take on what it means to implement the calculus, Sage), and start amazing your friends with your new superpowers!

Saturday, February 22, 2014

A Completely Objective Observation

When you talk to Scala programmers and use the word “object”, they hear “composable parameterized namespace that is a first-class value”.

When you use the word “object” in front of Haskell programmers, they hear “Voldemort”.