Wednesday, December 24, 2014

Five for '15

2014 was a pretty good year for me in terms of learning new things. My meetup group held a bunch of meetings at which we explored type-theoretic topics and worked our way through some computer-assisted theorem proving.

But next week is another year, and I'm trying to figure out what I should be trying to figure out next. I've come up with five things I'd like to learn about in 2015:

  1. How concurrent can logic programming get, and what techniques are used to parallelize operations in a language like Prolog? This is a separate question from how to create Prolog predicates that perform conventional sorts of thread management operations, which is mostly what you find if you search for “concurrent Prolog”. It's also separate from the question of how concurrent logic programming should get—the consensus seems to be that Japan's fifth generation computer project yielded the answer “not very” (people found it hard to write programs in a way that took full advantage of concurrent logical operations). But I'm still curious to figure out just how you'd adapt a logic programming implementation (like the one described in Structure and Interpretation of Computer Programs) if you pretended that you could run arbitrarily many processes at once with no communication costs between them.

  2. On a related note, how might you compile a dependently typed programming language to Prolog? There seems to be a straightforward translation of dependent algebraic data types to Prolog clauses, but I haven't sat down to figure out how you'd integrate that with an interpreter for the dependently typed lambda calculus. As a subquestion, is it possible to generate the Prolog such that it does not use any sequential operations like cuts, so as to take advantage of my infinitely parallel Prolog implementation from the previous question?

  3. What type systems have been developed for process calculi? Are any as expressive as the dependently typed lambda calculus? What sort of mathematics do such type systems lead to? (These are some topics we may explore in the meetup group this year.)

    I realize that a fair amount has been written on this topic; I just haven't read it yet. But straight off, it seems as if a type system for something like the pi calculus couldn't look much like one for the lambda calculus. A function type in the lambda calculus assumes that control returns to the caller of the function; it describes the relationship between the function's inputs and outputs. In the pi calculus, control could go off in any (or no) direction—the closest analog to a function type would be a channel type that describes what types of values get sent to the channels that are inputs to the channel in question (I think). This sounds a lot wilder than the tidy walled garden of the lambda calculus.

  4. Is there an elegant way to create a sort of dependently typed Lisp? By which I don't mean that I just want more parentheses so that I can stuff complex type declarations into them—I'm not that fond of traditional Lisp syntax, and I'm even a fan of infix operators. What I'm looking for is a way to create a language with a sophisticated type system (like the dependently typed lambda calculus) that is homoiconic—that is, not only is there a straightforward correspondence between a source program and the resulting syntax tree, but syntax trees themselves are easy to understand and manipulate (at least that's what I mean by “homoiconic”; there seems to be plenty of controversy out there about how the term should be understood). More specifically, I'd like a language where equivalents of the standard Lisp operations quote and eval are easy to define and use.

  5. Looking in a slightly different direction, how can one develop neural networks that implement functions of arbitrary (dependent) type? I've read a little about how recurrent neural networks can handle variable-sized data as input, but how does that work for output?

If 2014 for me was the Year of the Lambda Calculus, it looks as if 2015 will be the Year of the Pi Calculus, given how frequently I've mentioned concurrency above.

I hesitate to formulate my interest in these topics as any sort of New Year's resolution, because we all know what happens to New Year's resolutions. But I have agreed to give a short presentation on opportunities for concurrency in Prolog implementations on the 4th of January. I've found that promising a talk on a topic one knows nothing about is an excellent motivator for learning about said topic.

Sunday, December 14, 2014

Poker with Tarot Cards

So recently I was learning to play Poker along with an inquisitive 10-year-old named Eli, and the question came up: how would you play Poker with Tarot cards?

(The comedian Steven Wright is known for saying, “Last night I stayed up late playing Poker with Tarot cards. I got a full house and four people died.”—but in fact, the Tarot deck has been used for playing games since it appeared in Europe in the 15th century, presumably with few fatalities. According to the Wikipedia article on Tarot cards, the earliest record we have for the practice we now consider the primary purpose of the Tarot—telling fortunes, where each card has a particular significance—dates from the 18th century. So if historical precedence is any guarantee, you are most likely not courting spiritual disaster by sneaking in a few hands of Tarot Klondike.)

Of course, making the rules is always more fun than playing by the rules, so I'm hardly the first person to suggest alternate ways of playing Poker. Variations include:

The knobs that can be twiddled in designing a new Poker variant (ignoring the betting, which is a whole nother can of worms) include:

  • The deck:
    • The number of suits
    • The number of ranks
    • Whether all suits have the same ranks (is the deck a rectangular grid?)
  • The number of cards that count as a hand (could be more or less than the usual 5)
  • The types of card combinations that are scored:
    • Straights (card ranks are sequential)
    • Flushes (all cards are of the same suit)
    • n of a kind (that is, groups of n cards of the same rank)
    • Or something else, like the super mentioned above. You could also eliminate or modify one of the standard combinations—maybe flushes don't count for anything, or you allow a “small straight” (all but one of the cards' ranks are sequential)

Let the twiddling begin!
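
To make those knobs concrete, here is a minimal Scala sketch of what a variant description might look like. The names (Suit, Variant, and so on) are mine and purely illustrative; they don't come from any existing program.

    // A deck is a list of suits, each with its own ranks (the suits need not
    // share the same ranks); a variant adds a hand size and a list of the
    // combinations that score. All names here are illustrative.
    case class Suit(name: String, ranks: Seq[String])
    case class Variant(suits: Seq[Suit], handSize: Int, combos: Seq[String])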

The deck of interest here is the 78-card Tarot deck of 5 suits, where the trumps have 22 cards and the other suits (the standard 4 from the 52-card deck) 14 cards each. The trumps are numbered 0 through 21; the other suits 1 through 10 and then Jack, Knight, Queen, King (or Valet, Cavalier, Dame, Roi in the French Tarot deck I have).

A 5-card hand seems a bit small for a deck that is half again as large as the standard one, so I'd prefer 6. Eli agrees, which settles it.
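
Using the hypothetical Suit and Variant classes from the sketch above, the deck and hand size just settled on might be written out like this:

    // The 78-card Tarot deck: 22 trumps plus 4 standard suits of 14 cards each.
    val courtRanks = (1 to 10).map(_.toString) ++ Seq("J", "N", "Q", "K")
    val tarotSuits =
      Suit("Trumps", (0 to 21).map(_.toString)) +:
        Seq("Hearts", "Diamonds", "Clubs", "Spades").map(name => Suit(name, courtRanks))

    val tarotPoker = Variant(tarotSuits, handSize = 6,
      combos = Seq("straight", "flush", "n of a kind"))

    assert(tarotSuits.map(_.ranks.size).sum == 78)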

How to score the hands is a bit of a puzzle. What is a straight? Do we just do a sequential mapping of the trump ranks to the ranks in other suits, and decree that a Jack is an 11, a Knight a 12, a Queen a 13, and a King a 14? Somehow that doesn't seem very satisfactory. I gnawed on this problem for a while and finally decided that straights are based on the nominal ranks of the cards, and the permissible straights are defined by how those ranks are ordered in each suit. That is, the ranks in a straight must be a subsequence of either:

    0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 0

or:

    1 2 3 4 5 6 7 8 9 10 J N Q K 1

(where the lowest card in each suit may also be the highest, for purposes of a straight, as is conventional).

For example, a 6-card straight 3-4-5-6-7-8 may consist of a mix of trump and nontrump suits, but a straight that begins with 8-9-10 must continue with either the 11-12-13 of trumps or the J-N-Q of some other suit(s), so as to preserve a sequence of ranks that actually appears in some suit.
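
Here is a hedged Scala sketch of that rule (my own function and variable names, not anything from the program mentioned at the end of this post): a hand is a straight when its rank names form one consecutive run, with the single wraparound shown above, in either ordering.

    // The rank names must form a consecutive run in one of the two suit
    // orderings; appending the lowest rank after the highest allows the
    // conventional wraparound and nothing more.
    val trumpOrder = (0 to 21).map(_.toString)
    val suitOrder  = (1 to 10).map(_.toString) ++ Seq("J", "N", "Q", "K")

    def isStraight(rankNames: Seq[String], length: Int = 6): Boolean = {
      val names = rankNames.toSet
      def runs(order: Seq[String]) =
        (order :+ order.head).sliding(length).map(_.toSet)
      names.size == length &&
        (runs(trumpOrder) ++ runs(suitOrder)).exists(_ == names)
    }

The small straights allowed below could reuse the same test, applied to each 5-card subset of the hand.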

Increasing the number of suits means that flushes become rarer, and increasing the number of cards in the hand means that straights become rarer. These both seem like aesthetic flaws, but I've decided to allow small straights (that is, sequences of 5 cards within a 6-card hand; big straights of 6 cards are also allowed), while ignoring the flush problem (you still need all 6 cards to be of the same suit for a flush; mixing small straights and small flushes would just make the names of the hands too long).

Here's the ranking of hands (with frequency), based on the above choices:

     big straight flush 0.000023%
     small straight flush 0.000235%
     5 of a kind 0.000284%
     4 of a kind + pair 0.002403%
     3 of a kind + 3 of a kind 0.002412%
     flush 0.033468%
     big straight 0.049992%
     4 of a kind 0.052961%
     pair + pair + pair 0.097513%
     small straight pair 0.106750%
     3 of a kind + pair 0.353574%
     small straight 0.504444%
     3 of a kind 2.466884%
     pair + pair 6.093654%
     pair 40.184611%
     high card 50.050791%

A “small straight flush” consists of 6 cards of the same suit, but only 5 of them are in sequence. A “small straight pair” has 5 cards in sequence, and the rank of one member of the sequence is repeated in the sixth card (as in 4-5-6-7-8-5).

If the whole big straight/small straight thing gives you the willies, an alternative ranking that uses only 6-card straights is:

     straight flush 0.000023%
     5 of a kind 0.000284%
     4 of a kind + pair 0.002403%
     3 of a kind + 3 of a kind 0.002412%
     flush 0.033703%
     straight 0.049992%
     4 of a kind 0.052961%
     pair + pair + pair 0.097513%
     3 of a kind + pair 0.353574%
     3 of a kind 2.466884%
     pair + pair 6.093654%
     pair 40.291361%
     high card 50.555235%

And in case anyone is curious what happens in a standard 52-card deck with 5-card hands where you allow small (4-card) straights, the rankings are:

     big straight flush 0.001539%
     small straight flush 0.012159%
     4 of a kind 0.024010%
     3 of a kind + pair 0.144058%
     flush 0.184381%
     big straight 0.392465%
     small straight pair 0.650106%
     3 of a kind 2.112845%
     small straight 3.100471%
     pair + pair 4.753902%
     pair 41.606797%
     high card 47.017268%

For reference, the standard scoring in the standard deck is:

     straight flush 0.001539%
     4 of a kind 0.024010%
     3 of a kind + pair (full house) 0.144058%
     flush 0.196540%
     straight 0.392465%
     3 of a kind 2.112845%
     pair + pair 4.753902%
     pair 42.256903%
     high card 50.117739%

If you're interested only in standard scoring of 5-card hands in alternate decks, Travis Hydzik has a calculator that quickly figures out hand values for any deck with at least 4 suits and at least 5 ranks, where every suit has the same ranks. If you play with this tool a little bit, you'll see that 13 ranks is something of a sweet spot for decks with a moderate number of suits (between 5 and 9)—with fewer ranks, pairs become more common even than hands with nothing in particular (that is, hands scored only by their high card), which strikes me as perverse.

So, I've answered the Tarot Poker question to my own satisfaction. If you want to play Poker with Tarot cards and make different rules, by all means go ahead. Discovering the analogies that work for you between the standard deck and a different deck is more than half the fun.

I thought I would be done by now, but I'm not. Traipsing around the Web in search of alternate card decks led me to something wonderful that you can actually buy: a Fanucci deck. If you are sufficiently Old School, you might recognize this as belonging to the Zork subgame Double Fanucci. What's wonderful about the Fanucci deck? Well, it's got 15 suits named Books, Bugs, Ears, Faces, Fromps, Hives, Inkblots, Lamps, Mazes, Plungers, Rain, Scythes, Time, Tops, and Zurfs—11 cards each, numbered 0 through 9, then ∞—along with 9 unranked trumps called Beauty, Death, Granola, Grue, Jester, Light, Lobster, Snail, and Time (that's right, there's a Time trump that is not part of the Time suit).

Thank you, Internet!

If you don't immediately want to play with something like that, then I just don't know what's wrong with you. Unfortunately, this is just a blog, so I can't actually play a game with the Fanucci cards with you here, but I can tell you what happens when you try to apply the standard Poker scoring rules to the Fanucci deck (5-card hands—I'm treating only the cards ranked 0 through 9 as able to participate in straights; the trumps have no numeric rank and so cannot participate in straights or n of a kind):

     straight flush 0.000007%
     flush 0.000555%
     5 of a kind 0.002634%
     4 of a kind 0.190342%
     straight 0.363255%
     3 of a kind + pair 0.418992%
     3 of a kind 4.593350%
     pair + pair 6.961714%
     high card 41.277900%
     pair 46.191251%

Hmm, high card is more common than a pair. A bit disappointing, but not unexpected, given how many more suits there are than ranks.

One way to try to fix Fanucci Poker is by not letting pairs count for anything, which gets you:

     straight flush 0.000007%
     flush 0.000555%
     5 of a kind 0.002634%
     4 of a kind 0.190342%
     straight 0.363255%
     3 of a kind 5.012342%
     high card 94.430865%

but this means that 94% of hands are scored by high card, which is way off the 50% for standard Poker. That can't be right.

More in the spirit of things is probably to reverse the role of ranks and suits: ranks count only when all 5 of your cards are the same (a flush-like 5 of a kind), whereas any 2 or more of the same suit count (an n-of-a-kind-like flush). So, for example, a straight 3-flush is a straight where 3 of the cards have the same suit:

     straight 5-flush 0.000007%
     straight 4-flush 0.000502%
     5-flush 0.000555%
     straight 3-flush + 2-flush 0.001005%
     5 of a kind 0.002634%
     straight 3-flush 0.013059%
     straight 2-flush + 2-flush 0.019589%
     4-flush 0.065484%
     straight 2-flush 0.156714%
     3-flush + 2-flush 0.163567%
     straight 0.172385%
     3-flush 2.518297%
     2-flush + 2-flush 4.194274%
     2-flush 39.967163%
     high card 52.724764%

This treatment makes 53% of hands scored by high card, which is satisfyingly close to conventional.
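
A hedged Scala sketch of that reversed scoring (again, my own names): the "flush shape" of a hand is the list of same-suit group sizes of 2 or more, so that List(3, 2) reads as "3-flush + 2-flush". How the nine trumps should be assigned suits is a rules question I'll leave open.

    // Suits play the role that ranks usually do: any group of 2 or more
    // same-suit cards counts. Only the cards ranked 0 through 9 can take
    // part in a straight.
    case class FanucciCard(suit: String, rank: Option[Int]) // rank is None for trumps and the ∞ cards

    def flushShape(hand: Seq[FanucciCard]): List[Int] =
      hand.groupBy(_.suit).values.map(_.size).filter(_ >= 2).toList.sorted.reverse

    def isFanucciStraight(hand: Seq[FanucciCard]): Boolean = {
      val ranks = hand.flatMap(_.rank)
      ranks.size == hand.size &&               // every card has a numeric rank
        ranks.distinct.size == hand.size &&    // no repeated ranks
        ranks.max - ranks.min == hand.size - 1 // and the ranks are consecutive
    }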

I've calculated the odds above with the help of a Scala program I wrote. Because I didn't trust myself to do the combinatorial math for quirky decks where the suits don't all have the same length, the program actually enumerates all possible hands (ignoring order) and checks each of them—this takes many hours on my laptop for 6-card Tarot hands or 5-card Fanucci hands. You're welcome to try the program out for yourself, but if you want to play 7-card Fanucci, you might want to take a different approach.
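
In case it's useful, here is a minimal sketch of that brute-force approach, assuming you supply a classify function that names each hand; it's a stand-in, not the actual program mentioned above.

    // Enumerate every possible hand (ignoring order) and tally how often
    // each classification comes up, as a percentage of all hands.
    def handFrequencies[C](deck: Seq[C], handSize: Int, classify: Seq[C] => String): Map[String, Double] = {
      val counts = scala.collection.mutable.Map.empty[String, Long].withDefaultValue(0L)
      var total = 0L
      deck.combinations(handSize).foreach { hand =>
        counts(classify(hand)) += 1
        total += 1
      }
      counts.toMap.map { case (name, n) => (name, 100.0 * n / total) }
    }

For 6-card Tarot hands that means checking roughly 257 million combinations, which helps explain the hours of running time.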

Anyway, I hope all this encourages you to get some unfamiliar cards and try something new. All decks on hand!