Wednesday, December 24, 2014

Five for '15

2014 was a pretty good year for me in terms of learning new things. My meetup group held a bunch of meetings at which we explored type-theoretic topics and worked our way through some computer-assisted theorem proving.

But next week is another year, and I'm trying to figure out what I should be trying to figure out next. I've come up with five things I'd like to learn about in 2015:

  1. How concurrent can logic programming get, and what techniques are used to parallelize operations in a language like Prolog? This is a separate question from how to create Prolog predicates that perform conventional sorts of thread management operations, which is mostly what you find if you search for “concurrent Prolog”. It's also separate from the question of how concurrent logic programming should get—the consensus seems to be that Japan's fifth generation computer project yielded the answer “not very” (people found it hard to write programs in a way that took full advantage of concurrent logical operations). But I'm still curious to figure out just how you'd adapt a logic programming implementation (like the one described in Structure and Interpretation of Computer Programs) if you pretended that you could run arbitrarily many processes at once with no communication costs between them. (I've put a small sketch of what I mean after this list.)

  2. On a related note, how might you compile a dependently typed programming language to Prolog? There seems to be a straightforward translation of dependent algebraic data types to Prolog clauses (the second sketch after this list shows it for length-indexed vectors), but I haven't sat down to figure out how you'd integrate that with an interpreter for the dependently typed lambda calculus. As a subquestion, is it possible to generate the Prolog such that it does not use any sequential operations like cuts, so as to take advantage of my infinitely parallel Prolog implementation from the previous question?

  3. What type systems have been developed for process calculi? Are any as expressive as the dependently typed lambda calculus? What sort of mathematics do such type systems lead to? (These are some topics we may explore in the meetup group this year.)

    I realize that a fair amount has been written on this topic; I just haven't read it yet. But straight off, it seems as if a type system for something like the pi calculus couldn't look much like one for the lambda calculus. A function type in the lambda calculus assumes that control returns to the caller of the function; the function type describes the relationship between the function's inputs and outputs. In the pi calculus, control could go off in any (or no) direction—the closest analog to a function type would be a channel type that describes what types of values get sent along the channels that are inputs to the channel in question (I think). This sounds a lot wilder than the tidy walled garden of the lambda calculus. (The third note after this list records the one concrete data point I know of here.)

  4. Is there an elegant way to create a sort of dependently typed Lisp? By which I don't mean that I just want more parentheses so that I can stuff complex type declarations into them—I'm not that fond of traditional Lisp syntax, and I'm even a fan of infix operators. What I'm looking for is a way to create a language with a sophisticated type system (like the dependently typed lambda calculus) that is homoiconic—that is, not only is there a straightforward correspondence between a source program and the resulting syntax tree, but syntax trees themselves are easy to understand and manipulate (at least, that's what I mean by “homoiconic”; there seems to be plenty of controversy out there about how the term should be understood). More specifically, I'd like a language where equivalents of the standard Lisp operations quote and eval are easy to define and use (the last sketch after this list shows the flavor I'm after, with Prolog standing in for Lisp).

  5. Looking in a slightly different direction, how can one develop neural networks that implement functions of arbitrary (dependent) type? I've read a little about how recurrent neural networks can handle variable-sized input, but how does that work for output?
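
To make the first question concrete, here's a sketch of the kind of program I have in mind: plain Prolog, with a made-up toy graph.

    % A toy reachability predicate over a directed graph.
    edge(a, b).
    edge(b, c).
    edge(a, d).

    % A sequential engine tries these two clauses top to bottom,
    % backtracking on failure. The imaginary machine I'm picturing would
    % instead spawn one process per matching clause at every call
    % (OR-parallelism) and, since communication is free by assumption,
    % could also run the two subgoals of the second clause at once,
    % sharing the binding of Z (AND-parallelism).
    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

On a stock engine, ?- path(a, Where). enumerates b, d, and c by depth-first search; the question is what the same query looks like as a wavefront of processes grafted onto the SICP-style implementation.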
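
Likewise for the second question, here is the translation I have in mind, on the standard example of length-indexed vectors. This only handles the data-type half of the problem; hooking it up to an actual type checker for the lambda calculus is the part I haven't worked out. Note that it needs no cuts:

    % Peano naturals: z, s(z), s(s(z)), ...
    % The dependent declarations
    %     nil  : Vec z
    %     cons : A -> Vec n -> Vec (s n)
    % read off as one Prolog clause per constructor, with the length
    % index demoted to an ordinary argument:
    vec(z, nil).
    vec(s(N), cons(_, Xs)) :- vec(N, Xs).

The same clauses run in both directions: ?- vec(N, cons(a, cons(b, nil))). infers N = s(s(z)), and ?- vec(s(s(z)), V). answers V = cons(_, cons(_, nil)).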
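
On the third question, here is the one concrete data point I already know of (so take my notation, ch(T) for “channel carrying values of type T”, as ad hoc). In Milner's encoding of the call-by-value lambda calculus into the pi calculus, a function becomes a process that waits on a channel for an argument paired with a reply channel on which the result will be sent. At the level of types, that turns the function type into a channel type, roughly:

    (A -> B)*  =  ch(A*, ch(B*))

That is, control doesn't return to the caller; the result gets sent to an explicit continuation channel. So the wildness is real, but at least this corner of it has been mapped.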
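
And as a point of comparison for the fourth question: Prolog, of all things, already has the quote/eval property in miniature, since a goal is just an ordinary term until you call it. (This is plain Prolog; what it lacks, of course, is the sophisticated type system, which is the whole problem.)

    ?- Goal = append([1], [2], Xs),   % "quote": a goal is just a term
       call(Goal),                    % "eval": run it, binding Xs
       Goal =.. Tree.                 % univ: the term as a syntax-tree list
    % Xs = [1, 2],
    % Tree = [append, [1], [2], [1, 2]].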

If 2014 for me was the Year of the Lambda Calculus, it looks as if 2015 will be the Year of the Pi Calculus, given how frequently I've mentioned concurrency above.

I hesitate to formulate my interest in these topics as any sort of New Year's resolution, because we all know what happens to New Year's resolutions. But I have agreed to give a short presentation on opportunities for concurrency in Prolog implementations on the 4th of January. I've found that promising a talk on a topic one knows nothing about is an excellent motivator for learning about said topic.
