# Lambda Closures (revisited)

I was looking at an old post of mine about how I implemented closures using closure generation, and it seems… overly complex.

So I went about doing a simpler implementation.

For review, we implement a LISP interpreter with a special function which essentially takes the following arguments:

`($$closure function arg1 arg2 ...)`

This special function takes the lambda declaration, pushes the arguments provided as constants, and generates a new function which in essence pushes the constants and calls a special CLOSURE instruction in the VM.

```
PUSHC function
PUSHC arg1
PUSHC arg2
...
CLOSURE n
```

The closure instruction modifies the stack by putting the arguments at the top of the stack (through rotating the stack), then jumps to the function.

The purpose of this instruction is to allow us to add the closure of a lambda to the argument list of the lambda function; thus, if we have the following declaration:

```
(let ((a 0))
  (lambda (b) (+ a b)))
```

We rewrite this using our special \$\$closure function (using the notation that ‘epsilon’ is ‘lambda’ without closures) as:

```
(let ((a 0))
  ($$closure (epsilon (a b) (+ a b))
             a))
```

Our \$\$closure instruction then generates a new procedure which takes one argument ‘b’, adds ‘a’ to the stack, then calls our original epsilon (think “lambda without closures”) function.
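In a host language with first-class functions, the net effect of \$\$closure is ordinary partial application. Here is a minimal Python sketch of the idea (the names `closure` and `epsilon` are illustrative, not the compiler's):

```python
def closure(func, *consts):
    """Build a new function that supplies the captured constants
    before the caller's own arguments, then calls func -- the same
    effect as PUSHC-ing the constants and executing CLOSURE."""
    def generated(*args):
        return func(*consts, *args)
    return generated

# 'epsilon': a lambda without closures; all outer state is an argument.
def epsilon(a, b):
    return a + b

add_a = closure(epsilon, 10)   # like ($$closure (epsilon (a b) (+ a b)) a), with a = 10
```

Calling `add_a(5)` then behaves exactly as the generated procedure would: the captured constant is prepended to the argument list before the jump.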

We can handle variables from an outer scope by wrapping them; we declare the following methods:

```
(define ($$wrap v) (cons v nil))
(define ($$fetch v) (car v))
(define ($$store v val) (rplaca v val) val)
```

This has the net effect of using an indirect reference to refer to a particular element, and has the nice property that it can be passed as a “constant” to our closure function.
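The same indirection can be sketched in Python with a one-element list standing in for the cons cell (a sketch of the technique, not the author's code):

```python
def wrap(v):            # ($$wrap v): box the value in a fresh cell
    return [v]

def fetch(cell):        # ($$fetch v): read through the cell
    return cell[0]

def store(cell, val):   # ($$store v val): write through the cell, return val
    cell[0] = val
    return val
```

Because the cell itself never changes identity, it can be captured as a "constant" while its contents remain mutable.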

Thus, if we have the following:

```
(let ((a 0))
  (lambda (b) (set! a (+ a b))))
```

Our rewrite function should generate the following:

```
(let ((a ($$wrap 0)))
  ($$closure (epsilon (a b) ($$store a (+ ($$fetch a) b))) a))
```

In other words, we wrap a as a list of one item, and this gives us the ability to access the contents of a even as our let statement passes out of scope.

This means we can do the following:

```
(define (makeproc)
  (let ((a 0))
    (lambda (b) (set! a (+ a b)))))

(set! Function1 (makeproc))
(set! Function2 (makeproc))

(Function1 3)
> 3
(Function2 5)
> 5
(Function1 4)
> 7
(Function2 -4)
> 1
```

Each invocation of makeproc creates a new list item (0) which is then stored in our new lambda function. This means Function1 and Function2 point to two separate (0) objects, which are then independently updated as we call either function.
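The makeproc behavior translates directly to Python, where each invocation allocates a fresh cell (a sketch of the semantics, not the compiler's generated code):

```python
def makeproc():
    a = [0]                  # the freshly allocated (0) cell

    def accumulate(b):       # the returned closure: (set! a (+ a b))
        a[0] = a[0] + b
        return a[0]

    return accumulate

function1 = makeproc()       # each call gets its own independent cell
function2 = makeproc()
```

As in the Lisp transcript, the two functions accumulate independently because they close over distinct cells.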

Now my previous implementation seemed overly complex, so I re-implemented the algorithm.

The essence of the state that we track as we parse through the LISP code to detect closures is driven by the following observations:

First, we only need to make the following modifications to our statement as we perform closure analysis:

(1) Wrap all ‘lambda’ declarations with a \$\$closure statement with the closure inside the lambda. We don’t rewrite ‘lambda’ declarations that do not read or write variables outside of the scope of the lambda function.

(Note: while in our discussion above we use the keyword ‘epsilon’ to distinguish from ‘lambda’, in our implementation we use ‘lambda’ exclusively.)

(2) For those variables from an outer scope that are written to by an inner scope, we need to replace all ‘set!’ invocations with ‘\$\$store’, all reads of the variable with ‘\$\$fetch’, and the point where we create the variable with ‘\$\$wrap’.

Second, as we rewrite statements, the contents of a statement do not affect the outer part of the statement. This means that as we recurse down we can rewrite statements in place; there is no context inside a statement (aside from whether it is accessed) that we need to maintain.

Third, each variable is declared only in one place. Yes, a variable can be masked by an inner declaration, but that refers to a distinct variable.

My second and third points mean I can rewrite statements as I recurse and perform analysis.

Now the goal of my analysis and rewrite is the following:

(1) Determine which variables from an outer scope are written to or read from. Variables declared in my own scope are ignored, and we only need to understand the usage state of the first instance of a variable declared outside of my scope.

(2) We only need to determine if a variable is read from or written to; we do not need to track any other state.

So the state which I track as I recurse through the statements in my system is:

`( local-scope usage )`

where local-scope is the set of variables that have been declared in my local scope, and usage is a push-down stack of variables and their usage, each entry a pair of values: the variable name, and a number 0 (declared but unused), 1 (read from) or 2 (written to).

For example, if we encounter the following declaration:

`(let (a b c) ...)`

Our state is:

`((a b c) ((a 0) (b 0) (c 0)))`

If, in an inner scope, we encounter:

`(let (c d e) ...)`

our state is updated to

`((d e a b c) ((c 0) (d 0) (e 0) (a 0) (b 0) (c 0)))`

This means that if we encounter a lambda function inside, when it updates the usage, it only updates the entry in the usage list for the variable in our inner-most scope.

Meaning if we encounter the following inside our inner let:

`(lambda (a) (set! c a))`

First, note that we’re inside a lambda–a change of scope. So as we parse the contents of the lambda, our state dumps the variables in the local-scope list and replaces them with `(a)`:

`((a) ((a 0) (c 0) (d 0) (e 0) (a 0) (b 0) (c 0)))`

(Note that we push (a 0) on our usage stack because inside our lambda, the variable a has local scope.)

Next we examine the statements. We see a set!, and we examine c: since it is not in our local-scope we modify our state to show that our variable c is written. We only change the first item in the stack, since the variable c we’re interested in is in the inner scope:

`((a) ((a 0) (c 2) (d 0) (e 0) (a 0) (b 0) (c 0)))`

We also examine ‘a’, which is read from–but because it is inside the local-scope, we do not mark the usage of ‘a’.

Now our method we use to do this scanning (\$\$ScanUsage) returns a list of the variables in this inner scope which were found and modified; in our case, this is the list `(c)`.
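That scanning pass can be sketched in Python, representing s-expressions as nested lists with symbols as strings (hypothetical names; the real implementation is Lisp operating on its own usage stack):

```python
def scan_usage(expr, local_scope, usage):
    """Mark the innermost usage record of each non-local variable:
    1 = read from, 2 = written to.  Returns the set of outer
    variables that were touched, as $$ScanUsage does."""
    touched = set()

    def mark(name, kind):
        if name in local_scope:
            return                        # declared locally: ignore
        for rec in usage:                 # first match = innermost scope
            if rec[0] == name:
                rec[1] = max(rec[1], kind)
                touched.add(name)
                return

    def walk(e):
        if isinstance(e, str):
            mark(e, 1)                    # a bare symbol is a read
        elif isinstance(e, list) and e:
            if e[0] == 'set!':
                mark(e[1], 2)             # the set! target is a write
                walk(e[2])
            else:
                for sub in e:
                    walk(sub)

    walk(expr)
    return touched
```

Scanning `(set! c a)` with local scope `{a}` against the usage stack above marks only the innermost `c` entry and reports `{c}` back to the caller.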

On return, our inner let now sees the following:

`((d e a b c) ((c 2) (d 0) (e 0) (a 0) (b 0) (c 0)))`

We also get back the return value `(c)`, indicating that the variable c was modified in the inner scope of the lambda function.

The lambda function then can be modified for closure to pass in the variable found inside of it: our lambda above is rewritten in-place as:

`($$closure (lambda (c a) (set! c a)) c)`

Now on return we have our return value `(c)`, which we now examine against usage to see if the usage of the variable needs to be rewritten. Since in our usage list, the usage value is 2 (written to), we call another method \$\$RewriteWrite which recurses down and rewrites our closure to its final form:

`($$closure (lambda (c a) ($$store c a)) c)`

This also rewrites our inner let as:

`(let ((c ($$wrap nil)) d e) ...)`

That is, ‘c’ is rewritten as a one-element list, initialized to `(nil)`.

Our \$\$RewriteWrite method, of course, must examine each statement and if it is a declaration which masks the variable we are rewriting, we stop recursion at that point; the inner scope is another variable, albeit with the same name.

And when our rewriter descends into a \$\$closure statement, it must take into account that the only parameters of the embedded lambda which truly belong to the inner scope are those not passed in through the argument list of the \$\$closure statement.

Also note that our algorithm heavily leans on ‘rplaca’ and ‘rplacd’ to rewrite our statement rather than generating a new statement. This has implications for statements that may be generated and then later evaluated, where large parts of the code are contained in quote statements.

I believe this is an easier implementation in that the data structure being passed around is local to just the statement being examined, and is far simpler to understand than the older implementation. The code I have appears to correctly rewrite closure statements, though at this stage I haven’t finished testing yet.

# Geek Moment: LISP IDE.

Excuse me while I have a geeky moment.

So in the background for a couple of hours here and there I’ve been tinkering with a LISP interpreter, with the goal of eventually inserting the LISP engine into an iOS application in order to make a programmable calculator with some ability to do basic calculus, as well as standard scientific calculator stuff, and automatic unit conversion stuff. (I even have some code somewhere which can handle arbitrary unit conversions, including arbitrary conversions in and out of standard scientific units.)

(Well, properly it’s a LISP compiler which compiles to a VM instruction set.)

A little while ago I managed to write a LISP compiler in LISP, compiled by a bootstrap compiler written in C, which was capable of compiling itself.

Well, now I have a primitive IDE for writing LISP code, which supports syntax coloring, automatic indentation, as well as setting breakpoints in my LISP code (step over and step out turns out to be a bitch) and examining the variable stack.

Breakpoints are managed by maintaining a table (including PC references into the instruction set) and setting a special instruction which halts the VM, and “single step” is handled by executing one VM instruction at a time until I hit one of the PC breakpoints.
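That mechanism can be sketched as a toy dispatch loop that checks the PC against the breakpoint table before each instruction (hypothetical opcodes; the real VM's instruction set is richer):

```python
def run(program, breakpoints, pc=0, stack=None):
    """Step (op, arg) instructions until HALT, or until the PC lands
    on a breakpoint; returns (pc, stack) so the debugger can resume."""
    stack = [] if stack is None else stack
    while True:
        if pc in breakpoints:
            return pc, stack              # hand control to the debugger
        op, arg = program[pc]
        if op == 'PUSHC':
            stack.append(arg)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'HALT':
            return pc, stack
        pc += 1
```

Resuming is just calling `run` again with the saved pc and stack (after temporarily removing the breakpoint), and "single step" runs the same loop one instruction at a time.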

And when not single stepping instructions, the VM is run on a second thread, to allow me to stop execution arbitrarily (in case of infinite loops).

(Screenshots: the breakpoint view, and the debugger in progress, examining the stack and variables.)

Of course there are some usability issues; I certainly wouldn’t want to ship this as a product. (In particular, you need to specify the entry point of a routine to debug, and breakpoints require the code to be compiled first–and compilation is a manual process; you have to press the compile button before adding breakpoints or starting to debug.)

And there are some gaps–the LISP implementation is incomplete, closures haven’t been written into the compiler (yet), and I don’t have a means to write to the standard out via a LISP instruction.

But there is enough to start actively building these components and testing them within the IDE, and fleshing out my LISP implementation–particularly since my LISP compiler is built in LISP, so adding features can be done with my IDE.

Next steps: include a window in my IDE which emulates the planned UI for the iOS product (a calculator keyboard, menu and display system–with keyboard events executing predefined LISP instructions, the menu built from a LISP global variable and the display formatting display instructions from another LISP global variable), as well as fixing bugs, fleshing out the LISP interpreter, and creating a mechanism for printing from LISP into my output window.

After that I can start writing the calculator code in LISP.

The syntax coloring was hung off the NSTextView (by setting a 1 second timer on update text events, and running a simplified LISP parser to do syntax coloring and variable detection to color local variables), and autoindent was handled by listening for newline events and calculating the proper indentation for the current parenthesis level.

And the nice part is that similar hooks exist in UITextView on iOS, and the debugger wrapper class should port fairly easily as well. Meaning the functionality of the IDE should carry over to iOS with little work.

# On the development of my own Lisp interpreter, part 3: revisiting closures.

I managed to solve my lisp closure problem outlined in this post by adopting the ideas behind this paper: Closure generation based on viewing LAMBDA as EPSILON plus COMPILE. The basic idea of the paper is that you can rewrite lambda expressions as “epsilon” expressions (that is, ‘lambda’ expressions in which all variable state from outside the lambda block is passed in as arguments), compiling those functions with a special method which modifies the stack when calling the lambda function.

Let me explain. Suppose we have the following bit of Lisp:

```
(define (funct a)
  (lambda (b) (+ a b)))
```

When we call this function with an argument, say (funct 5), this returns a function pointer to a function that takes one argument (b) and adds 5 to b. So:

```
> (define tmp (funct 5))
> (tmp 3)
8
```

See what happened here? By calling funct we created a new function (which we stored in tmp). We can then call that function, and get a result.

The idea of the paper is that in order to get proper closure rules, you can walk the expression (because the beauty of Lisp is that Lisp programs are just lists of lists that a Lisp program can easily parse), and perform a substitution to rewrite our function in terms of “epsilon”, or rather, lambda functions which only have local scope.

Thus, the lambda expression of our function funct above:

```
(lambda (a)
  (lambda (b) (+ a b)))
```

which describes a function taking one argument that returns a function taking one argument, would be rewritten:

```
(lambda (a)
  ($$closure (lambda (a b) (+ a b)) a))
```

Now this special function \$\$closure is magic. It takes as its first argument a precompiled function, and as the tail of its arguments a list of parameters. It then creates a new function which prepends its arguments on the stack and jumps to the original function. Thus:

```
($$closure #<procedure:anon-1> 5)
```

In the above case, the above procedure would generate the following function:

```
POP DS   ; Pop the marker giving the size of the stack
PUSH 5   ; Push the parameter 5 onto the stack
PUSH DS  ; Push the stack size marker back onto the stack
JMP <procedure:anon-1>
```

That is, the function would simply prepend the list of arguments to the start of the parameter list (adjusting the stack frame as needed to properly mark the start and end of the stack frame), and then jump to the original procedure.
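The generated trampoline can be modeled in Python with a list as the stack and the frame marker as its top element (a model for intuition; `anon1` is a stand-in for the compiled procedure):

```python
def make_closure_thunk(target, consts):
    """Model the generated code: pop the frame marker, push the
    captured constants, push the adjusted marker, jump to target."""
    def thunk(stack):
        ds = stack.pop()                   # POP DS
        for c in consts:
            stack.append(c)                # PUSH each captured constant
        stack.append(ds + len(consts))     # PUSH DS, adjusted for the new args
        return target(stack)               # JMP <procedure>
    return thunk

def anon1(stack):
    """Stand-in for #<procedure:anon-1>: (lambda (a b) (+ a b))."""
    n = stack.pop()                        # frame marker: argument count
    return sum(stack.pop() for _ in range(n))
```

With the captured constant 5 and the caller's argument 3 on the stack, the thunk yields 8, matching `(tmp 3)` above.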

The code rewriter I built also tracks the current usage of variables so I properly ignore variables not used interior to a function. For example:

```
(lambda (a b)
  (set! f1 (lambda (c) (+ a c)))
  (set! f2 (lambda (c) (- b c))))
```

This is rewritten as:

```
(lambda (a b)
  (set! f1
    ($$closure (lambda (a c) (+ a c))
               a))
  (set! f2
    ($$closure (lambda (b c) (- b c))
               b)))
```

Meaning we run our \$\$closure function pushing only the additional parameters that are needed by each of my interior lambda functions.

We also handle the case where you can create a closure that is only accessible from an interior method. For example, if we define a function foo:

```
(define foo (lambda (a)
  (lambda (b) (set! a (+ a b))
    a)))
```

This defines a function which, once run, will store the value ‘a’ away in a variable that is only accessible from the function that is returned. Calls to that interior function can then add to the interior value, and return that value.

Thus:

```
> (define bar (foo 5))
> (bar 1)
6
> (bar 2)
8
> (bar -3)
5
```

This interior variable is of course not shared:

```
> (define bletch (foo 3))
> (bletch 1)
4
> (bar 1)
6
```

This is accomplished by our routine by detecting if any of the interior lambdas perform a set! operation to update an interior variable held by closure. If so, we then “wrap” the variable in a list, so in effect we use the variable by reference.

Compiling our routine with the lambda closure function, we get:

```
(lambda (a)
  (set! a ($$wrap a))
  ($$closure (lambda (a b)
               ($$store a (+ ($$fetch a) b))
               ($$fetch a))
             a))
```

The \$\$wrap function wraps the variable with a list. The function \$\$store sets the wrapped value, and \$\$fetch obtains the wrapped value. These three functions are in fact declared as:

```
(define ($$wrap v) (cons v '()))
(define ($$fetch v) (car v))
(define ($$store v val) (set-car v val))
```

The way my code works is by doing an analysis pass, which computes the variable usage across the various lambdas in my declaration, then a rewrite pass, which uses the information gathered in the variable scope parameters to determine how to rewrite my lambda functions.

The first method is \$\$eval-scope, which walks through the lambda function, maintaining a record of all the variables that have been encountered. The specific record that I track contains the following information:

1) An association list of the currently encountered variables, maintained as a “push-down” association list. When I encounter a new variable, the variable information is prepended to the list of variables. This variable record contains:

a) The variable name
b) A flag indicating if the variable was written to by an interior lambda (that is, by a lambda function other than the one where the variable was declared). This actually has three states: \$\$read (the variable was read), \$\$write (the variable was written) and \$\$unused (the variable was not used by an interior lambda).
c) A pointer to the lambda where this variable was declared,
d) A list of pointers to the lambdas where this variable was used.

Note that we only create records for the variables that we find inside declaration headers: that is, for an expression like (lambda (a b)), we only note ‘a’ and ‘b’. If we see a variable that hasn’t been declared in this lambda or an outer lambda, we assume it was a global variable and thus ignore it.

2) A flag next to the association list indicating if we’ve seen multiple lambdas in this pass. We can use this flag to ignore rewriting the function if we don’t see any embedded lambdas.

3) A push-down list of the lambdas that we’re currently examining. This is important because if we have three nested lambdas, but the middle lambda doesn’t use a variable declared in the outer that is used in the inner, we must pass the used variable through the middle lambda. So a variable used in the interior is considered “used” by all of the lambdas between the current one we’re examining and the lambda where this variable was first declared.

4) A push-down list of pointers to our association list. This is used essentially to push and pop variable scope: when we enter a lambda function we push the old scope state, and when we’re done with the lambda we pop the scope state. This allows us to quickly unwind the current variable state once we’re done processing an interior lambda.

The \$\$eval-scope method walks through all of the declarations we’re currently examining, and updates the scope variable. We also, for convenience sake, rewrite lambdas we encounter (once we’ve processed them) with a special token (\$\$scope), which notes for each lambda we’ve encountered (a) the lambda method itself (transformed by \$\$eval-scope, naturally), (b) the un-transformed lambda expression (which we need because our scope variable uses pointers to the un-transformed lambda expression as an index), (c) the variables that are used outside the lambda function (that is, the state of the closure when the lambda was encountered), and (d) the variables created and used inside the lambda.

This can be quite complicated: for our above example:

```
(define foo (lambda (a)
  (lambda (b) (set! a (+ a b))
    a)))
```

The function \$\$eval-scope returns:

```
($$scope
 (lambda (a)
   ($$scope
    (lambda (b) (set! a (+ a b)) a)
    (lambda (b) (set! a (+ a b)) a)
    ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))
    ((b $$unused (lambda (b) (set! a (+ a b)) a) (lambda (b) (set! a (+ a b)) a))
     (a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))))
 (lambda (a) (lambda (b) (set! a (+ a b)) a))
 ()
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))
```

Ignoring the inner lambda, the outer lambda is:

```
($$scope
 (lambda (a)
   ($$scope ...))  ;; the rewritten inner lambda
 (lambda (a) (lambda (b) (set! a (+ a b)) a))
 ()
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))
```

This indicates that our outer lambda method declares one variable, a, which is written to. (You can see it in the fourth list but not in the third; it starts “((a \$\$write …”.) It is used by our outer and inner lambda functions, and declared in our outer lambda. (Let me reformat the association list so you can see it:)

```
((a $$write                                        ;; The variable is written to
    (lambda (a) (lambda (b) (set! a (+ a b)) a))   ;; Where the variable was declared
    (lambda (b) (set! a (+ a b)) a)                ;; All the places where it was used...
    (lambda (a) (lambda (b) (set! a (+ a b)) a))))
```

The interior lambda gets the same treatment:

```
($$scope
 (lambda (b) (set! a (+ a b)) a) ;; The transformed lambda (which needed no transformation)
 (lambda (b) (set! a (+ a b)) a) ;; The original declaration
 ;; Variable state outside this lambda
 ((a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a))))
 ;; Variable state inside this lambda
 ((b $$unused (lambda (b) (set! a (+ a b)) a) (lambda (b) (set! a (+ a b)) a))
  (a $$write (lambda (a) (lambda (b) (set! a (+ a b)) a)) (lambda (b) (set! a (+ a b)) a) (lambda (a) (lambda (b) (set! a (+ a b)) a)))))
```

Our second pass, the \$\$rewrite-lambda function, takes the state information expressed in our rewritten lambda function, and does two things:

(1) We rewrite all variable accesses to variables that are ‘written’ to (marked \$\$write in the association list) by substituting all ‘set!’ operations with ‘\$\$store’, all read operations with \$\$fetch, and we add to the start of the declaring lambda function a \$\$wrap call to wrap our arguments. (Our particular dialect of Lisp doesn’t have macros (yet), so ‘do’ and ‘let’ are explicit objects within our language–and the rewrite engine handles those correctly as well, rewriting things like ‘(do ((x 1 (+ 1 x))…’ as ‘(do ((x (\$\$wrap 1) (\$\$wrap (+ 1 (\$\$fetch x))))) …’.)

(2) We rewrite all \$\$scope expressions as lambda + closure. That is, our rewrite rules examine the \$\$scope function and generates a (\$\$closure (lambda …) …), prepending the variables that are passed outside of our function in as new parameters (added at the start), and creating a \$\$closure declaration with those added variables, in the same order.

In our above example, the lambda declaration gets its final form (for our compiler):

```
(lambda (a)
  (set! a ($$wrap a))
  ($$closure (lambda (a b)
               ($$store a (+ ($$fetch a) b))
               ($$fetch a))
             a))
```

If this seems a little heavy-weight, it’s because it probably is: for the Lisp compiler I’m building, I’m not expecting to use the closure feature a lot–and so, having something that rewrites my functions during the compilation process seems a reasonable trade-off for dealing with closures. It also means that my Virtual Machine doesn’t need to understand closures at all; I can just build it as a normal stack-based VM engine with garbage collection.

# On the development of my own Lisp interpreter, part 2: lambdas, fexprs and closures.

My thinking has been to make the keyword ‘lambda’ (and ‘nlambda’) represent entry-points to a byte-code compiler: the idea is that if we see an expression like:

```
(lambda (x y) (+ x y))
```

This function invokes the compiler and returns a procedure symbol which represents the bytecode that adds x and y together and returns a value. So internally, somewhere in my system, this would return a data structure like the following. (Figure omitted.)

Also recall that my reason for using an ‘nlambda’ construct was to provide me a way to define fexprs, so I could easily write certain primitives without having to build those primitives into the interpreter. The idea was that if I could create an ‘nlambda’ declaration, then certain primitives could easily be written as fexprs, rather than having to complicate my interpreter or compiler. Thus:

```
(define quote (nlambda (x) x))

(define begin (nlambda ins
  (cond ((null? ins) '())
        (#t (eval (car ins))
            (begin (cdr ins))))))
```

The first is the definition of the ‘quote’ instruction, the second defines ‘begin’, a method for evaluating a sequence of instructions in a list of instructions.

Also recall that I decided that I wanted a lexical Lisp rather than a dynamically bound Lisp, for reasons given in the previous post. And thus my problem with closures arises.

Let me spell it out.

Suppose we assume that the binding “closure” of our function is the stuff within the function itself: that is, when we compile our procedure above, the scope of the accessible variables is either (a) the variables declared within the procedure (defined by the lambda function), or (b) the global variables in our system.

This makes sense, since it means if we write:

```
(define add (lambda (x y) (+ x y gz)))  ;; gz is global
```
```
> (define gz 3)
> (add 1 2)
6
```

Which makes sense since 1 + 2 + 3 = 6.

It also means if we use our ‘add’ routine in another function:

```
(define foo (lambda (gz) (add gz 2)))
```

It’s pretty clear our intent is that ‘gz’ within ‘foo’ is a local variable, and if we call ‘foo’ with 1:

```
> (foo 1)
6
```

This behavior makes intuitive sense because while the variable ‘gz’ in ‘foo’ hides the global variable ‘gz’, it shouldn’t change the intent of ‘add’ simply because I called it from within another function which hides a global variable. Meaning the intent of ‘add’ was always to use the global variable ‘gz’, and we should expect 6 from ‘(foo 1)’, not 4 (1 + 2 + 1), which we would have gotten in, say, a dynamically bound Lisp environment.
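Python follows the same lexical rule, so the analogous code exhibits exactly this behavior (a translation for comparison only, not part of the Lisp):

```python
gz = 3                      # the global

def add(x, y):
    return x + y + gz       # gz resolves lexically: always the global

def foo(gz):                # this gz is a distinct local variable;
    return add(gz, 2)       # it cannot leak into add's scope
```

Here `foo(1)` returns 6, not 4, because add never sees foo's local gz.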

But…

If we also assume that the lexical binding rules apply to ‘nlambda’, then what happens if we define a function ‘setq’:

```
(define setq (nlambda (a b) (set a (eval b))))
```

And suppose we call ‘setq’ from another function:

```
(define bar (nlambda (q) (setq gz q)))
```

Now think for a moment: nlambda is a function call with local variable scope. This means that when we get to the statement ‘(set a (eval b))’, well, ‘a’ evaluates to the variable ‘gz’. And ‘b’ evaluates to the variable ‘q’.

And so we call (eval q).

But remember: nlambda is a function in a lexical language: calling it pushed the variable context. So when we call ‘eval’, here is what our stack looks like. (Figure omitted.)

And because we’re a lexical version of Lisp, our evaluator only looks at the head of the stack, and at the global variable list. Meaning our variable ‘q’ is not in scope.

So instead of having the intended effect, our setq will fail: it is unable to evaluate ‘q’.

It’s worse than that, however. Suppose we define our closure for all lambda functions based on the environment in which they are encountered. In other words, suppose we define (for example) the list of available variables by running our interpreter so that all new variables defined by ‘lambda’, ‘let’ and other constructs push onto a local list of available variables, so that writing:

```
(let ((x 1)) (lambda (y) (+ x y)))
```

This operation tracks all variables being defined by lambda on a single local variable environment stack, and the lexical closure of lambda includes both the variables x and y. This still does not solve my problem: the lexical closure of my ‘setq’ function does not actually involve q, so (eval b) will fail every time.

This problem does not occur in the original dynamically bound Lisp language invented in the 60’s, simply because there was no closure. The entire stack of variables was searched in order to find the referring variable. Of course this makes variable resolution a non-trivial problem, because every time you access a variable you have to scan the entire state space for variables. You can’t just compile a variable name as a fixed offset within an array of variables and make variable access constant time.

At this point I think I need to make a few decisions.

(1) It’s pretty clear I need to build a full interpreter that is able to execute the language as well as a compiler within the lambda function.

(2) I’m unclear if I want to deal with closures right now. I suspect the correct answer is to bite the bullet and deal with closures.

(3) I’m going to have to bite the bullet and include a whole bunch of fundamentals (such as ‘let’, ‘setq’ or ‘set!’, ‘do’, ‘loop’, ‘prog’ or whatever other looping constructs that define local variable state) within my compiler, rather than defining them as fexprs. Fexprs don’t seem to work well for the purpose I want them for.

(4) Rethink my fexpr versus macro decision. Fortunately I can drop nlambda from my prototype, build in my primitives, and worry about macros later.

# Part 1: On the development of my own Lisp interpreter

Back when I was in high school (back in the early 1980’s) I learned Lisp. One of the things I really liked about the language was its expressiveness: because the language was such a simple one, it was fairly easy to build recursive pattern-matching and recursive search algorithms on arbitrary S-expressions, which made the development of things like a recursive descent algorithm for theorem proving relatively straightforward.

So my goal is to build an embedded Lisp interpreter/byte-code compiler that can run on a “small” footprint such as the iPhone.

(Note: I put “small” in quotes because, honestly, an iPhone is not a “small” device, CPU wise or memory-wise in the scope of the types of CPUs and memory footprints where Lisp was originally used. Back in the 80’s a computer as powerful as the iPhone with as much memory would have filled a room and cost millions.)

There are some language decisions that need to be made first.

## Dynamic binding versus lexical binding

“Binding” expresses the relationship between a variable and its value. A variable “a” is bound to “1” when you write an expression “a = 1;” in C.

In the Lisp world there are two types of binding: lexical binding and dynamic binding.

To summarize, lexical binding is similar to C: there are global variables and variables within the scope of a function. A function defines the variables used within that function, and when a variable is resolved, the interpreter looks for variables either within the immediate function scope or in global scope.

Dynamic binding, first demonstrated in the original Lisp language, is different. Instead, there is nothing but a global stack of variables. A function call may declare new variables, or variables that already exist in the global pool; the new variables are added to the list of variables, and when the function returns, those variables that shadowed others already in the pool are removed and the old values are effectively put back.

One way to think of dynamic binding is to think of all of the variables in the current system as a linked list of variables. (Figure omitted.)

This shows the variables a, b and c defined in our system. When we resolve a variable ‘a’, we start from the pointer ‘globals’, and walk the list until we find ‘a’; we then use that location for our variable binding.

Now suppose we call into a new function ‘foo’ defined as

`(define (foo a) ...some function...)`

When we invoke the function `(foo 5)` the invocation will push a new variable ‘a’ onto our global list of variables, saving the old value of the global pointer onto a stack. (Figure omitted.)

When we then resolve the value of variable ‘a’, we walk the global list, and find the first variable ‘a’, which shadows the old variable ‘a’.

The upshot of dynamic versus lexical binding is that in a dynamic environment the value a variable takes within a function depends on the current global context, while in a lexical environment the value is completely defined: it is either locally declared or a global (albeit possibly undefined) value.
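Dynamic binding can be sketched as a single push-down list of (name, value) pairs searched front to back (a model; the names here are hypothetical):

```python
env = []                              # the global list of bindings

def lookup(name):
    for n, v in env:                  # walk from the newest binding
        if n == name:
            return v
    raise NameError(name)

def bind(name, value):
    env.insert(0, (name, value))      # shadow any existing binding

def unbind(count=1):
    del env[:count]                   # on return, pop the shadowing bindings
```

Binding ‘a’ again, as `(foo 5)` does, shadows the global ‘a’ until the function returns and the binding is popped.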

So, in the following example:

```(define a 1)
(define b 2)
(define c 3)

(define (foo a) (bar))
(define (bar) a)
```

Invoking `(foo 5)` in a dynamic environment would mask the value of a = 1 in our global stack, so when (bar) is invoked, a resolves to 5:

```> (foo 5)
5
```

However, in a lexical environment, when bar is defined, the variable ‘a’ is bound to the global definition of ‘a’, as it would be in C. Thus, invoking `(foo 5)` does not mask the value of the global variable ‘a’ in bar, and invoking `(foo 5)` returns 1:

```> (foo 5)
1
```

From my perspective, I would personally prefer a lexical LISP to a dynamic LISP, for two reasons:

(1) Because the variables are completely defined within the context of a single function call, it is easier to compile Lisp statements: as we process each statement, we know where in the list of global variables the variable will be. If we haven’t seen the variable, we can simply create a new undefined variable in the global variable space. This means our compiler can create a global array of variables, and each variable name resolves to a fixed index in that global array of variables.

The same can be said of local variables: as we invoke a function we create a new array of local variables: entering a function pushes a fixed size array onto an invocation stack, and all local variable references become a fixed index into that array.

Thus, variable resolution is a linear scan done once at compile time; at run time every variable reference is a constant-time array access.
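The fixed-index scheme can be sketched as follows. This is a hypothetical model, not the actual compiler; the names `global_index` and the two parallel arrays are illustrative:

```python
# In a lexical Lisp, the compiler can resolve every global name to a fixed
# slot the first time it sees the name, even before the variable is defined.
global_names = []   # name -> fixed index, assigned on first sight
global_values = []  # the global array of variable slots

def global_index(name):
    """Resolve a name to a fixed slot, creating an 'undefined' slot if new."""
    if name in global_names:
        return global_names.index(name)
    global_names.append(name)
    global_values.append("undefined")  # placeholder until a value is set
    return len(global_names) - 1

# Compile time: each reference resolves once to a fixed index...
slot_a = global_index("a")   # slot 0
slot_b = global_index("b")   # slot 1

# Run time: access is just an array index; no search is needed.
global_values[slot_a] = 1
global_values[slot_b] = 2
print(global_values[slot_a])  # prints 1
```

Local variables work the same way against a per-invocation frame instead of the global array.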

(2) Functions are “well-defined” rather than dependent on side effects, and we minimize the chance of accidentally breaking a function that relies on global variable state.

Poor function ‘bar’ would be in serious trouble if it depended on the global variable ‘a’ and that variable was aliased by accident by another function that calls ‘bar’.

# Fexprs versus macros

Yes, I know this was supposedly settled in favor of macros in the 1980s with the publication of Kent Pitman’s “Special Forms in Lisp,” which condemned the use of fexpr functions because you cannot perform static analysis on the forms. (Paper)

But I’d like to revisit the whole thing for just a moment.

In Lisp, a function call such as `(+ a b)` first evaluates a, then b, then calls the function ‘+’. So if a = 5 and b = 3, then we evaluate a, evaluate b, then add them together to return 8.

So far so good. But what if you want to set the value of a via a hypothetical ‘set’ function? The natural way to write this would be `(set a 4)`; however, this would first evaluate a (to 5), evaluate 4 (a constant, which evaluates to itself), then call the function ‘set’ with the arguments 5 and 4, which is illegal.

To get around this, we could use the ‘quote’ “function”, a special form which returns the literal value of its parameter: `(quote a)` evaluates to the literal symbol ‘a’. We could then write `(set (quote a) 4)`, which calls the function ‘set’ with the arguments ‘a’ and 4.

Ideally, though, we’d like to create a new function, call it ‘setq’, which automatically quotes the first parameter, evaluates the second, then passes them to our ‘set’ function.

When Lisp was first invented in the 1960s, this was done using a special form, the fexpr function, which does not evaluate its parameters prior to invoking the function. This allows us to define our ‘setq’ function directly. Using the ‘nlambda’ syntax, which indicates an fexpr function that does not evaluate its parameters, we could define setq as follows:

```(define setq
(nlambda (var val)
(set var (eval val))))
```

This defines the symbol ‘setq’ as a lambda form which does not evaluate its parameters. Setq takes two parameters: ‘var’ and ‘val’. We then invoke our ‘set’ function with the unevaluated symbol in ‘var’, but we evaluate the form in ‘val’. So when we invoke `(setq a 5)`:

```(setq a 5) ->
[nlambda is invoked with var = 'a, val = '5, which expands to]
(set 'a (eval '5)) ->
(set a 5)
```
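To make the expansion concrete, here is a hypothetical Python model, with strings standing in for Lisp symbols. The helper names `eval_form` and `lisp_set` are my own, not part of the interpreter:

```python
# An fexpr receives its arguments *unevaluated*; an ordinary function
# receives values. Here setq models the nlambda definition above.
env = {"a": None}  # one global variable 'a'

def eval_form(form):
    """Tiny evaluator: strings act as symbols, numbers self-evaluate."""
    if isinstance(form, str):
        return env[form]
    return form

def lisp_set(var, val):
    """(set var val): 'var' is already a symbol, 'val' already a value."""
    env[var] = val
    return val

def setq(var, val):
    """fexpr-style setq: receives raw forms, leaves var unevaluated,
    evaluates val -- i.e. (set var (eval val))."""
    return lisp_set(var, eval_form(val))

setq("a", 5)     # models (setq a 5): 'a' is not evaluated, 5 is
print(env["a"])  # prints 5
```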

The beauty of being able to define our own fexprs via the nlambda mechanism is that we can quickly define a number of the special forms, such as quote:

```(define quote (nlambda (a) a))
```

Now, reviewing the Kent Pitman paper, it’s clear the first complaint he has, under the section “The Compiler’s Contract”, in fact has nothing to do with fexprs, and everything to do with compiling in a dynamic environment. But since I’ve decided above to use a lexical binding environment in my form of Lisp, the problem outlined in this section just goes away.

The second problem with fexprs, however, boils down to compilation: when compiling a function, we cannot generate the correct code for a call unless we know whether the function being called is a regular lambda function or a special ‘nlambda’ function. To illustrate, suppose we define our ‘lambda’ keyword to invoke a compiler which compiles our S-expression into a compiled form. So, we write the function:

```(define foo (lambda (a) (bar a 5)))
```

Now, assuming ‘lambda’ compiles the parameters and body of our program into machine pseudo-code, and if we know all functions are lambda functions, then we could easily generate the following code:

```function_prefix(1,0);   -- Special form which pulls 1 local variable from the stack into 'a'

push(0);                -- Push a (variable 0) onto stack
push_const(5);          -- Push constant 5 onto stack
call(bar);              -- Call bar. Return value puts value of bar onto stack

function_postfix;       -- Unwind the local variables, leaving the return value alone.
```

Here’s the thing, though: If ‘bar’ was a special ‘nlambda’ function, this behavior is incorrect. Instead, we would have wanted to generate:

```function_prefix(1,0);   -- Special form which pulls 1 local variable from the stack into 'a'

push(a);                -- Push the symbol a onto stack
push_const(5);          -- Push constant 5 onto stack
call(bar);              -- Call bar. Return value puts value of bar onto stack

function_postfix;       -- Unwind the local variables, leaving the return value alone.
```
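Inside the compiler this becomes a simple branch on the callee’s kind. The sketch below is a hypothetical Python model; the opcode names and the `function_kinds` table are illustrative, not the author’s bytecode:

```python
# To emit the argument push for a call, the compiler must know whether
# the callee is a lambda (push the value) or an nlambda (push the symbol).
function_kinds = {}  # name -> "lambda" or "nlambda", filled in as we compile

def compile_arg(fn_name, arg_name, code):
    """Emit the argument push for a call to fn_name."""
    if function_kinds.get(fn_name) == "nlambda":
        code.append(("PUSH_SYMBOL", arg_name))  # pass the form unevaluated
    else:
        code.append(("PUSH_LOCAL", arg_name))   # push the variable's value
    return code

print(compile_arg("plus", "a", []))  # lambda call: [('PUSH_LOCAL', 'a')]
function_kinds["bar"] = "nlambda"
print(compile_arg("bar", "a", []))   # fexpr call: [('PUSH_SYMBOL', 'a')]
```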

That is, how we compile a call to bar requires us to know if bar is a lambda or an nlambda function. There are three solutions to this problem.

• Punt on fexpr forms and use macros instead.

The disadvantage of this solution is that we cannot define a number of very basic forms in Lisp; instead, our interpreter is complicated by having to handle a number of basic forms (quote and cond, for example) as special cases inside the interpreter. This significantly increases the cost of bootstrapping an interpreter, since we need to hand-code a whole bunch of special forms rather than just writing them in Lisp and compiling them with our Lisp compiler.

• Require that any compiler we build compile against the full environment.

What this would mean is that rather than using the lambda and nlambda keywords to generate a compiled expression, we just use these keywords as markers, and our apply method (which evaluates lambda expressions) uses those keywords to determine how to interpret the expression. We then create a separate compile keyword which compiles functions against the complete environment present at the time: at compile time we depend on all of the functions being available, so we can determine whether each function is a lambda or an nlambda form.

So our compile function would have the semantics of replacing a variable ‘a’ defined as `(lambda vars prog)` with a special #proc object containing byte code. It would fail if any of the functions invoked within the lambda procedure is not defined.

• Presume that any function (and any variable) not yet defined is a lambda form function.

This third option allows us to define lambda and nlambda as entry points into our bytecode compiler, with nlambda also marking the procedure as a special fexpr function. If we don’t see the function declared already, we then assume the function is a lambda form.
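A minimal sketch of this default, assuming a table mapping names to calling conventions (the names here are illustrative, not the author’s compiler):

```python
# Third option: a function we have never seen gets reserved in the table
# and is presumed to be a normal lambda form until declared otherwise.
functions = {}  # name -> kind; a real compiler would also hold a slot index

def kind_of(name):
    """Return the calling convention for name, defaulting to lambda."""
    if name not in functions:
        functions[name] = "lambda"  # reserve the name, assume a lambda form
    return functions[name]

print(kind_of("bar"))    # prints lambda (never seen, so assumed)
functions["setq"] = "nlambda"
print(kind_of("setq"))   # prints nlambda (declared as an fexpr)
```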

Compilation does two things here: it marks the type of the function, and it compiles the function body. So if the definition of an fexpr form relies on a later fexpr form that hasn’t been defined yet, we can work around the problem by writing:

```(define a (nlambda (b c) '()))   ;;; Dummy form: marks 'a' as an fexpr

(define b (nlambda (i)
...
(a foo bar)                 ;;; Our call to a, knowing a is an fexpr
...))

(setq a (nlambda (b c)
... the real code for a ... ))
```

I suspect this shouldn’t happen very often. We could even put a special check in our assignment operation to complain if we change the type of a global variable between a normal function and an fexpr-style function.

My take is to use fexpr forms; they greatly simplify bootstrapping a Lisp interpreter. I also plan to structure the compiler around the third option: variables that haven’t been defined yet in the global namespace will be automatically defined as the special symbol ‘undefined’ (so we know their index in the global space), and will be presumed to be normal lambda functions if invoked. This allows me to use the ‘lambda’ and ‘nlambda’ keywords as entry points into my bytecode compiler.

This afternoon I’m going to see if I can’t hack a little on my interpreter in Lisp; as I go along I will update this category with my results.