Building Abstractions with Procedures

Notes on the first chapter

1.2.1 Linear Recursion and Iteration

The way they formalize the definition of an iterative process is enlightening:

Coupled with their simple iterative factorial example:

(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))

This is something obvious when made apparent like this, but it is something that existed as a vague notion in my mind, and seeing the idea laid out in front of me has refined the way I think about recursion/iteration. I hope I have more moments of refinement like these.
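For contrast, here is the classic linear recursive factorial, which deferred-multiplies its way down instead of carrying its state in the arguments:

```scheme
; Linear recursive factorial: each call defers a multiplication,
; so the interpreter must remember a growing chain of pending
; operations ((* 5 (* 4 (* 3 ...)))) until the base case is hit.
(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))
```

The iterative version above keeps everything it needs in `product` and `counter`, so no deferred work piles up.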

Musing: Rewriting recursive solutions by iterating

I had some trouble solving problems iteratively; the recursive solution comes naturally. Then I realized that the iterative solution is about working backwards. Recursion is top-down: you keep expanding the tree. The iterative solution builds the nodes of the tree and works its way up until it combines them into a single root.

Recursive solutions feel more intuitive, since they feel more mathematical. Take for example the mathematical definition of fib(n):

F(n) = F(n - 1) + F(n - 2), with F(0) = 0 and F(1) = 1

Intuition leads you to transliterate this mathematical definition directly into code, which gives you the recursive solution. The iterative solution requires you to start at the leaves of the computation tree, joining them until you arrive at the root.
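A sketch of that bottom-up approach in Scheme: start at the base cases and shift a two-value window upward until you reach n.

```scheme
; Iterative fib: begin at the leaves F(0)=0 and F(1)=1 and
; repeatedly shift the window (a, b) -> (b, a+b), so a always
; holds F(k) after k steps.
(define (fib n)
  (define (iter a b count)
    (if (= count 0)
        a
        (iter b (+ a b) (- count 1))))
  (iter 0 1 n))

(fib 10) ; => 55
```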

Stockholm Syndrome

I think I've Stockholm Syndrome'd myself into enjoying Scheme's syntax, having fun arranging parentheses.

Formulating Abstractions with Higher-Order Procedures

Modularity and testability of functions are great

The lack of control flow constructs and the fact that functions are data mean that, at this point, the book is employing a pretty functional style. At first I was worried about debugging my programs, because in imperative land I would simply throw print lines everywhere; how would I achieve this in FP?

But the nature of the functional style means you are innately compelled to abstract your logic into separate, pure functions. So if you have a hunch that a certain chunk of logic is breaking down, you only have a small surface area of code to test once you narrow the logic down to its functions.

Take for example my solution to an exercise approximating pi as a product of fractions:

(define (product-iter term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (* result (term a)))))
  (iter a 1))

(define (pi n)
  (define (term-num i)
    (cond ((= i 0) 2)
          ((even? i) (term-num (- i 1)))
          (else (+ (term-num (- i 1)) 2))))
  (define (term-denom i)
    (cond ((or (= i 0) (= i 1)) 3)
          ((even? i) (+ (term-denom (- i 1)) 2))
          (else (term-denom (- i 1)))))
  (define (incr x) (+ x 1))
  (* 4 (/ (product-iter term-num 0 incr n)
          (product-iter term-denom 0 incr n))))

The logic is delineated clearly by the boundaries of each function. When I was writing this, I noticed I wasn't getting the right answer. I had a premonition that I wasn't calculating the terms for the numerator or the denominator correctly, so my first instinct was to throw some print lines around term-num and term-denom and validate the output.

Then I realized I could just copy those two functions into a Scheme REPL and validate them myself. Because of the functional style, the logic isn't dependent on anything else, so I could drop it into the REPL with no headache.
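For instance, term-num copied out of pi verbatim is pure and self-contained, so its first few terms can be checked at the REPL against the expected 2, 4, 4, 6, ... pattern of the numerators:

```scheme
; term-num lifted straight out of pi; it depends on nothing else,
; so it can be exercised in isolation.
(define (term-num i)
  (cond ((= i 0) 2)
        ((even? i) (term-num (- i 1)))
        (else (+ (term-num (- i 1)) 2))))

(term-num 0) ; => 2
(term-num 1) ; => 4
(term-num 2) ; => 4
(term-num 3) ; => 6
```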

Declaring variables in procedures is just syntactic sugar for calling a new function

(define (multiply-then-add-five a b)
  (let ((c (* a b)))
    (+ c 5)))

(define (with-lambda a b)
  ((lambda (c) (+ c 5)) (* a b)))
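The same desugaring generalizes to any number of bindings: the let variables become the lambda's parameters, and the let's initial values become the arguments of the application.

```scheme
; (let ((x 1) (y 2)) (+ x y)) desugars to applying an anonymous
; lambda of two parameters to the two initial values:
((lambda (x y) (+ x y)) 1 2) ; => 3
```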

Got any questions? Did I say something completely bogus and wrong? Hit me up on Twitter.