Andrue's Lab

Notes on Chapter 3

Two core principles of functional programming are that we don't update variables and we don't modify mutable data structures. This raises a few questions: what data structures can we use this way? How do we define them in TypeScript? And how do we operate on them?

This chapter is all about answering these questions with the concept of functional data structures. We'll learn about how to define data types, how to use pattern matching to discern data types, and how to write generalized pure functions.

Functional Data Structures

What exactly is a pure functional data structure? Well, it's exactly what it sounds like: data and structure. In fact, it's exactly the same data and structure as in other languages; the only difference is in the implementation.

Okay, great, but what exactly is different? Well, the implementation of a data structure is typically some definition of the structure, such as a type, class, or struct, and some definition of the means of constructing, changing, and extracting the data, such as functions, methods, or procedures.

With pure functional data structures, we take the approach of using types to define the structure of the data and pure functions to construct, change, and extract the data.

This leads to the very desirable trait of immutability. Every data structure remains constant. Much like how numbers don't change when you add them, now our complex data structures will remain the same when we use functions on them! Imagine a complex data structure such as a tree or a graph in which calling insert leaves the original untouched, returning a new fresh copy [1]. Think of how easy it would be to reason about the code, how easy it would be to debug, how easy it would be to serialize, or to create a command pattern! How easy it would be to save a history of the structure to the disk! Time-travel debugging! Immutable data structures are a powerful tool.

[1]: A copy is not typically made in a literal sense in the underlying generated machine code, as that would be very inefficient. Don't worry about machine representations; let the compiler do its job. This might be a challenge for you if you've got experience in other languages with operational semantics that translate to "doing things" like "returning values" and "calling functions," but keep in mind that pure functional programming is about describing your algorithms and data structures as compositions of pure functions. Imagine yourself like Neo from The Matrix: this is your "try to realize the truth...there is no spoon" moment.
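A tiny sketch of this immutability in practice. The `insert` helper below is invented for illustration (it isn't part of any library): it builds a new collection and never writes to the original.

```typescript
// A minimal immutable "insert at front" for a read-only array.
// `insert` is a hypothetical helper, not part of any library.
const insert = <T>(item: T, items: readonly T[]): readonly T[] => [
  item,
  ...items, // the original array is read, never written
];

const original: readonly number[] = [2, 3];
const updated = insert(1, original);

console.log(updated); // => [1, 2, 3]
console.log(original); // => [2, 3] — unchanged
```

Because nothing ever mutates `original`, any older reference to it remains valid forever.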

A long-winded aside on mainstream appeal

At this point you may be asking yourself "Why hasn't this been done before?" or "Why isn't this the default mode of programming?" or "Why isn't my favorite social media personality using this?" and while there are many approaches to take to answer these questions, I'd like to specifically tackle the social aspect.

The truth is, each of us makes decisions based entirely on our own personal experience, preferences, and circumstances and unfortunately we don't know what we don't know. This is especially true for things (like functional programming) that are abstract and require training, education, or context to understand. As you've probably seen at this point, functional programming is one of the least accessible paradigms to newcomers due to the up-front cost of acquiring the foundational knowledge to be able to apply it.

For example, this means that, as you're developing in your programming career, even though you're likely to be introduced to the concepts behind functional programming, you're very unlikely to be able to understand and contextualize them. The value of modeling your code as a composition of pure functions, enabling things like referential transparency and the substitution model (and the qualities they bring, such as reusability, modularity, testability, maintainability, etc.), isn't necessarily apparent until you've already invested significant time and energy into learning more common paradigms. The psychology of the sunk cost fallacy, increasing responsibility as one ages, having less time to dedicate to learning, and numerous other circumstances both personal and societal make it very unlikely that we ever have the time or desire to learn a new programming paradigm. After all, if the paradigm you've got ain't broke, don't fix it, right?

At the end of the day, programming is about configuring a machine's behavior to solve a problem, and it's entirely possible (and certainly requires less up-front investment) to do that by providing it a list of increasingly detailed instructions. It's much faster to get started programming imperatively, but the skill ceiling is much lower. All you really need to get started is a reference for the commands the machine can take; no abstract math required! Once you gain some experience, you start running the simulation in your head, predicting what the machine will do, and you're able to move even faster. Unfortunately, this is where you hit the ceiling. Our short-term memory can hold only a handful of items at once (the famous "seven, plus or minus two"), which gives many programming paradigms a very low maximum efficacy. Functional programming, however, is not limited by short-term memory: thanks to immutability and referential transparency, you don't need to simulate the state of the machine or track the values of variables over time, and every part of a functional program can be understood in isolation. In other words, functional programming is a real-life zero-cost abstraction. All of our brain's processing power goes toward the solution; none of it goes toward simulating the machine. It's the ultimate realization of the promise of high-level languages.

A ubiquitous data structure made functional

Getting back to functional data structures, we're going to take a look at one of the most ubiquitous data structures and see how it can be implemented in a purely functional way. We're going to implement a singly-linked list, but we're going to have to take a bit of a detour first to learn about a functional programming technique called pattern matching.

If you're coming from a more procedural or imperative programming language, you could think of pattern matching as a way of "branching" based on the structure of a data type. A more functional way to think of it is as the declaration of a logical implication.

Aside: One benefit of programming in a functional style (or any high-level paradigm really) is that we sufficiently abstract ourselves from lower-level implementation details such that the compiler is able to generate efficient code for us. We want to be able to write code that describes the algorithmic solution to our computational problem rather than concerning ourselves with specifics of how or where the code will run.

An example of this abstraction in TypeScript would be using the array prototype functions (map, filter, reduce, etc) to describe how to transform an array as opposed to using mutable variables and for-loops. Language devs can improve the implementation of the array prototype functions and we reap the benefits without requiring any changes to our code because these are pure, declarative functions that describe a computation e.g. "this array is that array with each element multiplied by 2" or "this array is that array with all elements greater than 10 removed" or "this array is the sum of all elements in that array."
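For instance, each of the three descriptions above maps directly onto one array prototype function. A quick sketch using plain numbers:

```typescript
const that = [1, 5, 12, 20];

// "this array is that array with each element multiplied by 2"
const doubled = that.map((x: number) => x * 2); // => [2, 10, 24, 40]

// "this array is that array with all elements greater than 10 removed"
const small = that.filter((x: number) => x <= 10); // => [1, 5]

// "this value is the sum of all elements in that array"
const total = that.reduce((acc: number, x: number) => acc + x, 0); // => 38
```

Each line states a relationship between inputs and outputs; none of them says anything about loop counters or in-place writes, which is exactly what leaves the runtime free to optimize.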

If we were to implement these algorithms using loops and mutation, there would be no opportunity for transparent algorithmic optimizations, because such code describes the how of the computation rather than the what. For the most part, loops and mutation are already as fast as they can be, and mutation rules out (direct) parallelization in a way that pure functions do not.

Pattern Matching in TypeScript

Pattern matching is a way of branching based on the structure of a data type. In its simplest form, this requires two things: (1) branching and (2) a way to discern the structure of a data type.

We'll start with the simplest case: a pattern that matches any structure. For this we will create a Case type to accept a parameter which can identify the "anything" pattern. This will allow cases to identify that we should match anything in that position in the data structure:

type Case = (
  anythingPattern: any
) => [typeof anythingPattern, (value: any) => any];

const match =
  (...cases: Case[]) =>
  (value: any): any => {
    if (cases.length === 0) return null;
    const anythingPattern = {};
    const [firstCase, ...restCases] = cases;
    const [firstPattern, firstFunc] = firstCase(anythingPattern);
    if (firstPattern === anythingPattern) return firstFunc(value);
    return match(...restCases)(value);
  };

const multiFunc = match((_) => [_, (value) => typeof value]);

console.log(
  multiFunc("hello!"), // => "string"
  multiFunc(3), // => "number"
  multiFunc(true), // => "boolean"
  multiFunc({ message: "FP is cool" }) // => "object"
);

Next we'll want to be able to match against a specific value. This is pretty simple, we'll just add a check for equality:

const match =
  (...cases: Case[]) =>
  (value: any): any => {
    if (cases.length === 0) return null;
    const anythingPattern = {};
    const [firstCase, ...restCases] = cases;
    const [firstPattern, firstFunc] = firstCase(anythingPattern);
    if (firstPattern === anythingPattern) return firstFunc(value);
    if (firstPattern === value) return firstFunc(value); // <== Added check for equality
    return match(...restCases)(value);
  };

We're getting somewhere now, but this is still basically just a fancy switch statement. Matching a specific value is a good start, but we want to be able to match against complex data structures. In TypeScript, these data structures typically come in the form of arbitrarily nested objects and / or arrays.
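As a quick sanity check of the value-matching version, here is a self-contained sketch (repeating the `Case` and `match` definitions from above) that dispatches on specific status codes with an "anything" fallback; the `describe` function and its cases are invented for illustration:

```typescript
type Case = (
  anythingPattern: any
) => [typeof anythingPattern, (value: any) => any];

const match =
  (...cases: Case[]) =>
  (value: any): any => {
    if (cases.length === 0) return null;
    const anythingPattern = {};
    const [firstCase, ...restCases] = cases;
    const [firstPattern, firstFunc] = firstCase(anythingPattern);
    if (firstPattern === anythingPattern) return firstFunc(value);
    if (firstPattern === value) return firstFunc(value);
    return match(...restCases)(value);
  };

// Dispatch on specific values, falling through to "anything" at the end.
const describe = match(
  (_) => [404, () => "Not Found"],
  (_) => [500, () => "Server Error"],
  (_) => [_, () => "Something else"]
);

console.log(describe(404)); // => "Not Found"
console.log(describe(200)); // => "Something else"
```

Note how the cases are tried in order, just like the arms of a switch statement, which is exactly why we'll want richer structural patterns next.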

Let's continue toward this goal by implementing matching against objects. To start, we should refactor a bit. We'll extract the check for a matching pattern into its own function:

const matchPattern = (anythingPattern: any) => (pattern: any, value: any) => {
  // Any
  if (pattern === anythingPattern) return true;

  // Exact Value
  if (pattern === value) return true;

  // Not equal + Not an object = Not a match
  if (typeof pattern !== "object") return false;
  if (typeof value !== "object") return false;

  // Null pattern or value is never a match
  if (pattern === null) return false;
  if (value === null) return false;

  // We're not handling arrays yet
  if (Array.isArray(pattern)) return false;

  // Objects
  const matchKeys = (keys: string[]): boolean => {
    if (keys.length === 0) return true;
    const [firstKey, ...restKeys] = keys;
    if (!(firstKey in value)) return false;
    if (!matchPattern(anythingPattern)(pattern[firstKey], value[firstKey]))
      return false;
    return matchKeys(restKeys);
  };
  return matchKeys(Object.keys(pattern));
};

Next we just need to update the match function and we should be good to go:

const match =
  (...cases: Case[]) =>
  (value: any): any => {
    if (cases.length === 0) return null;
    const anythingPattern = {};
    const [firstCase, ...restCases] = cases;
    const [firstPattern, firstFunc] = firstCase(anythingPattern);
    if (matchPattern(anythingPattern)(firstPattern, value))
      // ^== Use our new function to determine a pattern match
      return firstFunc(value);
    else return match(...restCases)(value);
  };

Perfect! Now we're really cooking with fire. We can match arbitrary values, specific values, and any object-shaped value. We're almost there, but we still need to be able to match against arrays.

Let's update our matchPattern function to check for matches with arrays. The implementation will look very similar to the check for an object pattern because an object is basically an array of key/value pairs.

const matchPattern = (anythingPattern: any) => (pattern: any, value: any) => {
  // ... (all of the earlier checks stay the same)

  // Arrays
  const matchItems = (patternItems: any[], valueItems: any[]): boolean => {
    if (!Array.isArray(valueItems)) return false;
    if (valueItems.length < patternItems.length) return false;
    if (patternItems.length === 0) return true;
    const [firstPatternItem, ...restPatternItems] = patternItems;
    const [firstValueItem, ...restValueItems] = valueItems;
    if (!matchPattern(anythingPattern)(firstPatternItem, firstValueItem))
      return false;
    return matchItems(restPatternItems, restValueItems);
  };
  if (Array.isArray(pattern)) return matchItems(pattern, value);

  // ... (object matching stays the same)
};
And because we abstracted the check for matching a pattern, we don't need to make any changes to our match function. Now THIS is pattern matching. We can match arbitrary values, specific values, any (arbitrarily nested) object-shaped value, and any (arbitrarily nested) array-shaped value. We're done! We've implemented pattern matching in TypeScript.
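To see the finished matcher in action, here's a self-contained run against a nested object-and-array shape. The definitions are repeated from above so the sketch runs on its own (the inner recursions are condensed with `Array.prototype.every` for brevity), and the `classify` function and its shapes are invented for illustration:

```typescript
type Case = (
  anythingPattern: any
) => [typeof anythingPattern, (value: any) => any];

const matchPattern =
  (anythingPattern: any) =>
  (pattern: any, value: any): boolean => {
    if (pattern === anythingPattern) return true; // Any
    if (pattern === value) return true; // Exact value
    if (typeof pattern !== "object" || typeof value !== "object") return false;
    if (pattern === null || value === null) return false;

    // Arrays: each pattern item must match the value item at the same index
    if (Array.isArray(pattern)) {
      if (!Array.isArray(value)) return false;
      if (value.length < pattern.length) return false;
      return pattern.every((p: any, i: number) =>
        matchPattern(anythingPattern)(p, value[i])
      );
    }

    // Objects: every pattern key must exist and match in the value
    return Object.keys(pattern).every(
      (k: string) =>
        k in value && matchPattern(anythingPattern)(pattern[k], value[k])
    );
  };

const match =
  (...cases: Case[]) =>
  (value: any): any => {
    if (cases.length === 0) return null;
    const anythingPattern = {};
    const [firstCase, ...restCases] = cases;
    const [firstPattern, firstFunc] = firstCase(anythingPattern);
    if (matchPattern(anythingPattern)(firstPattern, value))
      return firstFunc(value);
    return match(...restCases)(value);
  };

// Matching arbitrarily nested objects and arrays:
const classify = match(
  (_) => [{ tags: ["urgent", _] }, () => "urgent, tagged twice or more"],
  (_) => [{ tags: _ }, () => "tagged"],
  (_) => [_, () => "untagged"]
);

console.log(classify({ tags: ["urgent", "bug"] })); // => "urgent, tagged twice or more"
console.log(classify({ tags: [] })); // => "tagged"
console.log(classify({ id: 1 })); // => "untagged"
```

Notice that array patterns match a prefix: extra items in the value are allowed, which mirrors how extra object keys are allowed.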

Now let's take a look at how we can use this to implement functions on a singly-linked list in a purely functional way. Let's start with the definition of the singly-linked list data structure:

type List<T> = {
  head: T;
  tail: List<T>;
};

Notice that in the same way that a function can be polymorphic or "taking many forms", a data type can also be polymorphic via generic type parameters. In our example, the data type List<T> could be a list of numbers, strings, booleans, etc, or no list at all!

const nil: List<never> = {
  head: undefined as never,
  tail: undefined as never,
};

The no list at all case is represented in TypeScript by the type List<never>. TypeScript's never type is what is known in type theory as a "bottom type" because it is a subtype of every other type. In other words, a never is a T but a T is not a never, and never itself cannot be constructed. This is why we had to cast head and tail in our nil value.

In our nil value we don't actually need a value in the head or tail properties (hence undefined) because the properties won't actually be accessed. The point of our nil value is that it acts as a link between the type system and the runtime system so we can branch appropriately (by comparing referential equality) in our functional algorithms while still maintaining type-level correctness.

Next we'll declare a data constructor cons (short for construct) to construct a list of an arbitrary type:

const cons = <T>(head: T, tail: List<T>): List<T> => ({
  head,
  tail,
});

For example:

const numberList: List<number> = cons(1, nil); // (1 -> nil)
const stringList: List<string> = cons("hello", cons("world", nil)); // ("hello" -> ("world" -> nil))
const booleanList: List<boolean> = nil; // nil

Notice that our empty list nil can be assigned to a List<boolean> because never is a subtype of all other types.

Now let's put our pattern matching implementation to use by implementing a few list functions. First we'll implement a function which takes the sum of a list of numbers:

const sum = match(
  (_) => [nil, () => 0],
  (_) => [cons(_, _), (x) => x.head + sum(x.tail)]
);

As one might expect, the sum function states that the sum of an empty list (nil) is zero, and the sum of a non-empty list is the first element (x.head) plus the sum of the remaining elements (sum(x.tail)).

Pretty neat! Let's see what a definition of the product of a list of numbers looks like:

const product = match(
  (_) => [nil, () => 1],
  (_) => [cons(0, _), () => 0],
  (_) => [cons(_, _), (x) => x.head * product(x.tail)]
);

Likewise, the product function states that the product of an empty list is one, the product of a list starting with zero is zero, and the product of any other nonempty list is the first element multiplied by the product of the remaining elements.

Note that these are recursive definitions, just like the definition of the data type. It should become clear that most computing tasks can be conceptualized this way; it's only a matter of understanding the general principles and learning how to apply them.

These definitions are perfectly workable, but there's one more thing we can do to make pattern matching a little bit easier to work with and reduce duplicated code. You'll notice how in our result expressions we still have to access x.head and x.tail to use the values of the lists. This is undesirable because it means in general we need to know the implementation details of the structure we're matching against in order to make effective use of our implementation of pattern matching. As a fix, we will implement what I call "pattern marking", which is a way to give a name to parts of your pattern and reference those parts in the result expression.

With it, we can do something like the following and avoid the need to know implementation details like the specific names of the properties of a data structure:

const sum = match(
  (_) => [nil, () => 0],
  (_) => [cons(_`x`, _`xs`), ({ x, xs }) => x + sum(xs)]
);

In this example we create a mark "x" and a mark "xs" and pass them into the list constructor to receive the head and the tail of the list (which is where those marks ended up due to our implementation of the list constructor) inside the result expression.

Let's take a look at how we can implement this in TypeScript. We'll start with a simple function that gets the value at a given path:

const getValueAtPath = (path: string[] | null, obj: any): any => {
  if (path === null) return null;
  if (path.length === 0) return obj;
  if (typeof obj !== "object") return null;
  const [head, ...tail] = path;
  return getValueAtPath(tail, obj[head]);
};

In abstract terms you can think of this as a basic graph-traversal algorithm for a graph with edges identified by strings: it simply follows the given edges and returns the value of the node at the end. We will use it in our pattern matching implementation, in conjunction with the following function, to extract the values we've marked:
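A quick, self-contained run of getValueAtPath (repeating the function so the sketch stands alone), with example objects invented for illustration:

```typescript
const getValueAtPath = (path: string[] | null, obj: any): any => {
  if (path === null) return null;
  if (path.length === 0) return obj;
  if (typeof obj !== "object") return null;
  const [head, ...tail] = path;
  return getValueAtPath(tail, obj[head]);
};

console.log(getValueAtPath(["a", "b"], { a: { b: 42 } })); // => 42
console.log(getValueAtPath(["a", "0"], { a: ["first"] })); // => "first"
console.log(getValueAtPath(["missing"], { a: 1 })); // => undefined
```

Array indices work because they're just string-keyed properties in JavaScript, which is why pathTo below can report them as strings.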

const pathTo = (value: any, obj: any): string[] | null => {
  if (value === obj) return [];

  if (typeof obj !== "object") return null;
  if (obj === null) return null;

  if (Array.isArray(obj)) {
    const pathToInArray = (items: any[], index: number): string[] | null => {
      if (items.length === 0) return null;
      const [head, ...tail] = items;
      const path = pathTo(value, head);
      if (path) return [String(index)].concat(path);
      return pathToInArray(tail, index + 1);
    };
    return pathToInArray(obj, 0);
  }

  const pathToInObject = (keys: string[]): string[] | null => {
    if (keys.length === 0) return null;
    const [head, ...tail] = keys;
    const path = pathTo(value, obj[head]);
    if (path) return [head].concat(path);
    return pathToInObject(tail);
  };
  return pathToInObject(Object.keys(obj));
};

Again, this can be thought of in abstract terms as a graph algorithm: a depth-first search returning a list of edges identified by strings. It provides a way to look up the path to a given mark so we can use that path to extract values from the value passed into the match function.
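Here's a self-contained run of pathTo (repeating the function so the sketch stands alone), searching for a value nested inside a mix of objects and arrays; the example objects are invented for illustration:

```typescript
const pathTo = (value: any, obj: any): string[] | null => {
  if (value === obj) return [];
  if (typeof obj !== "object" || obj === null) return null;

  if (Array.isArray(obj)) {
    const pathToInArray = (items: any[], index: number): string[] | null => {
      if (items.length === 0) return null;
      const [head, ...tail] = items;
      const path = pathTo(value, head);
      if (path) return [String(index)].concat(path);
      return pathToInArray(tail, index + 1);
    };
    return pathToInArray(obj, 0);
  }

  const pathToInObject = (keys: string[]): string[] | null => {
    if (keys.length === 0) return null;
    const [head, ...tail] = keys;
    const path = pathTo(value, obj[head]);
    if (path) return [head].concat(path);
    return pathToInObject(tail);
  };
  return pathToInObject(Object.keys(obj));
};

console.log(pathTo(3, { a: [1, 2, 3] })); // => ["a", "2"]
console.log(pathTo("x", { a: 1 })); // => null
```

Note that a found value at the root returns the empty path [], which is truthy in JavaScript, so the `if (path)` checks still propagate it correctly.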

Finally, we'll implement our "marking" function, which lets us create marks and then build a result object mapping each mark's name to the value found at that mark's position in the matched value:

const markStructure = (obj: any) => {
  const marks: any[] = [];

  const createMark = (name: string) => {
    const mark = { name };
    marks.push(mark); // register the mark so lookupMarks can find it later
    return mark;
  };

  const lookupMarks = (markedObject: any) => {
    const getMarkValues = (items: any[], result: any): any => {
      if (items.length === 0) return result;
      const [head, ...tail] = items;
      const pathToMark = pathTo(head, markedObject);
      const valueAtMark = getValueAtPath(pathToMark, obj);
      return getMarkValues(tail, { ...result, [head.name]: valueAtMark });
    };
    return getMarkValues(marks, {});
  };

  return [createMark, marks, lookupMarks] as const;
};

Aside: To be effective with functional programming, it will help to be familiar with concepts like closure and recursion.

Now all we need to do is update our matching implementation to support marks. We'll treat the createMark function as our anythingPattern to get the same syntax for matching "anything" as before. Because the original implementation was using an object and a referential equality check, this should behave exactly the same way. We also need to treat all of the marks as anything patterns to avoid false negatives. We'll pass in the list of marks and use includes() instead of passing in the anythingPattern:

const matchPattern = (marks: any[] = []) => (pattern: any, value: any) => {
  // Any
  if (marks.includes(pattern)) return true;

  // ... (nothing else changed in this function)
};

Lastly, we'll need to update match to initialize the createMark and lookupMarks functions by calling markStructure with the value it receives, and then to pass createMark and the marks to the newly-updated matchPattern function. When it finds a match, we'll want it to transfer the marked values into the result object and pass it to the result expression:

const match =
  (...cases: Case[]) =>
  (value: any): any => {
    if (cases.length === 0) return null;
    const [firstCase, ...restCases] = cases;
    const [createMark, marks, lookupMarks] = markStructure(value); // <== init createMark() and lookupMarks()
    const [firstPattern, firstFunc] = firstCase(createMark);
    if (matchPattern([createMark, ...marks])(firstPattern, value)) {
      // ^== pass the marks into the updated matchPattern()
      if (marks.length > 0)
        return firstFunc(lookupMarks(firstPattern)); // <== found a match; pass the mark values into the result expression
      return firstFunc(value);
    }
    return match(...restCases)(value);
  };

There you have it, pattern matching (+ marking) in TypeScript!

Let's see what our updated sum and product definitions look like using this new call structure:

const sum = match(
  (_) => [nil, () => 0],
  (_) => [cons(_`x`, _`xs`), ({ x, xs }) => x + sum(xs)]
);

const product = match(
  (_) => [nil, () => 1],
  (_) => [cons(0, _), () => 0],
  (_) => [cons(_`x`, _`xs`), ({ x, xs }) => x * product(xs)]
);

Excellent! Exactly what we wanted. Let's create a little utility for building a list out of a variadic function so we don't have to write nested cons(cons(cons())) and then see some examples of sum and product in action:

const apply = (...list: any[]): List<any> => {
  if (list.length === 0) return nil;
  const [head, ...tail] = list;
  return cons(head, apply(...tail));
};

And then:

const list1 = apply(1, 2, 3, 4); // => (1 -> (2 -> (3 -> (4 -> nil))))
const result1 = sum(list1); // => (1 + 2 + 3 + 4) = 10

const list2 = apply(2, 2, 2); // => (2 -> (2 -> (2 -> nil)))
const result2 = product(list2); // => (2 * 2 * 2) = 8

Perfect. Let's look at a few more examples of pattern matching:

match((_) => [_, () => 42])(apply(1, 2, 3)); // => 42

Here we're using the "anything pattern" to match any expression and return the constant value 42.

match((_) => [cons(_`h`, _), ({ h }) => h])(apply(1, 2, 3)); // => 1

Here we're using a data constructor pattern in conjunction with marking to capture a subexpression of the target (the list (1 -> (2 -> (3 -> nil)))).

match((_) => [cons(_, _`t`), ({ t }) => t])(apply(1, 2, 3)); // => (2 -> (3 -> nil))

Here again we use a data constructor pattern with marking to capture a subexpression of the target, namely the 'tail' of the list.

match((_) => [nil, () => 42])(apply(1, 2, 3)); // => null

This time we get null as the result because there was no match.

Data Sharing in Functional Data Structures

You might be wondering, if the data is immutable, how do we write functions that, for example, add or remove elements from a list? It's pretty straightforward actually: when adding an element to a list all we need to do is create a new list with the element appended, for example:

const el = 1;
const existingList = apply(2, 3, 4);
const withEl = cons(el, existingList); // => (1 -> (2 -> (3 -> (4 -> nil))))

Since our lists are immutable, we don't actually need to copy existingList; we just reuse it. This is called data sharing. Sharing immutable data often lets us implement functions more efficiently - we can always return immutable data structures without having to worry about subsequent code modifying our data. There will be no need to include the idea of making defensive copies in our algorithms (e.g. to avoid modification or corruption) because our data structures are immutable!
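We can observe data sharing directly with a referential-equality check. A self-contained sketch (repeating the List definitions from above):

```typescript
type List<T> = { head: T; tail: List<T> };

const nil: List<never> = {
  head: undefined as never,
  tail: undefined as never,
};

const cons = <T>(head: T, tail: List<T>): List<T> => ({ head, tail });

const existingList = cons(2, cons(3, cons(4, nil))); // (2 -> (3 -> (4 -> nil)))
const withEl = cons(1, existingList);

// The new list's tail IS the old list — same reference, no copy made.
console.log(withEl.tail === existingList); // => true
```

Only one new cons cell was allocated for `withEl`; everything else is the original structure, reused verbatim.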

Defensive copying can become a problem in large mutable programs. Each component of the program (e.g. procedures, classes) that the mutable data passes through needs to make its own copy of the data because other components can make modifications. Immutable data, on the other hand, is always safe to share and does not require defensive copying. On a sufficiently large scale, functional programming can be much more efficient than approaches that rely on side effects due to data sharing and reduced computation.

In the same way as adding an item to a list, we can remove an element from the front of a list by simply returning its tail. There's no actual destruction or removal going on:

const list = apply(1, 2, 3, 4); // => (1 -> (2 -> (3 -> (4 -> nil))))
const listWithout1 = list.tail; // => (2 -> (3 -> (4 -> nil)))

The term for this property of immutable data structures is persistence. We say that functional data structures are persistent, meaning that existing references are never invalidated by operations on the data structure.

The Efficiency of Data Sharing

A surprising example of data sharing is the following function that adds all the elements of one list to the end of another list:

const append = <A>(list1: List<A>, list2: List<A>): List<A> =>
  match(
    (_) => [nil, () => list2],
    (_) => [cons(_`h`, _`t`), ({ h, t }) => cons(h, append(t, list2))]
  )(list1);

const l1 = apply(1, 2); // => (1 -> (2 -> nil))
const l2 = apply(3, 4); // => (3 -> (4 -> nil))
const result = append(l1, l2); // => (1 -> (2 -> (3 -> (4 -> nil))))

As you can see, the append function takes two lists and appends the elements of the second one to the first. The interesting part here is that the algorithm only makes copies equal to the number of elements in list1! When it matches nil in list1 it simply refers to list2 as the tail of the resulting list. Pretty neat.

If we were to implement this function with arrays, we'd have to copy every element of each array into the resulting array. In this case, the immutable linked list is much more efficient in both space and computational complexity.
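We can verify the sharing claim with a referential-equality check. To keep the sketch self-contained, append is written below with plain structural recursion (equivalent in behavior to the pattern-matched version above):

```typescript
type List<T> = { head: T; tail: List<T> };
const nil: List<never> = { head: undefined as never, tail: undefined as never };
const cons = <T>(head: T, tail: List<T>): List<T> => ({ head, tail });

// Direct-recursion equivalent of the pattern-matched append
const append = <T>(list1: List<T>, list2: List<T>): List<T> =>
  list1 === nil ? list2 : cons(list1.head, append(list1.tail, list2));

const l1 = cons(1, cons(2, nil)); // (1 -> (2 -> nil))
const l2 = cons(3, cons(4, nil)); // (3 -> (4 -> nil))
const result = append(l1, l2); // (1 -> (2 -> (3 -> (4 -> nil))))

// Only the two cells of l1 were copied; l2 is shared as-is.
console.log(result.tail.tail === l2); // => true
console.log(l1.tail.tail === nil); // => true — l1 itself is untouched
```

The cost of append is therefore proportional to the length of the first list alone, regardless of how long the second list is.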

With a singly-linked list, any time we want to replace a tail of the list, even the very last one, we have to copy every item that precedes it. Writing efficient purely functional data structures is all about finding ways to exploit data sharing. In TypeScript, you can generally tell whether data is being shared or copied by whether you're passing a reference (objects, arrays) or a value (primitives). Our list implementation above shares the tail by reference, while the head holds its own value.

It's entirely possible to create efficient implementations of sequences, vectors, trees, graphs, and so on in a purely functional way but we aren't going to cover these here.

Recursion Over Lists and Generalizing to Higher-Order Functions

Let's take another look at the definition of our sum and product functions. If we simplify the product implementation just a bit to avoid the short-circuiting logic of checking for zero, we get two very similar definitions:

const sum = match(
  (_) => [nil, () => 0],
  (_) => [cons(_`x`, _`xs`), ({ x, xs }) => x + sum(xs)]
);

const product = match(
  (_) => [nil, () => 1],
  (_) => [cons(_`x`, _`xs`), ({ x, xs }) => x * product(xs)]
);

The only differences here are the value to return in case the list is empty, and the operation applied to the head and tail of the lists. Any time we see duplication like this we can generalize it away by pulling out subexpressions into function arguments. We clearly need to parameterize two things: (1) the result given when matching an empty list and (2) the function to apply to combine the head and tail of the list.

Let's take a look at how that works:

const foldRight = <A, B>(
  list: List<A>,
  emptyResult: B,
  foldFunc: (a: A, b: B) => B
): B =>
  match(
    (_) => [nil, () => emptyResult],
    (_) => [
      cons(_`x`, _`xs`),
      ({ x, xs }) => foldFunc(x, foldRight(xs, emptyResult, foldFunc)),
    ]
  )(list);

Note that while our earlier definitions let match take the list implicitly as their parameter, foldRight needs to accept the list explicitly and pass it through.

Why "foldRight"?

It's an analogy; something that allows a transfer of understanding from one domain to another.

Let's break it down into the two components of the name, "fold" and "right" and examine why we call it that:


"Fold" is an analogy to the folding of paper, it invokes the idea of the shape and orientation of paper and the motion of adding a crease and layering it upon itself. Consider how paper starts flat and when folded, becomes stacked. Also consider how the folding of paper causes a delineation between the two sides, one side leaving its original position to be stacked upon the other.

Now take this knowledge and imagine what it might mean to "fold" a list. There is already a clear delineation to our List structure: head and tail, so clearly the fold will be made there. But what is the analogous component in our list domain to the stacking of the paper after the fold? Well, once you complete a fold with paper, there cease to be two identifiable parts. The crease disappears and you're left with a single piece of stacked paper. Clearly, this stacking effect must be analogous to the "stacking" effect of the list-fold operation; in our example case, "stacking" numbers is simply adding them together, creating a "taller" "stacked" number. So in simple terms, folding the paper is analogous to adding the numbers. The important part is understanding how this analogy can be applied to any operation, not just addition.


Hopefully it's clear what the "fold" part of the analogy means now, but what about "right"? Why are we saying the fold happens "right"? Isn't that the opposite of what's happening? Doesn't the head of the list exist to the left while the tail of the list exists to the right? Doesn't it make more sense to fold the dangling value than to fold the rest of the list? Doesn't our fold function only work on single items?

You're correct. When we say "fold right," it's not an instruction of which direction to "bring the paper for the fold"; it's actually a description of how the fold works. We're not "folding to the right," we're "folding starting from the (far) right."

You see, it is true that we start at the beginning of the list with the first item, i.e. the head, but the problem is that we don't have the rest of the list in foldable terms; all we have is another list: tail. How do we solve this? Well, taking a look at our definition of foldRight, the answer is clear: to complete the fold, we have to first fold the list on the right! If you follow the execution of the function, you'll see that we actually "pause" folding each head to "start with" the tail of the list because we're immediately going into a recursive call to get the value! While it may look like we're folding from the left, in fact, we're starting the fold on the (far) right and bringing each of those folds to the left into the second argument slots. This is why we call it "fold right": the first folds happen on the right. It's a bit counter-intuitive, but it makes sense once you realize what's happening. The key is understanding the order of evaluation and recognizing the recursion.
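One way to see the "start from the far right" behavior is to fold with a function that records the grouping as a string. Again using a direct-recursion foldRight (behaviorally equivalent to the pattern-matched version) so the sketch stands alone:

```typescript
type List<T> = { head: T; tail: List<T> };
const nil: List<never> = { head: undefined as never, tail: undefined as never };
const cons = <T>(head: T, tail: List<T>): List<T> => ({ head, tail });

// Direct-recursion equivalent of the pattern-matched foldRight
const foldRight = <A, B>(
  list: List<A>,
  emptyResult: B,
  foldFunc: (a: A, b: B) => B
): B =>
  list === nil
    ? emptyResult
    : foldFunc(list.head, foldRight(list.tail, emptyResult, foldFunc));

const list = cons(1, cons(2, cons(3, nil))); // (1 -> (2 -> (3 -> nil)))

// The parentheses record each fold: the innermost (rightmost) happens first.
const grouping = foldRight(list, "0", (a, b) => `(${a} + ${b})`);
console.log(grouping); // => "(1 + (2 + (3 + 0)))"
```

The innermost parentheses, (3 + 0), are the first fold to be evaluated, exactly the "far right" fold the prose describes.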

Now that we have that generalized function, let's take a look at what our new implementations of sum and product look like using it:

const sum = (list: List<number>) => foldRight(list, 0, (a, b) => a + b);
const product = (list: List<number>) => foldRight(list, 1, (a, b) => a * b);

Beautiful. Let's implement a few more list functions.

A function that takes the top n items:

const take = <A>(list: List<A>, n: number): List<A> =>
  match(
    (_) => [nil, () => nil],
    (_) => [
      cons(_`h`, _`t`),
      ({ h, t }) => (n === 0 ? nil : cons(h, take(t, n - 1))),
    ]
  )(list);

A function that returns the longest prefix of items that pass the predicate function:

const takeWhile = <A>(list: List<A>, f: (a: A) => boolean): List<A> =>
  match(
    (_) => [nil, () => nil],
    (_) => [
      cons(_`h`, _`t`),
      ({ h, t }) => (f(h) ? cons(h, takeWhile(t, f)) : nil),
    ]
  )(list);

A function that returns true if all elements of the list pass the predicate function:

const forall = <A>(list: List<A>, f: (a: A) => boolean) =>
  foldRight(list, true, (a, b) => f(a) && b);

A function that returns true if any element of the list passes the predicate function:

const exists = <A>(list: List<A>, f: (a: A) => boolean) =>
  foldRight(list, false, (a, b) => f(a) || b);
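As a quick sanity check, here's a self-contained sketch of the same two folds, using plain arrays as a stand-in for the chapter's List (the `Arr` suffix just marks these as illustration-only stand-ins):

```typescript
// Array-based stand-ins for the chapter's List and foldRight.
const foldRightArr = <A, B>(as: A[], z: B, f: (a: A, b: B) => B): B =>
  as.length === 0 ? z : f(as[0], foldRightArr(as.slice(1), z, f));

// forall: every element must pass the predicate.
const forallArr = <A>(as: A[], f: (a: A) => boolean) =>
  foldRightArr(as, true, (a, b) => f(a) && b);

// exists: at least one element must pass the predicate.
const existsArr = <A>(as: A[], f: (a: A) => boolean) =>
  foldRightArr(as, false, (a, b) => f(a) || b);

console.log(forallArr([2, 4, 6], (n) => n % 2 === 0)); // => true
console.log(existsArr([1, 3, 5], (n) => n % 2 === 0)); // => false
```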

A function like foldRight except it returns the list of partial results, rather than just the final value:

const scanRight = <A, B>(
  list: List<A>,
  defaultValue: B,
  f: (a: A, b: B) => B
): List<B> =>
  match(
    (_) => [nil, () => cons(defaultValue, nil)],
    (_) => [
      cons(_`x`, _`xs`),
      ({ x, xs }) =>
        cons(
          foldRight(cons(x, xs), defaultValue, f),
          scanRight(xs, defaultValue, f)
        ),
    ]
  )(list);

Loss of efficiency in naive list functions

One of the problems with List is that despite being able to express operations and algorithms in very simple, elegant terms, the resulting implementation isn't always efficient. For example, when reusing functions (to avoid duplicating the same patterns over and over) we might end up making multiple passes over the same input.
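For instance (a sketch using built-in arrays for brevity): chaining reusable functions like map and filter walks the input once per function, while a single hand-fused fold walks it once in total:

```typescript
const xs = [1, 2, 3, 4];

// Two passes: map walks the array once, then filter walks the result again.
const twoPasses = xs.map((x) => x + 1).filter((x) => x % 2 === 0);

// One pass: both steps fused into a single fold.
const onePass = xs.reduce<number[]>(
  (acc, x) => ((x + 1) % 2 === 0 ? acc.concat(x + 1) : acc),
  []
);

console.log(twoPasses, onePass); // both => [2, 4]
```

The fused version is faster but less modular; later chapters of most FP texts address this tension with lazy evaluation.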

Algebraic Data Types

List is just one example of what's called an Algebraic Data Type. Algebraic data types, or ADTs, are just data types defined by one or more data constructors, each of which may contain zero or more arguments. We say that the data type is the sum of its data constructors, and each data constructor is the product of its arguments, hence the name algebraic data type.

Here are these parts in our List example:

// A data type:
type List<T> = { head: T, tail: List<T> }

// Constructors of that data type:
const nil = (): List<never> => ({ head: undefined as never, tail: undefined as never })
const cons = <T,>(head: T, tail: List<T>): List<T> => ({ head, tail })

// Arguments:
(head: T, tail: List<T>)

Notice the absence of the class keyword. This is intentional. In functional programming, when we say constructor we are not referring to a language feature, we are referring to the ability of a function to construct data types.
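To make the "sum of products" reading concrete, here's a small hypothetical ADT written as a discriminated union (the Shape type and its circle/rect constructors are invented for illustration; they're not part of the chapter's List example):

```typescript
// Shape is the *sum* of two data constructors;
// each constructor is the *product* of its arguments.
type Shape =
  | { tag: "circle"; radius: number }
  | { tag: "rect"; width: number; height: number };

const circle = (radius: number): Shape => ({ tag: "circle", radius });
const rect = (width: number, height: number): Shape => ({ tag: "rect", width, height });

// Discriminating on the tag recovers which constructor built the value.
const area = (s: Shape): number =>
  s.tag === "circle" ? Math.PI * s.radius ** 2 : s.width * s.height;

console.log(area(rect(3, 4))); // => 12
```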

Pairs and tuples of other arities are also algebraic data types, given the definition we provided, and can also be used as data in pattern matching. This should feel natural in TypeScript since, at runtime, there is no distinction between arrays and tuples.


We can define other data structures as algebraic data types. Let's see what an example of a binary tree would look like:

interface Tree<A> {}

interface Leaf<A> extends Tree<A> {
  value: A;
}

interface Branch<A> extends Tree<A> {
  left: Tree<A>;
  right: Tree<A>;
}
Nice! Although, if we take a look at our definition of an algebraic data type, we'll notice we can simplify this example: "A data type which is just the sum (or union) of its data constructors, each of which is the product of its arguments"

Here's the simplified example:

type Tree<A> = Leaf<A> | Branch<A>;
type Leaf<A> = A;
type Branch<A> = [Tree<A>, Tree<A>];

Perfect. As an exercise, let's try to write a few functions for this data type!

A function size which counts the number of nodes (leaves and branches) in a tree:

const size = match(
  (_) => [
    [_`left`, _`right`],
    ({ left, right }) => 1 + size(left) + size(right),
  ],
  (_) => [_, () => 1]
);

const sizeExampleTree1: Tree<number> = 123;
const sizeExampleTree2: Tree<number> = [123, 123];
const sizeExampleTree3: Tree<number> = [[123, 123], 123];

const sizeExampleResult1 = size(sizeExampleTree1); // Root node: 1
const sizeExampleResult2 = size(sizeExampleTree2); // Root node + Branches: 3
const sizeExampleResult3 = size(sizeExampleTree3); // Root Node + Branches + Sub-Branches: 5

A function maximum which returns the maximum value in a Tree<number>:

const maximum = match(
  (_) => [
    [_`left`, _`right`],
    ({ left, right }) => Math.max(maximum(left), maximum(right)),
  ],
  (_) => [_`value`, ({ value }) => value]
);

const maxExampleTree1: Tree<number> = 1;
const maxExampleTree2: Tree<number> = [1, 2];
const maxExampleTree3: Tree<number> = [[4, [8, 2]], 1];

const maxExampleResult1 = maximum(maxExampleTree1); // => Max: 1
const maxExampleResult2 = maximum(maxExampleTree2); // => Max: 2
const maxExampleResult3 = maximum(maxExampleTree3); // => Max: 8

A function depth which returns the maximum path length from the root of the tree to any leaf:

const depth = match(
  (_) => [
    [_`left`, _`right`],
    ({ left, right }) => 1 + Math.max(depth(left), depth(right)),
  ],
  (_) => [_, () => 1]
);

const depthExampleTree1: Tree<number> = 1;
const depthExampleTree2: Tree<number> = [1, 1];
const depthExampleTree3: Tree<number> = [[1, 1], 1];

const depthExampleResult1 = depth(depthExampleTree1); // => Root node: 1
const depthExampleResult2 = depth(depthExampleTree2); // => Root node + branch layer: 2
const depthExampleResult3 = depth(depthExampleTree3); // => Root node + branch layer + branch layer: 3

A function map analogous to the function of the same name on List, modifying each element in the tree with a given function:

const map = <A, B>(tree: Tree<A>, f: (t: A) => B): Tree<B> =>
  match(
    (_) => [
      [_`left`, _`right`],
      ({ left, right }) => [map(left, f), map(right, f)],
    ],
    (_) => [_`value`, ({ value }) => f(value)]
  )(tree);

const addOne = (a: number) => a + 1;

const mapExampleTree1: Tree<number> = 1;
const mapExampleTree2: Tree<number> = [1, 2];
const mapExampleTree3: Tree<number> = [[2, 3], 1];

const mapExampleResult1 = map(mapExampleTree1, addOne); // => 2
const mapExampleResult2 = map(mapExampleTree2, addOne); // => [2, 3]
const mapExampleResult3 = map(mapExampleTree3, addOne); // => [[3, 4], 2]

And finally, a function fold that generalizes over all these functions:

const fold = <A, B>(
  fBranch: (left: B, right: B) => B,
  fLeaf: (a: A) => B
): ((tree: Tree<A>) => B) =>
  match(
    (_) => [
      [_`left`, _`right`],
      ({ left, right }) =>
        fBranch(fold(fBranch, fLeaf)(left), fold(fBranch, fLeaf)(right)),
    ],
    (_) => [_`value`, ({ value }) => fLeaf(value)]
  );
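To check that a fold like this really does generalize the earlier functions, here's a self-contained sketch over the simplified Tree encoding, written with a plain conditional instead of match so it runs on its own (foldTree, sizeViaFold, and maxViaFold are illustration-only names):

```typescript
// The simplified encoding from above: a leaf is a value, a branch is a pair.
type NumTree = number | [NumTree, NumTree];

const foldTree = <B>(
  t: NumTree,
  fLeaf: (n: number) => B,
  fBranch: (left: B, right: B) => B
): B =>
  Array.isArray(t)
    ? fBranch(foldTree(t[0], fLeaf, fBranch), foldTree(t[1], fLeaf, fBranch))
    : fLeaf(t);

// size and maximum fall out as one-line instances of the fold:
const sizeViaFold = (t: NumTree) => foldTree(t, () => 1, (l, r) => 1 + l + r);
const maxViaFold = (t: NumTree) => foldTree(t, (v) => v, (l, r) => Math.max(l, r));

console.log(sizeViaFold([[123, 123], 123])); // => 5
console.log(maxViaFold([[4, [8, 2]], 1])); // => 8
```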

Well, that's all I've got for you on Algebraic Data Types in TypeScript. In the next chapter we'll look into how to handle errors without exceptions!

Notes on Chapter 2

With pure functions, how do we write the simplest program?

How can we change our view of programs as a sequence of instructions, each of which cause some effect, into a view of programs as a composition of pure functions?

Should I implement the abs function from page 2/9?

Code on the left-hand side of an equals sign is referred to as its "Signature", code on the right-hand side of an equals sign is referred to as its "Definition". Example: const signature: string = 'definition';

Despite type inference, it is generally good practice to declare the return type of functions you expect other people to use.

Functions that return void and have side-effects are often referred to as procedures to emphasize the fact that they are impure.

Functional Program State

Suppose we want to write a function to calculate the factorial of a given number:

const factorial = (n: number): number => {
  const loop = (n: number, state: number): number =>
    n <= 0 ? state : loop(n - 1, n * state);

  return loop(n, 1);
};
Notice that we do not use a mutable loop variable in this example, yet we're able to "loop" or iterate from number 1 to n. The key to functional iteration is recursion.

Another notable aspect of this code is the declaration of the inner function called loop: in TypeScript we can define a function in any block of code, including inside another function. Scoping rules keep the identifier (loop) local to the function (factorial) just like a local variable. This means the declaration is not a side-effect, because it does not affect the global context of the program!

Already you might be starting to get an understanding of how it is possible to perform stateful calculations with pure functions. Indeed, the total number accumulated so far during the calculation of the factorial is a form of program state!

Aside: now would be a good time to talk about tail-call optimization, the ECMAScript specification, and browser support for tail-call optimization; however, this will be left as an exercise for the reader. The important thing to note is that recursion of this nature is not in danger of overflowing the maximum number of call stack frames if you're using a runtime that implements the proper tail calls required by the ECMAScript specification, as recursion in "tail-call position" is transformed into the same lower representation as a while loop.

Denotational vs Operational Semantics

There's an important distinction between functional and imperative code, specifically a distinction in the meaning of the source code itself:

  • Imperative source code is composed of statements. The meaning of these statements is that the machine should perform the corresponding operations, e.g. "allocate memory for this variable", "place this value in this memory location", "jump to this memory location", etc. This is called operational semantics.
  • Functional source code is composed of expressions. The meaning of these expressions is that the given expressions are true, that the given relationships exist, e.g. function signatures and function bodies:
    • "factorial is a relationship between a number and a number"
    • "the inner loop of the factorial is a relationship between two numbers and a number"
    • "the factorial, for n to zero, is, given zero, the current accumulated state, or given a number greater than zero, the factorial of that number with the given state multiplied by the number"
    • This is called denotational semantics.
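The contrast shows up even in a tiny program. Here's a sketch with both styles computing the same sum:

```typescript
// Operational: a sequence of instructions mutating a variable.
let total = 0;
for (const n of [1, 2, 3]) {
  total += n;
}

// Denotational: a statement of what the sum *is* --
// a relationship between a list of numbers and a number.
const sum = (ns: number[]): number =>
  ns.length === 0 ? 0 : ns[0] + sum(ns.slice(1));

console.log(total, sum([1, 2, 3])); // => 6 6
```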

Higher-order Functions

Functions are values. Functions can be assigned to variables. Functions can be passed to other functions. Functions that accept other functions as values are called Higher-order Functions (HOFs).

Suppose we have two functions that both perform a calculation and print the result to the console:

const addThreeAndPrint = (x: number) => {
  const result = x + 3;
  console.log(`The result is: ${result}`);
};

addThreeAndPrint(2);
// => 5

const multiplyTwoAndPrint = (x: number) => {
  const result = x * 2;
  console.log(`The result is: ${result}`);
};

multiplyTwoAndPrint(2);
// => 4

Noticing that the two functions are nearly identical, we can generalize the shared code as a function:

const calculateAndPrint = (n: number, f: (x: number) => number) => {
  const result = f(n);
  console.log(`The result is: ${result}`);
};

Now our function calculateAndPrint receives a function to perform a calculation and handles the duplicate responsibility in the previous examples, thus reducing the total code and promoting reuse! Let's see what the resulting code looks like:

const addThree = (x: number) => x + 3;

const multiplyTwo = (x: number) => x * 2;

const calculateAndPrint = (n: number, f: (x: number) => number) => {
  const result = f(n);
  console.log(`The result is: ${result}`);
};

calculateAndPrint(2, addThree);
// => 5

calculateAndPrint(2, multiplyTwo);
// => 4

Excellent. Here, calculateAndPrint is an example of a higher-order function: part of its definition is receiving another function! This is a way to compose functionality in a purely-functional way. Think of it as plugging a cord into a socket, or clicking a lego block into place.

Aside: you may be wondering why certain variable names are chosen, such as 'x', 'n', 'f', etc. These are just conventional names, mostly for convenience. The values they represent tend to be rather abstract and naming such abstract things tends to be hard to do in a concise manner.

Polymorphic Functions

  • Monomorphic functions operate on specific types of data
  • Polymorphic functions operate on generic types of data

Aside: polymorphism is a broad concept and has multiple meanings. Remember the meaning of the roots of the word in order to better remember the meanings in the different contexts. "Poly" means "many" and "morph" means "form", thus the meaning of "polymorphic" in a given context can usually be deduced by considering the pieces that can take many forms. In a functional context, with respect to the signature of a function, "polymorphic" refers to a parameter which does not have a single concrete type but can instead be one of many different types.

An example of a monomorphic function would be:

const findFirst = (strings: string[], key: string): number => {
  const loop = (n: number): number => {
    if (n >= strings.length) return -1;
    else if (strings[n] === key) return n;
    else return loop(n + 1);
  };
  return loop(0);
};
As you can see, the function takes specific types of data: a string array to search from, a string key to search for, and it gives back a number indicating the index of the found string (or -1 if not found).

Notice how we avoided imperative looping again.

Now imagine we wanted to be able to search for numbers in an array of numbers:

const findFirst = (numbers: number[], key: number): number => {
  const loop = (n: number): number => {
    if (n >= numbers.length) return -1;
    else if (numbers[n] === key) return n;
    else return loop(n + 1);
  };
  return loop(0);
};

As you can see, this is very much the same function. The only difference is in the type of the array and the type of the key we're searching for. This is a great use case for a polymorphic function which would allow us to search an array of any type for a value of that type!

Let's see what that looks like:

function findFirst<T>(items: T[], comparator: (item: T) => boolean): number {
  const loop = (n: number): number => {
    if (n >= items.length) return -1;
    else if (comparator(items[n])) return n;
    else return loop(n + 1);
  };
  return loop(0);
}

The one important difference to note is that we can no longer use the standard TypeScript triple-equals operator to compare because we have to be able to handle arbitrary types. We change the second parameter into a function (making findFirst a higher-order function!) which takes an item in the array and returns a boolean value indicating whether we've found the right one.

This is the correct way to preserve the semantics, or in other words the meaning or purpose, of the two original functions, otherwise it would behave differently for value types vs reference types. There's a much deeper philosophical consideration to be had of the notion of equality that will, for now, be left up to the reader but it will become important again later on.
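Here's a quick sketch of why a plain triple-equals search would misbehave for reference types (using the built-in array methods for brevity):

```typescript
const points = [{ x: 1, y: 2 }, { x: 3, y: 4 }];

// A structurally identical but freshly created object is never === an existing one,
// so an equality-based search cannot find it:
console.log(points.indexOf({ x: 3, y: 4 })); // => -1

// A predicate function lets the caller decide what "the right one" means:
console.log(points.findIndex((p) => p.x === 3 && p.y === 4)); // => 1
```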

For now, take notice of the fact that the universe of possible implementations is significantly reduced when implementing a polymorphic function. In fact, the only operations you can perform on the polymorphic type are the ones passed into the function as arguments![1] You may find that some type signatures constrain a function such that there is only one valid implementation.

[1] Except, of course, for the methods that are built-in to all types in TypeScript, but this is an implementation detail of the language that is not relevant in general to functional programming

In fact, why don't we take a look at such a function with only one valid implementation? We'll start with the type definition:

type Partial<A, B, C> = (a: A, f: (a: A, b: B) => C) => (b: B) => C;

This is a bit of a leap in complexity, considering we're dealing with three type parameters and a higher-order function, so let's unpack it:

  1. This is the type of a function, it has inputs and an output
  2. It has three type parameters, A, B, and C
  3. There are two inputs: a value of type A, and a function of type (A, B) => C
  4. There is one output: a function of type (B) => C

Interesting! Now let's consider what kind of function we could write that would conform to such a highly-generic signature. Where can we get a value of type (B) => C inside a function whose only inputs are a value of type A, and a value of type (A, B) => C? We aren't allowed to use global variables, so we can't reference one that already exists. We could try to declare a function that fits the signature and return that but that wouldn't work either because we have no means to construct an arbitrary type C! Well, maybe we can use the hints given to us by the type signature, considering that the name of the type is "Partial" and the arguments seem to be connected. Our first argument is of type A, and the second argument is a function which takes an argument of type A!

So let's try this: we'll declare a new function which takes an arbitrary type B, but we need a way to get a value of type C:

function partial<A, B, C>(a: A, f: (a: A, b: B) => C): (b: B) => C {
	const g = (b: B): C => ???
	return g;
}

Luckily, we have a function that produces a value of type C: the second parameter to partial! All we need to do is take the function f and use it inside our new function g. When g is called, it will have access to both the value of type A from partial and the value of type B from g! In fact, this is the only possible implementation[2].

function partial<A, B, C>(a: A, f: (a: A, b: B) => C): (b: B) => C {
  const g = (b: B): C => f(a, b);
  return g;
}
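A quick usage sketch (repeating the definition so the snippet stands alone; add is an invented example function):

```typescript
function partial<A, B, C>(a: A, f: (a: A, b: B) => C): (b: B) => C {
  const g = (b: B): C => f(a, b);
  return g;
}

const add = (a: number, b: number) => a + b;

// Partially apply the first argument now; supply the second one later.
const addFive = partial(5, add);
console.log(addFive(3)); // => 8
```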

Aside: when declaring a function inside a function, the inner function has access to all the arguments of the outer function. This is called "closure". If you think about it, it makes a lot of sense. The definition of the inner function is part of the body of the outer function. The inner function comes into existence because of the evaluation of the outer function so it seems almost obvious that the inner function would have access to anything in scope in the body of the outer function.

[2] Again, disregarding the TypeScript/JavaScript specific implementation details such as the prototype chain. We strive to learn generally-applicable techniques because the return on the investment of time and energy required to learn is much greater. We'll be able to write modular, testable, maintainable, parallelizable code in any language, rather than limiting ourselves to languages that support (for example) prototype-based inheritance

Let's look at another example. Again we'll start with the type signature and see if we can determine what the allowable implementation would be:

type Compose<A, B, C> = (f: (b: B) => C, g: (a: A) => B) => (a: A) => C;

Considering the type signature:

  1. This is the type of a function, it has two inputs and one output
  2. There are three generic type parameters, A, B, C
  3. The inputs are also functions, a function f of type B => C and a function g of type A => B
  4. The output is also a function, a function of type A => C

We should note, again, that this function is highly generic, meaning the universe of implementations is going to be very narrow. Indeed, this is another example of a function with only a single valid implementation.

We need to return a function which takes a value of type A and returns a value of type C:

function compose<A, B, C>(f: (b: B) => C, g: (a: A) => B): (a: A) => C {
	const h = (a: A): C => ???
	return h;
}

There is no way to construct an arbitrary type C, so we will need to make use of the functions passed in as parameters. Luckily, we have exactly what we need: f takes a value of type B and creates a value of type C, and g takes a value of type A and creates a value of type B. All we have to do is call function g and pass the result to function f! Then our return value will be the correct type!

function compose<A, B, C>(f: (b: B) => C, g: (a: A) => B): (a: A) => C {
  const h = (a: A): C => f(g(a));
  return h;
}
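And a usage sketch (again repeating the definition so it runs standalone; double and describe are invented example functions):

```typescript
function compose<A, B, C>(f: (b: B) => C, g: (a: A) => B): (a: A) => C {
  const h = (a: A): C => f(g(a));
  return h;
}

const double = (n: number) => n * 2;
const describe = (n: number) => `the answer is ${n}`;

// Runs g first (double), then feeds its output into f (describe).
const describeDouble = compose(describe, double);
console.log(describeDouble(21)); // => "the answer is 42"
```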

Let's go over one more example of highly generic functions that only seem to have one correct implementation to really drive the point home. We'll start with the type:

type Curry<A, B, C> = (f: (a: A, b: B) => C) => (a: A) => (b: B) => C;

Considering the type signature:

  1. This is the type of a function, it has one input and one output
    • Input: (a: A, b: B) => C
    • Output: (a: A) => (b: B) => C
  2. The output itself is a function, which also has one input and one output:
    • Input: (a: A)
    • Output: (b: B) => C
  3. The output of that function, likewise, is a function with one input and one output:
    • Input: (b: B)
    • Output: C
  4. There are three generic type parameters, A, B, C

Given the highly generic type signature, we again assume the universe of implementations is highly limited. Let's give the implementation a try:

function curry<A, B, C>(f: (a: A, b: B) => C): (a: A) => (b: B) => C {
	const g = (a: A) => ???
	return g;
}

So we know we must return a function which takes a value of type A and returns a value of type B => C. The question is, how do we get that value? We have no means of constructing an arbitrary generic primitive value (that would require that every type in our language has e.g. a new operator a.k.a. constructor), so clearly our options are limited to functions.

Observing that our input is a function with a very relevant signature, there must be a way to implement the given type. The input function has the two parameters we need, but it expects them at the same time. Given the principles of functions-as-values and closure, it's clear we have an answer. We'll simply return a function which takes the first argument a: A then passes back another function which takes the argument b: B and applies our input function to get the needed output value C!

function curry<A, B, C>(f: (a: A, b: B) => C): (a: A) => (b: B) => C {
  const g = (a: A) => {
    const h = (b: B) => {
      return f(a, b);
    };
    return h;
  };
  return g;
}
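A usage sketch, with a compact re-definition so it runs standalone (multiply is an invented example function):

```typescript
function curry<A, B, C>(f: (a: A, b: B) => C): (a: A) => (b: B) => C {
  return (a: A) => (b: B) => f(a, b);
}

const multiply = (a: number, b: number) => a * b;

// curry turns a two-argument function into a chain of one-argument functions.
const triple = curry(multiply)(3);
console.log(triple(7)); // => 21
```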

Notes on Chapter 1

What is functional programming?

  • Constructing programs from nothing but pure functions

What is a pure function?

  • A function that does not access global variables
  • A function that does not modify its input
  • A function that does not throw exceptions or modify program execution
  • A function that does not read user input or write to the file system
  • A function that does not draw on the screen or play a sound
  • ... in short:
  • A function that always produces the same result for a given input and has no effect other than returning its result

What are the benefits of functional programming?

  • Code that is highly modular
    • Modularity is a measure of how self-contained a code unit is, and thus, how easy that unit is to combine with other units
      • For example, Lego(TM) blocks are very modular because each block is entirely self-contained:
      • They have a well-defined boundary defined by the physical extent of the plastic that makes up the block
      • They have a mechanism for connecting to other blocks via the cylindrical protrusions on top
      • They have a mechanism for receiving a connection from other blocks via the void space on bottom
    • Pure functions only have access to the inputs they declare and are therefore entirely self-contained and maximally modular
      • Pure functions behave exactly the same way as lego blocks:
      • They have a well-defined boundary defined by the signature and body of the function
      • They have a mechanism for connecting to other pure functions via return values and function evaluation
      • They have a mechanism for receiving a connection from other pure functions via input values and function evaluation
  • Code that is easy to test
    • Pure functions cannot access global variables and are therefore easier to test because they require no specific global context to be set up before the run
    • Pure functions always produce the same result for a given input and are therefore easier to test because the tests only need to be run once
  • Code that is easy to reuse
    • Pure functions have no side-effects, always produce the same result for a given input, and do not access global variables so they are easy to safely and confidently reuse anywhere in the program
  • Code that is easy to parallelize
    • Pure functions cannot cause side-effects such as modifying their inputs, therefore they require no data locking mechanism when used in a parallel context, thus making it easier to write parallel algorithms
  • Code that is easy to reason about
    • Reasoning about code is the act of mentally evaluating the unit and predicting the resultant state of the program
    • Pure functions cannot modify their inputs, they cannot access global variables, and they cannot change program state, the result of evaluation of a pure function is entirely contained within its return value, reducing the total number of things required to be held in memory during reasoning to a bare minimum
    • To reason about a functional program, one can make use of the substitution model of program evaluation in which one simply replaces the name and arguments of a pure function with the result of the application of the function, allowing one to confidently forget the definition of the function after the substitution has been made
    • When reasoning about code written in other programming paradigms, global state must be remembered and procedure calls or method calls can have arbitrary effects on program state, eliminating the possibility of using the simple substitution model and increasing the difficulty of reasoning about the code
    • A procedure in Procedural Programming or a method in Object-Oriented Programming doesn't need to return a value at all, often intentionally mutating state in a higher context, thus requiring the programmer to hold many more things in mind while reasoning about the code
  • Code that is obviously correct
    • Functional programming eliminates entire classes of bugs due to its helpful restrictions on global state, immutable data, exceptions, and side-effects
    • Because pure functions cannot access global state, they cannot fail due to the order in which functions are called
    • Because pure functions cannot access global state, they cannot fail due to running in an incorrect context
    • Because pure functions cannot mutate the value of their inputs, they cannot cause other functions to fail when composed together by passing the output of one into the input of another
    • Because pure functions must always produce the same result given the same input, their behavior cannot change during the runtime of the program
    • Because pure functions are easier to test, they are more likely to be correct
    • Because pure functions cannot throw exceptions, the program cannot fail due to a runtime exception

For an example of an impure program, see the git repository

A formal definition of (pure, mathematical) functions

  • A function has exactly one type of input and exactly one type of output
  • A function relates every value of the input type to exactly one value of the output type
  • A function's output value is determined solely by its input value
  • Changes in the state of internal or external processes are irrelevant to the computation of a function's output value

From here onward, there will be a precise distinction between different types of computer code

  • "Function" will be used to refer to the above definition of a function
  • "Procedure" will be used to refer to parameterized code that may have side-effects
  • "Method" may be used to refer to a procedure attached to a class instance in an OOP context

One interesting thing to note, in a language like TypeScript, is that you can create a function without the need for the function or () => {} syntax. Consider the following code with respect to the definition of a function:

const not = {
  [true]: false,
  [false]: true
}

const catsAreDogs = false
const catsAreNotDogs = not[catsAreDogs]
// => true

How about a slightly more complex example?

const and = {
  [true]: {
    [true]: true,
    [false]: false,
  },
  [false]: {
    [true]: false,
    [false]: false,
  },
}
const fishCanSwim = true
const catsHaveFur = true
const fishCanSwimAndCatsHaveFur = and[fishCanSwim][catsHaveFur]
// => true

const dogsMeow = false
const fishCanSwimAndDogsMeow = and[fishCanSwim][dogsMeow]
// => false

Even more complex?

const plus = {
    0: {
        0: 0,
        1: 1,
        2: 2,
    },
    1: {
        0: 1,
        1: 2,
        2: 3,
    },
    2: {
        0: 2,
        1: 3,
        2: 4,
    },
}
const zero = plus[0][0]
// => 0

const one = plus[1][0]
// => 1

const two = plus[1][1]
// => 2

const alsoTwo = plus[0][one]
// => 2

const three = plus[one][alsoTwo]
// => 3

Are you feeling it now Mr. Krabs? Are you feeling it now Mr. Krabs??

How to produce useful programs without effects

While changing process state must be irrelevant to the computation of a function's output value, this does not preclude the possibility of side-effects if they remain irrelevant to the computation!

  1. The program itself is a function
  2. An irrelevant side-effect is unobservable from inside the program
  3. Unobservable side-effects do not break our definition of a function

This is how functional programming is able to produce useful programs while remaining mathematically pure!

Functional purity implies a concept known as referential transparency:

  • An expression is referentially transparent if, for all possible uses of the expression, and for every occurrence of the expression in a given program, one can substitute the result of evaluating that expression in place without affecting the meaning of the code

Functional purity and referential transparency enables the substitution model of reasoning about code, also known as equational reasoning:

  • Equational reasoning, in the context of a computer program, is the process of drawing conclusions about the program by substituting equal parts of the code
    • "equa"/"equi" is the Latin root in both "equational" and "equal", meaning "equal; the same"

For example, consider a function shout which takes a string and produces a string: const shout = (s: string): string => s.toUpperCase()

Consider an application of this function to an arbitrary string in an arbitrary program:

const shout = (s: string): string => s.toUpperCase()

const message = 'learn fp for the good of humanity'

const shoutedMessage = shout(message);


Substitute equals for equals:

const shout = (s: string): string => s.toUpperCase()

const shoutedMessage = shout('learn fp for the good of humanity');


Substitute referentially transparent expressions:

const shoutedMessage = 'learn fp for the good of humanity'.toUpperCase()


Substitute referentially transparent expressions:

const shoutedMessage = 'LEARN FP FOR THE GOOD OF HUMANITY'


Substitute equals for equals: anywhere shoutedMessage appears in the rest of the program, we can now substitute the string 'LEARN FP FOR THE GOOD OF HUMANITY' in its place.


As you can see, referential transparency and equational reasoning, which are enabled by writing pure functional programs, make the code very simple to reason about.

Now let's take a look at an example of a function that's not referentially transparent and see how it makes the program harder to reason about.

const strings = ['hello'];
// => ['hello']

strings.push('world');

const var1 = strings;
// => ['hello', 'world']

strings.push('world');

const var2 = strings;
// => ['hello', 'world', 'world']

In this example, you cannot simply substitute any occurrence of strings for its definition. You need to mentally manage the side effects of each statement and the state of the program to determine what a variable's value might be at any given time. Likewise, equals symbols don't actually mean equality! What kind of madness is that?!

We have strings = ['hello'] and var1 = strings and yet, var1 evaluates to ['hello', 'world']! To put it bluntly, the program is lying to us. Imagine someone else reading your code, jumping through tens of procedure definitions, desperately trying to understand what's happening. There is a better way.

This is one of the reasons we choose functional programming. The substitution model is much simpler to reason about because we do not need to mentally simulate sequences of state updates to understand a block of code. A given function can be evaluated easily by substituting its arguments into its body.
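For contrast, here is the same example written with a pure function. The name appendWorld is mine, introduced for illustration; because it builds a new array instead of mutating its argument, the equals signs tell the truth and strings can be substituted for its definition anywhere.

```typescript
const strings = ['hello'];

// Pure: returns a fresh array, leaving the input untouched
const appendWorld = (xs: readonly string[]): string[] => [...xs, 'world'];

const var1 = appendWorld(strings);
// => ['hello', 'world']

const var2 = appendWorld(strings);
// => ['hello', 'world']

// strings is still ['hello']
```

Every call with the same input produces the same output, so the substitution model applies again.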

The role of types in functional programming

Modularity and composition are two important concepts in functional programming. Modularity is the ability to separate and reuse a unit of code; composition is the ability to combine one unit of code with another such that the output of one is the input of the other.

With type safety we are able to treat a function as a black box, meaning we don't need to know the implementation of the function to determine whether it results in a valid program. This enables fearless composition and increases the utility of modularity.
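As a small illustration (the functions here are my own, not from the chapter), the compiler verifies that the output type of one function matches the input type of the next, so we can compose them without reading their bodies:

```typescript
const length = (s: string): number => s.length;
const isEven = (n: number): boolean => n % 2 === 0;

// Valid: length produces a number, which is exactly what isEven consumes
const hasEvenLength = (s: string): boolean => isEven(length(s));

// Invalid compositions are rejected at compile time, e.g.:
// const broken = (s: string) => length(isEven(s)); // type error
```

The types alone guarantee the composition forms a valid program, regardless of how either function is implemented.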

A type system is a sort of compile-time unit-testing framework. In a common unit test you might "arrange, act, and assert" to check that a unit of code behaves correctly for particular inputs. With a sufficiently advanced type system we can go further and establish properties that hold for all inputs. In fact, a program is a proof, and the type of the program is the formula it proves.

To put it less formally, the return type of a function is analogous to a logical theorem, subject to hypotheses corresponding to the types of the argument values passed to the function; and the program to compute that function is analogous to a proof of that theorem. This observation is known as the Curry-Howard Correspondence, and it is foundational to modern type theory.

A simple way to remember this is that whenever you see a function (remember the definition of a function), think "implies". Given a function type F<A, B> = (a: A) => B, think of the phrase "F states that A implies B".

Let's see how this works with a concrete example.

// The simplest theorem I can think of: it states that values of type T exist, meaning they can be constructed either directly via declaration or as the result of evaluating an expression

type Exists<T> = T;

// Now here is a function using this theorem, proving it correct for boolean values:

const booleansExist = (b: boolean): Exists<boolean> => b;

Given the Curry-Howard Correspondence, the fact that we are able to write a function which satisfies the return type means that we have proven the theorem it represents!

In other words, the function "booleansExist" proves the theorem that, given any boolean value (input), there exists a corresponding boolean value (output) related to it by taking the value and doing nothing to it (body).
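To go one step further (a sketch of my own, not from the text): a generic compose function is a proof of the hypothetical syllogism. If A implies B and B implies C, then A implies C, and the fact that this definition typechecks for all A, B, and C is the proof.

```typescript
// Theorem: (A => B) and (B => C) together imply (A => C)
const compose = <A, B, C>(f: (a: A) => B, g: (b: B) => C) =>
  (a: A): C => g(f(a));

// Instantiating the theorem for concrete types:
const longerThanThree = compose(
  (s: string) => s.length,
  (n: number) => n > 3
);
```

Here longerThanThree has type (a: string) => boolean, a concrete instance of the general theorem the compiler has already verified.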

While this may be a simple example, much more complex types can be created corresponding to much more complex theorems. We'll get into some truly mind-boggling examples in TypeScript later on.