Design Principles From Functional Programming

With Runar Bjarnason


Runar Bjarnason has been exploring how writing in a functional style increases the modularity and compositionality of software for many years. He is co-author of Functional Programming in Scala, a book that teaches these principles in Scala. It is a very challenging yet very rewarding book, sometimes referred to simply as "the red book".

In this interview Runar explains how writing in a functional style involves limiting side effects, avoiding exceptions and using higher order abstractions. Writing in this style places constraints on what a module in a software system may do, but by constraining modules in this way, the software modules themselves become endlessly composable.


Note: This podcast is designed to be heard. If you are able, we strongly encourage you to listen to the audio, which includes emphasis that's not on the page.

Adam: Welcome to CoRecursive, where we bring you discussions with thought leaders in the world of software development. I am Adam, your host. Runar Bjarnason has been exploring how writing in a functional style increases modularity and compositionality of software for many years. He is the co-author of Functional Programming in Scala, a book that teaches these principles using the Scala programming language. It is a very challenging, yet very rewarding book. Sometimes referred to simply as The Red Book.

In this interview, Runar explains how writing in a functional style involves limiting side effects, avoiding exceptions, and using higher order abstractions. Writing in this style places constraints on what a module in a software system can do, but by constraining modules in this way, the software modules themselves become endlessly composable. Enjoy.

Runar Bjarnason is an engineer at Takt. He is the co-author of Functional Programming in Scala. Runar, welcome to the show.

Runar: Thank you.

Adam: Your book is about purely functional programming and how it leads to more modular software. Before we get into specifics, what is purely functional programming?

Runar: Well, purely functional programming is programming with functions only. When I say functions, I mean mathematical functions. So a function just takes an input and produces an output; it doesn't do anything else. And then, purely functional programming is purely programming with such functions.

Adam: So there are no side effects in a mathematical function? A mathematical function cannot write to disk, for instance.

Runar: Right, exactly. In functional programming, instead of having a function write to disk, the function returns a little program that instructs the caller that something needs to be written to disk, and then it's up to the caller to pass that back to the [inaudible 00:02:26] to have that actually happen.

Adam: What problem does functional programming solve?

Runar: Well, functional programming solves the problem of modularity and compositionality. So the superpower of functions is that they compose. If you have a function that takes some type A and produces some type B, you can always compose that with some function that takes that type B to some other type C, and then you have a composite function from A to C. That will always work. There's no side effect; it never crashes or goes wrong.
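The composition Runar describes can be sketched in a few lines of Scala. The function names here are illustrative, not from the book:

```scala
// Two small functions whose types line up:
val length: String => Int = _.length    // A = String, B = Int
val isEven: Int => Boolean = _ % 2 == 0 // B = Int, C = Boolean

// Composing them always works: the result goes from A straight to C.
val lengthIsEven: String => Boolean = length andThen isEven

println(lengthIsEven("word")) // true: "word" has length 4, which is even
```

Because both pieces are pure, the composite is also pure, and can itself be snapped into further compositions.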

Adam: So composition is the superpower of functional programming?

Runar: Yeah.

Adam: If I write my entire program in this functional programming style, how does that affect the overall architecture of the program?

Runar: I think it affects it in a rather profound way. Your whole program will be a single expression and then, to evaluate that expression, will be to [inaudible 00:03:49] the program. So it’s a fundamentally different way of architecturing software. Architecturing, is that even a word? Constructing. It’s a fundamentally different way of constructing software.

Adam: Yeah, constructing is a good word. What led you personally to this way of structuring software?

Runar: My background is in Java programming. I did a lot of, sort of, Enterprise Java for large systems in the past. What struck me was that these systems were really difficult to manage, difficult to test, and had a lot of bugs. It was difficult to just achieve the kind of stability that I wanted. So that led me to investigate whether there was a way of having better static guarantees about software. That led me to languages like Haskell. Haskell is a purely functional programming language. From there, I started exploring functional ideas and importing those ideas into Java. I was a contributor to a library called Functional Java for a while, and then a similar one for Scala, which is called Scalaz.

Adam: Your book Functional Programming in Scala teaches functional programming using the Scala programming language. Is Scala required to do functional programming or can another language work?

Runar: Oh, yes, Scala is absolutely not required to do functional programming. I mean, the first words in the book are "this is not a book about Scala". Yeah, so like I said, we were doing functional programming back in the day in Java, before it even had closures. So all you really need is some ability to abstract over functions, to talk about functions as a first-class thing. The way that we were doing this in Functional Java was just with an anonymous object. So a function was just an anonymous object of a class that had a single method called apply, and that works perfectly well. I wouldn't say perfectly well, but it did work.

Adam: Maybe a little verbose, but it gets the concept across.

Runar: Right, and then it allows you to structure your program in this way and reap the benefits, such as they are, of purely functional programming.

Adam: So a pure function is a function in a mathematical sense and it has no side effects.

Runar: Right.

Adam: You're stating that a pure function is more modular and reusable than, say, an imperative procedure or a standard Java method. Why is that?

Runar: Well, modularity is the property that you can take your system apart into modules and then, reassemble them in ways that you didn’t necessarily anticipate when you were designing those modules. A function is sort of the ultimate module in a sense, because a function can basically always participate in a composition where its input type is provided and where its output type is expected.

There’s never going to be a case when you pull a function out of the system, it’s never going to have sort of wires hanging off of it, like pulling the carburetor out of the car or something. It’s always going to just have an input node and an output node and you’re going to be able to sort of snap that in to whatever, if that makes sense.

Adam: This is what is meant by referential transparency?

Runar: That's what is meant by modularity. Referential transparency is the property that when you evaluate a program, or when you evaluate an expression, it's not going to have side effects. That is, the result of evaluating an expression is going to have the same meaning as the expression that you started with.

All right, so for instance, five plus two is referentially transparent because it just means seven. Now if we replace that expression with seven, you’re going to have the same program that you started with. In functional programming, you have this property at every level.

Adam: This is often called the substitution model, right? So I can take the result of my five plus two, which is seven, and I can replace five plus two with it and get the same result.

Runar: Right, exactly. This is what it means to execute the program in functional programming. You just keep doing this to all of the subexpressions of the program. You keep evaluating them and substituting the evaluated results for the expressions. Once you have substituted for all the expressions, then you've evaluated the program.
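As a concrete sketch of the substitution model in Scala, each line below is one substitution step, and all three values are interchangeable anywhere they appear:

```scala
// Each step substitutes a subexpression for its value;
// the meaning of the program never changes.
val a = (5 + 2) * 2 // start here...
val b = 7 * 2       // ...substitute 7 for 5 + 2...
val c = 14          // ...substitute 14 for 7 * 2. Done.

// Referential transparency means a, b, and c all mean the same thing.
println(a == c) // true
```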

Adam: So I think that makes great sense in terms of math. It seems a little tricky when you are pulling in information from a file system. You mentioned before, the runtime system.

Runar: Yeah.

Adam: So what happens when there's I/O involved?

Runar: Well, so for instance, in Haskell, the main entry point to your program is going to be an expression which is called main. It's going to have a type which is called IO. What ends up happening is that this expression is going to evaluate to a value of this type IO. So it's going to reduce to a single such value, and that value is going to be returned to the caller, which is the runtime environment. So the Haskell runtime environment is going to receive this sort of script, that IO value, and it's going to execute it. The I/O effects, like reading from files or whatever, never happen inside of your program; they happen outside of your program, in the runtime environment. Does that make sense?
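A minimal sketch of the same idea in Scala, using a toy effect type invented for illustration (this is not Haskell's IO or the type from the book): the program only builds a description of the effect, and a separate interpreter, playing the role of the runtime environment, actually performs it.

```scala
// A description of an effect. Constructing one does nothing.
sealed trait Effect
case class PrintLine(msg: String) extends Effect

// The "runtime environment": the only place the effect actually happens.
def run(effect: Effect): Unit = effect match {
  case PrintLine(msg) => println(msg)
}

val program: Effect = PrintLine("hello") // pure: nothing printed yet
run(program)                             // the interpreter performs it
```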

Adam: Yeah, it does make sense. I think in some ways it seems a little more complex than maybe we're used to in an imperative sense, but it actually matches the way the computer works. Your program isn't actually performing the I/O, right? The disk reading is done by some sort of I/O system within the architecture of the computer.

Runar: Yeah, exactly. I mean, the way a traditional, more procedural program normally works is that the I/O system or the runtime environment expects callbacks. It expects the program to poke at it in certain ways, to call interrupts or other things like that. With functional programming, it's simply like an inversion of control. Instead of calling the runtime environment back, we simply return a value to the runtime environment and let it know what it needs to do.

Adam: How does mutation work? Or actually, why is immutability important to functional programming?

Runar: Well, immutability is important because mutation is a side effect. It's not referentially transparent. It breaks the substitution model. The value of a particular variable will change when you mutate it. It's going to be important to not mutate the memory directly, or if you do so, you have to do it in a very controlled way. There are methods for using mutable memory in a purely functional way, which we talk about in the book towards the end.

Adam: So if I think of the first program I ever wrote, it was a C program and it had a for loop: i equals i plus one.

Runar: Right.

Adam: So I equals I plus one, that’s verboten-

Runar: It's not totally forbidden. It depends on the context. It depends on whether anyone can observe that happen. So if someone has a reference to the variable i… Let's say I'm using the variable i in two places in the program. In one place, I increment i by one. Then it's going to come as a surprise to the other part of the program, or it's going to lose referential transparency. Substituting the value of i++ for the expression i++ is not going to give you the same program. So say i is four, then the value of i++ is going to be five. And then, taking i++ and substituting five is not going to have the same meaning.

Adam: Yeah, it's breaking the substitution. In that case, it's because the i is shared, because there's shared state between these two areas of the program.

Runar: Yeah, so it's usually safe to mutate local state. So if you're in a function, say, and you create this local variable i, like in the loop you said, and you proceed to mutate it, and you don't actually ever look at it except in this one place, it doesn't really matter. You can consider the for loop as just a single referentially transparent expression. If you don't look inside of it, then you can consider it to be referentially transparent, because it will always evaluate to the same result, as long as it does with the expression that's inside of the loop that [inaudible 00:16:07]

Adam: Yeah, I understand, because my for loop is inside of a function, and that function, from the outside, has no side effects. What happens inside doesn't matter so much. I'm actually mutating this variable, but nobody from the outside can see that.

Runar: Exactly.
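A small Scala sketch of this point (the function name is illustrative): the body mutates a local variable, but from the caller's point of view it is a pure function of its input.

```scala
// Pure from the outside: same list in, same sum out, every time,
// even though a local var is mutated internally.
def sum(xs: List[Int]): Int = {
  var total = 0            // local mutable state, invisible to callers
  for (x <- xs) total += x
  total
}

println(sum(List(1, 2, 3))) // 6, always
```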

Adam: Is that right?

Runar: Yeah. I think it's chapter 14 of the book where we talk about a data structure that allows you to reason about this sort of thing formally. It makes sure that the computations that mutate state do so locally. You get this sort of guarantee that if you are looking at a mutable variable, then it's always safe to mutate it, because you're the only one who can see it.

Adam: What data structure is that?

Runar: It’s called the ST monad.

Adam: So state transformer or-

Runar: Yeah, it's not totally clear what ST stands for. It's like state thread or state transformer or… It's a mutable state monad.

Adam: Okay, so back to my for loop example. Should I be writing for loops to loop over a list of whatever integers? Is that the functional programming style?

Runar: It's not. So a for loop like that, it creates an incoherency, in that the structure of the loop doesn't really reflect the structure of the list. All right, it's possible to have off-by-one errors and other things like that. So you don't really have much of a guarantee that you're going to correctly loop over the list.

So the functional way of going over a list is by using a fold. You start with the empty list and you say, what is going to be the value if this list is empty. Then you say, how am I going to evaluate one of the elements from this list and add that to my result. Then we recurse over the list with those two things. [crosstalk 00:18:45] Sorry, go ahead.

Adam: You recurse over the list, but not explicitly, is that right?

Runar: You can recurse explicitly, but the recursion is always going to have the same structure, so you can abstract it out into a function. That function is usually foldLeft or foldRight for the list.

Adam: So I could write a recursive function to operate over my list: I handle the base case, and then for the next element, the function calls itself. But what you're recommending is to use fold. Fold abstracts over this pattern, is that right?

Runar: Yeah, so if you write the explicit recursion over your list, you're always going to write that the same way. You're going to be repeating yourself a lot. You can actually just write the recursion over a list, once, as a concept, and call that fold. Then the only things you have to specify are the base case and the function with which to fold.
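In Scala, the two things Runar mentions — the base case and the combining function — are exactly the arguments foldRight takes. A small comparison, using sum as the example:

```scala
// Explicit recursion: the base case plus the combining step.
def sumRec(xs: List[Int]): Int = xs match {
  case Nil    => 0             // value for the empty list
  case h :: t => h + sumRec(t) // combine the head with the rest
}

// The same recursion, abstracted out: foldRight takes exactly
// those two things and performs the recursion for you.
val total = List(1, 2, 3).foldRight(0)((h, acc) => h + acc) // 6
```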

Adam: I think fold is a great example of kind of the power of abstraction of functional programming language. In your book, you stated a fold is a polymorphic higher order function. Could you define those terms or explain that?

Runar: Sure. So it's polymorphic, that just means it works on a list of any type. A list of integers, a list of strings, or whatever. You choose which type that is at the call site. That's what's called parametric polymorphism. A higher-order function is just a function that accepts another function as its argument. So fold will take the base case, and then it'll take a function that accepts the elements of the list.

Adam: So parametric polymorphism, is that similar to the polymorphism that I learned in my intro to object-oriented programming, like a circle is a shape?

Runar: It’s not. It’s been so long since I’ve thought about [inaudible 00:21:43] programming. I don’t really know anymore what polymorphism means.

Adam: I think parametric polymorphism is maybe more like generics, and polymorphism in the traditional OOP sense is more like inheritance. Does that make sense?

Runar: Yeah, that makes sense. Although I think the purpose of both is sort of similar. It's about allowing you to call a function with multiple different values and… I'm sorry, calling a function with different types and choosing which type that is at the call site. That's sort of the purpose. With parametric polymorphism, you can usually choose any type, not just a [inaudible 00:22:42] of another class.

Adam: So map is another higher order polymorphic function. Can you describe map?

Runar: Yeah, so map over a list will take the list and it'll take a function that accepts an element of that list and turns it into a value of a different type, and it will simply apply that function to every element of the list and return the results in a new list.
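A couple of lines of Scala show both points: map applies the function to every element, and the element type of the result can differ from the input's.

```scala
val xs = List(1, 2, 3)

// map applies the function to every element, producing a new list.
val doubled = xs.map(_ * 2)      // List(2, 4, 6)

// The element type can change: here Int becomes String.
val strings = xs.map(_.toString) // List("1", "2", "3")
```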

Adam: Exceptions in the Java sense are a violation of this encapsulation. A pure function throwing a null pointer exception seems to me to violate the substitution rule that you were discussing. So how does functional programming deal with exceptional circumstances?

Runar: Right, yeah, so exceptions absolutely do violate [inaudible 00:23:55] transparency. If you throw an exception inside of a function, it's no longer a function, because it's not returning a value. The way we deal with exceptional conditions in functional programming is to return a different kind of value. So for instance, in Scala we use Option, and in Haskell it's Maybe. You're going to have a function that takes an Int and returns a Maybe String, or the other way around, if that makes more sense: it's going to take a String and return a Maybe Int.

In the case that the String can actually be parsed to an integer, it's going to return that integer. If it doesn't parse to an integer, it's going to return a special value called Nothing or None, which is not the same thing as null, which we can talk about later. So the special case Nothing or None is going to be encoded in the type. That value is going to be a value of the return type of the function, which is going to be Option[Int] in Scala, or Maybe Int in another language.
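The String-to-Int example looks like this in Scala; toIntOption is from the standard library (Scala 2.13 and later):

```scala
// The possible absence of a result is encoded in the return type.
def parseInt(s: String): Option[Int] =
  s.toIntOption // returns None when parsing fails, never throws

println(parseInt("42"))    // Some(42)
println(parseInt("hello")) // None: not null, and not an exception
```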

Adam: So what makes it different than null?

Runar: Null is sort of a special… In Java, null is a special value at every type. It's not part of the type. Null is not an integer, right? But if you say null and you claim that it has type integer, then the compiler is just going to accept that. Then, when somebody receives that value, it's sort of like a landmine in their code. If they go and dereference it and say, "Well, okay, add four to this integer," the whole program is going to blow up. Whereas a value of a type like Maybe or Option is really like a list that can have either no elements or one element. You just have to fold that list in order to get at the element that might be inside. It's a fundamentally different way of talking about absent values.

Adam: It's much more explicit, it seems. Rather than any value possibly being null, you're specifying in the return type that it can either be an integer or it can be a None. Then the caller has to deal with that explicitly.

Runar: Then you can do other things. There's another data type called Either. Instead of just having a None or Nothing, like an empty value, you can say this function returns either an error string or an integer. All right, so you can say, parsing my string to an integer is either going to return an integer, or it's going to return a string that tells me what went wrong, or something like that.

Then you just have to deconstruct that Either value and look at whether you have an error or an actual value. It's just going to be a normal value. It's not going to be like catching an exception.
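In Scala that looks like this; the error message is made up for the example, and toIntOption assumes Scala 2.13+:

```scala
// Left carries information about what went wrong; Right is success.
def parseInt(s: String): Either[String, Int] =
  s.toIntOption.toRight(s"'$s' is not an integer")

// Deconstructing is just pattern matching on a normal value:
parseInt("oops") match {
  case Right(n)  => println(s"got $n")
  case Left(err) => println(s"error: $err") // error: 'oops' is not an integer
}
```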

Adam: An Either is valuable if you want to return more information about what went wrong?

Runar: Right, exactly. It's just a normal value, and you can make those types arbitrarily [inaudible 00:28:30]

Adam: We started off talking about exceptions, but it doesn't have to be exceptional. An Either could be used if I wanted a function that returns… It takes a number from one to ten, and if the number is below five, it returns a string. If it's above five, it returns an integer. I don't know why that would exist, but an Either could be used.

Runar: Yeah, you can totally do that.

Adam: So if we're making exceptions less exceptional by making them explicit values, does that make it more difficult for the caller of the function? They have to explicitly deal with possible failure cases, as opposed to an exception, which just travels up the call stack.

Runar: It doesn't really, because of things like map. Let's say you have an Either like this, and you don't care about the error condition, then you can map over it. You take a function that just computes with the integer that you have inside of your Either, and you map that over the Either, and that will ignore the error side. You can just keep doing this. You never have to explicitly talk about your errors unless you actually care about them.

Adam: The higher order abstractions like map and fold that we have let us kind of work [inaudible 00:30:23] with these exceptions, without a ton of [inaudible 00:30:24] I think. Is that what you're saying?

Runar: Yeah, that's what I'm saying. It's the case that if you have an Either E A, either an E or an A, and you have a function from A to B, you can always compose those two things together. The function from A to B just snaps in where the A is going to occur, and it will turn that into an Either E B, either an E or a B.
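That composition is what Either's map does in Scala (Either has been right-biased since Scala 2.12): the function applies on the success side, and an error passes through untouched.

```scala
val ok: Either[String, Int]  = Right(4)
val err: Either[String, Int] = Left("parse failure")

// map applies the function on the Right side only;
// a Left is carried onward unchanged.
println(ok.map(_ + 1))  // Right(5)
println(err.map(_ + 1)) // Left(parse failure)
```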

Adam: Mm-hmm (affirmative) So the exception can travel up the call stack just by mapping inside of this Either structure.

Runar: Yeah, so it sort of travels in a different way, but it has the same effect.

Adam: You mentioned functional data structures earlier. Now, I don't know if we'll get into that ST monad, but just the basic data structures that we use in an imperative way, like an array, seem not to be appropriate for functional programming. An array involves mutation. So how do we deal with lists in a functional manner?

Runar: Right, so the traditional way of dealing with lists in functional programming is to use a LinkedList. So then you'll have a base case, which is the empty list. Then you'll have a constructor which constructs a list out of a single element and another list. So it's going to be a recursive data structure. You're rarely going to work with arrays directly, but you might be working with abstractions that hide away the fact that there are arrays under the hood. That's a very common pattern.
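The recursive structure Runar describes is how Scala's own List is defined; a stripped-down sketch (with made-up names so it doesn't clash with the standard library):

```scala
// A linked list: either the empty list, or a constructor ("cons")
// pairing one element with the rest of the list.
sealed trait MyList[+A]
case object Empty extends MyList[Nothing]
case class Cons[+A](head: A, tail: MyList[A]) extends MyList[A]

// The list 1, 2, 3, built from the base case outward:
val xs: MyList[Int] = Cons(1, Cons(2, Cons(3, Empty)))
```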

Adam: So there’s no mutation at all, but it looks like an array…

Runar: Oh, the other way around. There might be data structures that you have that are mutable arrays under the hood. But you can't see that; you can only operate on these arrays using referentially transparent functions. You're never going to see the fact that there are arrays under there. It's abstracted away from you. It's just an [crosstalk 00:33:06] implementation detail, a performance mechanism for the implementer of the data structure.

Adam: So it looks like a LinkedList and it looks referentially transparent, but under the hood, there's actually a bunch of memory that is being mutated. So why?

Runar: Well, mainly for performance reasons. Like vector in Scala, it’s like this. It’s a purely functional data structure that you can’t mutate in place, but under the hood it’s doing all kinds of tricks using regular arrays to make it fast.

Adam: Makes sense. Let's dig in on the performance. Implied in that is that there are performance implications to this functional programming style, that this lack of mutation has a cost, I guess.

Runar: Well, in the case of the LinkedList, the performance implication is just inherent to the nature of the data structure. So if you want to ask for the last element of the LinkedList, you have to have a pointer to the last element, or you have to traverse from the head of the list all the way to the end. That will take time which is proportional to the length of the list. If you have a structure that gives you random access, like an array, that's going to be much faster.

Your question is sort of in general. The question is whether there are performance implications with functional programming in general.

Adam: Yeah.

Runar: I don't think that's inherent, but I think with a lot of languages, there are implications depending on how you do things. For instance, in Java and Scala, a function is an object on the heap. If you use a lot of functions, you're going to generate a lot of garbage, and that garbage has to be collected. Another thing is that a function call is going to occupy space on the stack. If you have a long chain of function calls, you may run out of stack. Then you get a stack overflow exception.

Adam: How does that work with recursion? If we're talking about our fold previously, and it's using some sort of recursive construct internally to map over a list, how do we deal with stack space there?

Runar: Well, what Scala does is tail call elimination. It will turn your recursion into a while loop. So the compiler will basically unroll your fold into a while loop. It can't always do this. If your function is calling something other than itself, Scala can't figure out whether that's going to be a tail call. A tail call, for those of you who don't know, is a call out of the function which is the last step of that function. There's no further work to be done after that call, so there's no need to return to that function, and no need for a call stack frame to exist. What a lot of languages do is eliminate stack frames if they're not necessary, but languages like Java don't do this. Scala does this only in very specific cases.

What you've got to do is use [inaudible 00:37:48] tricks. In Scala, you have to use a trampoline, which is a data structure that allows you to make recursive calls and then sort of bounce back on the trampoline. Basically, all of the recursion happens in a single loop that is outside of your program.
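The self-call case Scala does handle can be checked by the compiler with the @tailrec annotation; this sum compiles to a loop and won't overflow the stack even on a very long list:

```scala
import scala.annotation.tailrec

def sum(xs: List[Int]): Int = {
  // @tailrec makes the compiler verify the call is in tail position,
  // so it can be turned into a loop instead of growing the stack.
  @tailrec
  def go(rest: List[Int], acc: Int): Int = rest match {
    case Nil    => acc
    case h :: t => go(t, acc + h) // last step: nothing to do after it
  }
  go(xs, 0)
}

println(sum(List.fill(1000000)(1))) // 1000000, no StackOverflowError
```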

Adam: So functional programming, as we've discussed it here, is a constraint on how we write software. A pure function can necessarily do less than an impure function. Why would we constrain ourselves to a subset of the possible ways of writing software?

Runar: Well, I guess constraint is [inaudible 00:38:42] now.

Adam: Could you expand on that?

Runar: Yeah. The fact that they can do less means that you can reason more about what they're going to do. The smaller the set of things a function might potentially do, the more you're going to be able to reason about what it will actually do. That's tremendously important when you're building a large system. At every level, you want to be able to reason about, to sort of get your head around, and to write tests about what the system is going to be doing, and if you constrain that space, you're going to have a lot less work to do.

Adam: So it's easier to reason about a function because what I know it does is constrained? Is that the idea, or…

Runar: If you think about a procedure that could have side effects, right, versus a function. Understanding what a function does is very simple. It takes an argument and then it returns a value. It's guaranteed to always just only do this. Whereas if we have a procedure that might have a side effect, it could do literally anything. It could mutate memory somewhere. It might throw an exception in some small cases, or it could take down your whole system and erase all the hard drives and destroy a small country. You just don't know what it's going to do unless you've read the source code and fully understood it, right?

Adam: So you’re arguing that in general constraints are good. Is that right?

Runar: Well, I mean, I think it depends. We should not forget that constraints are constraining, but in general you want to impose constraints that sort of serve you, constraints that [inaudible 00:41:28] some space where it actually wants to be.

Adam: So you had mentioned in a talk, relating to constraints I think, how abstraction and precision are related. Could you expand on that?

Runar: Yeah, I say that abstraction is what makes precision possible. It's a common mistake that people make, that abstraction is about being vague, that it's about sort of omitting information, or that it's about sort of sweeping things under the rug [inaudible 00:42:19], but it's not. Abstraction is really about going to a higher semantic level where we can talk in a language where we can be absolutely precise about the things that we want to be talking about. To give you an example of where abstraction makes precision possible, consider the function that just takes its argument and returns it. This is the identity function. Think of this as a function from integers to integers. If you look at the type signature of this, it's not very constrained. It could be doing anything. It could be multiplying your integer by four. It could be adding two or whatever. It could always just return zero. All right, so you don't really know what it's going to do. It's very unconstrained.

Now if you introduce an abstraction, if you say, "Well, this is going to be polymorphic [inaudible 00:43:32] type." So the type is going to be A to A for any type A. So now I've abstracted over the type, and I say the caller is going to supply the type A. Then they're also going to supply a value of that type, and I'm going to return a value of that same type. Now there's only one implementation of this function. This function has to return exactly its argument, because it doesn't know about any other values of that type. It doesn't know what the type is going to be until it sees that value. [crosstalk 00:44:12]
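The contrast reads like this in Scala (these definitions are for illustration; Scala's Predef already provides an identity function):

```scala
// Monomorphic: the type Int => Int permits many implementations.
val f: Int => Int = x => x * 4 // could just as well be x + 2, or 0, ...

// Polymorphic: for whatever type A the caller chooses, return an A.
// Knowing nothing about A, the only value available is the argument,
// so there is exactly one implementation.
def identity[A](a: A): A = a

println(identity(3))    // 3
println(identity("hi")) // hi
```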

Adam: That makes sense.

Runar: Because abstraction creates this very precise specification of the function.

Adam: Because it's generic, because it's abstract over the type, we know precisely what the definition of it must be, because there's only one possible implementation of that function: the identity.

Runar: Right.

Adam: If that makes sense.

Runar: The fact that you're abstracting creates that precision. The fact that it could be absolutely any type means that the space of implementations shrinks down to just the one, because the implementation is now unable to look at what the value is, right?

Adam: Mm-hmm (affirmative) That's a great example. So switching gears: your book, Functional Programming in Scala. I have the book here. I've been working on it for a while. I really love the book. The thing that has me not finishing it, so far, is actually the thing I love about it, which is that it's full of exercises that really drill the concepts into you. What brought you to that idea in writing this book?

Runar: When Paul and I were training our co-workers to do functional programming, we found that there were a handful of people who were kind of doing the work on their own and just trying to figure things out, really making functional programming part of their own daily work. Then there were people who would just sort of show up and watch the lecture. They would show up every week, just like everyone else, but they were significantly less able to use functional programming in their daily work. What we concluded, and this was consistent with our prior experience, was that in order to really learn something you have to make it a part of your work, you have to play around with it, you have to experience it for yourself. That led us to the idea of having a book that you don't just read. It's a book that you do, a book that you work through to gain practice with functional programming.

Adam: I think it works very well. Like myself, I found the exercises can actually be tackled with pencil and paper, [crosstalk 00:47:31] and are really effective at drilling the concepts in. Is the pencil and paper aspect of it intentional, or is it just the nature of learning about functional programming, that it's very mechanical?

Runar: Yeah, it’s just in the nature of things. It’s not really intentional, although when I was learning Haskell, I went through a book by Paul Hudak, which was amazing and I could do most of the exercises just in a notebook with pencil and paper. I think it’s just in a nature of things that functional programming doesn’t actually require a computer because it’s pure. You can execute functions just in your head, because they’re so simple. We don’t need an I/O subsystem. We don’t need an operating system. We don’t need any of these complicated stuff in order to evaluate a function. We’re just doing mathematics essentially.

Adam: Because you can substitute. You can do the substitution model of evaluation in your head.

Runar: Yeah. You only have to hold one step in your head at a time, so you can just do the evaluation. You write down the next step on a piece of paper, just like you're writing a proof in mathematics.

Adam: If all new software were written using the principles of functional programming, if everybody bought your book and worked through it, or learned about the style in some other way, what would the software industry look like five, ten years from now?

Runar: I don't know. That's hard to say. I think we'd all be a lot better off because we would… I think we would all have an easier time understanding. We'd have libraries that would be easier to pick up and maintain. We'd probably also have better software compatibility, because we'd be able to compose software that wasn't necessarily written to be used together. Often the problem with a particular system is that it's doing some kind of side effect that isn't appropriate in a different context. But with pure functions, you can always kind of snap things together. I think we'd have a bunch of components to mix and match [inaudible 00:50:29]

Adam: Better composition for sure, because everything would be declaring its inputs and outputs.

Runar: Yeah, exactly.

Adam: I need to be conscious of your time. Thank you for coming on the show, Runar. It’s been great and I love your book.

Runar: Oh, thanks, man.

Support CoRecursive

I make CoRecursive because I love it when someone shares the details behind some project, some bug, or some incident with me.

No other podcast was telling stories quite like I wanted to hear.

Right now this is all done by just me and I love doing it, but it's also exhausting.

Recommending the show to others and contributing to the Patreon are the biggest things you can do to help out.

Whatever you can do to help, I truly appreciate it!

Thanks! Adam Gordon Bell
