Adam: Welcome to CoRecursive, where we bring you discussions with thought leaders in the world of software development. I am Adam, your host.
Richard: I love to write non-blocking code, right? Because that’s when you actually reach scalability.
Adam: Hey. When Riccardo Terrell hit the concurrency limitations in a JVM application, he thought back to the Haskell he learnt in university and decided to rewrite the whole thing. The immutability of the Haskell solution made the concurrency bottleneck nonexistent. So it’s no surprise that years later, his book on concurrency in .NET leans heavily on functional programming constructs and the functional features of F# and C#.
Today, I talk to him about concurrency, and functional programming, and about F# and how it compares to Haskell and Scala. We also chat about CPU architectures and best practices for writing distributed systems. A little bit of FRP for fun.
Thanks to Manning, we have some free copies of the book to give away. Listen to the end of the podcast to hear about that. Riccardo Terrell is the author of Concurrency in .NET, and an F# and functional programming enthusiast, I believe. Welcome to the podcast.
Richard: Hello. Hi. Thank you for having me.
Adam: I have your book here. Actually, I have the E version of it, so I have a Kindle in front of me, and I have the book on it. Let’s start with an easy question. Why do we need concurrency? Can we just depend on computers getting faster?
Richard: Well, that's not an option anymore. We've almost reached the speed of light in the computer. The CPU, I mean. We had computers [inaudible 00:02:14] that used to get faster and faster, and when you wrote a program, to make it process the computation faster, you would just wait probably a year to get a faster computer. However, this trend is over. Moore's Law said that computers get faster every 18 months, more or less, but this is not possible anymore.
We've actually reached the physical limit, because [inaudible 00:02:44] try to make transistors smaller and smaller. You cram more transistors inside the chip so the computer can go faster, but these transistors got so small. I think Intel is shortly going to announce a 14-nanometer process, which is like, I think, 500 times smaller than a blood cell.
And if you think about it, when you're that small, the transistors that the electrons pass through become so, so tiny that you get an effect called tunneling. [inaudible 00:03:14] go through the transistor, so there is no classic switch on, switch off. The binary operation. You cannot apply the operation anymore.
So [inaudible 00:03:27] limitations, so vendors decided that instead of making transistors smaller and smaller, they'd provide computers with multiple CPUs. So now, instead of the computer running faster, we can achieve true parallelism, meaning the computer can truly do multiple things at the same time. Right?
What Makes Concurrency Hard?
Adam: What makes concurrency hard?
Richard: Mainstream languages were designed without concurrency in mind, right? They're procedural or sequential, because literally they just... handed the payload of the work to the computer to process faster. And unfortunately, now we're facing a different reality, and these programming languages are still utilized out there. They are kind of [inaudible 00:04:19]. Trying to adjust the direction.
So they provide some sort of API or primitives to be able to run multiple things in parallel, but these primitives aim to synchronize the computation, because the main problem is that these programming languages were designed with mutability in mind, and mutability is the problem for writing correct, deterministic parallel programs. Right?
Adam: And so because programming languages, especially imperative programming languages, go line-by-line, and this is happening sequentially, and then they're also changing state, that makes concurrency a challenge to introduce.
Richard: Yeah. That's correct. [inaudible 00:05:13] languages were designed with mutation in mind, so whenever… With state, right? With the change of state. So imperative languages are able to process some computation, and as you say, they sequentially and procedurally change the state. That's actually how imperative languages were designed. And now you are facing this reality of [inaudible 00:05:37] multiple things [inaudible 00:05:39] multiple things in parallel.
Well, it can happen that multiple threads access the same memory space at the same time, and there are several issues with that. One of the most [inaudible 00:05:55] could be a race condition: one thread changes the state of a memory address, and a second thread accesses that memory, but the state has changed to something it was not expecting, so the program [inaudible 00:06:13] unwanted behavior. Hopefully it's not going to crash, but it's also a stability issue, right?
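The read-modify-write gap Riccardo describes can be sketched in a few lines. This is a hypothetical Python illustration, not code from the book (the book's examples are in C# and F#; the idea is language-agnostic):

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # switch threads aggressively to surface the race

def run(increment, workers=4, per_worker=10_000):
    """Run `workers` threads, each calling `increment` per_worker times."""
    threads = [threading.Thread(target=lambda: [increment() for _ in range(per_worker)])
               for _ in range(workers)]
    for t in threads: t.start()
    for t in threads: t.join()

# Unsafe: read-modify-write on shared state with no synchronization.
counter = 0
def unsafe_increment():
    global counter
    current = counter       # another thread can interleave here...
    counter = current + 1   # ...so its update is silently overwritten

run(unsafe_increment)
unsafe_total = counter      # often less than 40000: lost updates

# Conventional fix: serialize access with a lock (correct, but a contention point).
counter = 0
lock = threading.Lock()
def safe_increment():
    global counter
    with lock:
        counter += 1

run(safe_increment)
safe_total = counter        # always 40000
print(unsafe_total, safe_total)
```

Whether the unsafe run actually loses updates depends on scheduling, which is exactly the non-determinism that makes these bugs so hard to reproduce.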
Adam: Mm-hmm (affirmative). I had two interviews with Jim Blandy who wrote a book about Rust. And so Rust, in many ways, it’s a tool designed to deal with concurrency. It’s taking the approach that you find kind of in C or C++ with locks, and extending the compiler to have the compiler kind of warn and prevent, I guess prevent outright these sort of race conditions and deadlocks.
Adam: But I think your book is interesting, because it’s about a totally different approach to concurrency. What’s the approach?
Richard: Well, the book emphasizes the functional paradigm. It turns out that the functional paradigm is an old idea that was kind of [inaudible 00:07:10] set aside when [inaudible 00:07:15] programming languages, Java, C++ came along that were designed with the imperative paradigm in mind. [inaudible 00:07:24] and everybody thinks that functional programming is more for academia, that kind of field, but it turns out that functional programming actually fits quite well for writing concurrent programs, because the functional tenets such as immutability, referential transparency and so forth really are great tools for writing correct, deterministic concurrent programs.
An Example Problem
Adam: Yeah. It’s a great fit. So you had an example in the book where you had to work on a system to scan medical images. I was wondering if you could share that.
Richard: Sure. That was back in Italy, where I used to run a small consulting company with a lot of programs for [inaudible 00:08:10]. And one of the systems we implemented was for radiology, where we did… analyzed and personalized images, but before even the first analysis, these images had to be processed. It was like a pilot application at the beginning, but it was quite a good success, because at the time there really wasn't much software that was able to do what we were doing.
And so we started with the radiology, and the payload, the work, was decent, but it wasn't really overwhelming the overall application. However, more and more images were sent to be processed over time. The radiology department got extended, got more machines, so instead of maybe 10 images to process in an hour, it was probably like 100 per minute. Right?
Adam: Mm-hmm (affirmative).
Richard: Now, think about back in the day. There were no [inaudible 00:09:14] computers running super fast, so processing an image took a few minutes, and we had a hundred images to process in a few minutes, really. That was a problem. Too long. So the solution we had was, okay, let's parallelize the work. Distribute the work [inaudible 00:09:34] and that's how we started to face the problems [inaudible 00:09:37] mentioned earlier. Using technology that was not designed for writing parallel programs, we faced issues such as race conditions [inaudible 00:09:47].
And when you try to fix a race condition on one side, you get other problems such as deadlock, when the application stops working because threads are waiting on each other to release the lock, the primitive. So it was pretty messy.
And [inaudible 00:10:03] spent a lot of time figuring out solutions, but it was like, you fix one thing, something else breaks. Right? So at that point I realized that this was wasting time. Sit back: if you just keep making the same mistake over and over, probably the approach you're trying is not the correct one, or there's probably a better solution.
And that's actually when everything started, because I remembered back in college I did a semester using a functional language, Haskell, and I was very fascinated by the language and the paradigm. Even though it was very hard for me to really master the language, I knew it was definitely a better solution, a good solution, because as I mentioned, a functional language like Haskell has immutability built in by default, and this… It doesn't make your program run faster, but it prevents you from making mistakes. Right?
Adam: Mm-hmm (affirmative).
Richard: It's easier to write code that can be easily parallelized. It was faster to build a new part of the program leveraging functional programming than to try to correct, or rearrange, or fix bugs in the previous implementation.
Adam: And so, did you rewrite it in the same language?
Richard: No, no. That portion of the code was rewritten using Haskell.
Adam: Oh, nice.
Richard: Yeah. And yeah, actually, the original implementation was all in Java, so I think if I went back I'd [inaudible 00:11:39] differently, but we tried to implement a sort of C++ layer to interoperate between Haskell and Java. And again, that was many years ago, so… But it worked. And actually, we definitely achieved the goal, being able to process many images in parallel, and we were back in business.
Adam: So now it’s years later. You’re writing a book about concurrency in .NET, and the thing that I notice about the book is it seems to be secretly a book about functional programming.
Adam: So the book spends a lot of time with F#. Why is F# a good choice for concurrent code?
Richard: F# is one of the newer functional programming languages; [inaudible 00:12:35] Scala for example is another. What I think really makes them successful is that they bridge the gap between pure functional languages [inaudible 00:12:48] Haskell, and imperative languages [inaudible 00:12:51] Java, C++ and C#, because they're [inaudible 00:12:54] functional by default. However, they allow you to cheat and write in an imperative way. So it makes it easier for any engineer coming from an imperative background to embrace the language, and then more slowly, also embrace the paradigm.
And yeah, I did a lot of work in the past in C#, and my bread and butter for many years has been distributed systems and parallel processing, and of course I decided to use, the majority of the time, imperative languages like Java, C++ and C#. However [inaudible 00:13:35] the idea that functional was the right solution, and when Microsoft announced F# it was a really refreshing feeling to me [inaudible 00:13:44] between the languages. And in the book I try to convey the message that you can do pretty much whatever you want in both C# and F#, and they interoperate quite well.
So, in some places it's better to use one language [inaudible 00:14:05]. But [inaudible 00:14:07] F#, the fact that it's functional-first… It prevents you from writing incorrect code. Right?
Adam: Mm-hmm (affirmative). I mean, I think there are probably a lot more C# programmers out there than F# programmers, so I can understand the coverage, but do you have a personal preference? What do you pick up when you need to solve these problems?
Richard: Well [inaudible 00:14:31] and write a parallel application, I'd rather go to F#. And it's all because I'm lazy, and I know that writing code that runs in parallel using a functional language lets me write code that is less buggy. So I'm faster, more productive, and I write fewer unit tests. It just fits the parallel domain better. However, as I said…
F# is immutable by default. You can cheat, as I said. [inaudible 00:15:01] tell the compiler what you want to do. That you want to use something mutable, but you have to be explicit. On the other side, C# is mutable by default. Right? So it's a different mindset. It's a paradigm shift, and it definitely requires a bit of a learning curve, but it definitely pays off later on.
Adam: Yeah. And certainly, with parallel execution, immutability is something handy as a default.
Richard: Yes. Think about it: when something cannot change, well, it's safe to share between threads, because now a thread cannot mutate it. Or there is some sort of… something called copy-by-value, or defensive copy, or… You mentioned Rust earlier, for instance. Rust has also been designed with a hybrid approach that allows you to use something mutable, but the compiler warns you. [inaudible 00:16:03] wants to be sure that what you're doing is what you mean to do. Like, these are [inaudible 00:16:07] right? You want to be careful. And it's the same thing here, right? Immutability allows you to create objects that are thread-safe because they cannot mutate.
Adam: However, in the real world, we do have to mutate things. How do we overcome the fact that values need to change? We need to get work done.
Richard: Well, that's a good question. So there are different kinds of approaches. For instance, there are parallel patterns you can use, for example the classic fork-join or MapReduce, whenever you have some sort of computation that can be split among different [inaudible 00:16:49]. In that case, you can partition the work, but also, each thread can create a copy that can only be accessed by that thread specifically, and when the computation is completed, you can then merge the results together into your solution. Right?
And that's actually very close to the functional programming style. When you have a big problem to solve, you try to break it apart into smaller and smaller portions until each is easy to tackle directly, and then you recompose them. You glue them together into the final solution. Right?
So it's a similar approach here with the parallel code. You [inaudible 00:17:32] computations that are independent. You can mutate the objects, and it's okay, because [inaudible 00:17:37] only one thread can access each object, and then you just re-merge in the join part. The join part of a fork-join pattern, for example.
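That fork-join shape, with each worker folding into its own private accumulator, can be sketched minimally. A hypothetical Python illustration (the function names are my own; the book develops this pattern in F#/C#):

```python
from concurrent.futures import ThreadPoolExecutor

def fork_join_sum(data, workers=4):
    """Fork: split `data` into one chunk per worker. Each worker folds its own
    chunk into a private result, so no shared mutable state and no locks.
    Join: merge the per-chunk partial results into the final answer."""
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, parts))   # each sum sees only its own chunk
    return sum(partials)                        # the join step

print(fork_join_sum(list(range(1, 101))))  # 5050
```

The same split/fold/merge structure is what MapReduce generalizes: map over partitions, then reduce the partial results.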
Global state is more tricky. There are other [inaudible 00:17:51] that you can apply, such as [inaudible 00:17:56], which is also something I introduce in the book. It's a good approach where you can mutate or update an object, but at a low level. So, each thread that accesses this specific memory or this specific object is guaranteed to access the latest one. The latest update, right? Some sort of barrier to be sure that no thread is going to get an object in an unwanted state.
Hunt the Thread-Unsafe Code
Adam: You had an example in the book. I think it's an interesting concept, and I don't want to gloss over it. I think you called it "hunt the thread-unsafe code". It's an example about a chat application of some sort.
Richard: Yeah, sure. So in the book I sort of start off with the story. Right? But really what it [inaudible 00:18:49] boils down to is that there was this application sharing state globally, and that was the chat. You have multiple users that can join the chat or leave the chat, and there are rooms, there are groups and so forth, but [inaudible 00:19:07] sort of a [inaudible 00:19:10] location that maintains the state of the currently active users. Right?
And the problem is that the API or the [inaudible 00:19:20] was running into high contention, which means there were many threads accessing the same state, and there are two caveats here. One, with a lot of contention, the application also [inaudible 00:19:38] slower. Right?
[inaudible 00:19:39] like [inaudible 00:19:40] lock, you can run into other cases, such as [inaudible 00:19:45] lock and so forth, when for instance the lock runs slower than the threads trying to access the specific state. And what does that mean? It creates this [inaudible 00:19:59] many threads that are waiting, that queue up. Right? Waiting to [inaudible 00:20:04]. And the bigger the [inaudible 00:20:07], you can imagine, the slower the application runs, and that's [inaudible 00:20:13] contention kind of rises as a big problem.
Adam: Let me make sure I follow. So, you have a chat application, and when people join, there’s some sort of shared… like a list or something of the users who are active, and the problem is, as things scale… Okay. So you need to deal with concurrency, so you put locks around accessing this, and then it becomes a bottleneck. Is that right?
Richard: That's correct. Yes. That's correct. But the thing I mentioned is that if you're not careful, it's very hard to understand [inaudible 00:20:54] works correctly. Because maybe sometimes a user leaves the chat, but the collection is not up-to-date because you didn't use the lock correctly, and it's very hard to detect that kind of bug, right? Because maybe it works on your machine, [inaudible 00:21:11] production, where maybe the server is under a different kind of payload with many, many more requests. That's when the bug can arise, and you're not able to reproduce it. Right?
Adam: It’s non-deterministic. Concurrency bugs are often… Who knows when they’re going to happen?
Richard: Oh yeah. That's correct. That's correct. In fact, if I had a superpower, I'd like to be able to produce this kind of deadlock detector. Right? So you run the code [inaudible 00:21:43] and it says, "You wrote the code here, but actually where you put the lock is not right." Oh, that would be great, but unfortunately there is no compiler or programming language that can tell you if you used this kind of primitive correctly.
Adam: No, but I guess like… So yeah, there are potentially two solutions, right? One is your solution of immutability, and the other solution is Rust, where they say, "Listen. You can only have one writer at a time, or you can have multiple readers."
Richard: And you can do the same thing as Rust at a high level. Meaning, Rust provides this kind of primitive. However, you can build this kind of tool in functional programming languages such as Haskell and F# and so forth.
Adam: Makes sense. Back to the story, then. What's the solution? You have this dictionary or something, and you need to lock around it.
Richard: Right. So it was interesting, actually, the investigation, using some [inaudible 00:22:55] to figure out where this contention arose, and the problem actually is that whoever implemented that code did not read the documentation correctly, because the collection we were using said it was thread-safe, but pretty much only for the [inaudible 00:23:13] part, which all collections are; they're pretty much thread-safe for the [inaudible 00:23:17].
So the problem was that when there were multiple writes, there was contention. And I think overall, the collection was behaving correctly. It was not creating some sort of unwanted state, but definitely the time it took to update and release was slower than the rate at which threads were trying to access that collection under very high-volume requests.
So ultimately what we did was implement a sort of asynchronous approach, and the idea is that… What is great about asynchronous programming, even for a collection at that level with a lot of high-rate operations, is that at that point, a thread is not waiting for the collection to be updated or released. The thread is released from the work and can do something else.
So overall, maybe… [inaudible 00:24:17] maybe. For sure, the raw performance did not improve, but the contention was mostly gone, which made the server lighter and able to do more computation at the same time. Right? Because now it didn't have such heavy overhead, with so many threads busy waiting for the thread ahead of them in the queue to complete its work, if that makes sense. Right?
Adam: Yeah. So what you did is, instead of updating and having to take a lock, you're just kind of emitting some sort of message. Like this user-
Richard: That’s correct.
Adam: … joined, and then this user left, and then something else owns this shared state and updates it.
Richard: That's correct. Yes. And it's not blocking, as I say, because it's asynchronous. So all the threads are happy to just send a message, and they go back to doing something else. Right?
Adam: So how come you didn’t try immutability in this case?
Richard: Well, because that's one of the cases where redesigning the application would have taken extra effort, even though that would definitely have been a good approach anyway. There's no way around global state, unfortunately. Like [inaudible 00:25:31] a database is global state, right? Because everybody can access it, [inaudible 00:25:36] write [inaudible 00:25:37] update and so forth. However, in this kind of situation, you can use patterns such as… Message-passing is a good pattern where you can create these [inaudible 00:25:50]. They maintain state internally, which is very quick for answering requests, and then, more slowly and asynchronously, they also update the server. Right?
Adam: Mm-hmm (affirmative).
Richard: And that was pretty much the approach we took in this kind of scenario.
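The message-passing fix can be sketched as a tiny actor: one thread privately owns the mutable roster, and the only way to touch it is to send a message, so callers never block on a lock. This is a hypothetical Python sketch of the chat-roster idea, not the book's code (in F# this role is typically played by MailboxProcessor):

```python
import queue
import threading

class RosterActor:
    """A minimal actor: one thread owns the mutable user set; everyone else
    communicates by enqueueing messages into its inbox."""
    def __init__(self):
        self.inbox = queue.Queue()
        self._users = set()                       # touched only by the actor thread
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            msg, payload = self.inbox.get()       # process messages one at a time
            if msg == "join":
                self._users.add(payload)
            elif msg == "leave":
                self._users.discard(payload)
            elif msg == "snapshot":               # payload is a reply queue
                payload.put(frozenset(self._users))

    # Fire-and-forget: senders return immediately instead of waiting on a lock.
    def join(self, user):  self.inbox.put(("join", user))
    def leave(self, user): self.inbox.put(("leave", user))

    def snapshot(self):
        reply = queue.Queue()
        self.inbox.put(("snapshot", reply))
        return reply.get()                        # waits only for this one reply

roster = RosterActor()
roster.join("ada"); roster.join("grace"); roster.leave("ada")
print(roster.snapshot())  # frozenset({'grace'})
```

Because messages are processed in order by a single thread, there is no lock to contend on: joining threads drop a message in the mailbox and move on.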
Adam: So in this scenario, the problem to overcome is shared state, and an actor message-based approach seems to be a really good fit. What about cases where you have a lot of data, and you need to perform similar actions on it? Like the Mandelbrot set.
Richard: There are two different kinds of data operations: single instruction, multiple data, and multiple instruction, multiple data. The difference between the two is that, as the names imply, you either apply one operation to all the different data, or you apply multiple operations to the data. Right?
In the case of the Mandelbrot set that you mentioned, you pretty much apply a single instruction to multiple data. Each point is probably a sort of complex number or so forth, and you try to generate the Mandelbrot set and render the image, and you can parallelize this kind of application, because you can compute portions of the data in parallel. Each thread owns a portion of the data.
Then, as I mentioned earlier, it's some sort of fork-join pattern. You partition the data, and when the computations are done independently, you rejoin the computed results together. Right? So we have two steps to the process: the fork, which is the partitioning and the work, and the join, when everything is completed and you merge the results together to obtain the final answer. The partitioning is the tricky part, but there are different kinds of approaches you can take in development as well.
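For the Mandelbrot case specifically, the same instruction (the escape-time loop) is applied to every pixel, and rows are independent, which makes the partitioning easy: one row per task. A hypothetical Python sketch (threads are used for brevity; for real CPU-bound speedup in CPython you would use processes because of the GIL, and the book does this in F#/C#):

```python
from concurrent.futures import ThreadPoolExecutor

def mandelbrot_row(y, width=40, height=20, max_iter=50):
    """Escape-time count for one row: the same instruction, applied per pixel."""
    row = []
    for x in range(width):
        c = complex(-2.0 + 3.0 * x / width, -1.2 + 2.4 * y / height)
        z, n = 0j, 0
        while abs(z) <= 2 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

# Fork: each row is independent, so rows are computed by separate workers.
# Join: pool.map reassembles the rows in order into the final image.
with ThreadPoolExecutor() as pool:
    image = list(pool.map(mandelbrot_row, range(20)))
```

Since no row touches another row's data, there is nothing to lock, and the result is deterministic regardless of which worker computed which row.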
Adam: Yeah, because you need to determine how parallel to go. So how do you do that?
Richard: The main issue is that you don't want to overwhelm the scheduler, the task scheduler. So usually it's recommended to have a ratio of one thread per CPU, per logical CPU of the computer. However, there are some cases where you can have more threads than that. One example that comes to mind actually is [inaudible 00:28:38] tree. Okay?
When you have a big tree structure and you try to walk the tree in parallel, well, you want to be careful not to spawn a thread for each sub-tree of the tree, or you'll most likely end up spawning a thread for each node. Right? Because then you get, like, 1,000 nodes, 1,000 threads. Yeah, sometimes the vendor tells you, "Don't worry. The scheduler is going to take care of it." That's not completely true, because then you introduce something called context switching.
And what happens is that you have so many threads running in parallel. And yes, most likely the scheduler tries to partition the work among these threads, but because you've now overwhelmed the scheduler, it has to context switch between threads, and that introduces extra overhead. Right?
[inaudible 00:29:30] operating system, but on a Windows operating system I think it's about 15 milliseconds. And if you think about it, that's an eternity when you have many, many threads running in parallel. Right? A solution is that [inaudible 00:29:46] you specifically set a depth, which caps the number of threads that can run in parallel for that specific tree. Right? And even on, let's say, a quad-core computer, maybe you can run up to eight threads. That's okay, maybe. But you don't want to go to, like, 1,000 threads, right?
Adam: And it's still more. So I'm trying to imagine this in my mind. So I write my little functional definition that recursively goes down a tree, and… Say it's a binary tree, so it goes to each leaf, and it's trying to find the minimum element. Right?
Adam: So you're saying, "Okay. We can make this parallel." So instead of recursively checking each leaf, we spin up a new task to get to each leaf, and then further to that, you're saying that if we have four cores, then we only go four levels deep in the recursion before we stop going parallel. Right?
Richard: That’s correct. Yes.
Adam: But that’s more than four cores, right? Because that’s like two to the four, if I have my math right.
Richard: Yes. That's correct, which is a pretty high number, and it really overwhelms the scheduler. So in the end, you run into this case of, "I parallelized my code; it runs four times slower." Well, that's because you overwhelmed the scheduler too much. I have an example, actually, in the book about a similar approach. A similar problem. And I compare the solutions with and without this extra check when you try to [inaudible 00:31:33] parallel [inaudible 00:31:33]. And really, on a quad-core machine, it was about twice as fast even with fewer threads, for the same reason: we removed such a large payload and stress from the scheduler.
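The depth cap can be sketched in a few lines: fork a task for one subtree only while we are above `max_depth`, and fall back to plain recursion below it. A hypothetical Python illustration (the tree encoding and names are my own; the book's version is in F#/C#):

```python
from concurrent.futures import ThreadPoolExecutor

# A binary tree as nested tuples: (value, left, right), None for a missing child.
def tree_min(node, depth=0, max_depth=2, pool=None):
    """Find the minimum value. Only the top `max_depth` levels fork a task;
    below the cap we recurse sequentially, so the number of forked tasks is
    bounded (2**max_depth - 1) instead of one per node."""
    if node is None:
        return float("inf")
    value, left, right = node
    if depth < max_depth and pool is not None:
        right_future = pool.submit(tree_min, right, depth + 1, max_depth, pool)
        left_min = tree_min(left, depth + 1, max_depth, pool)  # this thread takes one side
        return min(value, left_min, right_future.result())
    return min(value, tree_min(left), tree_min(right))         # sequential below the cap

def build(lo, hi):
    """A balanced tree holding the values lo..hi."""
    if lo > hi:
        return None
    mid = (lo + hi) // 2
    return (mid, build(lo, mid - 1), build(mid + 1, hi))

tree = build(0, 1022)   # a 10-level tree with 1023 nodes
with ThreadPoolExecutor(max_workers=4) as pool:
    minimum = tree_min(tree, pool=pool)
print(minimum)  # 0
```

With `max_depth=2`, at most three tasks are ever forked, which also keeps workers that block on a child future from exhausting the pool.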
Adam: So is there a magic number, or do we just know it's somewhere between the number of cores and two to the number of cores?
Richard: Yeah, so again, [inaudible 00:32:01]. It's all about measure and measure and measure, and figuring out the best number. However, in this case of walking the tree in parallel, a good number would probably be, I want to say, the number of cores times two, I would say.
Adam: Oh, okay.
Richard: I mean, extra overhead is fine, right? But not too much.
Adam: Yeah. It will still be faster than the sequential approach, unless you’re way out there, but…
Richard: Yeah, but as I said, there are cases where, if you're not careful, the sequential code actually runs faster than the parallel one, and-
Adam: Oh yeah.
Richard: Yeah. So [inaudible 00:32:57] to measure and figure out whether you actually gain speed and performance in the end. Right? Because, "Oh, it's parallel. It's going to be faster. Move on." Well, now you have a problem. For example, we talked about contention in the chat example. Right?
Adam: Yeah. So the secret of concurrency, besides functional programming, is to know what your expected workloads are like, and then just test it.
Richard: That’s correct. Yes.
Async IO Parallelism
Adam: Yeah. So that's like data parallelism. You have lots of data, and you perform the same thing on it. What about cases where, let's say, I have to call two web services, get two pieces of information back, and then call a third service with the result?
Richard: When you call a web server or any sort of service, we introduce a different kind of parallelism. In this case it's asynchronous parallelism, when you have IO operations that are not CPU-bound. In this case, an asynchronous operation is something that completes later in the future. But the key thing about asynchronous programming [inaudible 00:34:08] is the idea that you push the computation somewhere else. Right?
So in the case of the web server you mentioned, you make a request to the server, but because it's asynchronous, your computer doesn't have to lock a resource [inaudible 00:34:26] the thread waiting for the response to come back. Instead, the thread can go back to the scheduler and be rescheduled to do some other work. And when the response comes back from the server, the scheduler can send a thread, which is not necessarily the thread that initiated the work. It can be another thread. And then the computation continues, right?
And the great thing here is that when you push the work to be performed somewhere else, in the case of asynchronous programming, because it's not CPU-bound, you don't necessarily have to be constrained to having the number of threads match the number of cores to reduce contention and keep the computation performant. Really, it's [inaudible 00:35:11] something that I call unbounded computation, because even on a single-core machine, you can reach true parallelism with asynchronous programming, because again, it's not CPU-bound. Somewhere else does the work, right?
Adam: So there’s no limit. If it’s IO, there’s no reason to limit yourself in terms of the asynchronous operations you’re waiting on.
Richard: Yeah. That's why I really love asynchronous programming. And in the case you mentioned, even here, the idea of two parallel requests going out to independent web servers is [inaudible 00:35:50] a small fork-join pattern as well, right? Because now we have two computations running in parallel, and then we merge the results the servers sent back and call the third web service. But now, even on a one-core machine, these two requests can truly run in parallel regardless.
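That two-then-one dependency shape can be sketched with async IO on a single thread. A hypothetical Python illustration (the service names and delays are invented stand-ins for real network calls; in F# this is what async workflows express):

```python
import asyncio

async def fetch(name, delay, payload):
    """Stand-in for a web request: IO-bound, so while we await,
    the single event-loop thread is free to run other work."""
    await asyncio.sleep(delay)   # simulates network latency
    return payload

async def main():
    # The two independent calls start together: a small fork-join over IO.
    a, b = await asyncio.gather(
        fetch("service-a", 0.05, 2),
        fetch("service-b", 0.05, 3),
    )
    # The third call genuinely depends on both results, so it runs after.
    return await fetch("service-c", 0.05, a + b)

result = asyncio.run(main())
print(result)  # 5
```

Both `fetch` calls overlap in time even though everything runs on one thread, which is the "unbounded computation" point: the limit is the IO, not the core count.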
Adam: Yeah. Have you ever seen this Haxl project? There’s a .NET version of it. It came from Facebook originally.
Richard: No, I’m not familiar.
Adam: Yeah. It finds… In this example that I had, where you have two things that need to run asynchronously, but then a third that depends on the two of them: if you use, like in Scala or Haskell, do notation with a monad, or if you were in C# with your LINQ or whatever, the two first requests will end up… One will end up depending on the other, when in fact there's no dependency, right?
Adam: So this Haxl actually kind of… I don't know how it works, but it kind of traverses the dependencies to say, "These two actually don't need to be sequential. They can be run together." I forget what they call it. Implied concurrency or something. But you don't have to tell it that these are separate. It walks the traversal.
Richard: Oh, very cool. It sounds very similar, as you said, to the do notation in Haskell. Like the… something called continuation-passing.
Richard: And I think it's very similar to what you're describing, but I'm looking forward to reading more about it. The idea is that you compute something, and instead of blocking, the computation maybe looks sequential, and you block the flow, but you don't block the [inaudible 00:37:41] execution.
For instance, [inaudible 00:37:45] a desktop application, the UI won't freeze, because even though the code is kind of waiting for something to complete, it's not really stopping the main thread's execution. When the original computation completes, it continues. It passes the result to the next step, all without locking the main thread. Right?
Continuation Passing Style and Monads
Adam: Yeah. Your book makes a lot of use of continuation-passing style in F# for this pattern, right?
Adam: To link operations.
Richard: Yeah. Definitely. And the reason is that it's non-blocking. I love to write non-blocking code, right? Because that's when you actually reach scalability. And even in C# or other programming languages, you can write code in the same style. Right? [inaudible 00:38:46] that you start a computation and then you continue the work in the next step.
Now, in the book I actually explain how this can be implemented in C#, even though it was not designed with that in mind, and apply almost Haskell's do notation in C#.
In F#, there is actually something similar to the do notation in Haskell, which is specifically for the asynchronous programming model, and it's called asynchronous workflows. What it does is… It's syntactic sugar, [inaudible 00:39:21] abstraction [inaudible 00:39:23] passing style, because it allows you to write code that looks sequential, while in reality it's non-blocking [inaudible 00:39:31], and the result is passed as an argument to the next function without blocking the main thread.
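The two styles being contrasted, raw continuation-passing versus the sequential-looking sugar built on top of it, can be put side by side. A hypothetical Python sketch (Python's async/await stands in here for F# async workflows; the step functions are invented):

```python
import asyncio

# Explicit continuation-passing style: each step hands its result to the next
# function instead of returning it, so nothing ever blocks waiting for a value.
def step_one(x, cont):
    cont(x + 1)

def step_two(x, cont):
    cont(x * 2)

def run_cps(x, done):
    # The pipeline is the nesting of callbacks: step_one, then step_two, then done.
    step_one(x, lambda a: step_two(a, done))

results = []
run_cps(20, results.append)
print(results[0])  # 42

# The same pipeline in sugared form: it reads top-to-bottom like sequential
# code, but each await is a suspension point, not a blocked thread. This is
# the shape F# async workflows (and Haskell do notation) compile down to.
async def pipeline(x):
    a = await asyncio.sleep(0, result=x + 1)
    b = await asyncio.sleep(0, result=a * 2)
    return b

print(asyncio.run(pipeline(20)))  # 42
```

Both versions compute the same thing; the sugar just hides the callback nesting that the compiler generates for you.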
Adam: How do these workflows relate to do notation or for comprehensions?
Richard: They’re all kinds of monads. [inaudible 00:39:48]. The do notation and the monads in Haskell are more generic than in F#, for example, because in F# they are more specialized. For instance, there are built-in continuations like the asynchronous workflow, or sequences, or [inaudible 00:40:11] and so forth.
What I think is somehow more powerful, but you need to be careful with, is that you can actually implement your own. Right? [inaudible 00:40:21] computation expression. But specifically for concurrent programming, even Haskell uses something called the continuation monad, and the continuation is pretty much continuation passing. [inaudible 00:40:35] continuation-passing semantics, the same as in other programming languages such as F#, Clojure and so [inaudible 00:40:45].
Adam: Yeah. It ends up pretty nice. I was reading through the examples in the book. I think, yeah, the one difference from something like Haskell or Scala is that F# doesn’t have higher-kinded types.
Richard: That’s correct. Yes.
Adam: So you can’t write something that is generic over all types of workflows, I guess.
Richard: That’s correct. There are some workarounds. However, it truly has not been designed, like Haskell or Scala, [inaudible 00:41:19] this kind of type, and to… generate the code for you. Which is quite nice. I’m not going to lie.
Adam: It ends up getting close, like I thought. And even the LINQ stuff that you have in C# reads a lot similarly, right? You’re like, “From this, from that.” It looks the same. Under the covers I think it’s possibly a lot uglier.
Richard: Well, as I said, C#, for example, is definitely not a functional programming language as… Well, they’ve introduced more and more functional features into the language, but in the case of LINQ, which is pretty much a monad, it’s just called by a different name. But the great thing about it is that it’s pattern-based, so the compiler is able to detect the pattern and apply some sort of [inaudible 00:42:10]. Right? So you can write code under a different name.
So in the book I actually explain how you can write your own LINQ-style approach for tasks, so you can run threads using a sort of do notation. And writing the code in that way, the compiler is actually able to understand [inaudible 00:42:29] trying to do, and you can write code using a LINQ expression, which is very close to the do notation.
Adam: So how does it do that? Does it look for certain methods?
Richard: Yes. That’s correct. In C# it’s called SelectMany, which in other programming languages is usually called bind, or the do notation, and so forth. Right? But yeah, you pretty much [inaudible 00:42:51] SelectMany for the type that you want to apply this programming style to. That’s it. Magically, the compiler understands the pattern and is able to implement it. [inaudible 00:43:04] built-in for sequences, for enumerables, but you can create a monad-like kind of approach for tasks just by writing SelectMany for the task type.
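As a hedged illustration of that pattern (in Python, outside .NET; nothing here is the book’s code), here is a toy Task-like wrapper where select_many plays the role of C#’s SelectMany, that is, monadic bind:

```python
# A toy Task wrapper: select_many is the monadic bind that the C#
# compiler looks for when it translates a LINQ query. Illustrative
# only; not the book's implementation.

class Task:
    def __init__(self, run):
        self.run = run                     # zero-argument thunk

    def select_many(self, f):
        # bind: sequence this task with a function returning a task
        return Task(lambda: f(self.run()).run())

def unit(value):
    # wrap a plain value in a Task (Haskell's "return")
    return Task(lambda: value)

# Chained select_many calls read like flattened do notation.
result = (unit(2)
          .select_many(lambda x: unit(x * 3))
          .select_many(lambda y: unit(y + 1))
          .run())
# result == 7
```

In C#, because LINQ translation is pattern-based, defining a SelectMany with this shape for your own type is what lets `from x in … from y in …` queries work over it.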
Functional Reactive Programming
Adam: Very cool. Now that we’re deep into the functional programming terminology, I wanted to ask you about… I saw you did some talk where you made your own functional reactive programming library in F#?
Richard: Yes. It started as a proof of concept for a project I was working on, and it actually turned out that it got implemented and expanded, and then [inaudible 00:43:38] production for the company.
So the idea is really to apply functional programming to some sort of representation. Visualization, in this case, when I talk about it. So the idea is that [inaudible 00:43:59] objects can change over time, and [inaudible 00:44:03] very heavy [inaudible 00:44:05] between the computation of the objects that can change, and functional programming just allows you… First of all, it provides you a very nice DSL, some sort of domain-specific language, to implement this animation in a very [inaudible 00:44:21] way. And again, quite nice.
Adam: And I think I’m not totally clear on what exactly makes something FRP, but I feel like it’s a concept that’s becoming popular. I know there’s Elm, and I think Redux maybe is FRP-like. I’m not sure.
Richard: Yes. Elm is probably the closest one, definitely, yes. There is some sort of misconception between reactive programming and functional reactive programming. And the unfortunate thing is that they share a lot of ideas, and even at the programming level, they share a lot of functional primitives. But they’re two different things. Right?
Adam: So what is FRP then?
Richard: So first, actually, let’s start with reactive programming. That is really an umbrella term. In the industry, the term reactive programming also tries to cover functional reactive programming and so forth. But reactive programming is the idea that everything is an event, so you can process events as a stream. It’s sort of like a collection, but in this case [inaudible 00:45:33] collection [inaudible 00:45:34] events. And think about it: events are data. Okay?
So pretty much, reactive programming is just: get this data, process this data in real time, and manipulate it to do something. [inaudible 00:45:50] think about IoT in this case, which is very trendy these days. There are a lot of events out there that need to be processed almost in real time, so more and more reactive programming will come [inaudible 00:46:06]. But really, reactive programming is about processing, asynchronously and [inaudible 00:46:12] non-blocking, a stream of events.
And on the other side, functional reactive programming is about [inaudible 00:46:22] events that change over time, but the main difference between the two is time, because in functional reactive programming time is continuous. [inaudible 00:46:39]. In reactive programming it’s not, so you can reproduce an event, for instance, that ran a year ago and reprocess it. Instead, functional reactive programming is just about time being continuous.
Adam: And like a spreadsheet. I’ve heard the example before of a spreadsheet. Are you familiar with this example?
Richard: Yes. So that’s very much reactive programming, because of the idea that a cell is bound to a formula that [inaudible 00:47:20] to two different cells. Whenever you change the value of one of the two cells, the value of the formula cell also changes. Right? That cell pretty much reacts to the event that a value changed in another cell.
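The spreadsheet analogy can be sketched in a few lines of Python (an illustrative toy, not a real FRP library): a formula cell subscribes to its dependencies and recomputes whenever one of them changes:

```python
# A formula cell reacts to changes in the cells it depends on,
# like a spreadsheet. Deliberately tiny, for illustration only.

class Cell:
    def __init__(self, value):
        self._value = value
        self.subscribers = []              # recompute callbacks

    @property
    def value(self):
        return self._value

    def set(self, new_value):
        self._value = new_value
        for recompute in self.subscribers:
            recompute()                    # push the change downstream

class FormulaCell:
    def __init__(self, formula, *deps):
        self._formula = formula
        self._deps = deps
        for dep in deps:
            dep.subscribers.append(self._recompute)
        self._recompute()                  # compute the initial value

    def _recompute(self):
        self.value = self._formula(*(dep.value for dep in self._deps))

a, b = Cell(1), Cell(2)
total = FormulaCell(lambda x, y: x + y, a, b)  # total.value == 3
a.set(10)                                      # total.value is now 12
```

Changing `a` is the event; `total` reacts to it without anyone re-reading the inputs by hand, which is exactly the spreadsheet behavior being described.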
Adam: Yeah. I think it’s a good model for understanding reactive, because yeah, everything updates. In your book, you had this acronym, ACD, about… I guess it was a guideline for designing concurrent systems.
Richard: ACD stands for asynchronous, caching, and distributed. So when we talk about asynchronous programming, everybody thinks about asynchronous as the classic begin/end pattern, or the callback pattern. But first of all, asynchronous is based on something that we know is going to complete later, in the future.
So asynchronous can be, as I said, a begin/end kind of pattern in code, but also a design pattern, such as the classic example, Amazon. Right? When you submit an order on Amazon, you don’t get the confirmation right away; you get [inaudible 00:48:39] to a page that says, “Your order is going to be processed. You’ll receive an email.” And so forth. And that’s actually asynchronous. Right? And definitely, that’s an architectural kind of idea, to be asynchronous.
And ACD is really the secret to writing a scalable system that is able to cope with many, many requests that come over time, because [inaudible 00:49:07] which is asynchronous, which is non-blocking, which is…
Sometimes I actually still get the question that servers are naturally parallel, [inaudible 00:49:21] many requests in parallel, so why should you use asynchronous programming? Right? Well actually, it’s very important to write asynchronous programs on the server, because then you can handle multiple requests at the same time, because the server is most likely going to call maybe a database. It’s going to do some sort of operation [inaudible 00:49:44] not just CPU-bound. Okay?
In that case, instead of the thread blocking and waiting for the computation to complete, it goes back to the scheduler, which is [inaudible 00:49:55] able then to handle another request coming in. Otherwise, if it’s waiting and you have multiple requests coming in, the scheduler [inaudible 00:50:03] spawns multiple threads, which is very costly, very resource-intensive, and then when you pass the maximum number of threads that the scheduler can handle, the requests start to queue up. And at that point, if the queue gets larger, you get timeouts, which is definitely not what you want. So asynchronous programming is very important for the server side.
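A minimal sketch of that server-side point, using Python’s asyncio as a stand-in for the .NET scheduler (the timings and names are illustrative): while one request is awaiting its “database call,” the event loop starts the others instead of parking a thread per request:

```python
# Non-blocking request handling: the await hands control back to the
# event loop, so other requests make progress during the "I/O" wait.

import asyncio

async def handle_request(request_id, log):
    log.append(f"start {request_id}")
    await asyncio.sleep(0.01)      # stands in for a database round trip
    log.append(f"done {request_id}")

async def main():
    log = []
    # Three requests overlap: every one starts before any finishes.
    await asyncio.gather(*(handle_request(i, log) for i in range(3)))
    return log

log = asyncio.run(main())
# All "start" entries appear in the log before any "done" entry.
```

With a blocking handler, the three 10 ms waits would run back to back; here they overlap, which is the scalability Richard is describing.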
Adam: I’m thinking like a web service. If it gets a request like, “Hey. Create this user.” One thing you could do is, instead of returning a 200 like, “Okay, we created it.” Is you return like a… I don’t know what it is. A 202 or something. You’re like, “Yeah. We got the message.” And then you kind of put that somewhere. Right? You put it on a queue. Is that kind of the idea?
Richard: Yeah, that’s correct. But the queue is handled by the scheduler. Right? So you don’t actually have to do anything, unless you build in your own pattern. Let’s talk about Amazon. [inaudible 00:51:00] queue of messages to some sort of message-passing system and so forth, [inaudible 00:51:07] queued up to be processed over time.
Adam: Yeah, or if you go to a café and you order your food and they give you one of those numbers. Right?
Adam: You don’t just stand there and wait for your food. They’re like…
Richard: Right. Think about if you ordered something on Amazon, and it was [inaudible 00:51:25]. You would have to wait in front of the browser until the UPS guy showed up at the door. Right? And then if you closed the browser, maybe you’d lose your order. I don’t know, right? So that’s something that… Yeah. That’s very much the same idea, right?
Adam: I hope their delivery time gets that good. I can just sit there and wait. Refresh.
Richard: Oh yeah. [inaudible 00:51:44].
Adam: Okay. So that’s asynchrony. What about caching? How does that apply?
Richard: So caching is an approach that aims to avoid recomputing the same work multiple times. For example, you might fetch from the database some data that doesn’t change over time. So why do you have to call the database again, run a computation that takes some time, and so forth? Instead, it can be cached locally so you avoid a round trip [inaudible 00:52:27], which is costly and resource-intensive, and the cache is very fast. Right? We’re talking about memory access rather than IO access time, which is quite a large difference. Talking about… I don’t know. Let’s say 500 milliseconds for database IO, or maybe less, it can be way, way less, versus very few milliseconds for local access [inaudible 00:52:53].
Adam: And something could be derived, too, where it’s not just a simple database read but you’re like-
Richard: Oh yeah. Definitely.
Richard: So in the case of the [inaudible 00:53:03] agent we talked about initially, you can tell the agent to persist to the database, but also keep a local copy, local state, inside the agent. And what is great is that the agent [inaudible 00:53:16], so nobody can access the state inside the agent itself, so you avoid [inaudible 00:53:27] such as race conditions, deadlocks, and so forth.
But in this case, the access is very fast, because you send a message to the agent to get some piece of data, and the agent says, “Oh, wait. I have this data here locally, so I don’t have to go to the database.” Right?
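That agent-with-local-state idea, reduced to a cache-aside sketch in Python (the “database” is a stub and every name is invented; a real agent would also serialize access through a message queue):

```python
# Cache-aside: the agent keeps a local copy of anything it has already
# fetched, so repeated reads skip the expensive round trip.

db_calls = 0

def query_database(key):
    global db_calls
    db_calls += 1                      # count the expensive round trips
    return f"value-for-{key}"

class CachingAgent:
    def __init__(self):
        self._cache = {}               # private state, touched only here

    def get(self, key):
        if key not in self._cache:     # miss: pay for the database call
            self._cache[key] = query_database(key)
        return self._cache[key]        # hit: served from local memory

agent = CachingAgent()
first = agent.get("user:42")           # goes to the database
second = agent.get("user:42")          # answered from the cache
# db_calls == 1; first == second
```

Because only the agent touches `_cache`, there is no shared mutable state for callers to race on, which is the point Richard makes about race conditions and deadlocks.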
Adam: I guess it sounds like on the write side you want to be asynchronous, and then on the read side, you’re trying to cache things.
Richard: Yes. That’s correct.
Adam: You can scale better, because no matter how much the read side scales, you’re still serving from the cache, and however much the write side scales, you’re just queuing up somewhere. I mean, I guess it can’t queue infinitely, but it gives you a large buffer.
Richard: Yes. And the access is very fast.
Adam: So what about distribution? That was your last one.
Richard: Yeah. That’s the idea of actually partitioning the work across different machines. Right? So in this case, with multiple computers [inaudible 00:54:27] servers, you can partition the work among the servers, and the idea is to design a system that is as stateless as possible, because without state… well, if the server doesn’t have state, the work can be distributed without running into any sort of problems, because otherwise we reintroduce [inaudible 00:54:50] what we talked about earlier: global state. You have to be careful [inaudible 00:54:56] access, and maybe it’s also resource-intensive. But in this case, [inaudible 00:55:00] distributed work is… You get the most benefit when you implement it in a stateless manner.
Adam: And then, if you have state, then can you partition somehow? Is this kind of like sharding, or…
Richard: Yes. That’s correct. So the idea is that you can distribute the work. For instance, you most likely have a load balancer in front of the servers, and whenever requests come in, it balances them [inaudible 00:55:31] in a round-robin fashion between the servers. So depending on the application you’re building, a request can be [inaudible 00:55:41] independent, or you can also distribute the work [inaudible 00:55:47] of a request among servers.
So for instance, the MapReduce pattern. You can have multiple servers that belong to the map step of the process, so they can parallelize the work, and then pass the computation to the other servers, the next part of the pipeline, and so forth. [inaudible 00:56:07] But about the idea of statelessness, actually, now that you mention it, I was thinking of [inaudible 00:56:16] an application where you have a sort of session.
Well, the problem with a session is that you have to be careful, because maybe you hit one server the first time with the computation, and then the second time you hit a server that doesn’t have that session. Right? So now you have to distribute the work between the servers, but they also have this bottleneck, this global state that they share. So that’s where you can use a database, like we said earlier, [inaudible 00:56:47] and so forth, where you can also partition the session. But the idea is that you try to build, to architect, an application that is as stateless as possible.
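The MapReduce pattern mentioned above can be sketched in a single process (illustrative Python; in a real system each map chunk would go to a different server): partition the input, let each stateless worker map its chunk independently, then reduce the partial results:

```python
# MapReduce in miniature: partition the input, "map" each chunk
# independently (stateless, so it could run on separate servers),
# then reduce the partial results. Single-process, for illustration.

from functools import reduce

def partition(data, n):
    return [data[i::n] for i in range(n)]   # round-robin split

def map_step(chunk):
    return sum(x * x for x in chunk)        # each worker is independent

def reduce_step(partials):
    return reduce(lambda acc, x: acc + x, partials, 0)

chunks = partition(list(range(10)), 3)      # three stateless "workers"
partials = [map_step(c) for c in chunks]    # embarrassingly parallel
total = reduce_step(partials)               # sum of squares 0..9 == 285
```

Because `map_step` touches no shared state, the chunks can be processed in any order, on any machine, which is exactly why statelessness makes the distribution step cheap.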
Adam: Yeah. Especially for a web server, I think the stateless approach…
Richard: Yeah. That’s correct.
Thoughts on the Book
Adam: Yeah. Can make a lot of sense. So we’re running out of time, but I wanted to mention how much I liked your book. It’s pretty fun. Interestingly, it’s a book about concurrency, and I think it goes through several different ways to approach concurrency in several different libraries, but the thing I really liked, besides it secretly being a functional programming book, is all your little asides where you’re kind of teaching people things.
Earlier you mentioned CPU speed. I thought it was pretty fun in the book. You actually calculated, back-of-the-envelope, what the limit on CPU speed could be, based on the speed of light. I thought that was clever.
Richard: Oh, thanks. Thanks. Yes. Yes, recently I’ve been [inaudible 00:57:57] about quantum computing, and actually I have a [inaudible 00:58:03] presentation coming shortly about quantum computing, and the more I get familiar with the concept, the more I really enjoy this technology, and I think it’s the future, because yeah, [inaudible 00:58:17] reach the physical limit of the [inaudible 00:58:20], because I’d say that we’ll never have a transistor that is as big as an atom. At that point, we have to go in the quantum computing direction. Right?
So it’s very, very fascinating. It’s a completely different idea of how we program and how we implement hardware. But many years ago, [inaudible 00:58:43] machines that were based on vacuum tubes, right?
Richard: Now today we have transistors, so I think quantum computing is the natural evolution.
Adam: Yeah. I don’t understand it at all, but I understand it can speed up certain specific types of things. All right, well thank you so much for being on the show. It’s been a lot of fun.
Richard: All right. Well, thank you for having me again, and yeah, it’s been a lot of fun.
Adam: All right. That was our interview with Riccardo Terrell. I hope you enjoyed it. If you stayed this long, as I mentioned at the beginning, we have a couple copies of the e-book to give away. If you would like the e-book, look on Twitter where I will start a thread. You can reply to it if you’re interested, and I’ll do a draw. If you’re not on Twitter, also just go to the page for this episode on my website for CoRecursive, and leave a comment there, and you’ll be entered in the draw as well.
Once again, if you like the show, please spread the word about it. Feel free to leave reviews on iTunes or [inaudible 00:59:53] on Twitter. Thank you to everybody who mentioned the show on Twitter recently. [Niquest8 00:59:57], [BorkDude 00:59:59], Robert [Billicky 01:00:00], Ashley [Viterry 01:00:02]. Sorry if I’m murdering people’s names here. And Signify Technologies, who featured my first episode in their Sunday reads list. Also, thank you to Charity Majors, the guest from last time, who said that this was definitely one of her favorite podcasts ever, on Twitter. Kind of holding out for another one, but thank you very much, Charity. All right. We will talk to you next time.