Introduction
Adam: Welcome to Corecursive, where we bring you discussions with thought leaders in the world of software development. I am Adam, your host.
Bryan: I really value those other alternatives. I think they’re extremely important in every domain in software. And I think that when you make one of those idiosyncratic decisions, you are almost certainly making it for more deeply held reasons than someone who is making a safer decision. If you’re deploying OpenBSD in production, there’s a good reason for that. If you’re using Rust, if you’re using one of these things that isn’t the default choice, to me there’s a greater likelihood that you’ve been more thoughtful about that decision, more thoughtful about the values that you have for this job, for that decision.
Adam: That was Bryan Cantrill, CTO of Joyent. He thinks that we need to be aware of what values programming languages and Open Source communities have, and how those values either complement or conflict with our own. That sounds a little vague, but it’s really not. Bryan just wants us to think carefully about trade-offs, but I’ll let him explain that. If you haven’t subscribed to the podcast yet, I recommend you do so, so that new episodes will be delivered to you automatically. I’ve also set up a Slack channel for the podcast if you want to chat about this episode, or just hang out with myself and fellow listeners; you’ll find a link on the website. So, Bryan, I saw you give a talk where you had this super interesting idea that I hadn’t heard before, that programming languages and software systems have values. So what did you mean by that?
Programming Languages Have Values
Bryan: I think everything has values, right? I think that we don’t really talk about it because sometimes it’s so implicit, but we do the things that we do because of what we think is important. And we think different things are important at different times. And that’s what causes us, in part, to make different decisions. And I think that programming languages often have a very opinionated idea of their values: choosing among things that are all positive, but emphasizing some things more than others.
And I think it’s very important that programming languages do that. We talk about the right tool for the job often. What we often implicitly mean by that is finding the values of a programming language or system that match the values of the engineer and the problem at hand. And so to make that more specific, kind of a classic value is around performance: how important is performance relative to, say, expressiveness, or relative to, say, the speed of development or ease of use?
And these things are often in tension, and there are jobs where you’re going to want to pick something that is going to be the highest-performing thing at all costs. And there are jobs where you’d want to pick the thing that’s going to allow someone who doesn’t have previous experience in the domain to actually be able to implement successfully. And those are very unlikely to be the same thing. And so I started to get very explicit about this in terms of values, and the values that platforms have and how we select among them, in part because I was trying to figure out, several years ago, what went wrong with respect to the Joyent and Node.js relationship.
So I’m the CTO of Joyent. Joyent was the company behind Node.js. We hired Ryan Dahl way back in the day, in 2009. Part of the reason I came to Joyent was because of the big bet on Node.js. And the Node.js experience, there are parts of it that were really great, but ultimately it ended somewhat in disappointment, and I’ve been trying to understand why that was. Why did we have this kind of amicable, or sometimes not so amicable, divorce effectively with Node? And sometimes coming out of a bad breakup can be very healthy in terms of being introspective and trying to figure out where things went wrong. And with respect to Node, I really think that where things went wrong for us was with our values. I think that our values were not Node’s values. And Node’s values really are JavaScript’s values.
That’s the other kind of realization for me. Even though I kind of had visions of Node diverging a bit from JavaScript’s values and becoming dynamic server-side programming, at the end of the day Node really is JavaScript’s values. And there are great things about JavaScript’s values. Absolutely. But they were a poor match for our values at Joyent.
What Are JavaScript’s Values
Adam: What are JavaScript’s values?
Bryan: JavaScript’s values are about allowing really every person on the planet to write software. It’s around growth. It is around allowing everybody to develop software. It is very broad, and then understandably pretty thin, because it’s not designed around rigor as a first principle. To say that it is type-unsafe is almost putting it too gently. It is so easy. And I remember, in one of your earlier episodes, your interview with Jim Blandy, he talked about his own [inaudible 00:06:16], the fact that you can have a typo that crashes a program, and it really is frustrating.
Because there are times it just feels like, “Boy JavaScript. You don’t have to give me this much leeway. You could actually just like, let me know that I’m accessing this property over here, that I’m not accessing in any other way elsewhere in this program.” But that would cut against JavaScript’s core value of allowing for highly dynamic software. And I’ve been known to say in the past, and certainly I believe it, that JavaScript is the failed state of programming languages, which sounds overly pejorative, but there is no central authority in JavaScript.
Yes, there’s ECMAScript and so on, but no one is going to tell you that you’ve misspelled your variable name. And that gives you tremendous freedom, but also tremendous peril, and it makes it such that you are allowed to write many different styles of programming in JavaScript, and it’s all about JavaScript accommodating your existing idioms, your existing way of thinking. And that in turn is all about JavaScript growing as much as it possibly can, being used as broadly as it possibly can, which is great, but it’s not the way I want to live the rest of my life.
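For contrast, and since Rust is where this conversation ends up, here is a minimal hypothetical sketch of the kind of check Bryan is wishing for: a misspelled field is rejected at compile time rather than silently yielding undefined.

```rust
struct Config {
    timeout_ms: u64,
}

fn main() {
    let config = Config { timeout_ms: 500 };

    // The typo Bryan describes: in JavaScript, reading a property that was
    // never set just evaluates to `undefined`. Here, the compiler rejects it
    // outright: error[E0609]: no field `timeout_sm` on type `Config`.
    // println!("{}", config.timeout_sm);

    println!("{}", config.timeout_ms);
}
```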
And in particular, it was really frustrating when that did come into tension around things like rigor, around debuggability, around observability, around safety. Where we would advocate, and when I say we, I mean not just we at Joyent, but we who believed strongly in, say, rigor, would be advocating one path. And those that were trying to get the language into as many hands as possible would be advocating a different, mutually contradictory path, without really understanding where the other was coming from. And especially with something like rigor, no one is going to say, “Hey, by the way, I want to have a sloppy language where it’s really easy to get stuff wrong.”
And it’s not like people don’t believe in writing correct software. It’s just that when you actually need to make a decision where you have to choose between a construct that will make it easier to write correct software and a construct that will make it easier for more people to write in this language, JavaScript is going to choose the latter every single time.
What Are Go’s Values
Adam: So you had this falling out with JavaScript. At the opposite end of the “anybody can do whatever they want” failed-state model seems to be Go, where it’s like, you will format your code this way.
Bryan: Yeah, it is funny, isn’t it? It is at the opposite end, where Go then makes a bunch of decisions for you that do feel like it’s kind of infringing on your own way of expressing yourself, like how your code is going to be formatted. On the one hand, I like a consistent style; consistent style is important, and there are lots of reasons why consistent style is important. On the other hand, mandating one style is, to me, too much. And there are lots of things like that, which are kind of strange, autocratic decisions in Go that aren’t necessarily well socialized. And often there are reasonable reasons to not want to make that particular decision. And they kind of permeate things.
So it’s like, if JavaScript is a failed state, Go is kind of a strangely autocratic one. And in one talk I likened going from JavaScript to Go to going from Somalia to Turkmenistan. And actually it’s funny, someone on the internet who apparently is Turkmen and knows both Go and JavaScript got ahold of this and said that this was the most apt analogy he’d ever heard for Go, because it does kind of capture the strangeness of some of the decisions that were made. Not to take anything away from it, I think there are some people that are actually really comforted by those decisions having been made. And there are lots of things in Go that are fine or good.
But it felt somewhat lateral for me from JavaScript, albeit totally different. It’s kind of a lateral move. And again, with those decisions that Go makes, maybe I’m falling into this trap myself, but I don’t mean to be pejorative about the decisions that Go has made or the decisions that JavaScript has made, because for the values of those languages, they are the right decisions. And they make sense for the community that chooses those values, and one shouldn’t malign those values, because they make sense for certain jobs or certain people or certain communities at certain times. I just would stop short of saying that they make sense for all communities or all people at all times. And for me personally, I would say both of those left me looking for something else.
Adam: What values were you looking for?
Bryan: Well, I’m really a C programmer at the end of the day, or have been historically. I’m a systems programmer. I’ve done OS [inaudible 00:11:56] development for my entire career. I like low-level systems development. I like being that layer that’s close to the machine, abstracting the machine. I haven’t gotten over my kind of fixation with providing that lowest level of abstraction and the total magic involved in that. And for software that’s at that layer, you really have to pick performance above everything else. For those abstractions that are going to be closest to the machine, they have to yield the maximum performance of the machine.
Anything that you do in that layer is machine capacity that you are taking away from the software that you’re going to run. So it has to be highly performing, and it has to be highly reliable. We really expect our operating systems to work all the time, as we should. We’ve got a very high level of expectation for our operating systems. And I grew up in an era, in the 80s and 90s, when operating systems were kind of garbage, honestly. The two operating systems that you had to choose from if you had a personal computer were DOS/Windows and Mac OS, Mac OS 9. Both of these operating systems were not modern, in the regard that they were not actually using the memory protection that the microprocessors had support for. And so as a result, an errant application could crash the operating system.
Which fortunately we don’t live in that era anymore. We don’t live in an era where people have to reboot their desktop a couple of times a day, or they’ll run a strange program, which will crash their machine. Yes, it happens, but it happens nowhere near as frequently as it did happen.
Adam: The boot was fast though.
Bryan: True, true. Well, especially now, if you go back and actually run those… Even calling them operating systems is almost an exaggeration, because they provide so little; they’re almost what we would call an executive. But they run now. They were relatively quick on ancient hardware. If you run them now, God only knows how fast DOS would boot on a Skylake, once you actually got past the BIOS. The irony is that we still run firmware that dates from that DOS era.
It’s almost embarrassing that if you do boot a Skylake system, it will take as long or longer to boot than a similar server machine from decades ago, because the firmware itself is still so knuckle-headed. But in terms of the operating system, from a reliability perspective, we really do expect it to be absolutely reliable all the time. So those to me are my values. My values are, I want the highest performance, I want total robustness. And historically, C has been the language that provides that. C has shared those values, and even C++ has made the wrong choices there: for the operating system kernel, and generally for embedded development, for that lowest layer of software that runs on the hardware, we have generally not used C++, we’ve used strictly C.
So I was kind of hoping, not that I was ever going to write an operating system kernel in Node.js, but I did hope in 2010 that Node.js would allow us to write better, faster up-stack system software. And it wasn’t wrong, in that it was a big leap forward. It was much lighter than running Java, for example. But it ultimately did leave us lacking. And the point that I found myself at not too long ago was like, “All right, well, what’s next? Because it’s not going to be Node for me.” I’d already kind of decided that it wasn’t going to be Go, for a variety of reasons.
It was certainly not going to be Python again, and not to malign Python. Python is great, and it’s very important in many domains, but not in the domain that I’m in. What’s it going to be? And there wasn’t a whole lot out there with of course the notable exception of Rust.
Adam: I keep on thinking about this writing an operating system in Node.js. What would you call it, undefined, or…?
Bryan: Oh, God. The thing is, to even say that is almost an exaggeration, because if you’re writing an operating system in a dynamic managed language, the operating system itself is that runtime that you can’t see when you’re writing your program. Because that’s the thing that is actually doing the scheduling, actually doing the garbage collecting, doing the just-in-time compilation. So that’s what your operating system becomes. So there wouldn’t be a Node operating system. What you’re actually saying is you want to run V8 as an operating system, and it would be nightmarish for many… Oh, God. You wouldn’t want to think about it.
Although, having said that, in the ’90s, and I was at Sun in the ’90s and early 2000s, Java was, as you can imagine, such a big thing at Sun that we wanted to not only make Java operating systems, but Java-based microprocessors. And it’s like, “That’s insane.” I can understand the enthusiasm of the era, but to dope bytecode into silicon is to totally miss the point of bytecode. It makes absolutely no sense. And those things all did not succeed. It would be great to have a book on all of these kinds of failed experiments, because they do fail for somewhat interesting reasons. Each failure is a little bit different, but they ultimately fail because they’re trying to push something, in this case Java, a high-level language, into a spot that it really does not want to be in. It’s not designed for it. It doesn’t add much value.
Adam: Yeah. I don’t know a lot about firmware, but I know the people who work on it, memory allocation is very important. I don’t know about a GC running on some little piece of firmware.
Bryan: Well, you definitely wouldn’t want to have a GC. I think that when you’re writing that lowest level of software, you just need to manage everything very explicitly. And to a certain degree, it’s a simpler world, because it’s not a distributed system. It’s not sloppy. It’s orderly, in that you know what memory is mapped where, and you can control that kind of universe. But in return, the software that you write needs to be very cognizant of what it can do and what it can’t do, and when it can do it and when it can’t do it. So when you’re the operating system, you are responsible for the illusion that is memory.
Memory is ultimately an illusion. Yes, it is sitting in capacitors and DIMMs, but the operating system is providing that key abstraction that allows you to actually allocate memory. And as a result, because it is the one providing that abstraction, it simply cannot allocate memory dynamically whenever it wants to. We’ve got many contexts in the operating system in which you cannot allocate memory. We’ve got contexts in the operating system where you can’t block, because, by the way, you’re in the scheduler code actually dealing with the mechanics of blocking. You obviously can’t block in that code path, because you are the software responsible for the abstraction that is blocking and yielding, or what have you.
So in those worlds, and firmware is kind of an extreme of that, where the firmware is generally not running software above it, it’s not a full operating system, but it is certainly interacting directly with hardware beneath it. And as a result, it can’t make arbitrary memory references. It operates in a constrained environment, and it needs a programming language that’s going to be able to abide those constraints, which these dynamic languages aren’t designed to do.
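As a loose illustration of that constraint (a hypothetical sketch, not drawn from any real kernel or firmware), here is the style that layer tends to demand, expressed in Rust: storage declared up front, with the caller handling the full case explicitly instead of allocating or blocking.

```rust
const QUEUE_DEPTH: usize = 64;

/// A fixed-capacity ring buffer: no heap allocation, no blocking.
/// (Hypothetical example for illustration.)
pub struct EventQueue {
    slots: [u32; QUEUE_DEPTH],
    head: usize,
    tail: usize,
}

impl EventQueue {
    pub fn new() -> Self {
        EventQueue { slots: [0; QUEUE_DEPTH], head: 0, tail: 0 }
    }

    /// Returns false when full rather than growing or waiting; in a
    /// constrained context the caller must decide what to drop.
    pub fn push(&mut self, event: u32) -> bool {
        let next = (self.tail + 1) % QUEUE_DEPTH;
        if next == self.head {
            return false;
        }
        self.slots[self.tail] = event;
        self.tail = next;
        true
    }

    pub fn pop(&mut self) -> Option<u32> {
        if self.head == self.tail {
            return None;
        }
        let event = self.slots[self.head];
        self.head = (self.head + 1) % QUEUE_DEPTH;
        Some(event)
    }
}

fn main() {
    let mut queue = EventQueue::new();
    assert!(queue.push(7));
    assert_eq!(queue.pop(), Some(7));
}
```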
On Operating Systems
Adam: I need to take it as a personal mission to learn more about, well, just about operating systems, like how they actually function. I know I took a class on operating systems, but I feel like that was a long time ago and there’s a lot there and we gloss over it, I think day-to-day. You don’t gloss over it, but I do.
Bryan: There’s a mind-numbing amount there. And one of the challenges with operating systems, or system software in general, is that it can be very hard to even see what’s there. One of the technologies I worked on earlier in my career is something called DTrace, which allows you to dynamically instrument the system to see what it’s actually doing. And even today, we use DTrace all the time to understand what the system is doing, because seemingly simple abstractions are wildly complicated, and that seems to be true all the way down. There’s that expression, turtles all the way down. And what that is meant to mean is that you are standing on abstraction, that’s standing on abstraction, that’s standing on abstraction, that’s standing on abstraction.
And when you want to actually observe all that, that can be a real challenge, because you want to turn the system inside out so you can actually see what it’s doing. But it is absolutely stunning how complicated simple operations are. So for example, if you want to open a file: how complicated is it to open a file? It is basically of unbounded complexity to open a file. And something like DTrace allows you, as a user of an operating system, to actually follow that code flow through the whole operating system. And one of the reasons we actually developed DTrace, among other things, was that we wanted to understand the system ourselves.
I took my OS course in college and TA’d it for a couple of years, and I envisioned DTrace being used as a pedagogical tool to actually teach operating systems. And it’s been fun to see that get picked up, particularly in the FreeBSD community. The latest FreeBSD book really uses DTrace a lot as a teaching tool to learn how FreeBSD is implemented. So if you’re interested in operating systems, I would encourage you to check that out: George Neville-Neil’s latest, The Design and Implementation of the FreeBSD Operating System. There you can actually understand for yourself what this thing is actually doing, and appreciate its nearly unbounded complexity, because it seems like anything simple is much more complicated than you think it could possibly be.
Adam: Yeah. I’m going to check that out. I remember Joel Spolsky had this article a while back talking about somebody painting a road, where they would put the can down, and then they would paint a line, and then they would walk back to the can and dip the brush, and how they slow down because the can is getting further and further away. And his point was that software is rife with this, where people just don’t understand. They’re just calling paint-line. They don’t realize that they’re walking back to the can every time.
Bryan: I think that that’s endemic. I think we would not want it to be any other way. I think that it’s imperative that we build and utilize abstraction. We need those abstractions. So you actually don’t want someone who’s opening a file to be burdened with the outrageous complexity of opening a file. It is important that that abstraction stay tight in that regard, but it’s also important that there’s enough reverence for that complexity that you’re not trying to open a file hundreds of thousands of times a second, or what have you. So you need to have enough reverence for the abstraction to not abuse it. And that’s a tall order, where we say, “Hey, you don’t need to know how this works, but, by the way, you might need to know how it works when everything goes sideways.”
And we may have to turn this thing inside out so you can figure out why your software is not performing as well as you think it should be. And that’s a huge challenge and one that I think we’re still grappling with.
Adam: So do operating systems have values, too? Do they fit in the same framework?
Bryan: Oh, absolutely. Absolutely. Yes. Perhaps more than anything. Operating systems have got very clear values, I think. We work on an operating system, illumos, which is a Unix-derived operating system that traces its heritage back to OpenSolaris. And a lot of people wonder, like, “Why don’t you just use Linux like the rest of the world? Or why does FreeBSD exist? Or why does OpenBSD exist? Or why does NetBSD exist? Or why can’t the Mac just run Windows, or Windows just run Mac OS?” And I feel that these different systems actually have a very important place, in that they do speak to slightly different values. And clearly there are values that transcend all of these systems, and clearly all of these systems care about performance.
Clearly all these systems care about reliability and robustness, but the way they reflect that is different in each system. And I think those differences should be accentuated. I think it’s good. I think it’s important. I think it’d be hard to argue that OpenBSD doesn’t serve an extremely important purpose, even though it’s not run by that many people. OpenBSD is an operating system that picks security above all else. They will put themselves in an arbitrary amount of pain to have a secure system, and that’s an important choice to have out there, because they do represent those values. As a result, they make choices that other operating systems don’t make.
But often the choices that OpenBSD makes are choices that other operating systems come to later, when they realize that actually, while they may not choose security over all else, security actually is more important than they necessarily realized. So it’s important to have these kinds of different points out there, making different choices. And I don’t think we want to live in a homogenous world where there is but one set of choices being made. And that means I don’t want to have just one operating system. I don’t want to have just one database. I don’t want to have just one cloud. I don’t want to have just one programming language.
And maybe as a result, I am fated to be constantly doing things strangely. I know I was having this discussion with another CTO, and they’re using Slack. And I said, “Well, we actually don’t have this particular problem you’re describing, because at Joyent we use Mattermost; we’ve got our own Mattermost server.” And he’s like, “Do you guys have to do everything differently? Can’t you just do one thing like the rest of the world?” Like, yes, we are able to. But that said, I really value those other alternatives.
On Heterogeneity
I think they’re extremely important in every domain in software. And I think that when you make one of those idiosyncratic decisions, you are almost certainly making it for more deeply held reasons than someone who is making a safer decision. If you’re deploying OpenBSD in production, there’s a good reason for that, almost certainly. If you’re using Scala where someone else would have used Java, or where someone else would have used Python, there’s probably a good reason that you’re using Scala. That’s not an ill-considered decision.
If you’re using Rust, if you’re using one of these things that maybe isn’t the default choice, to me there’s a greater likelihood that you’ve been more thoughtful about that decision, more thoughtful about the values that you have for this job, for that decision. And you’re making a choice that maybe other people aren’t as familiar with, and it can be easy for others to deride that choice. And I think you’ve got to stay strong when you are making a choice that is a bit idiosyncratic in that regard.
Adam: Well, I like that because I’m on a team that does Scala at work and not everybody does, and they don’t understand. And it’s also a great justification for why you’re using Rust instead of C.
Making the Case for Scala’s Values
Bryan: And I think that it’s an unfortunate kind of human attribute that when we see something that we don’t understand, we often respond to that antagonistically. I’m sure there are times if you’re in a group doing Scala and you’re in a larger organization that doesn’t understand the value that it brings, I’m sure there are times when that can feel antagonistic. And that’s where I think kind of understanding these things as values can help you better explain to someone why decisions have been made or why we feel this is the right tool for the job.
Because that way you’re not falling into the trap of, like, “Look, Scala is just better than your thing.” It’s like, “Well, no, it’s actually more nuanced than that. It’s just that for this job, the values of Scala, we feel, are a better fit than the values of what might be a safer alternative.”
Adam: No, that’s a great perspective because people get blind to the values they don’t care about, they just don’t even consider them. So because it’s a better fit for the things I value, then it’s just better.
Bryan: Exactly. And they do that implicitly, which can be very frustrating. Especially when you’re choosing between things where everyone agrees that programmer expressiveness is good and robustness is good, but not really understanding that there are times that these things are in tension. And it can be frustrating. So I’m hoping to get people to think a little bit more about why they might want to choose or not choose certain technologies, in part to encourage people to make more different kinds of choices. I’m just a big believer in heterogeneity, of systems and of thought.
Rust’s Values
Adam: So what were the values of Rust that made you go there instead of… It sounds like C is your default, but here you are.
Bryan: And I would say C is my default. And I think that historically we’ve done things in kind of C when it’s down-stack, when performance is really critical, and Node when it’s up-stack. And the question I had was: on the surface of it, Rust has some really compelling values. Rust is highly performing. It’s memory-safe, which is really interesting, and we can kind of get into how they actually yield that safety. And it’s really rigorous, but it’s also trying to provide programmer expressiveness and allow you to develop software quickly.
That to me was really interesting, and I wanted to check that out, basically. Would Rust be able to deliver on all of these things? And in particular, would Rust be able to yield high-performing artifacts? Because if it doesn’t yield high-performing artifacts, it’s not going to be applicable to the things that I would want to use it for. So I finally found something, had the kind of right time, right fit for something to merit learning Rust, and dove in. I’d been Rust-curious for a long time. Reading the blog entries and hearing people’s experience with it, I think that I was letting myself be a bit too intimidated.
Rust has got this kind of infamous learning curve. I actually don’t think the learning curve of Rust is that bad at all. And people listening may be concerned; maybe they’ve put themselves in the same situation, thinking Rust just sounds like magic. It really isn’t, but I think it does need to be learned. It’s not something that you want to simply download and start banging away at. That’s not going to work well. But if you sit down, and I really recommend that you do… You’ve obviously interviewed Jim Blandy. I think that the Blandy book is terrific.
The Rust Programming Language book, by Steve [inaudible 00:34:17] and the community, is also terrific. But you really want to sit down with a book and actually learn it. And with Jim and Jason’s book, I did something that I haven’t done for a very long time. They have this kind of intro chapter that has an example program that they work through, and I sat down and I typed in that example. It was really valuable, and it didn’t take that long. The inevitable typos got me getting a feel for the compiler error messages and so on. And at the end of that, I had something that worked, albeit something that I had only really copied; I hadn’t actually thought of it myself.
But it got enough of the brain working on it that it made it much easier to go and actually understand these other elements of Rust. So on the one hand, there are elements of Rust that are definitely novel. The ownership model is absolutely novel. It is incredibly important. On the other hand, it is not nearly as arduous as it’s made out to be. To the contrary, what I see is that in many languages where it’s super easy to get started, your day one is really fast, and that’s great. But then on day 100 or day 300 or day 500, you actually have to wade into even more complexity, as you need to understand the implementation details of, say, the garbage collector to understand why your program isn’t performing.
Or why is there this 150-millisecond GC pause? It’s like, “Well, now you need to figure out, is it the young generation or the old generation? Which garbage collector are you using? What’s all the nuance of that garbage collector? Do you have an object graph that is large and connected errantly?”, and all this other complexity that you will now have to go deal with. In this regard, I think Rust kind of shifts that cognitive load from that 100th day or that 300th day much more towards the first day or second day, which on the one hand can feel overwhelming. But on the other hand, once you get it, which doesn’t really take long, the artifacts that you’re yielding are much higher performing, with much less surprising dynamic behavior. You’re not going to have the surprise 150-millisecond GC pause in Rust.
On Approachability
Adam: You had this list of values when you did this talk. Approachability, like screw approachability. I say that because you’ll be using a programming language for so long. Maybe approachability is fine, but you shouldn’t be afraid for your language to have like expert level things.
Bryan: I think that’s right. I think that’s exactly right. And I actually love the sweet spot that Rust is trying to hit, which is like, “Look, we’re not going to pick approachability over robustness. We’re not going to pick approachability over rigor. But we are going to make this as rigorous as it needs to be, and then make it as approachable as it can be.” So I actually think that Rust, in part because it’s trying to fight a bit of its reputation, is incredibly approachable. In particular, the compiler’s error messages are amazing. And I think that, especially early on, when you are not dealing with the borrow checker, you’ll get these incredibly verbose, helpful error messages out of the compiler. And it’s using ASCII art and colors to highlight exactly where your error is.
And you think to yourself, “Boy, you are really going out of your way to help me. This is great.” And you can almost wonder if the compiler is like, “Look, I need you to hold onto those positive vibes, because at some point you’re going to hit cannot move out of borrowed content, and you and I are both going to be shrugging our shoulders, trying to figure out what’s going on.” So I think that Rust tries to make itself as approachable as it can. There’s a term that they use a lot in the Rust community; they’re not the first community to use it, but they definitely use it a lot. It’s ergonomics. And I like ergonomics. It’s different than approachability, because it’s saying, “We want to make this construct comfortable.”
It doesn’t mean that we’re going to make this construct any less rigorous. And there are lots of ways in which they have made it, and can continue to make it, ergonomic in ways that new programmers won’t even be aware of. Rust 2018 drops today, and one of the big changes in Rust recently is something called non-lexical lifetimes. Historically, lifetimes of an object in Rust have been lexical. And one of the big frustrations of Rust, or what can be, is with the borrow checker: a big fight with the borrow checker will happen when you are done using something. You are effectively done borrowing it, looking at your code, but because the lifetime is lexical in scope, the compiler treats it as still being borrowed.
And that can be really frustrating, because you want to have some way of telling the compiler, like, “No, give it back. I’m done with it. I’m not using it anymore.” And with non-lexical lifetimes, the compiler is a lot smarter about realizing, “Oh, I get it. Okay. You’ve actually used that thing for the last time. So now you can give it back.” As a result, the borrow checker just silently does the right thing. And there is absolutely going to be a new generation of programmers that come to Rust in the next six months to a year, and they’re not going to know what the fuss was about the borrow checker. They’re going to be like, “I just don’t think this is that bad.” It’s like, “Well, it’s not that bad in part because the compiler has gotten a lot smarter, and it can tell when ownership can transfer back, because it can tell when you’re done with something.” So I think that’s going to be a big positive change to the language.
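A minimal sketch (a hypothetical example, not one from the episode) of the change Bryan is describing: code that the older, lexical borrow checker rejected, but that compiles under non-lexical lifetimes.

```rust
fn main() {
    let mut scores = vec![1, 2, 3];

    let first = &scores[0];             // immutable borrow of `scores`
    println!("first score: {}", first); // last use of `first`

    // Under the old, lexical borrow checker, `first` was considered borrowed
    // until the end of the enclosing block, so this mutation was rejected.
    // With non-lexical lifetimes (Rust 2018), the borrow ends at its last use
    // above, and this compiles without any change to the code's meaning.
    scores.push(4);
    println!("{:?}", scores);
}
```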
Adam: Yeah, that’s awesome. I saw you on this panel and somebody was asking you, “Hey, why doesn’t Rust have a GC?” And I felt like he kind of missed the point that maybe what they’re shooting towards is like a static, compile-time GC. And right now that involves jumping through some hoops, but that’s the arrow.
Bryan: That’s exactly what it is. That is exactly what it is. And as a result, it’s the compiler trying to figure out, at compile time, some of these dynamic attributes, and it does an amazing job. And even that is getting better. And then as a result, you can totally reason about the performance of the system, and folks that deliver high-performing software in GC languages do exactly this. The irony is that the person asking the question, Cliff Click, is a very accomplished software engineer. Cliff would tell you, “Oh, I can write absolutely high-performing software in a GC language.” Like, “Okay, Cliff. How do you do it?”
And he would talk to you about how you do it. It is all of the things that, cognitively, you have to do for Rust. He’d be like, “Oh, I’m going to pre-allocate my map. I’m going to hold on to this. I’m going to move on. I’m going to do all these kinds of implicit things that basically don’t generate large amounts of garbage for the GC to collect.” But he’s been able to do that because he’s implemented the VM a couple of times. Rust allows effectively anybody to get to those kinds of results, albeit with slightly higher cognitive overhead when you’re developing in it.
But again, I think it’s in ways that are actually intuitive once you understand what Rust is trying to do. The intuition around it grows really quickly. And as a C programmer, one of the things that’s funny about C is that you can feel the underlying assembly that the C wants to write. With Rust, I can feel the underlying C; I can feel what it’s trying to do. So constructs like the parameterization of lifetimes make total sense when you understand what it’s trying to do. And as a result, I have not really had… Yes, I had some early fights with the borrow checker. And there are going to be a couple of things that are going to drive you to the brink of tears early on.
But once you break through that, it actually becomes, I think, a lot simpler to write software, because there are so many things that you don’t have to worry about. And then the artifact is high-performing. I think the thing that’s really very impressive is that I found my naive Rust was outperforming my carefully written C.
Adam: That’s a big statement, really.
Bryan: Yeah. And it’s for a bunch of reasons that are… Every time I say this, people get upset that I’m making an overgeneralization. So to be clear, it is not the case that every Rust program is going to outperform a C program, or even that for the same task a Rust program is going to outperform a C program. What I found, though, is that it is easier to deliver very high-performing software in Rust than it is in C, for a variety of reasons, not least the fact that the strength of the ownership model allows Rust to be truly composable, so you can use much more powerful data structures.
And in particular, the reason why this particular program was faster in Rust than it was in C is because the default balanced tree implementation for Rust is not a red-black tree or an AVL tree; it’s a B-tree. And a B-tree is a much more sophisticated data structure, historically used in databases. But the Rust observation is, one, a B-tree actually makes sense in an in-memory system, because the memory hierarchy is so spread out. And two, the composability of Rust actually allows for a B-tree to be implemented. A B-tree is gnarly. It is hard to implement a B-tree in a way that’s composable, in a way that doesn’t allocate auxiliary memory.
Which is the reason we’ve always used AVL trees in the kernel. But boy, to be able to use a B-tree instead, and deliver a higher-performing artifact, is pretty compelling. So on the one hand, yes, B-trees are higher performing than AVL trees. On the other hand, I could not practically use a B-tree for my C implementation. And not only can I for a Rust implementation, it’s the obvious choice to make, because it’s the balanced tree behind the default collections.
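To make the collections point concrete, here is a small sketch (not Bryan's program) of reaching for Rust's standard ordered map, which is B-tree-based, rather than hand-rolling a balanced tree as one would in C.

```rust
use std::collections::BTreeMap;

fn main() {
    // The standard ordered map in Rust is a B-tree, not a red-black or AVL
    // tree; you get the cache-friendlier structure just by using the default.
    let mut latencies: BTreeMap<u64, &str> = BTreeMap::new();
    latencies.insert(120, "disk read");
    latencies.insert(3, "cache hit");
    latencies.insert(45, "network hop");

    // Iteration is in key order, which is what you'd want a balanced tree for.
    for (micros, what) in &latencies {
        println!("{:>5} us  {}", micros, what);
    }

    // Range queries come along for free.
    let slow: Vec<_> = latencies.range(40u64..).collect();
    println!("operations at or above 40 us: {:?}", slow);
}
```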
Adam: Unless you say something more controversial, we’ll just call this one naive Rust is faster than C. That’ll be the podcast.
Bryan: That’s great. It’d be great. It’ll certainly get some attention. What’s funny is I had a blog entry on this, because I discovered that my Rust was outperforming my C, and I had kind of pledged to go investigate it more deeply. And then I had a follow-up blog entry where I investigated it pretty deeply, with an extremely long disclaimer about how I was not trying to make a gross comparison, that I was making a specific comparison… And still people are like, “God, how can this guy make such a ridiculous statement?” I’m like, “I’m really not. Can you not read the eight paragraphs of disclaimer? How much more disclaimer do you want me to provide?” But despite that disclaimer, I think one has to acknowledge that, yes, it was easier to develop a higher-performing artifact in Rust for this particular problem.
Adam: I forget, I think I had this guest, Stephanie Weirich, who works on Haskell. And she was saying something to the effect that when you give the compiler more information, in theory it can do more optimizations. So Rust just knows more, I assume, is one of the advantages.
Bryan: And that’s an advantage that they are not even fully exploiting yet. The folks working on Rust appreciate that those advantages are possible, but they have not yet begun to really deliver on that stuff. That is true. In fact, when I first looked at it, I’m like, “Oh, okay, this is because Rust is able to actually do true memory disambiguation.” So one of the problems you have in C is that the second you call any external function in a file, C has no idea what that function is touching and not touching. This is called memory disambiguation: disambiguating what memory refers to what. And because C is fundamentally unsafe in its constructs, the compiler can’t reasonably do that. Compilers have tried to do memory disambiguation in C, but it’s really hard to do, because the language doesn’t help you at all.
Rust is able to do that really cleanly and crisply. And I think it can yield even better performance than they’re getting now by leveraging that more deeply, because it’s my understanding that they’re not doing a whole lot of what they could potentially be doing in the future. And I think this is a domain where there’s going to be a lot of really interesting work. And as a result, you’re just going to see your extant Rust code getting faster and faster over time as the compiler gets smarter and smarter. And smarter and smarter in a way that is not really in tension with Rust’s other goals.
And one of the problems with C is that there are certain levels of optimization that the compiler can’t reasonably apply, because they actually will result in a slower artifact in a bunch of other cases. And I say this not as a compiler optimization person, so cut me some slack, but my intuition is that there will be fewer cases like that with Rust. There will be more cases of unequivocal optimization that can be had, because the compiler knows so much more about what is going on, because you, the programmer, have agreed to this grand bargain where you’re going to work with the compiler to generate a higher-performing artifact, which is a terrific bargain.
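A toy sketch of the aliasing point (a hypothetical example, not Bryan's code): in the equivalent C, the compiler would have to assume the two pointers might alias and reload the factor on every iteration; in Rust, a mutable reference cannot alias a shared one, so the load can be hoisted.

```rust
// In C, a function like `void scale(long *dst, size_t n, const long *factor)`
// must assume `factor` might point into `dst`, so `*factor` gets reloaded on
// every iteration. Here, `dst: &mut [i64]` and `factor: &i64` are guaranteed
// not to alias, so the compiler is free to keep the factor in a register.
fn scale(dst: &mut [i64], factor: &i64) {
    for d in dst.iter_mut() {
        *d *= *factor; // provably cannot change during the loop
    }
}

fn main() {
    let mut data = vec![1, 2, 3, 4];
    let factor = 10;
    scale(&mut data, &factor);
    assert_eq!(data, vec![10, 20, 30, 40]);
}
```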
Adam: Yeah. First of all, I also know nothing about compilers, before I state any… The problem is, I think, sometimes when compilers get sufficiently smart, that model you were talking about before, where you understand the C it’s going to write, is going to break down. That will change if it’s able to do smarter things, right?
Bryan: And this is where the compiler can get too smart for its own good, where it can do, say, memory disambiguation that then makes a system that was safe become unsafe, because the system was implicitly relying on the compiler’s inability to perform that optimization. And we’ve actually got a lot of code like that in the operating system kernel. So yeah, that is an example where the optimization breaks the model and yields an artifact that doesn’t… And it’s always interesting to take a C compiler and look at its -O5 optimizations; those almost certainly have warnings associated with them about their limited applicability, where they can result in slower code if they’re used more broadly.
Adam: I’m thinking of a specific example that I’m going to get totally wrong, but somebody on Stack Overflow was talking about this Haskell Fibonacci program that was running much faster than the C program. And when it was looked into in depth, the Haskell compiler had just realized that the Fibonacci was only evaluated once and had just calculated it at compile time, right?
Bryan: Right. That’s great. And yeah, that’s the kind of freedom that Rust is afforded by the programmer having shifted that cognitive load: we can in principle see some of those opportunities. And there are a couple of other things I love about Rust that we just don’t talk about frequently enough. I love all the explicitness around mutability, which obviously C has as well with [inaudible 00:50:59] and so on. But there are so many ways out of it. You can just cast away the const, so what’s the point? Whereas in Rust, you can’t cast it away, right? Something is mutable or it’s not, and if it’s not mutable, you can’t mutate it; you can’t just magically make it mutable.
If it’s mutable, then you can only have one owner, and so on. But that is going to afford, I think, if it [inaudible 00:51:22] hasn’t already, a lot of opportunity for optimization as well, because the compiler knows there’s going to be no store to this, because it’s not mutable. And that in turn allows the values to be cached and so on. There’s just going to be a lot of opportunity there. Rust is already performing really well, and because those values are so crisp in the community, I think we’re going to see it perform even better over time.
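A minimal sketch of the mutability point (hypothetical example): immutability is part of the type, and unlike C's const, there is no cast that strips it away.

```rust
fn main() {
    let x = 5;     // immutable by default
    // x = 6;      // error[E0384]: cannot assign twice to immutable variable `x`

    let mut y = 5; // mutability has to be declared
    y += 1;

    let r = &x;    // a shared reference: there is no "cast away const" here;
                   // you cannot obtain a `&mut i32` from `r` to mutate `x`.
    println!("x = {}, y = {}, via r = {}", x, y, r);
}
```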
Adam: And like, no null pointers.
Bryan: That too. Yeah. And it’s funny because that is probably a bigger deal too. That is like the big deal with Rust. That’s huge. And it’s great. I think, especially if you’re coming from C++ that is a huge win. It is a huge win. I don’t mean to minimize the memory safety. From my perspective, like I’m able to write safe C from a memory access perspective. So the safety is great. I’ll definitely take it. It’s nice. But it’s not as big of a win. But I think for most people it’s actually the bigger win.
For me, actually, the safety that we don’t talk about as much with Rust is integer safety and overflow safety. So Rust is persnickety about overflow, which is actually great. It’s one of these things where the Rust compiler will be giving you a hard time about something, and you’re like, “Oh, come on, Rust, just lighten up already.” And then you look at it, and you’re like, “Actually, there is actual potential overflow here, so, okay. Thank you, Rust.” And there are lots of [inaudible 00:52:59] around sign-extension safety, around overflow safety.
I can write memory-safe C. I say that with pretty high confidence, although it’s much easier in Rust, for sure. And I would prefer Rust, because writing memory-safe C does take cognitive load. Writing integer-safe C is actually really hard. And kind of the worst bugs that I have had in my production code have been because of overflow that can then be exploited. So the integer unsafety then connects to the memory unsafety, in that the malicious code will induce integer overflow that will then allow a guard to be snuck past. And then you leverage the memory unsafety to either corrupt memory or utilize a gadget, or what have you. And now you can exploit it.
And with Rust, it’s the integer safety plus the memory safety that yields those more secure artifacts. Security is something we haven’t spoken about here at all, but it is another huge factor in Rust. And for any internet-facing code, I would absolutely write it in Rust first, because it makes it so much harder to generate some of these common pathologies.
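As a hedged illustration of the integer-safety point (not code from the episode): the silent wraparound that sneaks a guard past in C becomes either a panic or an explicit decision in Rust.

```rust
fn main() {
    let len: u32 = u32::MAX;

    // In C, `len + 1` silently wraps to 0, which is exactly the kind of
    // overflow that lets a length check be snuck past. In Rust, the same
    // addition panics in a debug build, and you can be explicit about intent:
    match len.checked_add(1) {
        Some(n) => println!("new length: {}", n),
        None => println!("overflow detected, refusing to proceed"),
    }

    // Wrapping is still available, but only when you ask for it by name.
    assert_eq!(len.wrapping_add(1), 0);
}
```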
On Null vs ADTs
Adam: I think that we are talking about programming language values, and there are trade-offs between all of them. But I think sometimes there are things that, once they get hit on and take off, become table stakes for future languages. So I think that null, we should just get rid of null. We’ve had a couple of languages that don’t have null. No new language should have null. I’m calling it.
Bryan: Yeah, I think you’re probably right. And if you want that, C is always going to be your answer. To be clear, there’s still going to be a place for C in the universe. And C having a sentinel value that denotes unmapped memory… I know it’s been called the billion-dollar mistake or whatever. I don’t quite buy it, because you have to have some way of indicating that this points to nothing, that this points to void, effectively. And if we weren’t dying on null pointers, we’d be dying because we’re referencing voids. So to me, it’s kind of six of one, half a dozen of the other.
Adam: Can we just use, like, a sum type? We have to…
Bryan: Totally. Yeah, absolutely. And I think that an algebraic type obviously solves that. And I do feel that, yes, for new languages, we need to be done with sentinel values. There is no reason to have a sentinel value; that should be an algebraic type. I’m totally with you.
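A minimal sketch of the algebraic-type-instead-of-sentinel point (hypothetical names): absence is an explicit variant the compiler makes you handle, not a null pointer or a magic value.

```rust
// Hypothetical example: look up a user by id without any null or sentinel.
fn find_user(id: u32) -> Option<String> {
    if id == 42 {
        Some(String::from("bryan"))
    } else {
        None // an explicit variant, not a null pointer or a magic -1
    }
}

fn main() {
    match find_user(7) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"), // the compiler won't let you forget this arm
    }
}
```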
Adam: And then it’s interesting to think, what else could become new table stakes, right? So this borrow checker, will there be other languages that take this approach?
Oberon, Simula, and OS/370 GTF
Bryan: Yes. Absolutely. No question in my mind. And to me, as not a PL person, the whole ownership model, I think, is relatively novel with Rust. Because for most languages, they develop something that is kind of putatively novel, and then there’s some PL person saying, “Oh, no, no, no. That was done ages ago. [inaudible 00:56:44] did that, or Oberon did that, or Modula did that.” Some unverifiable claim… And actually, working on DTrace back in the day, DTrace actually did advance the state of the art in terms of dynamic instrumentation of systems. And knowing that I was going to get some grief from mainframers, I educated myself to a great degree about the tracing facilities that existed on effectively every system I could get my hands on.
And indeed some folks were like, “Well, DTrace is interesting, but actually I had this facility on OS/370.” I’m like, “Are we talking about GTF? Because if we’re talking about GTF, let’s go. It’s on.” GTF is the Generalized Trace Facility. It’s a tracing facility on the mainframe, but it is not what DTrace does. It’s not that [inaudible 00:57:40]. And if you want to throw down over GTF, let’s roll. And inevitably the claims would disappear. There’s this kind of tendency to ascribe it: well, Multics did this, or Citywide did this, or what have you. With the ownership model, to me, it does seem pretty novel. Steve Klabnik informs me that it actually does trace its roots back to Clean, which is a language, apparently.
So there’s a language called Clean. It’s been around since the ’80s. So maybe Clean is the pioneer of affine types, or the ownership model. But certainly it has not been used in a broadly used language. In that regard, Rust absolutely represents a step forward in the state of the art. And there will absolutely be, and there should be, Rust-derived languages, or Rust-inspired languages, or languages that have that… One of the things I thought is, “Oh my God, can we please get rid of Bash?” Bash is humanity’s dirtiest secret right now, as far as I’m concerned: the amount of load-bearing software that we have written in Bash, in part because no programming language… Well, Bash isn’t a programming language. Bash makes it very, very easy to string together the output of different Unix commands.
Adam: It’s approachability, again-
Bryan: It’s totally approachable.
Adam: Approachability wins, but in a horrible way,
Bryan: A horrible way, a horrible way. And God, there is so much lethal Bash out there. I mean, you can almost pull up any Bash script and find subtle bugs in it. Can we please get a Rust ethos coupled with Bash approachability, in some new language that is wholly designed around executing other programs, stringing their output together rigorously, and then handling those failures rigorously? It just feels to me like there’s a real place for that. But that could just be me.
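As a rough sketch of what that might look like (a hypothetical example using only Rust's standard library, not a real Bash replacement): roughly ls -l | wc -l, but with every failure surfaced instead of silently ignored.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run `ls -l` and check that it actually succeeded, rather than
    // continuing the way a bare Bash pipeline would.
    let ls = Command::new("ls").arg("-l").output()?;
    if !ls.status.success() {
        return Err(format!("ls failed with {}", ls.status).into());
    }

    // Pipe that output into `wc -l`, again checking each step.
    let mut wc = Command::new("wc")
        .arg("-l")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;

    wc.stdin
        .as_mut()
        .expect("stdin was requested as piped")
        .write_all(&ls.stdout)?;
    let out = wc.wait_with_output()?;
    if !out.status.success() {
        return Err(format!("wc failed with {}", out.status).into());
    }

    print!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}
```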
Adam: Excel. I think that there is a large amount of the world that runs on Excel that nobody talks about. That’s just another side of it, like VB scripts. And I’m sure that there are large hedge funds that are just some Excel workbook with giant formulas-
Bryan: Absolutely.
Adam: … millions of dollars trading per second, tied to some Excel.
Bryan: How many people’s payroll depends on Excel in some way, shape, or form? I’m sure it’s very load-bearing, perhaps less load-bearing than it was historically, but yeah, that’s another example where we could really use a lot more rigor. And I think that we’re going to see Rust-inspired languages. The big statement that Rust has made is, “Hey, you don’t have to choose between some of these things,” especially as the ownership model gets fleshed out from the perspective of implementation.
And especially as people then wrap their heads around it cognitively. Part of the reason that Rust is a bigger cognitive lift is because it is the first language to really use this, to really have this. And you do have to kind of wrap your brain around it, but once you do, it opens up new vistas. So I think there are going to be other languages that adopt a similar model. And as you say, no more null pointers would be great.
Random Questions: Uber, Amazon, Oracle, GPL, Microsoft
Adam: Preparing for this interview I went onto YouTube and I watched a whole bunch of your talks.
Bryan: That can be dangerous.
Adam: I know it can totally be dangerous. So I’m just going to throw out some questions that have nothing to do with what we’re talking about, see how it goes.
Bryan: Sure, absolutely.
Adam: So should I invest in the stock market in either Oracle or Uber? What would be your preference?
Bryan: Oh boy. I have been saying for a long time that Uber is going to be the poster child of the coming bust. And I think we’re seeing this with Uber: the business model there has zero barrier to entry. There is not much network effect. There is perfect rider competition and perfect driver competition. So I think Uber has really disrupted livery, obviously, as has Lyft, but it’s not clear if they themselves can endure being disrupted by the next wave. And Uber is doing a lot of crazy things that have got nothing to do with that kind of core business.
I will not be an investor in the Uber IPO, suffice it to say. Now, I will say, and I’ve learned this about myself many times over, I am often right on trajectory and often wrong on timing. So for all I know, that Uber IPO will be some barn burner. And I thought that Bitcoin was unsafe at any speed in 2009. If only I could have been like, “Okay, look, fine. Bitcoin is not [inaudible 01:03:05]. Why don’t you put a hundred bucks into Bitcoin instead?” in 2009, which is when I first heard about it.
If I had bought about $100 of Bitcoin in 2009… But the thing is, I would’ve sold it when it was worth $200. I would be fooling myself to say that I would’ve held on until exactly the peak, because I don’t think cryptocurrency makes sense as a means of exchange or as a store of value. And then you were actually asking about Oracle, and clearly Oracle is… My opinions of Oracle, I guess, are well known. But I think that they are in a very dated model, and I would say there are a lot of headwinds for Oracle. I’m going to be an investor in neither Uber nor Oracle nor cryptocurrency. I’m not sure what that leaves, I guess.
Adam: I guess Amazon.
Bryan: That leaves Amazon. Oh my God, they are so dominant. It almost takes your breath away. At re:Invent, they’re still executing with such drive and focus, it is like someone is chasing them. And yet I think they’re just putting more and more distance between them and the other infrastructure providers. And I say this speaking as an infrastructure provider that’s putatively competing with Amazon. They are a tough company to compete against. It’s pretty stunning, and I don’t know what the future holds in that regard.
I’ve been saying, and I believe, heart of hearts, I still do believe this, that we are not going to be renting our compute from Jeff Bezos, that not all of us are going to rent all of our compute. But with every re:Invent I doubt that a little bit. Like, “Maybe, you know what? Screw it. We are going to rent our compute from Jeff Bezos. You know what, give it up. Compute is going to be reprioritized. You’re not going to be able to buy your own microprocessor. The only person who’s going to be able to buy DRAM is actually Jeff Bezos, for the hive cloud, which is what you’re going to run everything on, and we’ll just all give up.”
Adam: Have you read The Everything Store book?
Bryan: I haven’t, have you read it?
Adam: Yeah. It’s super good. But definitely he’s not to be messed with. You got the perspective and he’s like a true poker player that will like crush another company, just-
Bryan: He is the ultra apex predator of capitalism. He is a super-predator. And in fact, the only thing that gives me true hope and solace for the future is that he is of such a voracious appetite that there is no capitalist enterprise that he’s going to view as off limits. And as he begins to compete with all of humanity, I think that there’s going to be some sort of backlash at some point, because it’s stunning; the ambition seems to know no end. But it’s ambition that is… Unlike Elon Musk’s ambition, and Elon said that there’s a 70% chance he’s going to die on Mars, which is a very kind of strange way of phrasing it, the ambition from Bezos seems to be backed by incredible execution.
Adam: Yeah. Also I think they just announced an AWS blockchain thing at re:Invent. The crypto-blockchain one doesn't make sense to me, because it's distributed, but only within AWS data centers. I'm not clear on it. It sounds made up, it sounds April Foolsy, but-
Bryan: It does sound April Foolsy. God, that would be a great re:Invent, wouldn't it? Where they announce all this stuff, and at the end they're like, "You know what, God, we were actually fucking with you that whole time. I can't believe you guys bought all that stuff. Like AWS Outposts? We're not going to let you run that. And the blockchain stuff, you guys ate that one up. There was no detail there." Maybe they'll prank us. Certainly we would all fall for it.
Adam: I’m just going to hit you up with random tech questions now.
Bryan: You bet.
Adam: Should we be trusting Microsoft now that it's doing Open Source and all this great stuff?
Bryan: Yeah. It is a new and different, I would almost say a kinder, gentler Microsoft. It is shocking, especially given where Microsoft was in the '90s, where it was not just hegemonic but, I think, so oppressive with respect to other ways of thinking. So proprietary, so devious in so many ways. The findings of fact from the antitrust case really merit a reread. There are so many underhanded techniques that Microsoft was engaged in. And yet here we are, and they really have changed, I think pretty fundamentally.
They obviously have the cash to go do some really interesting things, and they're kind of making all the right moves. When Satya became the CEO, what, three years ago? I jokingly said, "Oh yeah, here's what he needs to do. He needs to Open Source .NET. He needs to Open Source Windows, and he needs to buy GitHub." And that was a joke. That was a joke. I really want to emphasize that the buying of GitHub especially was meant for humor value. I didn't think that they would actually do it. And people I know would be like, "Do you think they would actually buy GitHub?" And I'm like, "I think they should." And wow, they did.
So Microsoft, I think, is kind of getting back to its… Microsoft at root is not a monopolist. Microsoft at root is a developer tools company. That is where Gates famously wrote the BASIC interpreter on the plane, or however it happened. Ultimately, I think that's kind of their base DNA. They understand the way the developer wants to develop software. And then they later incarcerated themselves in this Windows monopoly. And Windows was kind of crappy, because they're not an OS company. That's not who they are or what they're about. And so Windows was always kind of not very good.
Certainly before they pulled in the DEC VMS folks, it was terrible. And then Windows NT was better, obviously. It became better as time went on and is probably fine now. But it's very interesting to see them really embrace Unix, and not in their classic embrace, extend, extinguish, but actually truly embrace it. It's a different company. I don't know. I would still stop short of really embracing any Microsoft technology myself, but that's really because of my own bigotry and my own resentment over Bill Gates having robbed me of my childhood by forcing me to use DOS when Unix was actually available.
Adam: I’m not sure that’s his fault. Is it?
Bryan: I blame him personally… Look, I think it's great that we're curing tuberculosis and, what, opening up schools. And I honor the philanthropy, but let us not forget that underneath that there actually is the Bill Gates who forced us all to run DOS and wrote very snippy letters to anyone copying his BASIC interpreter. But actually I'm being unfair, because Microsoft really is, I think, changing into a wholly different company, and one that is much better positioned for the future. Satya has just done an incredible job, and I think business schools will read about Satya's work at Microsoft as one of the great turnarounds. A great cultural turnaround, really. Very impressive.
Adam: I think that's all my random questions. [crosstalk 01:11:19] The GPL, good or bad?
Bryan: It's interesting. We are in a new era for Open Source, in that we are so firmly in the Open Source era that there's now a desire among those who created these Open Source artifacts to re-proprietarize them. Because the thing about the GPL is that it doesn't say anything about taking the software and running it as a proprietary service, which is what Amazon is doing. And so people tried to address this with the AGPL, which is really not good news. And in general, they have tried to do this by asserting rights that the copyright holder doesn't generally have. If I'm going to make something Open Source, I really don't get to tell you how to run it. You get to run it however you want to run it.
And if you want to run it and charge people to use the service that results from it, that's going to be what it is. I don't really have a way of extending you the right to use this without the right to resell it; that is, I think, asserting rights the copyright holder doesn't have. And I think the big issue right now is not the GPL versus BSD versus the Apache license. Fortunately, I think the Apache license is winning out; it's a better license than the GPL. I'm a big fan of the MPL as well. But I think the real heat now is moving to this kind of nutty trend around the Commons Clause and this common-source idea, where there's some open core effectively that everyone can use, but there are proprietary bits such that if you want to resell this as a service, you're going to have to pay me.
And I think that it's been misnamed, first of all, but I also think this is a trend that is not going to blossom into something larger, because it's so antithetical to Open Source. I think Open Source is here to stay. And that means that yes, Amazon is going to be able to make that software into a proprietary service. And if you don't want Amazon to do that, you shouldn't Open Source it. I don't think there's going to be a middle ground there.
Adam: Because it seems like a good idea, especially when you think about Amazon, right? When you're like, "Hey, we built this, so we should be the ones who decide who can charge to run it." I don't know.
Bryan: It does. I guess it feels right, but that's actually perilous. And this is what the Commons Clause is trying to do. It feels like you should be able to say that, but if you can say that, then what's to prevent you from saying, "Hey, okay, so this can be used only in the US; you can't use this in Canada, because I've decided that I've got something against Canadians. Or this can be used in this industry, but it can't be used in that industry"? So I'm a bank and I'm going to Open Source this software, but you can't use it for financial services.
It's like, you kind of can't do that. There's a very good chance that you just flat out can't do that, because that's like saying, "Here's this book, and you can buy this book, but you can't read it on a train. You can only read it on a plane." It's like, "Well, it's actually my book. And when I buy the book, I get to read it on a train or on a plane, and on my bookshelf I can put it next to an author that you disagree with, or not." What I can't do is… I'm limited in terms of what I can do with a derived work from that book. But if you've given me a license that tells me I can make a derived work from it, I don't understand how you're going to limit how that derived work can be used. So I think it's going to be a real challenge.
Adam: It’s tricky because like, in the cloud world, everything runs in AWS. So it seems like we should be heading towards more and more proprietary software just offered by the cloud providers, right?
Bryan: Right. And that is the world that I don't think we want to go towards. I think it's going to be very interesting to see how all this shakes out, because I don't think we want to be in a re-proprietarized world.
Adam: [crosstalk 01:16:16] I think it's happening now, isn't it?
Bryan: It's happening to a degree. It's hard to know how much it's happening, because AWS gives us no real insight into what's making what, so we don't know how high-margin some of these services are. I don't think that there are services based on Open Source software at AWS that are throwing off Oracle-like margins. That could be wrong. I don't know, maybe that's wrong. But if those services do exist, there's clearly not much barrier to entry for another cloud provider to do that, or for someone else on AWS to do that.
So I've got to believe that the economics will ultimately keep everything in check. And I also think that people don't want to have vendor lock-in. One of the things we certainly see is that there was a time when people were building on every AWS service they could find. And now they are restricting themselves to the core infrastructure services, because they actually do want to be future-proofed and do want the ability to move to a different cloud. So yes, S3, and yes, EC2, and EBS, what have you, and ELB, but not trying to build something that depends on SQS, or something that depends on Kinesis, or something that depends on Redshift, or some of these other services, or their… What is their blockchain thing even called?
Adam: It’s called Blockchain on AWS, which is not a very distinctive name.
Bryan: God, this is turning into Microsoft from the ’90s.
Adam: That was the interview. I feel like I could have had interesting conversations with Bryan for the whole afternoon, so hopefully you didn't mind that this episode is a bit long. Usually I don't ask people about random tech news, but let me know what you think. Too long, not long enough? Let me know. I'd like to thank the many people who recommended the show or the last episode, The Little Typer, on Twitter, on Reddit, or wherever else. Special shout-out to Rich Seymour on Twitter, and Cryo and Nifri on Reddit. There's also some great discussion about the book happening on the Slack channel. The channel's brand new. There are multiple tens of us on there at this point.
Hey, [inaudible 01:18:41], Bloodworst, and John are the most active members at the moment, but it's been great chatting with everybody who's on there. Until next time.