CORECURSIVE #018

Domain Driven Design and Micro Services

With Vaughn Vernon

Today I talk to Vaughn Vernon about how Domain Driven Design can help with designing micro services.  The guidelines that Vaughn has developed in his work on DDD can provide guidance for where service and consistency boundaries should be drawn.  We also talk about the platform he is developing for applying these DDD concepts using the actor model, Vlingo.

Transcript

Note: This podcast is designed to be heard. If you are able, we strongly encourage you to listen to the audio, which includes emphasis that’s not on the page

Introduction

Adam: Welcome to CoRecursive, where we bring you discussions with thought leaders in the world of software development. I am Adam, your host.

Vaughn: Ask a domain expert, “When do these two pieces of data need to be consistent?” And if you ask enough questions and show them that immediately is not such a great idea in this particular case, you might find that seconds or minutes or even hours of tolerance between consistency is actually possible. So, that’s where the complexity and the diligence come in with trying to model these concepts, because figuring out those boundaries sometimes can be a challenge.

Adam: If you are a talented Scala developer or a talented developer in general, my group at Tenable is hiring. We are looking for a Principal Software Engineer for our web app scanning product. Tenable is a great place to work. This is a distributed team, so you could work from your home office or at one of Tenable’s many office locations. I will put a link in the show notes to the job or you could email me at adam@corecursive.com.

Today’s guest is Vaughn Vernon. He is most well-known for his books on domain-driven design. Today, we talk about how domain-driven design is a great tool for finding service and consistency boundaries. Enjoy.

Vaughn, welcome to the podcast.

Vaughn: Thank you. Thanks for inviting me.

Adam: So, funny story of how I decided to invite you to come on here. I was on my treadmill, doing some walking, and I have my laptop mounted up on there. I’m watching some YouTube tech talks and trying to get up to speed on microservice architectures. And I saw this video you gave. It’s a couple of years old, I think, in fact. But you were talking about domain-driven design and how it related to microservices. And I had never really heard these two terms used together. At one point, domain-driven design was something I was pretty excited about, and then I kind of forgot about it, and here you are bringing these concepts back to mind for me. Kind of exciting. So, what does domain-driven design have to teach us about building microservices?

How does DDD Relate to Micro Services?

Vaughn: Well, I just have to say, probably as you’ve kind of alluded to here, microservices has a very sort of loose and not well-defined meaning among the software development community. So, in an effort to try to put some real meaning around it, my efforts have been to talk about the fact that DDD bounded contexts are relatively small in size and that this is a good starting place for microservices. And if you’ve read Sam Newman’s book on Building Microservices, he starts from the perspective of a microservice as a DDD bounded context, and he directs you to Eric Evans’ seminal work on DDD, but also references my red book as, I don’t remember exactly his words, but something like a practical example of how microservices can actually be implemented with domain-driven design. So it aligned with my ideas on it, although there are still various definitions of what a microservice is. So, I can go into more detail on that if you want, but I don’t want to ramble too much.

What is a Bounded Context?

Adam: So, what is a bounded context, and why does it make a good place to draw these lines between services?

Vaughn: Yeah. A bounded context is an area of a system where a specific domain model has specific meaning. One way to describe that is around the language, or linguistic drivers. DDD promotes what is called a ubiquitous language, and a ubiquitous language is developed within a context boundary. So, any modeling elements that you have inside that boundary have a very specific meaning. And one good way to describe that is to talk about the word “product.” What is a product? Even if you work in an area of a business where you think that product is well-defined, you’ll probably have to admit that product in reality has several different meanings, even if they’re referring to the exact same digital item or physical item.

Say it’s a watch that we want to wear on our wrist or software that we want to download, that product has a meaning in a product catalog, and has a different meaning in an inventory system. It has a different meaning in the shopping cart, it has a different meaning in an order management system, and even though all those subsystems of a whole system are related to each other as that singular product is dealt with in different situations, those are different contexts. And so, a shopping cart context and a product catalog context have different definitions for describing that very same item as in a watch that you want to wear on your wrist.

One is concerned with price; one is more concerned with just, “Do we have it on the shelf?” Right? It doesn’t necessarily matter what the price is or what discounts were offered in purchasing it, we just need to take it off of the shelf and ship it. So, as you move around that whole system that’s working together, the word product changes, and DDD gives those contextual differences a boundary to live within. And so, in those four or five different situations that I described, there would be four or five different contextual boundaries, or bounded contexts.
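As a rough sketch of what Vaughn is describing, here is the same physical item modeled twice, once per bounded context. All class and field names here are illustrative assumptions, not anything from the episode:

```python
from dataclasses import dataclass

# Product Catalog context: cares about presentation and price.
@dataclass
class CatalogProduct:
    product_id: str
    name: str
    description: str
    price_cents: int

# Inventory context: the very same watch, but only what picking and
# shipping need to know; price and marketing copy are irrelevant here.
@dataclass
class InventoryItem:
    product_id: str        # shared identity is what links the two contexts
    color: str
    quantity_on_shelf: int

catalog = CatalogProduct("W-1", "Apple Watch", "Silver, 44mm", 42900)
stock = InventoryItem("W-1", "silver", 12)
```

Each context owns its own definition of the product; the only thing the two agree on is the identity.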

Adam: In the inventory system, in the inventory bounded context, which we’re saying could be its own microservice, it would just know the counts. It wouldn’t know about pricing, is that the idea?

Vaughn: Well, it might have some sort of a description, but it may be blue or silver or gold or whatever it is for the watchband or the parts of the watch. It may be a description like that because when a person who picks the item off the shelf picks it, they may be picking it by color, because possibly the identifying number on it may not be clear as to color or it may be. It could be that the identification number on it is about color, but it may have a description as to color and a general description, such as an Apple watch or something like that. So you can see and say, “Okay, this is an Apple Watch. This is a black Apple watch or a silver Apple Watch, and this is the one that I need to pick.”

And sometimes, the SKU number has that identification, the clear full identification, and sometimes it doesn’t. So, you may have a SKU and you may have additional description, but we’re not really worried about the SKU so much in a product catalog. We probably know what each of the SKUs are, but we don’t necessarily present that to the user because they may not be interested in it, or it could be that the SKU itself doesn’t even reach the product catalog. It’s a completely different unique ID that has some sort of, basically, foreign key relationship to some other completely disconnected part of the system that allows you to look up the SKU for that. It really depends on how the system is modeled. Right?

Adam: Why is that a powerful way to split these things?

Splitting At The Bounded Context

Vaughn: Yeah, because if you don’t split them, think of the alternative, which is to put them all together. So, we have one canonical model element that lives in one place in the whole system, and it tries to define everything about a product that everyone will possibly want. And sometimes, those different properties of the same item can work in conflict with each other. Right? But even if they don’t work in conflict or aren’t completely misleading or whatever, in many cases, they’re irrelevant. And so, you end up creating a model that goes by the description of one size fits none, or one size fits one or maybe a few, but the other 10 not so much.

And so, this way you have a very specific, very explicit way of looking at a particular model concept in a specific context and it has a very definite meaning and it works just for that one area. And being explicit is so important because when I say the word product, what does that actually mean? If I say the word account, what does that mean to you? Probably your mind first draws the conclusion that I’m talking about a bank account and that’s confirmation bias or something. And so, you’ve already concluded that I’ve described or I’m talking about a bank account. But way over on the other side of the universe of that definition, what if I’m talking about a literary account, right?

And that really throws you for a loop, because probably most people would never think or they would have to search their mind for quite a bit before they would ever think of the word account being used in a literary context. And yet, a literary account is a very important concept if you’re in a bookstore or a library or something like that, like Shackleton’s Adventures to Antarctica concept like that. So, yeah, I think it’s quite important. But when I named the context as in literary context, and I say the word account in a literary context, I’ve got a pretty good idea of what I’m talking about.

Whereas in another context, such as if we’re in a library, an account holder at the library these days has a library card or something that they use to check out a book. What is that literary account according to that individual’s library account? It’s just a book, right? It has a title. It has, what is it, a Dewey Decimal number or something to identify the book. But it’s just a book on that account. And yet, when I go look it up in some particular context, it’s going to say, “Well, it’s in the literary account section of the library, or the adventures section,” or something like that.

Adam: By cutting, by being specific in the context, these things have a lot more meaning. So, in our inventory system, we only need to know about this. Keeping track of the books and then at the point of sale or whatever, then an account is a whole different thing.

Vaughn: Mm-hmm (affirmative).

Adam: So, how do these systems communicate with each other, though? I mean, the point of sale system needs to know about the inventory, for instance.

Service Communication

Vaughn: Yeah, well, I mean, that depends. If you’re talking about maybe the New York City Library System or the Tokyo one or whatever, I’m just giving examples of the potential for a huge library inventory and operation going on, it’s probably a distributed system. But if you go to the [inaudible 00:13:11] Idaho Library System, it might be an old dBase system working on a single FoxPro or whatever, working in an isolated workstation, or a few workstations against a server or something like that. But the point is, it depends on the scalability needs and the number of users that it has to service. It could be that it’s written as what we might refer to these days as a monolith, where the whole library system resides in a single sort of executable that gets deployed, or whatever, run on a single server or something like that.

And that’s the pretty simplistic viewpoint, but in a very large system, you can imagine someone who decides, well, we’re going to write the library operation software to beat all and every library in the whole world is going to use our Cloud-based, SaaS library systems. Well, I mean, there’s a lot of replication and distribution going on in that situation. And so, you would be using probably messaging to communicate occurrences from one context to another, whereas in the smaller system, you might just be talking in a loosely coupled, maybe even multithreaded situation, but even not necessarily so. But just in a way that’s loosely coupled to keep the components across contexts separated from each other. But still it could function in the same thread in a decoupled way, so that you kind of have those extremes, too.

Adam: So, loosely coupled how?

Vaughn: Through well-defined interfaces. Look up a service. You can think of it like a directory service, or a service directory, or whatever you want to call it, in the same fashion as web services or something like that. And you just have a name and say, “Give me this service.” Now, you have an interface, but you know nothing about the actual implementation, and you invoke some methods on the interface, and you get something back. And to you, as a developer, you sort of don’t really need to understand, is this just a direct method invocation in my same VM, or is this traveling over a network, or is it, you know.
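A minimal sketch of that kind of lookup, with made-up names (nothing here is a real service registry): the caller asks a directory for a service by name and gets back an interface, never learning whether the implementation is in-process or a proxy over the network:

```python
from abc import ABC, abstractmethod

class InventoryService(ABC):
    """The well-defined interface that callers depend on."""
    @abstractmethod
    def on_hand(self, sku: str) -> int: ...

class LocalInventoryService(InventoryService):
    """One possible implementation. A remote proxy could satisfy the
    same interface and the caller could not tell the difference."""
    def __init__(self, counts):
        self._counts = counts
    def on_hand(self, sku: str) -> int:
        return self._counts.get(sku, 0)

_directory = {}

def register(name: str, service: InventoryService) -> None:
    _directory[name] = service

def lookup(name: str) -> InventoryService:
    return _directory[name]

register("inventory", LocalInventoryService({"W-1": 12}))
svc = lookup("inventory")   # could be local or remote; the caller doesn't care
```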

Scaling and Coupling

Adam: I get it. So, it sounds like that to you, the bounded context is very important, and whether those actually become standalone things is more a concern of scaling. Like, they should always be loosely coupled, but whether they’re actually deployed as separate services is a question of how big things are?

Vaughn: Yeah. I mean, how far they have to scale, how broadly it’s used. I mean, potentially, I guess you could deploy this sort of very small monolith into AWS. And wow, we have a Cloud-based system, but in reality, if it runs against one MySQL database or something and it’s all just in one small container, I mean, maybe that has some benefits, but that’s not really the way the Cloud is meant to be used, right? So, you’d naturally think if you’re deploying to AWS, well, you’ve probably got containers, well, potentially all over the place, even around the globe, but probably at least in different data centers, different regions.

And so, they’re talking to each other in that way, too, but they do so through messaging, or sometimes through REST or SOAP or something like that. It’s not like DDD actually tells you which of those mechanisms you should use, but in some cases, like if you’re using domain events, those sort of naturally fit into a messaging environment. But on the other hand, you can always deliver messages through Atom logs of groups of events that have occurred, and you can consume them through hyperlinks. And say, “Okay, give me the current log. Have I seen this log yet? And if not, we can actually navigate back through previous logs until we find the event that we’ve already consumed, and then consume everything that’s available from there.”

Adam: So, what’s an Atom log? Is that what you said?

Vaughn: Yeah. Well, like the Atom feeds on the web. If I have a blog and people want to subscribe to my blog, they can read it as an Atom feed. So, yeah, we can design domain event logs in much the same way. And so, how do you get all the blog posts for the past year that someone has written? Well, if they blog regularly once or twice a week, maybe it’s a feed per month or a feed per week, so that could be 52 feeds, let’s say. And so, to get a handle to each one of those blog posts, you have to read all 52 of those, but you can navigate them through hyperlinks.
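The feed-walking idea can be sketched like this, with a plain dictionary standing in for HTTP resources linked by “previous” hyperlinks; the page names and event IDs are invented for illustration:

```python
# Each "page" of the event log links back to the previous one, like an
# archived Atom feed. A consumer walks back until it finds the last event
# it already processed, then consumes everything newer, in order.
feeds = {
    "current": {"events": ["e5", "e6"], "prev": "page-2"},
    "page-2":  {"events": ["e3", "e4"], "prev": "page-1"},
    "page-1":  {"events": ["e1", "e2"], "prev": None},
}

def unseen_events(feeds, last_seen):
    pages, name = [], "current"
    while name is not None:
        page = feeds[name]
        pages.append(page["events"])
        if last_seen in page["events"]:
            break                      # found our place; stop walking back
        name = page["prev"]            # follow the link to the older feed
    events = [e for page in reversed(pages) for e in page]
    if last_seen in events:
        return events[events.index(last_seen) + 1:]
    return events                      # never consumed anything: take it all
```

A consumer that last saw "e4" would get back `["e5", "e6"]`.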

Adam: Yeah, that makes sense. It sounds a little bit like hand-rolled Kafka, you’re just keeping track of where you are in this array of log files or Atom feeds.

Vaughn: Yeah, yeah, yeah. My guess is that Kafka was born out of those concepts. I wonder, actually. I think my book was one of the first to present that idea. I wonder, was my book an influence on how Kafka was implemented? I know it’s had a big influence in other areas, just that concept and how it’s-

Adam: Definitely. I know, I tried, like 10 years ago, I was working on a team. We tried to build kind of an event sourcing system, and there was no Kafka at that time. I mean, I guess we did, I think we serialized objects to a table. And it was actually super complicated, but yeah, I think these concepts have been around for a long time.

It’s funny you mentioned dBase. My first job when I finished university, I had to help maintain a dBase system while building a better replacement. But I think that domain-driven design tends to focus, to my understanding, on business requirements. And there’s no business requirement that says that you need to have this running in a container on AWS. I mean, dBase was ugly, but for all its problems, you could whip up something pretty quickly, right, like-

dBase, FoxPro and R:Base

Vaughn: Yeah. And I was just, again, naming some extremes. But actually, in a business back in the ’80s, a software product house that I started, we used R:Base instead of dBase. I think R:Base disappeared before dBase did. It showed up after and disappeared before, but yeah, similar ideas. And then I think FoxPro was pretty similar to that, at least the earlier versions.

But, yeah, I think the best way to describe DDD is: you’ve got a really hard problem to solve, and you want to make certain that you solve it with explicit concepts. And you also want to isolate it from the rest of the system that may not be so well-defined, right? So, explicitly defined. And to some extent, maybe, you could actually even use a dBase or R:Base or FoxPro system for data that you need to consume. But the reason that you’re consuming that data is to do some very heavy lifting with it, whatever that happens to be. Some sort of processing, or data crunching, or calculations that are really complex. I mean, you can imagine that some professor or scientist in some lab way back in the inner sanctum of some government agency or university system or something like that.

What is DDD?

It’s collecting all this scientific data on something. Maybe it’s on galaxies or stars or whatever it is. And now she wants to do something extremely complex with this data that’s been collected over five or 10 years and figure something out. Well, I mean, okay, so this individual collected it in FoxPro, because that’s all they knew how to use, but it doesn’t mean that you’re going to use FoxPro queries to do the heavy lifting of this. You could very easily justify creating a domain-driven design bounded context for how you’re going to crunch this data and create some very complex solutions around it.

Adam: Somewhere in the bowels of it is just one of those DBF files that contains all her data, yeah.

Vaughn: There you go, yeah. But maybe DDD decides, “Well, let’s not put the results back into FoxPro. Let’s put it into something that can be consumed globally. Maybe this is research that’s being done that’s going to benefit scientists all around the world. And so, we’ll put it on DynamoDB or something on Amazon, and we’ll have it available in five or 10 or whatever different availability zones.”

Adam: I don’t recall, I mean, it’s been a while. I apologize, my DDD knowledge is kind of rough, but it doesn’t really talk about like performance and scaling as a concern. Is that intentional or?

Vaughn: Well, I haven’t really… I think that my book addresses that, and some of the later work around DDD addresses that. I can’t say that I really know why that was the case earlier, but I think there was a pretty messy time around when J2EE showed up, or whatever it was originally, Enterprise JavaBeans and stuff. And I mean, I remember when it was difficult to get a contract or a job if you didn’t have EJB on your resume. And two or three years after that, they didn’t want EJB on your resume. So, there was a time when maybe you didn’t have so much control over that.

And I think that scale, maybe in the late ’90s, early 2000s, wasn’t as big of a deal until kind of the dot-com boom. And then right around that time, things were needing to scale a lot. But even think back to Amazon in the late ’90s, right? I mean, probably just selling 100 books in a day was a huge win for Jeff Bezos. And so, what was really needed to scale back then? Not that much. But then the internet comes along and wow, we’ve got all this potential for drawing people from all around the world to a single web property and selling them stuff or teaching them stuff or whatever, and it just kind of went off the charts from there. And that’s when scalability and performance became very important. But that was probably starting to happen around the time that Eric may have been finishing his book. It’s an interesting question, maybe you can ask him that.

But I don’t think that it was such a big, important thing then, although what’s interesting is that the scalability and performance come with just a few tweaks to the patterns, just some guidelines. Like, I talked about the four rules of thumb of aggregate design in my red book. That addresses a lot of the very things that Amazon ended up needing, and that Pat Helland wrote about in that paper, Life Beyond Distributed Transactions. And so, by just ensuring that you design your objects in a specific way without creating large graphs of connected objects, you stand a good chance of scaling and performing and doing a lot less garbage collection.

Adam: So, let’s talk about that. I think the rule that I’m familiar with has something to do with eventual consistency, so.

Aggregate Design And Eventual Consistency

Vaughn: Yeah. I think that’s sort of been labeled a rule, or at least that’s how it ended up in my book. So, the first rule of aggregate design is: use aggregates to protect true business invariants. There may be this temptation to model an aggregate for convenience. And so, we start thinking about this kind of large cluster, like, “Wow. If I have a direct object reference to this object or to this tree of objects, then I can just navigate through and I can do these things really easily.” And that is pretty much how you end up with very large graphs.

But if you read Eric’s aggregate guidance in his blue book, you’ll kind of see that the cluster of an aggregate design is a bit larger, not necessarily extremely large, but larger than what the current guidance might be. But again, he wasn’t necessarily even trying to teach whether this is going to scale or perform; he was just trying to demonstrate what the general pattern is for. And the general pattern is for protecting invariants. So, say there is some data related in this subsystem or bounded context that you’re modeling, and data item A changes, and data item B changes, and when those two change, data item C needs to change in a way that’s closely related to those other two changes, and the business has a rule that says, “Those need to be consistent constantly,” right? And those have to be persisted in a single transaction.

So, that behooves you to in some way or another cluster those objects together, whether A, B, and C are just attributes on a single class, which is really easy to do, or A, B, and C are each attributes on different objects, right? But when those change, they have to change together and be persisted together, and that is what the aggregate pattern is meant to do. That’s first and foremost the motivation behind the aggregate pattern. And then of course, there is the convenience of navigation. But when that convenience works against performance and scalability, well then, you need to drop that, because you have to scale and perform.
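A toy sketch of that first rule, with invented names: the aggregate’s method is the only place where A, B, and C change, so they always change together and can be persisted atomically:

```python
class Order:
    """Hypothetical aggregate root protecting one true invariant:
    total must always equal subtotal plus tax."""
    def __init__(self):
        self.subtotal = 0   # data item A
        self.tax = 0        # data item B
        self.total = 0      # data item C, kept consistent with A and B

    def add_line(self, amount_cents: int, tax_rate: float) -> None:
        # All related changes happen inside one aggregate operation,
        # so a repository can save the whole aggregate in one transaction.
        self.subtotal += amount_cents
        self.tax += round(amount_cents * tax_rate)
        self.total = self.subtotal + self.tax

order = Order()
order.add_line(1000, 0.10)
assert order.total == order.subtotal + order.tax   # invariant holds
```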

Adam: That means that you have to keep these things together. No matter how micro your microservices are, these things need to be grouped together. And I feel like maybe the problem where people say microservices are horrible, it’s because they’ve split these things.

Vaughn: Yeah. It could well be, yeah, that they really need, they feel this great need to employ distributed transactions to have everything consistent because yeah, they’ve just modeled their aggregates incorrectly.

Adam: Or it’s just not consistent, right? Like it’s, yeah.

Vaughn: It needs to be, but it’s not. But then on the other hand. I mean, it depends, but generally speaking, ask a domain expert, “When do these two pieces of data need to be consistent?” And if you ask enough questions around, they may say immediately because they’re used to hearing that immediately is possible. But when you show them that immediately is not such a great idea, in this particular case, you might find that seconds or minutes or even hours of tolerance between consistency is actually possible.

So, that’s a big difference from must be transactionally consistent. And there are definitely areas in a lot of systems where must be transactionally consistent is a very real requirement, whereas we can relax that to a lesser or greater extent across other pieces of data. And that’s where, potentially, those two concepts are separated by a bounded context; they’re in two different contexts. But it could even be that they’re in the same context, they just don’t need to be transactionally consistent. So, that’s where the complexity and the diligence come in with trying to model these concepts, because figuring out those boundaries sometimes can be a challenge.

But on the other hand, there are occasions where worrying about the aggregate boundary and trying to work around it becomes unnecessary. Because if it’s possible to persist two loosely related entities, or three, in a single transaction, right? If it doesn’t cause transactional failure because multiple users are trying to use one or more of these same entities simultaneously, it doesn’t matter. So, it’s not like DDD is trying to get you to use a certain set of patterns to prove your DDD-ness, right? “Oh, we’re using the aggregate pattern. Therefore, we’re DDD.” That isn’t the purpose of it at all. It solves a particular set of problems, and if you don’t have those problems, don’t use it.

Adam: Yeah. I was interested in it for the idea that it might provide some of these guardrails and guidance around how you might cut things up. And I think that is a great example of how this consistency within a grouping shows that those things need to be together. Well, what about when they don’t need to be consistent?

Relaxing Consistency

Vaughn: Right, then you can use a domain event as a, let’s say a sort of reified or wrapped message that says this happened and when the system or even the same subsystem, the same bounded context sees, “Oh, that happened. Now, I’m going to update or modify something in reaction to that.” And so, that is eventual consistency. So, we’ve just kind of jumped from Rule 1 to Rule 4 and in between that is Rule 2 and it’s simply design small aggregates, right?

What’s a small aggregate? Well, it’s not a big aggregate, but I mean, I know that’s a little tongue in cheek, but really, what it amounts to is, do not design your aggregates to be any bigger than is required by the business rules to keep data consistent, that must be consistent, transactionally consistent.

And then the third rule is: reference other aggregates not by object reference, but by identity reference, right? So now, we don’t build a graph, although we can navigate a graph through IDs if we need to. We are purposely breaking those connections so that our aggregates do stay small, right? Hopefully just one entity, even, if possible. And then that set of guardrails, one through four, is keeping you in a pretty safe zone.
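Rules 3 and 4 together might look like this sketch (the BacklogItem/Sprint pairing echoes examples from the red book, but the code itself is invented): the aggregate holds another aggregate’s ID rather than the object, and publishes a domain event that some handler reacts to eventually:

```python
from dataclasses import dataclass

@dataclass
class BacklogItemCommitted:
    """Domain event: a reified record that something happened."""
    backlog_item_id: str
    sprint_id: str

class BacklogItem:
    def __init__(self, item_id: str):
        self.item_id = item_id
        self.sprint_id = None    # identity reference, not a Sprint object

    def commit_to(self, sprint_id: str, publish) -> None:
        self.sprint_id = sprint_id                 # rule 3: store only the ID
        publish(BacklogItemCommitted(self.item_id, sprint_id))  # rule 4

# A subscriber, in this context or another, reacts whenever it receives
# the event; that delayed reaction is the eventual consistency.
published = []
item = BacklogItem("BI-7")
item.commit_to("S-1", published.append)
```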

Getting Too Small

Adam: So, do you think that microservices are taken to an extreme?

Vaughn: Yeah, I think so. And I’ve talked a bit about this. And I have not worked in some of the areas where some of the guidance about microservices comes from. And I will say that there are at least a few, if not several, individuals who say very outrightly that a microservice should be no more than 100 lines of code. And to me, that’s just, I mean, inside I roll my eyes, right? If not even physically. And I just say, “yeah, but,” one of those responses. Because then if my model… what’s that?

Adam: That sounds insane, 100 lines, like even in the most concise language, I mean, it seems-

Vaughn: Yeah. I mean, what does that actually mean? It probably means one entity, or maybe you don’t even think in terms of entities. Maybe it’s just, I don’t even know. But these are in environments where there could be tens of thousands of microservices. And I am not joking, literally to the extent where those developing them lose track of what is actually relevant anymore.

And so, now think about this for a second. What am I really describing when you have a microservices ecosystem that has so many microservices that you are afraid to unplug any of them because it could bring the system down or a portion of the system down? And you don’t even know that? What does that sound like to you?

Adam: It sounds like Skynet from Terminator.

Vaughn: Well, it’s a distributed monolith, right?

Adam: Yeah. It’s-

Vaughn: I mean, that is the situation we get into with a big ball of mud: we’re afraid to touch anything, because if we touch anything, it might explode. And so, the organizations that are developing these kinds of max-100-lines-of-code microservices are reaching this point where they’re really just reinventing the monolith that turns into a big ball of mud, where if I touch this, man, I could break something for months. So, instead of touching anything, we just decide, you know what? It only costs $400 or $500 a month to keep each service running, let’s just keep them running.

Now, how many poor nations could get fed by not doing that? I mean, just think about if this trend were to be global, totally prolific, and there were hundreds of thousands of organizations doing this. And they all didn’t understand their system well enough to touch any of those, and they just keep paying Amazon, or whoever it is, Microsoft or whoever, more and more, $400 or $500 a month for these tens of thousands of microservices per organization. Right? On a global scale, that is pretty scary.

Adam: Conceivably, you could use something like AWS Lambda or serverless, so that all of these things spun down and spun back up. But I think just the maintainability.

Vaughn: Yeah, but how do you know which lambda to unplug, right? It’s the same. Actually, when you think about it, a lambda is the 100-line microservice, or less even.

Adam: Yeah, it definitely fits that model where you have a function. And I mean, I think it could be great for like glue or just for massaging something. But to build an entire system at that level, I mean, I have a hard time picturing it, to be honest.

Vaughn: What it tells me is that we need a lot better monitoring and system sort of discovery tools where, “Okay, if we’re really going to do this, I need to know, has this microservice processed any data for the past seven days?” And if it hasn’t, can we prove why it hasn’t? Is it because it’s no longer receiving messages? Is that because it’s no longer used? It’s no longer relevant or is it going to be relevant again next month when a particular offer is run again?

Adam: Yeah. And it becomes hard to point at I guess, like business value and deliverables, like the service that you’re working on. It just, it feeds things from one other thing to one other thing, and we’re not even sure if those are used.

The Need for New Tools

Vaughn: Yeah. And so, what I’m saying is, it could well be that you get into a situation where that is really the only practical way or one of the very few ways that you can work in a system like this, like I mean, let’s just say I don’t know if this is the case, but let’s say Expedia or hotels.com or whoever is just throwing all these offers together on a daily basis. And “Escape winter, get out of the heat of summer,” whatever season it is, they’ve got umpteen different offers to make. And you just have to keep deploying these services to make all these deals work together and close them and get them paid for and then get them booked and everything.

And they're so different, or slightly different, from each other that you can't really reuse anything that's there, or very little of it. And so, it's just like, "Okay, let's keep knocking out these 100-line lambdas or whatever." But if you're going to do that, yeah, I mean, just think about even the ecological footprint of that.

Adam: It's an interesting use case. I mean, I guess you're saying it's an anti-use case. But yeah, like Expedia, because each service could be very small, but it does have its own very well-defined interface, right? It's like, all I do is figure out if it's time to offer the Easter special or something. So, it is contained in a way.

Vaughn: Yeah. So, I don't really know where those apply, but here's my guidance with DDD: okay, create a bounded context. If it has 10 or a dozen entity types in it, that's really pretty small, right? Prove that you need to scale any one of those entities off the charts. Let's say that 10 out of the 12 are pretty simple and they get used from time to time, or whatever it is. They perform fine in this single deployment that we have of a small bounded context.

But two of those entities or aggregates are just getting hammered constantly, and we even have to use a completely different storage mechanism for those, because while PostgreSQL works for the other 10, it's been proven that we have to use something like Cassandra or Dynamo, or whatever, to make these other two kinds of entities perform and scale. And so we can prove, "Okay, we need to break these two out," so we're going to take these two and create smaller microservices just for them. But they are still logically part of the same bounded context.

But I haven't overcomplicated things by automatically assuming that I need to break up all 12 of them into microservices. Because then, by the time I have 50 bounded contexts in a whole system solution, I've got upward of 700 of these small microservices to manage. When in fact, maybe I could have 60, right? There are 50 altogether, but these 10 have to scale differently, or their rate of change is different from the other parts of their bounded context, so it makes sense to break those out. I'd much rather have 60 microservices than 700.

But you're taking, I guess, the Missouri approach: show me. Prove it to me that I have to break this into a smaller microservice. And if you can't prove it to me, we're going to deploy these 12 entities, or seven entities or five entities or 15 entities, together, because as far as we know right now, they'll work just fine. And it's pretty small as it is, right? It's a relatively small VM that we're working in.
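The decision rule Vaughn describes can be sketched in a few lines. This is a minimal illustration, not code from any real system; the aggregate and service names ("catalog-context", "Order", "Inventory") are hypothetical, and it assumes measurement has already proven which aggregates are the hot spots:

```java
import java.util.Set;

// Sketch of the "show me" deployment rule: one bounded context, a dozen
// aggregate types, but only the aggregates proven (by measurement) to need
// independent scaling get their own microservice. Everything else stays in
// the context's single deployment.
public class DeploymentPlan {
    // Hypothetical hot spots, identified by load testing or production metrics.
    static final Set<String> HOT_AGGREGATES = Set.of("Order", "Inventory");

    // Map an aggregate type to the deployment unit that hosts it.
    static String serviceFor(String aggregateType) {
        return HOT_AGGREGATES.contains(aggregateType)
            ? "catalog-context-" + aggregateType.toLowerCase() // broken out
            : "catalog-context";                               // shared deployment
    }

    public static void main(String[] args) {
        System.out.println(serviceFor("Product"));   // catalog-context
        System.out.println(serviceFor("Order"));     // catalog-context-order
    }
}
```

The point is that the split services remain logically part of the same bounded context; only their physical deployment differs, and only where proven necessary.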

Adam: I heard, I think it was from Google, that when you design something, you should design it so that it can scale to 10x of what you plan. But if it needs to go to 100x, it's going to be something completely different, so don't even worry about it.

Vaughn: Well, but again, if you think it could do that, then go ahead and design the aggregates with those four rules of thumb. It's not going to hurt anything if it only goes to 5x bigger than you first built it for. I mean, it's not like you failed in some way; you're still in very good shape. But if you can now scale to 10x and perform at 10x because of some early decisions you made, then I think that's pretty good. Or at least you're in a position to refactor when you can prove that you need to do that. But that's the dangerous thing, though: if you don't have those 10x ideas, like you said, on your mind up front, you may make enough mistakes along the way that you won't easily be able to do that when it's needed.

Adam: Yeah. You don't want to paint yourself into a corner, so that if whatever you're making actually succeeds like you imagined, it doesn't actually work.

Vaughn: Yeah, which is actually far more often the case, that things can't scale. And that's why a lot of Fortune companies right now are in a very bad situation, where they're scrambling to try to get on the cloud and scrambling to try to make things scale. And really, the biggest problem with them is they just can't make changes fast enough, because, again, it's that "I touched this and it's going to break something." They get to the point where they can only release once or twice a year, maybe three times a year.

How can you compete today in that environment, when you have startups that are on the way to dethroning even the kings of the industry, the technology companies? I mean, these are companies that made their billions on technology, and they're in trouble, because the startups are more nimble. They can move more quickly.

Adam: Because you can gather these metrics, right? If your velocity [inaudible 00:44:44] and releasing things and gathering feedback, like-

Vaughn: Well, they know. They know because they see it, they hear their customers complaining: "This doesn't work, and when are we going to get a fix? When are we going to get this new functionality?" And they can't deliver it at internet speed.

Adam: Well, we’re running out of time here. Is there anything else we should cover?

VLingo Platform

Vaughn: Well, that's an interesting question. Yeah, I really want to let people know about my Vlingo platform. That's V-L-I-N-G-O, Vlingo, which is a lot about lingo, right? Supporting the languages of domain-driven design. The Vlingo platform is being developed for the purpose of developing DDD bounded contexts in a very DDD-friendly platform and ecosystem. And I don't really know of any other platform that, while they may say that they have delivered on DDD or that they wanted to deliver on DDD, I don't know that they've really accomplished that. And I may not be the very best person in the world to determine whether something is good for DDD or not, but I think I'm probably right up there.

So, I want people to take a look at Vlingo. It is currently far along, in potential release candidate stage, for Java. And I also have a team working on the .NET side of things. The idea is you can spin up bounded contexts/microservices that are reactive, using the actor model through and through, right? Every single component, and right now there are about 10 components in the Vlingo platform and it's growing, it will be 15 well before the end of the year, all of these are running in a reactive, actor-based, message-driven, event-driven ecosystem and architecture. And the patterns of DDD just snap in very easily. So, I'd like people to take a look at that.
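To make "actor model through and through" concrete, here is a minimal, self-contained sketch of the underlying idea, not Vlingo's actual API: an actor owns a mailbox and processes one message at a time, so its private state needs no locks, and callers interact with it only asynchronously. The `ProductActor` name and its messages are hypothetical:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Toy actor: state is touched only by the actor's own message loop.
final class ProductActor {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private String name = "";  // private state, never shared across threads

    ProductActor() {
        Thread loop = new Thread(() -> {
            try {
                while (true) {
                    mailbox.take().run();  // process one message at a time
                }
            } catch (InterruptedException ignored) {
            }
        });
        loop.setDaemon(true);
        loop.start();
    }

    // Messages are sent asynchronously; callers never read or write state directly.
    void rename(String newName) {
        mailbox.add(() -> name = newName);
    }

    // Queries are answered by reply message rather than shared memory.
    CompletableFuture<String> queryName() {
        CompletableFuture<String> reply = new CompletableFuture<>();
        mailbox.add(() -> reply.complete(name));
        return reply;
    }
}

public class ActorDemo {
    public static void main(String[] args) {
        ProductActor product = new ProductActor();
        product.rename("vlingo demo");
        System.out.println(product.queryName().join());  // prints "vlingo demo"
    }
}
```

Because the mailbox serializes messages, a DDD aggregate modeled this way gets its consistency boundary enforced by the actor itself, which is the fit between the two models that Vaughn is describing.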

Adam: And it's open source, so people can see how these concepts shake out in code?

Vaughn: It’s open source. It’s github.com/vlingo, V-L-I-N-G-O.

Adam: I’ll put a link as well in the notes.

Vaughn: And I'd love to have more contributors. I have some contributors, but as the platform grows and becomes more popular, it sure helps to have more developers. And so, what I'll say, too, is if anyone is in a, I hate to put it this way, but a more depressed economic environment, like maybe Eastern Europe or somewhere like that, or even Africa, or other European countries that don't have quite as demanding salaries as the United States and Western Europe and so forth, I would love to talk to you about working with me. I'm bootstrapping this whole development cycle and working with volunteers right now. But if I can afford to hire folks at a rate that I can afford as a bootstrapper, then I'd love to talk.

Adam: That’s great. So, you’re looking to put some people on payroll. So, if people want to learn about some of these concepts, they can reach out to you. Well, thank you so much for joining me for this conversation. It was a lot of fun, Vaughn.

Vaughn: Okay. Yeah. Thanks a lot, Adam. Really appreciate it.
