Bimodal programming – why design patterns fail
Also a few solid paragraphs of me explaining why SOLID is bad
I’ve always taken issue with the ideas of “design principles” and “design patterns”. Part of this comes from personal empirical evidence: none of the good programmers I knew (i.e. the people that actually wrote 90% of the critical code) really seemed to use them.
I’ve met good programmers that paid lip service to standard design patterns, lamenting that “Oh, if only I had used this best practice here this wouldn’t have happened”, but it seemed to be more of an outward facing act, not something they internalized.
I’ve met good programmers that had their own design principles, or followed some off-shoot of a very poorly known FP guideline or some such thing.
I’ve met good programmers that just didn’t care at all and wrote code that seemed like a mess to anyone but themselves.
I’ve met good programmers that wrote in a style I couldn’t quite put my finger on. It was readable once you spent a few hours or days getting accustomed to it. Most importantly, it was fit for purpose: testable, or fast, or producing a tiny binary, or easy to refactor.
But I never really met a startup CTO, or a front-end dev that created a popular app, or a “go to guy” in a team, that was like “Oh yeah, Uncle Bob is basically my hero, if it wasn’t for SOLID principles I’d have never gotten this thing off the ground and working well”.
A detour into large projects and coding guidelines
The three large open source projects that pop into my mind are pytorch, llvm and clickhouse.
All of them have contributing guidelines that outline how to design code for them, but the guidelines all differ from one another; even internally, the design seems to change from component to component.
Even more so, the guidelines don’t outline anything like a “design pattern” in the way a design patterns book uses the term.
A well laid out example is the LLVM guidelines: https://llvm.org/docs/CodingStandards.html#introduction
They are more like a mishmash of tips, tricks and customs. Let's look at a few snippets to illustrate this point:
- Use Early Exits and continue to Simplify Code
- Keep “Internal” Headers Private
- Don’t use else after a return
- Don’t use default labels in fully covered switches over enumerations
- Use range-based for loops wherever possible
- Do not use RTTI or Exceptions
Basically, coding guidelines in large projects often take the form of:
- Never use X, unless you really have to use X, in which case we can review it, and if there’s no way around it then let’s use X, but just this one time.
- When you can choose between X, Y and Z, choose Z, provided there’s no really compelling reason to use X or Y.
- Using A is not forbidden, but be careful, because when using A you’ll likely f*** up.
- You’ll likely find yourself using boilerplate X a lot; here’s why it’s still boilerplate and not a function, and here’s how to properly go about writing it.
- We have abstractions X,Y,Z… they are very useful, read what they do carefully and consider using them when they might fit.
- This is how we test stuff, testing is important and you should do it this way too unless you have a reason not to.
- This is how we review stuff, keep this in mind when you submit stuff to be reviewed or when you yourself review changes.
- These are the end goals of our product, code with them always in your mind.
For other references, take a look at the linux kernel coding guidelines: https://www.kernel.org/doc/html/v4.10/process/coding-style.html or the Rustc coding guidelines: https://rust-lang.github.io/rustc-guide/conventions.html or any other arbitrary large project you can find. I can only assume you’ll find a similar pattern (pun not intended).
What you will probably not find is some aloof sounding abstraction like:
“Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.”
Something like that seems more akin to what a “Reliable Enterprise Maintainable Software Senior Architecture Consultant” would say, rather than what “that one guy in the corner with the anime t-shirt and greasy hair” would say.
But the latter character is usually who I associate with good software… so what the heck is he doing? He obviously has a rich picture of design that he uses, and it obviously works, but I’ve never managed to get these types of people to lay it out in words for me.
I think the answer to that question can be partially found in the coding guidelines of large and amazing open source projects. Namely, in the fact that they are guidelines.
You can probably write a huge PR to the Linux kernel respecting almost none of the guidelines, and the only issue that will pop up will be someone telling you in the PR: “Oh yeah, this, this, this and this are incorrect, please change them”… and you’ll take an hour to change everything and that’ll be that.
At most, what can happen if you don’t follow any of the guidelines is that you shoot yourself in the foot, or end up writing boilerplate for something that already exists, or spend 2 hours debugging a memory error.
But overall the guidelines are partially optional; they can be accounted for only when you make a PR, and they don’t take up a lot of mental space.
The idea of:
Please use if-return rather than if-else whenever possible.
Is very easy to keep in mind and easy to follow. Furthermore, not following it will result in “issues” that can be fixed in a few seconds.
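To make that concrete, here’s a minimal Python sketch (function and names made up for illustration) of the same logic written with if-else versus with early returns:

```python
def classify(n):
    # if-else version: every extra case adds a level of nesting.
    if n < 0:
        result = "negative"
    else:
        if n == 0:
            result = "zero"
        else:
            result = "positive"
    return result


def classify_early_return(n):
    # if-return version: each case exits immediately, no nesting,
    # and converting one style to the other takes seconds.
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"
```

Both behave identically; the guideline only touches how the code is laid out, which is exactly why violating it is cheap to fix.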
The idea of:
Objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program
Is very hard to keep in mind; it’s a high-level concept that you can’t create a simple heuristic to check for. Furthermore, not following it might result in code that has to be completely refactored in order to follow it.
Most design patterns tend to be high level from the perspective of a human mental model of code, and code can be written in such a way that a full rewrite is quicker than adapting the code to the pattern.
Most guidelines actually used by large successful projects tend to be low level; they can be successfully applied on a line-to-line or snippet-to-snippet basis, and you’d have to explicitly try in order to write code that can’t be easily rewritten to follow them.
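To see how the substitution principle quoted above fails in a way no line-level heuristic catches, here’s the textbook rectangle/square illustration, sketched in Python (names made up, not from any particular codebase). Each class looks fine on its own; the violation only shows up in code written against the supertype’s implicit assumptions:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, width):
        self.width = width

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    # A square "is a" rectangle, so inheritance looks natural...
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, width):
        # ...but keeping both sides equal silently changes behaviour.
        self.width = self.height = width


def stretch(rect):
    # Written against Rectangle: the caller assumes only width changes.
    rect.set_width(10)
    return rect.area()
```

`stretch(Rectangle(2, 3))` gives 30, but `stretch(Square(3))` gives 100: substituting the subtype altered the program’s behaviour, and fixing it may mean rethinking the whole hierarchy rather than editing a line.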
Why design patterns don’t work – Or why SOLID doesn’t respect SOLID principles
Let’s think about the reason why design patterns usually don’t work, and go backwards from there.
Let me take the example of the SOLID design principles again; you can replace it with any overly-complicated design guideline that you’d encounter in banks, healthcare CRM makers and other large entities that somehow seem to require a team of 5,000 to accomplish what could reasonably be done by 10 skilled programmers.
What’s SOLID? Five design principles intended to make software designs more understandable, flexible and maintainable.
Surprisingly enough, I think the SOLID principles contain, locked within themselves, an explanation for why the SOLID principles are wrong.
Namely, inside the I, the Interface segregation principle:
“Many client-specific interfaces are better than one general-purpose interface.”
I like this principle, this is the only part of SOLID I can really get behind in almost any situation.
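As a rough sketch of what the principle looks like in practice, here’s a Python version using structural Protocols (all names hypothetical):

```python
from typing import Protocol


# Client-specific interfaces: each client depends only on the one
# capability it actually needs.
class Readable(Protocol):
    def read(self) -> bytes: ...


class Writable(Protocol):
    def write(self, data: bytes) -> None: ...


# Versus one general-purpose interface, which forces every client and
# every implementer to care about methods they may never use.
class Storage(Protocol):
    def read(self) -> bytes: ...
    def write(self, data: bytes) -> None: ...
    def flush(self) -> None: ...


def checksum(source: Readable) -> int:
    # Needs only read(); anything with a read() method qualifies.
    return sum(source.read())


# A concrete implementer only has to satisfy the small interface.
class InMemory:
    def __init__(self, data: bytes):
        self._data = data

    def read(self) -> bytes:
        return self._data
```

A client that only reads never has to know `write()` or `flush()` exist, which is the whole point of segregating interfaces.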
So, considering the above stated goals of SOLID, it seems like SOLID should consist of 3 interfaces:
- One to make code understandable
- One to make code flexible
- One to make code maintainable
Since surely there are clients that want just one or two of the three. Maybe I need my code to be easily understandable and maintainable, but I don’t care about flexibility.
But then I can’t just go to the “make code understandable” interface for SOLID… no, I have to take the whole package, either I want all 3 or I’m getting none.
SOLID utterly fails when it comes to separation of concerns. It’s one generic interface, which we expect to accomplish many things, instead of being many smaller interfaces each with a single purpose.
Which should be good, because all 3 things are good… right?
But surely no good thing is free; there must be a trade-off to this sort of “generic interface for making code good”. Compare that with the guidelines mentioned above, where each rule has a pretty self-limiting purpose (e.g. this one is here so the code looks good, this one so that it runs faster, this one so that you don’t shoot yourself in the foot with a memory error, this one so you don’t write a bunch of boilerplate).
KISS (aka Occam’s razor)
I really do think that’s the problem with most design patterns: they try to accomplish too much. They forget the basic rule of “Keep it simple, stupid/silly”, the first and only true design pattern in all of programming, mathematics and science (also known as Occam’s razor).
The reason why Occam's razor / KISS is so important, is because complexity is BAD… not for some divine reason, but simply because our brains are rather bad at dealing with it.
Note: I am 100% convinced some reader can come up with a pedantic argument as to why Occam’s razor and KISS are totally not the same thing. I prefer to think of them as the same general principle formulated for different domains. If you can’t accept this, no worries, my argument doesn’t rest upon them being one and the same.
To be fair, if you look at code made using a well-applied design pattern (a rare occurrence in and of itself), the code itself doesn’t look “complex”. It’s easy to read… but the keywords here are “read” and “look”.
As a design pattern creator, you look at code written respecting the pattern and say “Looks uncomplex to me, it’s obviously doing its job”.
However, the hidden trade that design patterns make is that they trade ease of writing for ease of reading. Code becomes easy to understand, but hard to create.
In other words, design patterns trade complexity in the code itself for complexity in the act of creating said code.
Coincidentally, writing code is the more complex part of “programming”, but also the part that’s hard to create design patterns about; since reading code is much simpler than creating code, it’s much simpler to come up with complicated yet true rules about readable code.
Allow me to define two concepts for coding, two modes, if I may express myself in fancy words that mis-reference poorly-named statistics concepts they are unrelated to: Explore mode and Integrate mode.
I would think most people that read a blog like mine enjoy coding, or at least enjoy making things which can be created via code. So think of the way you write code in the comfort of your own projects.
This seems to vary from person to person, but for most people it seems to be a very iterative process. Few designs survive first contact with reality.
You start out with an idea and you accomplish it in a somewhat iterative fashion, you figure out what libraries are available, you find out where the performance bottlenecks are, you discover what parts are complex to write, you figure out how to split it into smaller tasks… and then you rinse and repeat a bunch of times.
Got a smaller task? Ok, what libraries can I use for it? Where are the performance bottlenecks? What’s the complex-to-write logic here? What smaller tasks can I split it into?
You also fluctuate between levels often, when writing a given sub-part of the code you might realize “Oh, this is kind of doing what that other thing is doing, maybe I can merge them” or “Oh, this library seems like exactly what I needed on that other thing” or “Oh, this separation makes no sense, let me go a level up and try something else” or “Oh, this is literally impossible to do… I might need to change a higher level component or even my specifications in order to avoid this”.
Obviously this is a gross simplification; some people don’t code like this at all, and in practice a lot of the steps are blurred together. But I’m trying to convey a rough picture here, to point out that when you are writing code from scratch, the process is often very prototype-y.
You don’t have a clear vision of what you are doing until you actually start doing it, you might know what the thing is supposed to do… but you’re unaware of the myriad of parts required to do it.
I like to think of this as basically coding in “Explore” mode, where the act of writing code is tightly combined with figuring out what that code has to do. You obviously have the big picture, you know what the whole thing is supposed to accomplish, but you probably don’t know the “how” exactly, you might know that the “how” is doable, but you have to discover how the “how” is to be done.
Once you’ve kinda figured out what you have to do, you have to switch modes. You slowly switch to what I’d call “Integrate” mode, where you gather all the parts, handle the edge cases, polish everything and make it short, sweet and readable.
A very simple example of this would be writing all your code inline in one file, a very “Explore” mode thing to do, since everything is within reach: you don’t have to think a lot about separation of responsibility, about internal interfaces, or about what things are worth abstracting more and how. Copy-pasting boilerplate, having a lot of global variables, importing a ton of functionality from a ton of libraries and basically rushing to get the thing to run.
After you’re done, or more likely whilst you’re doing this, you switch to “Integrate” mode: you notice a bunch of data that’s referenced together a lot, you move it into a structure.
You notice you’re using almost the same logic in 4 different places, you move it into a function.
You notice a for-loop that’s basically just doing a map operation once you remove the boilerplate, you replace it with a map.
You notice a part of the code that seems like “its own thing”… you move that into its own file/class/entity-of-your-choice.
You notice a name that doesn’t make sense, or the fact that you were in a hurry and used k instead of something more explicit, or called a function IFUCKINGHATEPYTHONSIO() instead of a name that describes what it actually does… you rename it.
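A minimal Python sketch of that Explore-to-Integrate transition (all names made up): the “before” lines are the inline, global-heavy code you rush out while exploring, and the “after” lines are the same thing once you notice the structure in it:

```python
from dataclasses import dataclass

# "Explore" mode: globals within easy reach, a loop that is really
# just a map written out longhand.
user_name = "ada"
user_score = 91
doubled = []
for s in [1, 2, 3]:
    doubled.append(s * 2)


# "Integrate" mode: data that is referenced together moves into a
# structure, and the longhand loop becomes a comprehension.
@dataclass
class User:
    name: str
    score: int


def double_all(xs):
    return [x * 2 for x in xs]


user = User(user_name, user_score)
```

Nothing about the behaviour changes between the two halves; only how much structure is made explicit.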
I think a lot of programmers do this exploratory kind of coding at some point, but unless you code in pairs, or like looking over people’s shoulders and bugging them about what they’re doing… you don’t really notice it. We’re horrible at being aware of what we do, especially when we’re in a bit of a flow, which often happens if you’re doing exploration well.
The place I’ve seen code that still looks to be in “Explore” mode to some extent is in scientific computing, especially in small libraries used by few people, or in companion code to papers that weren’t written by some big research org with plenty of time. Another good place to hunt for this is unfinished open source projects, or unfinished components of existing projects.
The problem with a lot of design principles, and even with very strict languages (think Haskell), is that they basically stop you from going into Explore mode. Even if, theoretically, you could go ahead and disregard the design principles while coding, you’d have to mess up the whole codebase to do so.
If everything is neatly separated into very tight abstractions, that often forces you to either unroll all the abstractions (which you have to put back together before you actually “finish”) or to integrate with said abstractions while coding, which somewhat forces you into using the design patterns the abstraction was created with and for.
Design patterns are made by people that look at finished code and think “why does this suck?”. They are great if you stay in “Integrate” mode, if you only think about polishing the finished product and about making sense of code that’s already written. However, they are horrible if you go into Explore mode… and that’s not because the right pattern wasn’t created; it’s because a single pattern for doing exploration is basically impossible to write.
Remember the coding guidelines of large open source projects? The only real thing unifying them is that they are surprisingly relaxed for projects with hundreds or thousands of developers working on the same monolith.
They prohibit and recommend stuff, but they don’t tell you “how” to do stuff… because if they knew the “how”, that code would already be there; the job of the programmer (and of any engineer, really) is to figure out the how.
The way to do that is to experiment: to break things, to try loads of things fast, to quickly figure out what works and what doesn’t without having to finish the whole project first, and then to settle on a way of doing things and integrate it with the whole.
The ways people explore in order to solve a problem are wildly different; again, that’s arguably why many programmers are better than one. Typing speed is not the limiting factor, and even project scope is usually not the limiting factor. A kernel or a compiler is complex, but 100 people can make and maintain one; you don’t need hundreds of thousands of contributors.
The reason multiple programmers are good is that they have different ways of figuring stuff out, which often ends up with them finding different issues and more efficient ways of solving a problem. An amazing example of this is the crazy kernel fixes implemented to avoid the Spectre exploits: nothing impressive in terms of lines of code, but hugely creative in a wide variety of ways.
Furthermore, the ways to explore are very domain specific. I’ve had the opportunity to look at, refactor and create software from scratch in several different industries… and some similarities exist, but the details differ a lot. Whenever you get out of your programming comfort zone you’ll usually encounter a strange tribe doing things a whole different way; if you observe them for long enough and try to participate, you’ll probably figure out that they have their reasons for it, and that the reasons are good.
I can’t help but think of high-modernist planners and their attempts at “standardizing” local farming and forestry, how they were incredibly intelligent “on a conceptual level” and how miserably they failed when faced with dirty reality.
You are putting yourself in the same predicament if you try to standardize coding practices. You take a high-level view and say “Surely this principle ought to be correct no matter what you are doing”, but if you actually apply it you might end up disrupting the complex ecosystem that a team, project, organization or industry has built… an ecosystem which is not optimized for “the end result” but for the long and arduous experimentation process of getting there.
Maybe I’m biased towards disliking design principles. I certainly don’t think this is a definitive takedown of the concept, it’s more of a view that tries to explain why the concept might be bad, since design principles are one of those things that always “sound good” but I never actually see them working well in practice.
Patterns should emerge, they shouldn’t be created. If we see a pattern emerging a lot in a given domain, we should add it to that domain’s libraries and lingo. If we see a pattern emerging everywhere, we should add it to our languages and standard libraries.
Furthermore, I think most common patterns we have right now are pretty good. Our focus should be more on understanding the process of creation, which still seems to be a matter of apprenticeship to some extent, rather than something you can learn from a book.
Herb Sutter had a very good quote related to this, about a book on design patterns (and yes, I’m aware I’m quoting someone who would despise the ideas presented here to their core). I can’t find it, but to paraphrase:
I knew this book was the kind I’d come back to for my whole life, because it had one thing which I consider to be of the utmost importance: it had names for things I was already using.
That’s what design patterns should be about, looking at what we’re currently doing and finding ways of putting that into words so we can better communicate it to other people.
But if you think you can come up with a general set of “new” rules, especially for something as complex as programming, you’re going to break something.
At best, whatever you break will be obvious, since it will affect performance or make the thing untestable. At worst, you’re going to break something that we don’t have a good abstraction for, and we won’t be able to figure out that it’s broken, because we don’t have a good conscious model for it.
In the case of design patterns like SOLID, I think we can define the thing they break as “Exploration”.
If you enjoyed this article you may also like:
- Stop future proofing software
- Please, reinvent the wheel
- The Red Queen
- Imaginary Problems Are the Root of Bad Software
Published on: 2020-02-13