Dec 17 / davidalpert

SOLID as an antipattern:
when guiding principles go bad

I’ve heard, read, and thought a lot about the SOLID principles over the past 5 years:

S (SRP): Single responsibility principle; an object should have only a single responsibility.

O (OCP): Open/closed principle; "software entities … should be open for extension, but closed for modification".

L (LSP): Liskov substitution principle; "objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program". See also design by contract.

I (ISP): Interface segregation principle; "many client-specific interfaces are better than one general-purpose interface."

D (DIP): Dependency inversion principle; one should "Depend upon Abstractions. Do not depend upon concretions." Dependency injection is one method of following this principle.

source: http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)

This list is often presented as an ideal of modern software design.  We are told that applying these principles will make our software easy to read, easy to understand, easy to test, and easy to change.

In many ways I have found those claims to be true, but today I’d like to tell a different story.

Today I want to talk about how applying some of these principles can become an antipattern, when using them makes your software more brittle and resistant to change rather than less.

Losing the forest for the trees

The Single Responsibility Principle (SRP) states that classes should have only one reason to change, and generally leads to breaking up complex logic into smaller pieces.  Sometimes, however, SRP is used to justify a position that many small classes are arbitrarily better than a few large ones.
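For illustration, here is a minimal sketch of the kind of split SRP encourages (the `Report` names are hypothetical, not from any particular codebase):

```python
# Before: one class with two reasons to change
# (formatting rules and storage rules).
class Report:
    def __init__(self, lines):
        self.lines = lines

    def render(self):
        return "\n".join(self.lines)

    def save(self, path):
        with open(path, "w") as f:
            f.write(self.render())


# After: each class has a single responsibility.
class ReportRenderer:
    def render(self, lines):
        return "\n".join(lines)


class ReportWriter:
    def save(self, text, path):
        with open(path, "w") as f:
            f.write(text)
```

Each resulting class is smaller, but there are now more of them to navigate, which is exactly the trade-off at issue here.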

Unfortunately small classes are not automatically easier to understand.

As an architecture decomposes components into smaller and smaller widgets, there sometimes comes an inflection point: logic that used to be expressed as lines of code in a long method becomes expressed instead as many small classes spread out amongst a collection of files and folders.  If the boundaries of those many small classes are not organized well and grouped into clearly expressed patterns, this explosion of classes and files can actually increase the amount of work needed to make a change to the system.

Dependency Inversion can be similarly abused, taken simplistically to mean that we should *never* depend on concrete classes.  This can lead to the habit of declaring an interface for every class, on principle, which nearly doubles the code that you have to maintain.
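The habit looks something like this (a hypothetical sketch): every concrete class gains a mirror-image interface, even when only one implementation will ever exist.

```python
from abc import ABC, abstractmethod


# An interface declared "on principle" for a class that
# will only ever have one implementation.
class IGreeter(ABC):
    @abstractmethod
    def greet(self, name: str) -> str: ...


class Greeter(IGreeter):
    def greet(self, name: str) -> str:
        return f"Hello, {name}"
```

Twice the declarations to maintain, and every change to `Greeter`'s surface now touches two files instead of one.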

And don’t think that always depending on abstractions makes your code easier to test, either.  In fact, as you replace concrete dependencies with abstracted or injected ones the number of possible entry paths into your code increases, and with that the surface area that you must cover with tests in order to ensure stability.

The trick with dependency inversion is to depend on abstractions of external, slow, or side-effect-prone systems like web service calls, database calls, and filesystem access or calls into a legacy system.  Within the scope of your own code, however, abstractions can be as much a barrier to productivity as a help.
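As a sketch of that boundary (the `UserStore` names here are hypothetical): abstract only the slow, side-effect-prone dependency so a fake can stand in for it under test, and keep the surrounding logic concrete.

```python
from abc import ABC, abstractmethod


class UserStore(ABC):
    """Abstraction over a slow external system (e.g. a database)."""

    @abstractmethod
    def find_email(self, user_id: int) -> str: ...


class InMemoryUserStore(UserStore):
    """A fake for tests; a real implementation would hit the database."""

    def __init__(self, users):
        self.users = users

    def find_email(self, user_id):
        return self.users[user_id]


# Plain, concrete code everywhere else; only the boundary is abstracted.
def build_notification(store: UserStore, user_id: int) -> str:
    return f"Sending reminder to {store.find_email(user_id)}"
```

One abstraction at the system boundary buys testability; abstractions sprinkled through the interior mostly buy indirection.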

The way forward: simple design above all

So how can we know when to apply these principles and when to avoid them?  How can we tell how much splitting and factoring is too much?  Is this entire question merely a matter of personal preference and generational thinking?

I believe there is more to it, an objective observation of Quality in the truest Motorcycle Maintenance sense of the word.

Recently a friend asked me to explain how Object Oriented Programming differs from procedural programming.  By way of answer I described a hypothetical sequential process evolving towards an object oriented one (and then further into functional territory).  We started with a single long sequential script, the kind that was well known to my friend.  We then discussed how refactoring it into function calls, also a familiar pattern, made the logic easier to work with.  Finally I explained how grouping those functions by responsibility and encapsulating closely related data allowed us to use physical metaphors such as common objects to organize the logic at an even higher level of abstraction.
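That evolution can be sketched in a few lines (using a hypothetical order-total example): the same logic moves from a sequential script, to named functions, to an object that groups the functions with the data they act on.

```python
# 1. Sequential script: every step inline.
prices = [10.0, 20.0]
total = 0.0
for p in prices:
    total += p
total *= 1.05  # add 5% tax


# 2. Functions: the steps get names.
def subtotal(prices):
    return sum(prices)


def with_tax(amount, rate=0.05):
    return amount * (1 + rate)


# 3. Object: functions grouped with the data they share.
class Order:
    def __init__(self, prices, tax_rate=0.05):
        self.prices = prices
        self.tax_rate = tax_rate

    def total(self):
        return sum(self.prices) * (1 + self.tax_rate)
```

All three forms compute the same result; what changes is the level of abstraction at which we can reason about the logic.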

In the process of telling this story I realized that this pattern described an evolution along a spectrum of simultaneously increasing and decreasing complexity.  I was justifying the introduction of more advanced (and complex) architectural patterns in order to better manage (by reducing) the complexity of a long sequential script.  The higher levels of abstraction allowed us to reason about our logic with larger grains, and therefore move faster.

The trick comes in how and when we introduce complexity.

SOLID as a tool, not the goal

Check out the following addendum to a listing of the SOLID principles:

Generally, software should be written as simply as possible in order to produce the desired result.  However, once updating the software becomes painful, the software’s design should be adjusted to eliminate the pain.  Often, these principles, in addition to the more general Don’t Repeat Yourself principle, can be used as a guide while refactoring the software into a better design.

source: http://deviq.com/solid
(emphasis mine)

I love how this frames the SOLID principles – not as an end that brings its own benefit, but rather as guidelines for refactoring.

Notice also the process described in this paragraph:

  1. make it as simple as possible;
  2. pain and friction will indicate when your design needs to grow;
  3. use SOLID (and other) principles as guidelines while refactoring.

The emphasis on simplicity above all is refreshing.  It’s a much more straightforward guideline than SOLID. 

In fact, it is vitally important to start simple and introduce complexity only to solve existing problems.

Why?

The reason itself is simple:

When you start with complex designs, how do you know that you are using the right abstractions?  At the beginning, all you have is your best guess, and we humans are notoriously bad at predicting the future.

When you start with simple designs, however, and let emerging pain or complexity drive your refactoring then you wind up adding complexity only where it’s needed with full confidence that your complex approach is grounded in exactly the scenario that you need to address.

Simple design as an antidote to complexity

In my work I have found the four rules of Simple Design to provide a much stronger framework for architecting software:

Simple design

  1. passes all tests;
  2. clearly expresses intent;
  3. contains no duplication;
  4. minimizes the number of classes and methods

Notice how the 4th point acts directly to counter the “class explosion” described earlier.

These rules also serve as a functional boundary condition for the refactoring phase of TDD.
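As a small illustration of rules 2 through 4 working together (a hypothetical time-tracking example): duplication can often be removed with one well-named function rather than a new class hierarchy, keeping the class count minimal.

```python
# Two report functions sharing the same totalling logic...
def weekly_total(entries):
    return sum(e["hours"] for e in entries if e["billable"])


def monthly_total(entries):
    return sum(e["hours"] for e in entries if e["billable"])


# ...collapse into one intention-revealing function (rules 2 and 3),
# with no new class required (rule 4).
def billable_hours(entries):
    return sum(e["hours"] for e in entries if e["billable"])
```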

In summary

Any principle taken to an extreme runs the danger of becoming a weapon that we use to club unbelievers over the head until they submit to our view of the One True Way.  Is that the kind of developer that you want to be?

So by all means, learn the SOLID principles, practice applying them, use them to solve problems – they can be a great tool to use when restructuring the logic of your software. 

But please remember: SOLID is but one tool in the toolbox of the seasoned software developer.  It is not the One True Way. 

And it is certainly not a valuable goal in and of itself.
