Mar 24 / davidalpert

Disable the XAML designer in Visual Studio

I interrupt my ongoing series of Nuget tips to bring you the following public service announcement.

After working on a WPF app for the past year or so, I have come to realize that no one on our team uses the XAML designer surface that ships with Visual Studio.  We all find it much faster to edit the XAML by hand, break up the XAML into manageable chunks, and generally stay away from the designer.

We also all notice that Visual Studio grows to consume large amounts of memory and locks up with a (Not Responding) message on a semi-regular basis.

This week a colleague showed me a couple of tricks to disable the XAML designer surface and squeeze a bit more performance out of Studio.

1. Default to full XAML view

Prior to this week our team had all done the obvious thing and told the XAML editor to default to XAML view, hoping that if we never opened the Designer view that would improve things.


In the Tools –> Options menu, open the Text Editor node, then the XAML node, then select the Miscellaneous node; make sure that under the Default View heading there is a checkbox beside Always open documents in full XAML view.

This got us part of the way there, with the code view opening by default when we opened XAML files, but Studio still locked up quite often.

2. Kill the designer rendering process

It appears that Visual Studio spawns a background process to render XAML for the Design tab of the default XAML editor so that it is ready to show on the design surface. 


Open the Task Manager, right-click on XDesProc.exe, and select End Process.
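If you prefer the command line, the same thing can be done from an ordinary command prompt (this simply force-kills the process by image name; expect Studio to spawn a new one the next time it decides to render a design surface):

taskkill /F /IM XDesProc.exe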

This process seems to start up whenever you open a XAML view in Visual Studio whether you ever click on the Design tab or not, which leads to the latest suggestion.

3. Open as source code

In order to prevent the designer from ever getting loaded, you can tell Studio to use a different editor entirely when opening XAML files.


Right-click on any .xaml file in your solution explorer and select Open With…


When the Open With dialog opens, you can pick anything you like as a default editor for XAML files. The XAML UI Designer and XAML UI Designer with Encoding will both trigger the XDesProc.exe process to start up in the background, but the numerous other editors will not.

On our team we use the Source Code (Text) Editor.  Not only does it avoid slowing us down with the XAML UI editor, but it provides all the rich intellisense-supported goodness of the XAML tab from the default editor.  In short, our experience of editing XAML files is the same as when we tried to stay in the XAML tab of the default editor.

To do this, pick the Source Code (Text) Editor and click Set as Default.

Happy XAML editing!

Mar 9 / davidalpert

NuGet Tip #2: Run your own package feed

Now that you’re using NuGet to manage the 3rd party code dependencies in your project and you’ve configured Visual Studio and MSBuild to restore missing packages as a pre-build step, you can safely remove those packages from source control.

This provides a number of advantages, but it does expose you to a few risks:

  • The public NuGet feed may be unavailable when you need to download a missing package;
  • The specific version of a package that you depend on may no longer be available publicly;
  • The specific version of a package that you depend on may contain different code!

And so on; in other words, that convenience you gained by leaning on NuGet to manage your dependencies required you to give up a bit of control.

Reduce your risk with a private feed

Giving up control is a good thing in many situations, but when it comes to ensuring that you can reliably build and release your software, it sure helps to make your process repeatable. With NuGet, standing up a private package feed is one way to regain some of that stability.

Thankfully there are a number of options for creating a private NuGet feed:

  • myget is a software-as-a-service offering that provides externally hosted package feeds

  • the official NuGet docs describe two ways to Host your own nuget feed internally, one by standing up an in-house ASP.NET web site to serve packages and the other by simply pointing the NuGet tooling at a file share.

  • I experimented a while back with making it dead-simple to stand up a local NuGet server but I haven’t kept up with NuGet releases. When I revive that code I’ll be sure to write about it here.

Once you’re hosting your own internal feed you can configure Visual Studio’s NuGet tools or the NuGet command line app to include your feed when it looks for packages. For example, in the Visual Studio Tools –> Options menu, open the Package Manager –> Package Sources node:

Visual Studio Package Manager options
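You can register the same source for the NuGet command line as well; for example (the feed name here is arbitrary, and the URL is the same example internal feed used in the MSBuild snippet below):

NuGet.exe sources Add -Name "InternalFeed" -Source http://myserver:8090/dataservices/packages.svc/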

Back up your dependencies into your internal feed

Now that you are hosting your own package feed it’s a good idea to back up the packages that your projects depend on into that internal feed. This is what lets you take ownership of the availability of those particular packages rather than depending on the external state of the official NuGet package feed.

Maarten Balliauw has a great post on how to Copy packages from one NuGet feed to another.

I personally recommend Rob Reynolds’ NuGet.Copy.Extension, as I’ve used it several times:

NuGet.exe Install /ExcludeVersion /OutputDir %LocalAppData%\NuGet\Commands AddConsoleExtension

NuGet.exe addextension nuget.copy.extension

NuGet.exe copy castle.windsor -destination \\companyshare\nuget_packages

Prefer your local feed to the official one

Now that you have your own feed, you can tell MSBuild to use only your internal feed when downloading packages for a build. This will reduce the risk that a new build will mistakenly pull down a new version of a package dependency behind the scenes.

Open the NuGet.targets file:

Finding the NuGet.targets file

And change the following code:

<ItemGroup Condition=" '$(PackageSources)' == '' ">
    <!-- Package sources used to restore packages. By default, registered sources under %APPDATA%\NuGet\NuGet.Config will be used -->
    <!-- The official NuGet package source (https://nuget.org/api/v2/) will be excluded if package sources are specified and it does not appear in the list -->
    <!--
        <PackageSource Include="https://nuget.org/api/v2/" />
        <PackageSource Include="https://my-nuget-source/nuget/" />
    -->
</ItemGroup>

to:

<ItemGroup Condition=" '$(PackageSources)' == '' ">
     <PackageSource Include="http://myserver:8090/dataservices/packages.svc/" />
</ItemGroup>

with that middle line pointing to your internal feed; this will ensure that MSBuild only pulls packages from your feed.
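Note the Condition on that ItemGroup: it only applies when no PackageSources property has already been set. Assuming the stock NuGet.targets wiring, that also means you can point a one-off build at a different feed straight from the command line, something like this (the solution name is a placeholder):

msbuild MySolution.sln /p:PackageSources="http://myserver:8090/dataservices/packages.svc/"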

Include your package repo in your backup and recovery plan

Finally, if you are hosting your own package repository internally in your organization, remember to include it in your backup strategy and your disaster recovery plan. There’s not much point in hosting your own feed if it goes down and you can’t recover the exact versions of the packages that your project depends on.

Remember also that the backup part of the plan is only a means to an end; it is the ability to recover and restore your package feed that will really matter if (or when) Murphy strikes.

Feb 24 / davidalpert

NuGet Tip #1: Restore Packages on Build

Nuget package restore is a way to tell MSBuild to download any missing packages as a pre-build step.

This means that you can exclude your packages from source control, making your repositories smaller, faster to work with, and cheaper to back up (as you have just removed a bunch of binary data that is painful to merge and doesn’t change very often).

Enabling package restore through Visual Studio is easy:

  1. Right-click on your solution node and select “Enable NuGet Package Restore”.

    alt text

  2. Click “Yes” in the confirmation dialog.

    alt text

  3. When the installation is finished you’ll have a new solution folder called “.nuget”.

    alt text

This process adds a copy of nuget.exe to your solution. It also adds some MSBuild targets that integrate with your project files to download missing packages as a pre-build step.
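If you are curious what that integration looks like, each project file ends up with roughly the following additions (the exact markup varies between NuGet versions, so treat this as a sketch):

<PropertyGroup>
  <!-- opt this project in to restoring missing packages before it builds -->
  <RestorePackages>true</RestorePackages>
</PropertyGroup>

<!-- wires the restore step into the regular MSBuild pipeline -->
<Import Project="$(SolutionDir)\.nuget\NuGet.targets"
        Condition="Exists('$(SolutionDir)\.nuget\NuGet.targets')" />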

Check your .nuget folder into source control and anyone who downloads your source will get nuget.exe and the MSBuild know-how to download missing packages as part of a build.

This is awesome, but brings with it a risk: what happens if your dependencies are not available when you go to build them?

Imagine that a new dev joins your team, gets latest on your source tree, and one of your package dependencies is no longer available in the public feed. How do they build the project?

Or…

Perhaps the package is available, but not the version specified in your projects’ packages.config files. You don’t want your team randomly picking up the latest and greatest version of a dependent package whenever they build, because you want to manage the risk that your code might break when it runs against different versions of its dependencies.

To manage these risks I recommend that you set up your own internal package feed.

More on that next time.

Feb 24 / davidalpert

NuGet Tips

NuGet, the de facto package management system for .NET development, has come a long way in a relatively short time. It does a pretty good job of simplifying and automating the process of downloading and referencing 3rd party libraries. It takes most of the guesswork out of which versions of which dependencies need to come along for the ride in order for everything to work. It even works from both inside and outside of Visual Studio.

Kudos to Microsoft for stepping into the package management space and working with the community to create a tool that many of us have come to depend on.

Using NuGet within a large project is not without its risks, however, so I share the following recommendations in the hope that they save someone else some time and trouble.

I will update this list as the tips come out.

Feb 10 / davidalpert

Testing should be fun

I had an interesting moment with a team member this week. We were pairing on a new feature that included some calculations and I suggested that this feature would be a great candidate for writing some unit tests. He showed a bit of hesitation, or at least none of the enthusiasm that I have come to associate with an experienced test-driven developer, but he agreed to try it with me.

As we opened up some of the existing tests in the project that covered similar calculations I noticed that they were hard to read.

The tests were spread out with one test case per class/file and the test inputs and outputs buried in a lot of syntactical ceremony.

class When_converting_imperial_to_metric 
{
    void Should_have_expected_value()
    {
        result should be 23
    }

    void GivenThat()
    {
        mock unit.value to return 73
        mock unit.type to return Imperial
    }

    void WhenIRun()
    {
        result = calculator.Convert(unit, Metric)
    }
}

I’m using pseudo-code here rather than actual syntax but I think it makes the point.

After some basic refactoring to reduce ceremony and improve readability, we managed to reduce this quite a bit.

class When_converting
{
    void 73_imperial_should_become_23_metric()
    {
        unit = new Unit(73, Imperial)
        calculator.Convert(unit, Metric) should be 23
    }
}

Before moving on I want to call out a few things about this revised test:

  • it expresses the entire test as a single 2-line method that can be read and understood in a single chunk;
  • we are using a literal component instead of the mocked one;
  • the specific inputs and expected outputs are written with less noise so they can be identified more quickly;
  • the logic is ordered as
    [Given] setup, [When] I act, [Then] I expect
    which more closely matches a common pattern of speech than the original
    Should (behave as expected) - Given (mocked setup) - When (I act)
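For the record, in actual C# the revised test reads much the same way. Here is a sketch using NUnit; Calculator, Unit, and UnitType stand in for our real domain types, and the leading underscore is only there because C# identifiers cannot start with a digit:

using NUnit.Framework;

[TestFixture]
public class When_converting
{
    [Test]
    public void _73_imperial_should_become_23_metric()
    {
        // build a real unit rather than mocking one
        var unit = new Unit(73, UnitType.Imperial);

        // act and assert in a single readable line
        Assert.AreEqual(23, new Calculator().Convert(unit, UnitType.Metric));
    }
}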

In the end, as we leaned back and looked at this revised test, my initially reluctant partner told me how this test made him want to write more tests.

And here we come to my ah-ha! moment.

Testing should be fun

If your tests are not fun to write, you have a problem.

Actually, you have a couple problems.

First off you are going to have a hard time convincing less experienced members of your team to climb up the learning curve and join you in covering your ass with tests while you are all coding up a storm under the heat of that upcoming deadline.

Beyond that, if your tests are not fun to write, the rest of your code probably isn’t either.

I’ve seen a high correlation between code that’s not fun to write and code that is difficult or expensive to maintain, so if your team isn’t having fun it’s likely to cost your business real dollars.

Okay, I believe fun is cheaper. Now what?

So just how do we make testing fun?

Here are a few things that I’ve seen work well.

1. make testing accessible

  • name and organize your tests so that they describe business value, not implementation details;
  • favor specific scenarios over generic ones as they are often easier to read and understand;
  • remember that there is no one right size of unit for a unit test – the right size is the one that brings your team the most value for the investment;
  • favor literal components over heavy mocking as it’s more straightforward and can be a more realistic way of exercising your code;
  • introduce mocking only when needed to make tests fast or deterministic (see below) or else to ease setup;
  • I agree with Oren Eini on this one – I’d rather have 80 tests break at once and help me triangulate a problem than 100 tests that are so isolated they break one at a time.

2. make testing easy

  • invest constantly in removing friction from your tests;
  • ruthlessly drive out ceremony and repetitious setup and teardown so your devs can focus on verifying the business value of the test case at hand;
  • if tests are still hard to write, review the code under test to see whether responsibilities are broken out properly;

3. make testing fast and deterministic

  • the places to guard against coupling are at the latency boundaries or external dependencies of your code – database, internet, file system access, etc.
  • mocking here can increase the speed and predictability of your tests while reducing setup and teardown maintenance;
  • mocking inside those logical units, however, can make tests slow and cumbersome to write or debug.

4. make testing visible

  • the value of automated tests is that they provide tighter feedback loops than testers, stakeholders, and customers can – run them often;
  • hooking up a continuous integration server to your source control system can ensure that tests are run on each checkin, nightly, or with whatever frequency makes sense for your team;
  • most continuous integration systems have some form of notifications or indicators that can signal team members when a test run passes or fails;
  • making the results of your test runs visible not only to individuals but across the team as a group can increase each team member’s accountability to engage with the tests and keep them passing;

In conclusion

If you have devs on your team who are reluctant to write tests, do some investigation.

Ask them why they don’t write tests in your system.

I bet that if you can put aside your ego and listen to their answer you will find something that can be done to improve the experience of writing tests.

That in turn will lead to more participation in writing and maintaining tests and that, finally, will lead to more participation in writing and maintaining your software.

Jan 27 / davidalpert

3 ways to remember which build you’re running

Often when building software we have to juggle several different build configurations, each pointing at different services, accessing different data, etc. During development it can be a huge time saver to have an easy way to know which build you’re looking at. I find this important when tracking features, defects, bug fixes, and deployments, for example, or in that moment when you pause before submitting an order to check whether it will actually be charged to the valid credit card number you’re using for integration testing.

Technique #1 – Color-Coded Icons

One of my favorite techniques for differentiating between builds during development is to use color:


In this case you see five different versions of the same icon, each one representing a different build configuration of the software my team is working on:

Configuration | Description | Color
DEV | development/integration builds | Red
QA | internal releases to our QA/testing team | Gold
UAT | internal releases to our stakeholders | Green
STAGE | pre-releases to an environment that mimics production as closely as possible | Purple
PROD | production releases to our customers | Blue

In our solution we have each icon named according to its configuration. We’ve used the VSCommands extension to group the configuration-specific ICO files “underneath” the one that the application is configured to use at build time.

We also use the following MSBuild target as a pre-build step to copy the configuration-specific icon into the right spot for the build to pick it up:

  <PropertyGroup>
    <BuildDependsOn>
      CopyEnvironmentSpecificAssets;
      $(BuildDependsOn);
    </BuildDependsOn>
  </PropertyGroup>

  <Target Name="CopyEnvironmentSpecificAssets">
    <ItemGroup>
      <AssetToUse Include=".\Application.$(Configuration).ico">
        <AssetToReplace>Application.ico</AssetToReplace>
      </AssetToUse>
    </ItemGroup>

    <Copy SourceFiles="@(AssetToUse)" DestinationFiles="@(AssetToUse->'%(AssetToReplace)')"/>
  </Target>

Technique #2 – Configuration-specific application name

Another technique that is useful is to add the configuration name to the application name that appears in the install shortcut, the window title, the add/remove programs dialog, etc.

Configuration | Description | Application Name
DEV | development/integration builds | Fancy App [DEV]
QA | internal releases to our QA/testing team | Fancy App [QA]
UAT | internal releases to our stakeholders | Fancy App [UAT]
STAGE | pre-releases to an environment that mimics production as closely as possible | Fancy App [STAGE]
PROD | production releases to our customers | Fancy App

We use this naming convention in our main WPF window, having its viewmodel read the title dynamically from an application.resx resource file, though you could just as easily store it in an appSetting in your app.config file. In either case the XmlTransform target is a handy way to use Web Config Transforms to swap in configuration-specific appSettings as a pre-build step. And remember that RESX files are simply XML files, so the Web Config Transform tooling works just great on those also.
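As a sketch, a DEV transform over that resx might look something like this (the ApplicationName resource name is hypothetical):

<!-- application.DEV.resx transform (sketch) – swaps in the DEV application name -->
<root xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <data name="ApplicationName" xdt:Transform="Replace" xdt:Locator="Match(name)">
    <value>Fancy App [DEV]</value>
  </data>
</root>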

Technique #3 – configuration-specific emails

In our case we have one email address to handle support requests. When a support or activation request email comes in it is important to know which build configuration sent it so that we can triage the email appropriately.

With the same transform approach shown above, we also change the subject of our support email from:

  • Fancy App Support Request

to:

  • Fancy App [DEV] Support Request

or whatever other configuration we are targeting. This lets recipients quickly identify the emails sent by devs during integration testing (i.e. those with [DEV] in the subject line) and those sent by stakeholders (i.e. those with [UAT] in the subject line), and separate them from the emails without a configuration tag in the subject line, which are sent by customers in the field.

Jan 12 / davidalpert

Working with TFS branches in git using git-tfs

I’m pretty excited about git. I’m also pretty excited about git-tfs. Let me show you one of the reasons why.

Let’s say that I have a TFS Team Project with a Source folder and three copies of my source tree named Dev, Main, and Release.

Main was the first source folder. Dev was created by branching off Main, and Release was also created by branching off Main. That makes Dev one child of Main and Release a second child, setting up a merge path of making changes in the Dev branch, merging them into the Main branch, and then merging them into the Release branch.

Right-clicking on any of the source branches in Visual Studio’s Source Control Explorer and selecting the View Hierarchy command will show us a picture of these relationships.

One thing I love about working with git is how easy, cheap, and fast it is to make local branches, try several ideas, and mash them all together before checking back into the original repository. Another thing that git makes really easy is moving changesets back and forth between branches.

When I started using the git-tfs bridge I would have created three separate local git repos for this team project, one for each branch. The problem with this approach is that git doesn’t easily recognize the common code between these repos. I have found that while moving changesets between different git repositories with the same histories is fairly straightforward, moving changesets between git repositories or timelines that do not explicitly share a common ancestor commit (according to git) is more difficult.

Enter git-tfs clone --with-branches.

Cloning from TFS --with-branches

Working with the master branch of git-tfs from GitHub, you now have the ability to clone several TFS branches into local git branches.

prompt> git tfs clone --with-branches http://tfsserverurl:8080/tfs/ $/Project/Folder/ pathToLocalRepo

This will:

  • initialize a local git repo;
  • add a git-tfs remote called ‘default’ and a local branch called ‘master’ and pull down all the changesets on the branch you requested, adding them one by one to the new git repository.
    • each TFS changeset is pulled down one at a time into a local workspace tucked inside the repository’s .git folder.
  • the --with-branches flag will then query that ‘default’ TFS remote for a list of all related TFS branches, list them, and then loop through them fetching all the divergent changesets on each branch.

This last step is pretty magical for a git junkie like me.

The command-line output from running that clone --with-branches command against the team project with three branches that I described above is worth watching: each related branch gets fetched in turn.

Even more magical is to crack open gitk and take a look at what the local git repo looks like after running this git-tfs clone command.

prompt> gitk --all &

There are my TFS branches, all linked up in git to the proper parent changesets and everything!

Now I can import an entire TFS branched source tree into git and work locally, batching changes back into TFS when I’m ready to share my code and I can shuffle changes from one TFS branch to another using the full power of git’s toolset.
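For example, a typical round trip might look something like this (branch names are illustrative; git-tfs exposes each TFS branch as a remote named tfs/<branch>):

prompt> git checkout -b feature-x tfs/Dev    # start local work from the Dev branch
prompt> git commit -am "my change"           # commit locally as often as I like
prompt> git tfs rcheckin                     # replay the local commits back into TFS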

Caveats

As the code that manages TFS branches in git-tfs is still in its early stages, a couple of caveats apply.

  • The speed of this approach will be heavily limited by the connection to TFS so it may be significantly slower than a plain git clone.
  • The --with-branches flag will cause git-tfs to try to pull down all the changesets in a TFS branch hierarchy – if that is a large number of changesets this full clone may take a long time or consume a large amount of disk space.
  • The current code runs a ‘git gc’ command periodically to clean up and re-pack the git repo, but it may not surface this on the console output without the -d or --debug flag, appearing to hang instead – I plan to work on that.
  • The current logic tries to find the oldest changeset on a child branch and link it in git to its parent changeset on the parent branch.
    • long-running branches with large files (like the one I’m working on right now at the office) may be problematic; without identifying TFS merges and creating git merges to match (I’m looking into that), long-running TFS branches come into git as long-running git branches, not sharing nearly as much history as they do in reality. This appears to cause git to slow down and consume a large amount of memory, probably because it has to process changes across much more history than should be required for a given operation.
    • the current code may pick up the wrong changeset when finding a root in the parent branch
      https://github.com/git-tfs/git-tfs/issues/284

Don’t let these issues scare you off, however – it’s awesome!

When all is said and done I’m absolutely thrilled to see TFS branches pulled down into a git repo and very excited to be helping the git-tfs project to evolve.

Grab the code, try out the emerging branching support, and let us know how it works for you.

Jan 2 / davidalpert

The correct way to uninstall Visual Studio packages (extensions)

So it turns out I was a bit premature when I declared victory in my last post: although I was able to start Studio several times without receiving that pop-up error warning, the issue started recurring today. I’ll go into detail, but the short version is, "Read error dialogs" and "the internet is your friend."

Thanks to this post on the JetBrains issue tracking web site I learned that those registry entries I so cavalierly deleted are regenerated on startup from the package manifests dropped in the following location(s):

  • C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions
  • C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions

Buried in those folders (depending on which version of Studio concerns you) are extension-specific folders containing a pair of files as described here:

  • extension.vsixmanifest
  • vspackage.pkgdef

Here is a great pair of resources on Visual Studio Extensions (VSIX) and how they’re loaded that explains more about how all this works.

So removing the registry entries as I described is not enough; you also have to remove the manifest and package definitions.

Apparently there is a known issue in some of the ReSharper v7.1.25-related JetBrains tools whereby some of the manifest and package definition files do not get removed properly. Removing them manually seems to do the trick, and the issue appears to have been fixed in ReSharper 7.1.1000 and dotCover 2.2.

Hat tip to Scott Hanselman for encouraging us all to look underneath the covers.

Dec 28 / davidalpert

How to unregister a Visual Studio package (extension)

This post is out of date; please see:
The correct way to uninstall Visual Studio packages (extensions)

I ran into a small problem uninstalling dotCover from Visual Studio 2012 this morning and had to complete the removal manually.  Due to the recent popularity of NuGet and the word “package” in the .NET space, finding the right search terms was difficult.  In the end, however, it wasn’t hard at all.

First a short disclaimer:

dotCover is a code coverage tool by JetBrains that integrates with ReSharper.  I have no real thoughts on the tool itself one way or the other as my trial ran out before I found a need or use for it, hence the uninstall.

And now back to our story.

I started through one of the usual routes, clicking Uninstall in the ‘Programs and Features’ dialog.

Most of the uninstall experience was painless and came off without a hitch.  I did notice a pair of dialogs stating that something was not allowed, which I proceeded to ‘OK’ my way through without apparent effect.

All appeared well, until I started up Visual Studio 2012 and was greeted by this message:

The 'JetBrains.dotCover.Product.VisualStudio.v11.0.Package, Version=2.1.471.44, Culture=neutral, PublicKeyToken=1010a0d8d6380325' package did not load correctly.

The problem may have been caused by a configuration change or by the installation of another extension.  You can get more information by examining the file 'C:\Users\davida\AppData\Roaming\Microsoft\VisualStudio\11.0\ActivityLog.xml'.

Continue to show this error message?

Looks like something didn’t quite work right after all.

Following the dialog’s instructions I opened up the ActivityLog.xml file and tracked down the relevant entry.

Notice the GUID embedded there:

  • {7FFD1A80-7A5A-49B2-A39B-491C750984FF}

I wondered if Visual Studio relies on the registry to decide which packages to load, and if perhaps the part that failed in my automated uninstall process was merely the registry cleanup related to removing the bits.

Sure enough, thanks to this post on Stack Overflow I cracked open RegEdit and found a Packages node at:

  • HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0_Config\Packages

and it contained a node with a matching GUID and values that included a path to the missing dll that Studio complained it could not find.
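If you prefer a command prompt to RegEdit, a quick reg query will confirm whether that orphaned node is still hanging around:

reg query "HKCU\Software\Microsoft\VisualStudio\11.0_Config\Packages\{7FFD1A80-7A5A-49B2-A39B-491C750984FF}"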

Deleting that node resolved the problem.  No more warning dialogs when starting up Studio and no noticeable side effects.

One final tip on searching for a GUID using RegEdit.

Remember the GUID we were looking for?

  • {7FFD1A80-7A5A-49B2-A39B-491C750984FF}

Notice the curly braces?  Extension package GUIDs are embedded in a pair of curly braces.


If you search for the GUID without the braces and have that ‘Match whole string only’ option checked, you won’t find it.


Including the braces in the search term (or unchecking the option) will find it just fine.

Dec 17 / davidalpert

SOLID as an antipattern:
when guiding principles go bad

I’ve heard, read, and thought a lot about the SOLID principles over the past 5 years:

Initial | Stands for (acronym) | Concept
S | SRP | Single responsibility principle; an object should have only a single responsibility.
O | OCP | Open/closed principle; “software entities … should be open for extension, but closed for modification”.
L | LSP | Liskov substitution principle; “objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program”. See also design by contract.
I | ISP | Interface segregation principle; “many client specific interfaces are better than one general purpose interface.”
D | DIP | Dependency inversion principle; one should “Depend upon Abstractions. Do not depend upon concretions.” Dependency injection is one method of following this principle.

source: http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)

This list is often presented as an ideal of modern software design. We are told that applying these principles will make our software easy to read, easy to understand, easy to test, and easy to change.

In many ways I have found those claims to be true, but today I’d like to tell a different story.

Today I want to talk about how applying some of these principles can become an antipattern, when using them makes your software more brittle and resistant to change rather than less.

Losing the forest for the trees

The Single Responsibility Principle (SRP) states that classes should have only one reason to change, and generally leads to breaking up complex logic into smaller pieces. Sometimes, however, SRP is used to justify a position that many small classes are arbitrarily better than a few large ones.

Unfortunately small classes are not automatically easier to understand.

As an architecture decomposes components into smaller and smaller widgets there sometimes comes an inflection point where the logic and architecture that used to be expressed as lines of code in a long method becomes expressed instead as many small classes spread out amongst a collection of files and folders.  If the boundaries of those many small classes are not organized well and grouped into clearly expressed patterns, this explosion of classes and files can actually increase the amount of work needed to make a change to the system.

Dependency Inversion can be similarly abused, taken simplistically to mean that we should *never* depend on concrete classes.  This can lead to the habit of declaring an interface for every class, on principle, which nearly doubles the code that you have to maintain.
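A minimal sketch of that habit (with hypothetical names): every new class drags a mirror-image interface along with it, so every change now has to be made in two places.

// the reflexive one-to-one interface that exists only “on principle”
public interface IInvoiceFormatter
{
    string Format(decimal amount);
}

// the class we actually wanted; the interface above is a second place to edit
public class InvoiceFormatter : IInvoiceFormatter
{
    public string Format(decimal amount) { return amount.ToString("C"); }
}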

And don’t think that always depending on abstractions makes your code easier to test, either.  In fact, as you replace concrete dependencies with abstracted or injected ones the number of possible entry paths into your code increases, and with that the surface area that you must cover with tests in order to ensure stability.

The trick with dependency inversion is to depend on abstractions of external, slow, or side-effect-prone systems like web service calls, database calls, and filesystem access or calls into a legacy system.  Within the scope of your own code, however, abstractions can be as much a barrier to productivity as a help.

The way forward: simple design above all

So how can we know when to apply these principles and when to avoid them?  How can we tell how much splitting and factoring is too much?  Is this entire question merely a matter of personal preference and generational thinking?

I believe there is more to it, an objective observation of Quality in the truest Motorcycle Maintenance sense of the word.

Recently a friend asked me to explain how Object Oriented Programming differs from procedural programming.  By way of answer I described a hypothetical sequential process evolving towards an object oriented one (and then further into functional territory).  We started with a single long sequential script, the kind that was well known to my friend.  We then discussed how refactoring it into function calls, also a familiar pattern, made the logic easier to work with.  Finally I explained how grouping those functions by responsibility and encapsulating closely related data allowed us to use physical metaphors such as common objects to organize the logic at an even higher level of abstraction.

In the process of telling this story I realized that this pattern described an evolution along a spectrum of simultaneously increasing and decreasing complexity. I was justifying the introduction of more advanced (and complex) architectural patterns in order to better manage (by reducing) the complexity of a long sequential script. The higher levels of abstraction allowed us to reason about our logic with larger grains, and therefore move faster.

The trick comes in how and when we introduce complexity.

SOLID as a tool, not the goal

Check out the following addendum to a listing of the SOLID principles:

Generally, software should be written as simply as possible in order to produce the desired result.  However, once updating the software becomes painful, the software’s design should be adjusted to eliminate the pain.  Often, these principles, in addition to the more general Don’t Repeat Yourself principle, can be used as a guide while refactoring the software into a better design.

source: http://deviq.com/solid
(emphasis mine)

I love how this frames the SOLID principles – not as an end that brings its own benefit, but rather as guidelines for refactoring.

Notice also the process described in this paragraph:

  1. make it as simple as possible;
  2. pain and friction will indicate when your design needs to grow;
  3. use SOLID (and other) principles as guidelines while refactoring.

The emphasis on simplicity above all is refreshing.  It’s a much more straightforward guideline than SOLID. 

In fact, it is vitally important to start simple and introduce complexity only to solve existing problems.

Why?

The reason itself is simple:

When you start with complex designs, how do you know that you are using the right abstractions?  At the beginning, all you have is your best guess, and we humans are notoriously bad at predicting the future.

When you start with simple designs, however, and let emerging pain or complexity drive your refactoring, then you wind up adding complexity only where it’s needed, with full confidence that your complex approach is grounded in exactly the scenario that you need to address.

Simple design as an antidote to complexity

In my work I have found the four rules of Simple Design to provide a much stronger framework for architecting software:

Simple design

  1. passes all tests;
  2. clearly expresses intent;
  3. contains no duplication;
  4. minimizes the number of classes and methods

Notice how the 4th point acts directly to counter the "class explosion" described earlier.

These rules also serve as a functional boundary condition for the refactoring phase of TDD.

In summary

Any principle taken to an extreme runs the danger of becoming a weapon that we use to club unbelievers over the head until they submit to our view of the One True Way. Is that the kind of developer that you want to be?

So by all means, learn the SOLID principles, practice applying them, use them to solve problems – they can be a great tool to use when restructuring the logic of your software. 

But please remember: SOLID is but one tool in the toolbox of the seasoned software developer.  It is not the One True Way. 

And it is certainly not a valuable goal in and of itself.