Yet another practical pocket guide on writing clean code

Being a big proponent of crafting beautiful, robust and maintainable code, I have read several books and articles on the subject. One of my favorite resources is a book titled “Clean Code” by Robert C. Martin.

I would propose that such a resource be readily available in the company library, and I would even go as far as advocating that every software engineer joining the company either have read this book already or read it as part of the on-boarding process.

So, in summary, what is clean code? In my own words: code which mostly abides by the SOLID principles of software design.

Methods should have a single responsibility. It is better to have a class with many small methods than a class with a small number of large methods.

Methods should not be overly long. A well-known and acceptable measure is that a method should not span the entire code editor space in Visual Studio when viewed on a 15″ laptop monitor. For example, this is a long method:
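A hypothetical sketch of the kind of method that fails this test: parsing, validation, totalling and formatting all crammed into one member. It is compressed here for space; imagine each region stretched over dozens of lines.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

public static class InvoicePrinter
{
    // One method doing everything: validation, parsing, totalling, formatting.
    public static string BuildInvoiceSummary(IEnumerable<string> rawLines)
    {
        if (rawLines == null) throw new ArgumentNullException("rawLines");

        // parsing and validation
        var amounts = new List<decimal>();
        foreach (var line in rawLines)
        {
            if (string.IsNullOrWhiteSpace(line)) continue;
            var parts = line.Split(',');
            if (parts.Length != 2) continue;
            decimal amount;
            if (!decimal.TryParse(parts[1], NumberStyles.Number, CultureInfo.InvariantCulture, out amount)) continue;
            if (amount <= 0) continue;
            amounts.Add(amount);
        }

        // totalling
        decimal total = 0m;
        foreach (var amount in amounts) total += amount;

        // formatting
        return string.Format(CultureInfo.InvariantCulture, "{0} items, total {1:0.00}", amounts.Count, total);
    }
}
```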

On the other hand, this is a nice, short and terse method:
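A hypothetical sketch of the short form: the public method reads like a table of contents, and each small private method does exactly one job.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

public static class InvoiceSummary
{
    public static string Build(IEnumerable<string> rawLines)
    {
        var amounts = ParseAmounts(rawLines);
        return Format(amounts);
    }

    private static List<decimal> ParseAmounts(IEnumerable<string> rawLines)
    {
        if (rawLines == null) throw new ArgumentNullException("rawLines");
        return rawLines
            .Where(line => !string.IsNullOrWhiteSpace(line))
            .Select(ParseAmount)
            .Where(amount => amount > 0)
            .ToList();
    }

    private static decimal ParseAmount(string line)
    {
        var parts = line.Split(',');
        decimal amount;
        if (parts.Length == 2 && decimal.TryParse(parts[1], NumberStyles.Number, CultureInfo.InvariantCulture, out amount))
            return amount;
        return 0m; // invalid lines are filtered out by the caller
    }

    private static string Format(List<decimal> amounts)
    {
        return string.Format(CultureInfo.InvariantCulture, "{0} items, total {1:0.00}", amounts.Count, amounts.Sum());
    }
}
```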

Methods should return early if possible. This avoids deeply nested ifs. For example, consider the following method:

   private void InitializeActionMethods()
   {
       if (_configurationManager.Configuration == null)
       {
           Logger.Warn("Some configuration is not defined.");
           return;
       }

       // continue with initialization.....
   }

We fail fast and early. In contrast, we could have written the code like this:

   private void InitializeActionMethods()
   {
       if (_configurationManager.Configuration != null)
       {
           // continue with initialization.....
       }
   }

This creates a code base with too many nested ifs, which is hard to read and maintain.

Method names and variables should clearly indicate purpose. I often say code is a story, so write code as if you are writing a story. Books with shorter paragraphs are more engaging than books with longer paragraphs. I often see developers naming variables using acronyms instead of taking the time to craft descriptive variable names. Again, if code is a story, we need to clearly identify the characters.
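A hypothetical before-and-after: the two methods below compute the same thing, but only one tells a story.

```csharp
using System;

public static class NamingExamples
{
    // Before: the reader has to decode the author's shorthand.
    public static double CalcAmt(double p, double r, int n)
    {
        return p * Math.Pow(1 + r, n);
    }

    // After: the same computation, but every character in the story
    // is clearly identified.
    public static double CalculateCompoundedAmount(double principal, double annualInterestRate, int numberOfYears)
    {
        return principal * Math.Pow(1 + annualInterestRate, numberOfYears);
    }
}
```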

Entities themselves should have a single responsibility. This one is also easy to violate. I have seen some very large and weird-looking classes over the years. I have also seen classes that are almost impossible to refactor and unit test because they are composed of a collection of large, deeply nested methods with a large number of inter-dependencies. Keep classes small. I have told my devs that it is better to have a code base with thousands of small entities than one with a small number of large entities. The former system, if well organized, is easier to reason about and maintain, and is more modular and robust.

Entities should have their dependencies passed to them. The terms coined for this are dependency injection and inversion of control. The two are easy to confuse, but the idea is simple: for a factory to construct a car, it needs all of its dependent bits, such as the assembly line, and these must be explicit and defined up front.
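A minimal sketch of constructor injection using the car-factory analogy (all type names here are invented for illustration):

```csharp
using System;

public interface IAssemblyLine
{
    string Assemble(string model);
}

public class BasicAssemblyLine : IAssemblyLine
{
    public string Assemble(string model) { return "car:" + model; }
}

public class CarFactory
{
    private readonly IAssemblyLine _assemblyLine;

    // The dependency is explicit and supplied up front, instead of the
    // factory quietly constructing its own assembly line with "new".
    public CarFactory(IAssemblyLine assemblyLine)
    {
        if (assemblyLine == null) throw new ArgumentNullException("assemblyLine");
        _assemblyLine = assemblyLine;
    }

    public string Build(string model)
    {
        return _assemblyLine.Assemble(model);
    }
}
```

Because the dependency arrives through the constructor, a test can pass in a fake IAssemblyLine without touching any real infrastructure.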

Unit tests, unit tests and more unit tests. I cannot emphasize this enough: every component in the system should have an associated, concise unit test. There are well-documented strategies for crafting awesome unit tests, but they should abide by the AAA principle of Arrange, Act and Assert. Google this. Also, make these tests very easy to follow. All dependencies should be arranged or created up front. If you are resolving entities from some container somewhere, which includes configuring some sort of logger, which requires some additional piece of configuration somewhere, you probably need to step back and rethink your tests and class design.
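A minimal Arrange/Act/Assert sketch. The Calculator class is invented for illustration, and a plain exception stands in for a test framework's assertion so the snippet is self-contained:

```csharp
using System;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

public class CalculatorTests
{
    public void Add_TwoNumbers_ReturnsTheirSum()
    {
        // Arrange: every dependency and input is created up front.
        var calculator = new Calculator();

        // Act: exactly one operation is exercised.
        int sum = calculator.Add(2, 3);

        // Assert: the outcome is verified.
        if (sum != 5) throw new Exception("Expected 5 but got " + sum);
    }
}
```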

Code should be closed for modification and open for extension. As stated in the Open/Closed Principle, code should be easy to extend without modifying what already works. This is a tough one, but think of it this way: if you start writing code with long switch statements, it is time to sit back and think of some patterns to use.
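A hedged sketch of what that refactoring can look like: a growing switch over shipping types is replaced by one interface and one class per case, so a new shipping method becomes an extension (a new class) rather than a modification (another case). All names here are hypothetical.

```csharp
public interface IShippingCostCalculator
{
    decimal CalculateCost(decimal weightInKg);
}

public class GroundShipping : IShippingCostCalculator
{
    public decimal CalculateCost(decimal weightInKg) { return 1.0m * weightInKg; }
}

public class AirShipping : IShippingCostCalculator
{
    public decimal CalculateCost(decimal weightInKg) { return 2.5m * weightInKg; }
}

// The consumer depends only on the abstraction; it never needs to change
// when a new shipping method is introduced.
public class ShippingQuoteService
{
    private readonly IShippingCostCalculator _calculator;

    public ShippingQuoteService(IShippingCostCalculator calculator)
    {
        _calculator = calculator;
    }

    public decimal Quote(decimal weightInKg)
    {
        return _calculator.CalculateCost(weightInKg);
    }
}
```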

Code should be robust against anomalies, but at the same time it need not be overly micro-optimized. Beautiful code is easy on the eye, easy on the mind and free-flowing, yet robust against extreme inputs. This includes excellent exception handling and logging.

Happy Coding.


Visual Studio is not updating my NuGet packages – what to do?

You have projects relying on NuGet packages. Some of these NuGet packages have been updated, and you know this. However, for one reason or another (perhaps the NuGet packages do not increment their version with each new release), Visual Studio does not detect that the packages have changed and therefore does not update them. What do you do?

The quick and dirty solution is to delete the respective NuGet packages from these two places and then build your solution again:

  • Local NuGet repository: on my Windows 8 machine it is found here:   C:\Users\my-username\.nuget\packages
  • Local NuGet cache for the packages in this solution.  This folder is typically found in the root of the solution.
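For example, assuming the stale package is called My.Stale.Package and the solution lives in C:\src\MySolution (both names invented for illustration), the deletion amounts to something like this from a Windows command prompt:

```bat
:: 1. Remove the package from the per-user NuGet cache.
rmdir /s /q "%USERPROFILE%\.nuget\packages\my.stale.package"

:: 2. Remove it from the solution-local packages folder.
rmdir /s /q "C:\src\MySolution\packages\My.Stale.Package.1.0.0"
```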

Do this and you are good to go.

Happy Coding.



Unable to start debugging on a web server. Could not start ASP.NET debugging.

If you attempt to debug an ASP.NET web application and get the above error, you can Google it and find a lot of hits. There are also numerous posts on StackOverflow for this error, but interestingly, none of the answers in those posts worked for me.

What worked for me was simple.

The application was being deployed in the DefaultApp pool with the following properties:

  • .NET CLR version = 4.0
  • Managed Pipeline = Integrated
  • Identity = domain/my-user-name

I changed the target application pool to one called .NET 4.5 with the following properties:

  • .NET CLR version = 4.0
  • Managed Pipeline = Integrated
  • Identity = ApplicationPoolIdentity

With this change, I was able to run the application in debug mode.
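For completeness, the same change can be scripted with IIS's appcmd tool (the site, application and pool names below are placeholders; adjust them to your environment):

```bat
:: Point the application at a pool running under ApplicationPoolIdentity.
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyApp" /applicationPool:".NET 4.5"

:: Or switch an existing pool's identity directly.
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.identityType:ApplicationPoolIdentity
```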

Happy coding.



Be on the lookout for StackExchange.Redis ConnectionMultiplexer ConnectionFailed event misfires

We are using the StackExchange.Redis ConnectionMultiplexer class to manage our connections to Redis. Without clear guidance from the documentation, we have attempted to create our own retry strategy around this client, with code which looks like this:


   public RedisClient()
       : this(GetConnectionString())
   {
   }

   public RedisClient(string connectionString)
   {
       _logger = LogServiceProvider.Instance.GetLogger(GetType());
       _connectionString = connectionString;
       MaxRetryAttempts = 10;
       DelayBeforeRetryInMillisecs = 500;
   }

   private void InitializeConnection()
   {
       _logger.Info("Initializing a connection to the Redis cluster.");
       bool isReconnectionAttempt = false;

       if (_connectionMultiplexer != null)
       {
           Debug.WriteLine("disposing " + _connectionMultiplexer.GetHashCode());
           _connectionMultiplexer.ConnectionFailed -= HandleConnectionFailedEvent;
           _connectionMultiplexer.Dispose();

           isReconnectionAttempt = true;
           _logger.Info("This is a reconnection attempt to the Redis cluster.");
       }

       _connectionMultiplexer = ConnectionMultiplexer.Connect(_connectionString);
       _needConnect = !_connectionMultiplexer.IsConnected;
       _connectionMultiplexer.ConnectionFailed += HandleConnectionFailedEvent;

       Debug.WriteLine("created " + _connectionMultiplexer.GetHashCode());

       if (!_needConnect && isReconnectionAttempt)
           _logger.Info("Reconnection to the Redis cluster succeeded.");

       if (!_needConnect)
           _logger.Info("Connection successfully established to the Redis cluster.");
       else
           _logger.Error("Cannot establish a connection to the Redis cluster.");
   }

   private void HandleConnectionFailedEvent(object sender, ConnectionFailedEventArgs args)
   {
       // We could potentially receive multiple of these events, so we need to be careful.
       Debug.WriteLine("received connection failure event from " + sender.GetHashCode());
       Debug.WriteLine("multiplexer id is " + _connectionMultiplexer.GetHashCode());

       // There's a known issue with the Redis ConnectionMultiplexer which prevents it from
       // completely releasing an event handler even after it has been disconnected and disposed.
       // So we need the following guard.
       if (sender.GetHashCode() != _connectionMultiplexer.GetHashCode())
           return;

       _logger.Error("Connection to the Redis cluster has failed with the following event arg:", args.Exception);
   }

   private void AttemptReconnection(int trial = 1)
   {
       if (trial == MaxRetryAttempts)
       {
           _logger.InfoFormat("Have attempted to re-connect to the Redis cluster {0} times. Aborting.", MaxRetryAttempts);
           return;
       }

       if (_connectionMultiplexer.IsConnected)
       {
           _logger.Info("Connection to the Redis cluster is no longer required.");
           return;
       }

       _logger.InfoFormat("Attempting reconnection attempt {0} to the Redis cluster.", trial);
       RedisReconnectionAttempt(this, new RedisReconnectionAttemptEventArgs(trial));

       // wait for a few seconds and then try to reconnect.....

       trial = trial + 1;
       TotalNumberOfReconnectionAttempts = TotalNumberOfReconnectionAttempts + 1;
   }
When a network drop is simulated, the ConnectionFailed event is fired as expected. When this happens, an attempt is made to dispose of the current instance of the ConnectionMultiplexer object and create a new one. We do this to avoid the situation where attempting to access the current ConnectionMultiplexer instance throws an exception indicating the object has already been disposed.

So we dispose of the current instance, or at least we think we do, and create a new one. Yet somehow the original instance, which experienced the network drop but is now disposed, still manages to fire ConnectionFailed events even though we are no longer supposed to be listening. After all, we unsubscribed from the event, as shown in the InitializeConnection method above.

I could not get an answer from StackOverflow, and I could not determine the cause just by looking at the source code for the ConnectionMultiplexer class. What I could do, however, was put in a small hack to ensure these noisy duplicate events are ignored unless they are coming from the correct ConnectionMultiplexer instance.

I even have a test to ensure my hack works.

Happy coding.

HTTP 301: The requested resource SHOULD be accessed through returned URI in Location Header.

Ever attempted to test a public API using what you think are the correct credentials and query parameters, only to get the above error?

For example, I tried the SumoLogic Search Jobs API with what I thought were the correct credentials and query parameters and kept getting this response:

 {
   "status" : 301,
   "id" : "OLXXO-HINM6-3BXX7",
   "code" : "moved",
   "message" : "The requested resource SHOULD be accessed through returned URI in Location Header."
 }

I even passed in an invalid URL and received the same response with a different “id”.

A quick solution to this problem is to run the curl command with the -v (verbose) option; it will spit out the location you are supposed to target, as shown below:
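The invocation looks something like this (the access ID, access key and endpoint are placeholders; substitute your own):

```shell
# ACCESS_ID, ACCESS_KEY and the endpoint below are placeholders.
curl -v -u "ACCESS_ID:ACCESS_KEY" "https://api.sumologic.com/api/v1/search/jobs"
```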

< Cache-control: no-cache="set-cookie"
< Content-Type: application/json; charset=ISO-8859-1
< Date: Sat, 05 Nov 2016 03:59:46 GMT
< Location:

In my case, I was targeting:

instead of

Sprint story work break down – how do you do it?

Every agile team appears to break down the tasks involved in completing a sprint story item a little differently. Also, every team appears to have a different perspective on the Scrum Definition of Done. My perspective aligns closely with the definition from one of my previous managers:

A story is considered done if it is shelvable, packageable and shippable. Basically, at the drop of a hat, it can be deployed to production or made available to end customers with all associated artifacts, including supporting documentation.

This is the philosophy I use as a guideline when confronted with the task of breaking down sprint stories. And why so?

Let’s start by stating the assumption that we have a story with a well-written, understood and itemized set of user, technical or deployment requirements. These requirements should drive development, test and documentation. Or should they?

When a story is considered done, it should be possible to validate that each requirement was met, including the existence of a component shippable to end customers, whoever they may be. At the end of the sprint, the team as a whole, including the PM and additional stakeholders, should be able to validate that each of the requirements listed in the story is appropriately captured in the resulting artifact.

There are various distinct development tasks involved in taking a marketable idea from concept to market. These tasks include:

  • Design: captures high-level design activity, including UX work for UI-related stories.
  • Development: everyone knows what this is all about. Yes, this is the task which captures all coding effort.
  • QA: everyone also should know what this is about. Someone has to validate the output of the Development tasks to ensure it meets the user requirements specified in the story. An interesting point of contention I have run into involves the source document from which QA should author test cases. One school of thought says test cases should be driven from the user requirements in the story; another says from the output of the design task. This alone is an interesting topic on its own.
  • Deployment: this task captures work such as creating Chef scripts or other activities dedicated to ensuring code makes it from the build machine onto our production servers. Some companies use DevOps engineers for this; others have the same developers do it all. Again, another interesting topic on its own.
  • Documentation: a task which almost never gets the time of day, especially if the product is targeted at internal customers.

On one team, we had a hard and fast rule stating that EVERY story should be broken down into each of the aforementioned subtasks. However, as with a lot of things in life, such hard and fast rules do not always apply and often completely break down, forcing the team into a “process oriented as opposed to goal oriented mindset,” as one of my team members succinctly put it. I see these tasks as mere guidelines. Completing some stories will require all of these subtasks while others will not, and it is up to the Scrum Master, in consultation with the team, to use wise judgement in making this decision.

Should these distinct activities be captured by individual JIRA subtasks? I personally think so. Individual subtasks allow distinct teams to start work in parallel, enabling early engagement by all respective teams and possibly allowing for faster delivery of the feature. Of course, the assumption here is that we have distinct teams responsible for each facet of the development workflow. If we have a single developer responsible for orchestrating each of the aforementioned phases, it is not necessary to explicitly break the story down into such tasks, even though it may be worth capturing the effort required during each stage of the workflow.

What do you other Scrum Masters do?

Should models have simple methods?

I recently ran into an interesting conversation with some members of my team.

We have a class that defines the configuration of a communication infrastructure. While utilizing an instance of this class within a factory, we decided to create several methods in the model to allow us to perform simple checks, such as IsChannelEnabled, as illustrated below:

public class ClientCommunicationConfiguration
{
    public string Version { get; set; }
    public string Description { get; set; }
    public bool IsEnabled { get; set; }
    public List<ChannelConfiguration> Channels { get; set; }

    public bool HasChannels()
    {
        return Channels != null && Channels.Any();
    }

    public bool IsChannelEnabled(string channelName)
    {
        if (!HasChannels())
            return false;

        return Channels.Any(c => string.Compare(c.Name, channelName, StringComparison.OrdinalIgnoreCase) == 0 && c.IsEnabled);
    }

    public bool IsMethodEnabled(string messageName)
    {
        // Guard against a null or empty channel list, as above.
        if (!HasChannels())
            return false;

        return Channels
            .Where(c => c.HasMessages() && c.IsEnabled)
            .SelectMany(c => c.Messages)
            .Any(m => string.Compare(m.Name, messageName, StringComparison.OrdinalIgnoreCase) == 0 && m.IsEnabled);
    }
}

A member of my team made a good argument that models should only expose properties; it is not the responsibility of the model to make these kinds of decisions. His argument is that these methods should reside in the factory class or in some other management entity which contains the business logic to make these determinations. It is definitely a good point, although I made the following counter-arguments:

  1. Every class in .NET inherits methods from System.Object, such as ToString(), GetHashCode() and Equals(). Therefore models are not pure in that sense.
  2. We can encapsulate minimal logic in a model to allow one make certain determinations that are inherent in the model’s definition.
  3. Encapsulating such logic in the model makes the model testable as well, otherwise we have to create entities just to wrap such logic and ensure testability.
  4. There is no such thing as a strict model without methods. Models are simply serializable objects. They can still expose methods that do simple checks, return data based on their internal state and perform simple validation.
  5. This validation code can be re-used.
  6. OO means an object should encapsulate both data (properties) and behavior (methods).

One advantage I see with not having these methods in the model is simply a matter of purity. Otherwise, I do not see a realistic reason why models should not expose methods.

What are your thoughts?