ReSharper with NUnit does not find local files

This is one of those problems we encounter every so often but never take the time to document.

Say you have some test files in your project and have set their properties to Content and Copy Always.  You are using our favorite ReSharper to run your unit tests with the NUnit test runner.  When you attempt to execute a test that reads one of these files, you get an error such as:

 Could not find a part of the path 'C:\Users\knji\AppData\Local\JetBrains\Installations\ReSharperPlatformVs15_427a36eb\TestFiles\PEs\notification.xml'.
 at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
 at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions

When you dig further, you realize that your files are indeed present in the bin/Debug folder of your application, but ReSharper just will not find them. Possible solutions:
1. Turn off ReSharper shadow copying. This did not work for us.
2. Instruct your test to explicitly use the test directory from the TestContext. This worked for us. So here’s the fix. Instead of doing this:

var file = File.ReadAllLines(@"my-relative-folder/some-cool-date.xml");

do this

var file = File.ReadAllLines(TestContext.CurrentContext.TestDirectory + "/my-relative-folder/some-cool-date.xml");
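Put together, a full test might look like the sketch below. The fixture and file names are illustrative placeholders, not from an actual project; the key point is resolving the path from TestContext.CurrentContext.TestDirectory rather than the runner's working directory:

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class NotificationTests // hypothetical fixture name
{
    [Test]
    public void CanReadLocalTestFile()
    {
        // TestDirectory points at the folder containing the test assembly
        // (e.g. bin/Debug), even when ReSharper shadow-copies the assemblies.
        var path = Path.Combine(TestContext.CurrentContext.TestDirectory,
                                "my-relative-folder", "some-cool-date.xml");

        var lines = File.ReadAllLines(path);

        Assert.That(lines, Is.Not.Empty);
    }
}
```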

Happy Coding.


Composing a Client in a Client/Server Architecture Model without Service

I will start by saying this:

If all you have are the functional requirements of the client application and a known table structure of the underlying database, and the service interface has not yet been established, then let the requirements of the client application drive the service contract.  The structure of the underlying database should not drive this service contract.

Over the course of my career, I have had to architect several solutions against external systems, mostly web services, which either had not yet been developed or conceptualized, or were in the initial stages of development.

Recently at Phreesia, we had to develop against a web service which was in the initial stages of conception.  At Honeywell’s Global Tracking division, when we migrated the legacy OCC 200, a 20-plus-year-old legacy VMS Search and Rescue application, to the OCC 600, a modern client/server architecture, an enormous amount of effort was expended in formulating the functional requirements of the desktop application, since getting the user interface and all its usability concerns right was of utmost importance to facilitate adoption.

While such effort was spent on defining the requirements of the client, the requirements of the server were not being developed at the same pace.  Consequently, development of the client started well in advance of the server, and a decision had to be made on how to compose the Data Access Layer of this desktop application.

We started development of the client-side WPF desktop application by decomposing the application into three main layers: the Presentation Layer, the Business Service Layer, and the Data Access Layer, as shown below:

three layered application.png

We had a solid set of functional requirements for the Presentation Layer, which also drove the Middle Layer as well as the DAL. We also had an understanding of the database, even though it was not documented, and were able to work with the back-end developers to ensure that the contracts we were formulating could be met.  Given the nature of the application’s use case in Search and Rescue (a heavy desktop GIS mapping application), a significant amount of functionality was developed before the Web Service came online, by creating fake Web Services which implemented the same service contracts.

The Data Access Layer

As the saying goes, this is where the rubber hits the road.  This layer was responsible for retrieving data from the underlying source of truth, our databases, and making it available to the application.  Its purpose was threefold:

  1. Manage all connectivity with the underlying data source:  This was achieved via WCF service constructs.
  2. Encapsulate data retrieval from the application: This was achieved via interfaces and dependency injection.
  3. Map external entities retrieved from the underlying data source to business objects as required by the application: This was achieved via a Service Oriented Architecture and the Chain of Responsibility pattern.

This layer was a dedicated set of DLLs which were injected into the application via IoC, allowing front-end and back-end development to continue in a loosely coupled fashion.  The application’s DAL ended up looking like this:

DAL decomposition (1).png
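The fake-service approach can be sketched as follows. All names here (ITrackDataService, Track, FakeTrackDataService) are illustrative stand-ins, not the actual OCC 600 interfaces; the pattern is what matters: the application depends only on the interface, and the IoC container decides which implementation is supplied.

```csharp
using System.Collections.Generic;

// Hypothetical service contract the client application codes against.
public interface ITrackDataService
{
    IEnumerable<Track> GetActiveTracks();
}

public class Track
{
    public string Id { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

// Fake implementation used while the real web service is still offline.
// The real implementation would wrap the WCF client proxy instead.
public class FakeTrackDataService : ITrackDataService
{
    public IEnumerable<Track> GetActiveTracks()
    {
        yield return new Track { Id = "T-001", Latitude = 45.0, Longitude = -75.0 };
    }
}
```

During early development the container registers FakeTrackDataService against ITrackDataService; once the real service comes online, only the registration changes and the rest of the application is untouched.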

The experiment was a success, after which we had a highly usable and performant WPF desktop application which the end customer has grown to like and adopt.

In another blog post, I will address the issue of mapping from web service entities to the application domain model, and how we did this in a consistent, loosely coupled, testable and extensible manner.

What have been your experiences?

Configuring ASP.NET Core 2 to return Pretty JSON

When creating a Web API which returns JSON, it is oftentimes very useful to return formatted JSON, especially if your API does not come with any sort of documentation.

ASP.NET Core 2 makes this easy.  All you have to do in your Web API project is configure the JSON serializer before the application starts, which is done in Startup.cs as follows:

// this method is in the Startup class......

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
            .AddJsonOptions(options => options.SerializerSettings.Formatting = Formatting.Indented);
}

If your ValuesController returned

["value1", "value2"]

after the above change, it will now return:

[
  "value1",
  "value2"
]

Once again, happy coding.

Windbg determination of race condition

A report came back from the field indicating that one of our services was not doing what it was designed to do: it would not accept subscriptions, among other symptoms.  No errors were reported in our Sumo logs either.

We decided to take a memory dump of the server and determined there were about 300 threads waiting for something. A quick investigation using Windbg revealed a deadlock or race condition.

First we used !threads to get the following:

0:000> !threads
 ThreadCount: 279
 UnstartedThread: 0
 BackgroundThread: 273
 PendingThread: 0
 DeadThread: 6
 Hosted Runtime: no

Next we used !syncblk to get the following:

0:000> !syncblk
 Index SyncBlock MonitorHeld Recursion Owning Thread Info SyncBlock Owner
 33 000000fbfcbf0be8 497 1 000000fc03846780 548 26 000000fbe2724e78 System.Object
 88 000000fbfcbf1458 5 1 000000fc03846780 548 26 000000fbe27248c0 System.Object
 90 000000fbfcbf1368 3 1 000000fc02eeafb0 ad8 23 000000fbe26f80c8 System.Object
 Total 200
 CCW 3
 RCW 5
 ComClassFactory 0
 Free 132

The above table does not readily identify a deadlocked situation.  However, it references two threads worth investigating further to see if they are waiting on locks: 0x548 and 0xad8.  To probe into each thread, we clicked on the link provided under the “Thread” column in the above table and, once the thread information was retrieved, issued !clrstack as demonstrated below.

Let’s start with thread 0x548.

0:000> ~~[548]s
00007ff9`b0540c6a c3 ret
0:026> !clrstack
OS Thread Id: 0x548 (26)
 Child SP IP Call Site
000000fc0353cca8 00007ff9b0540c6a [GCFrame: 000000fc0353cca8] 
000000fc0353ce18 00007ff9b0540c6a [GCFrame: 000000fc0353ce18] 
000000fc0353ce58 00007ff9b0540c6a [HelperMethodFrame_1OBJ: 000000fc0353ce58] System.Threading.Monitor.Enter(System.Object)
000000fc0353cf50 00007ff948ea067f MyCompany.Redis.Client.RedisClient.CheckForConnect()
000000fc0353cfc0 00007ff948ea05f0 MyCompany.Redis.Client.RedisClient.get_Subscriber()
000000fc0353cff0 00007ff948ea0540 MyCompany.Redis.Client.RedisClient.Subscribe(System.String, System.Action`2)
000000fc0353d060 00007ff948e9fe85 MyCompany.Redis.MessageBus.RedisMessageBus.Subscribe(System.String, System.Action`2)
000000fc0353d138 00007ff9a6bcc29c [StubHelperFrame: 000000fc0353d138] 
000000fc0353d190 00007ff948e9f894 ACompany.Redis.WebSocket.Channel.OnDemandMessageListener.Attach(Integration.OnDemand.WebSocket.Channel.IOnDemandMessageHandler)
000000fc0353d230 00007ff948e9e65b ACompany.Redis.WebSocket.Channel.OnDemandWebSocketHandler.OnOpen()
000000fc0353d380 00007ff948e9e0b6 Microsoft.Web.WebSockets.WebSocketHandler+d__9.MoveNext()
000000fc0353d3e0 00007ff948e9df75 System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[[Microsoft.Web.WebSockets.WebSocketHandler+d__9, Microsoft.WebSockets]](d__9 ByRef)
000000fc0353d490 00007ff948e9dece Microsoft.Web.WebSockets.WebSocketHandler.ProcessWebSocketRequestAsync(System.Web.WebSockets.AspNetWebSocketContext, System.Func`1>)
000000fc0353d550 00007ff949024c4d System.Web.WebSocketPipeline+c__DisplayClass9_0.b__0(System.Object)
000000fc0353d5b0 00007ff9490243f1 System.Web.Util.SynchronizationHelper.SafeWrapCallback(System.Action)
000000fc0353d600 00007ff9490242b6 System.Web.Util.SynchronizationHelper.QueueSynchronous(System.Action)
000000fc0353d660 00007ff949022da1 System.Web.WebSocketPipeline+d__9.MoveNext()
000000fc0353d6f0 00007ff94902268f System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.__Canon, mscorlib]].Start[[System.Web.WebSocketPipeline+d__9, System.Web]](d__9 ByRef)
000000fc0353d7a0 00007ff9490225df System.Web.WebSocketPipeline.ProcessRequestImplAsync()
000000fc0353d860 00007ff949022435 System.Web.WebSocketPipeline.ProcessRequest()
000000fc0353d8b0 00007ff947b6ceb0 System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)
000000fc0353da60 00007ff947b6c3e4 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)
000000fc0353daa0 00007ff947b6b8ab DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)
000000fc0353e2a0 00007ff9a6b07fde [InlinedCallFrame: 000000fc0353e2a0] System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion(IntPtr, System.Web.RequestNotificationStatus ByRef)
000000fc0353e2a0 00007ff948157a7e [InlinedCallFrame: 000000fc0353e2a0] System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion(IntPtr, System.Web.RequestNotificationStatus ByRef)
000000fc0353e270 00007ff948157a7e DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)
000000fc0353e330 00007ff947b6cdc1 System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)
000000fc0353e4e0 00007ff947b6c3e4 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)
000000fc0353e520 00007ff947b6b8ab DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)
000000fc0353e6f8 00007ff9a6b08233 [ContextTransitionFrame: 000000fc0353e6f8]

This thread is waiting on a lock to be released as indicated by this line at the top of its stack trace:

000000fc0353ce58 00007ff9b0540c6a [HelperMethodFrame_1OBJ: 000000fc0353ce58] System.Threading.Monitor.Enter(System.Object)

Let’s perform a similar exercise for thread 0xad8.  Its stack trace is as follows:

0:026> ~~[ad8]s
00007ff9`b0540c6a c3 ret
0:023> !clrstack
OS Thread Id: 0xad8 (23)
 Child SP IP Call Site
000000fc0071ccc8 00007ff9b0540c6a [GCFrame: 000000fc0071ccc8] 
000000fc0071cef0 00007ff9b0540c6a [GCFrame: 000000fc0071cef0] 
000000fc0071cf28 00007ff9b0540c6a [HelperMethodFrame: 000000fc0071cf28] System.Threading.Monitor.Enter(System.Object)
000000fc0071d020 00007ff9491de236 Phreesia.Redis.MessageBus.RedisMessageBus.OnRedisConnectionReestablished(System.Object, System.EventArgs)
000000fc0071d110 00007ff947942cd3 [MulticastFrame: 000000fc0071d110] System.EventHandler`1[[System.__Canon, mscorlib]].Invoke(System.Object, System.__Canon)
000000fc0071d170 00007ff9489c6485 ACompany.Redis.Client.RedisClient.InitializeConnection()
000000fc0071d1c0 00007ff948ea06e7 ACompany.Redis.Client.RedisClient.CheckForConnect()
000000fc0071d230 00007ff948ea4da0 ACompany.Redis.Client.RedisClient.get_Database()
000000fc0071d260 00007ff9491b817a ACompany.Redis.Common.RedisOnDemandSubscriptionManager.RemoveSubscription(System.String, System.String, System.String)
000000fc0071d370 00007ff9491b7d73 ACompany.Redis.WebSocket.Channel.OnDemandWebSocketHandler.OnClose()
000000fc0071d4b0 00007ff948e9e50c Microsoft.Web.WebSockets.WebSocketHandler+d__9.MoveNext()
000000fc0071d500 00007ff948e9e3ca Microsoft.Web.WebSockets.WebSocketHandler+d__9.MoveNext()
000000fc0071d560 00007ff948e9df75 System.Runtime.CompilerServices.AsyncTaskMethodBuilder.Start[[Microsoft.Web.WebSockets.WebSocketHandler+d__9, Microsoft.WebSockets]](d__9 ByRef)
000000fc0071d610 00007ff948e9dece Microsoft.Web.WebSockets.WebSocketHandler.ProcessWebSocketRequestAsync(System.Web.WebSockets.AspNetWebSocketContext, System.Func`1>)
000000fc0071d6d0 00007ff949024c4d System.Web.WebSocketPipeline+c__DisplayClass9_0.b__0(System.Object)
000000fc0071d730 00007ff9490243f1 System.Web.Util.SynchronizationHelper.SafeWrapCallback(System.Action)
000000fc0071d780 00007ff9490242b6 System.Web.Util.SynchronizationHelper.QueueSynchronous(System.Action)
000000fc0071d7e0 00007ff949022da1 System.Web.WebSocketPipeline+d__9.MoveNext()
000000fc0071d870 00007ff94902268f System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1[[System.__Canon, mscorlib]].Start[[System.Web.WebSocketPipeline+d__9, System.Web]](d__9 ByRef)
000000fc0071d920 00007ff9490225df System.Web.WebSocketPipeline.ProcessRequestImplAsync()
000000fc0071d9e0 00007ff949022435 System.Web.WebSocketPipeline.ProcessRequest()
000000fc0071da30 00007ff947b6ceb0 System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)
000000fc0071dbe0 00007ff947b6c3e4 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)
000000fc0071dc20 00007ff947b6b8ab DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)
000000fc0071e420 00007ff9a6b07fde [InlinedCallFrame: 000000fc0071e420] System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion(IntPtr, System.Web.RequestNotificationStatus ByRef)
000000fc0071e420 00007ff948157a7e [InlinedCallFrame: 000000fc0071e420] System.Web.Hosting.UnsafeIISMethods.MgdIndicateCompletion(IntPtr, System.Web.RequestNotificationStatus ByRef)
000000fc0071e3f0 00007ff948157a7e DomainNeutralILStubClass.IL_STUB_PInvoke(IntPtr, System.Web.RequestNotificationStatus ByRef)
000000fc0071e4b0 00007ff947b6cdc1 System.Web.Hosting.PipelineRuntime.ProcessRequestNotificationHelper(IntPtr, IntPtr, IntPtr, Int32)
000000fc0071e660 00007ff947b6c3e4 System.Web.Hosting.PipelineRuntime.ProcessRequestNotification(IntPtr, IntPtr, IntPtr, Int32)
000000fc0071e6a0 00007ff947b6b8ab DomainNeutralILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int64, Int32)
000000fc0071e878 00007ff9a6b08233 [ContextTransitionFrame: 000000fc0071e878]

Similarly, this thread is waiting for a lock to be released as indicated at the top of its stack trace:

000000fc0071cf28 00007ff9b0540c6a [HelperMethodFrame: 000000fc0071cf28] System.Threading.Monitor.Enter(System.Object)

It gets tricky from this point on with Windbg, as one cannot easily identify the lock identities, which would have helped.  A deadlock means one thread is waiting for a lock to be released while itself holding a lock which another thread is waiting on.

So what can we make from the above?

It starts with thread 0x548 acquiring a lock on its call to RedisClient.Subscribe; let’s call this lock _redisMessageBusLock, as indicated below:

000000fc0353ce58 00007ff9b0540c6a [HelperMethodFrame_1OBJ: 000000fc0353ce58] System.Threading.Monitor.Enter(System.Object)
000000fc0353cf50 00007ff948ea067f Phreesia.Redis.Client.RedisClient.CheckForConnect()
000000fc0353cfc0 00007ff948ea05f0 Phreesia.Redis.Client.RedisClient.get_Subscriber()
000000fc0353cff0 00007ff948ea0540 Phreesia.Redis.Client.RedisClient.Subscribe(System.String, System.Action`2)   // we acquire _redisMessageBus lock here

Simultaneously, thread 0xad8 had first attempted to check the Redis connection state in its call to CheckForConnect().

Servicing this call requires acquiring a lock, which we will call _redisClientLock.  We acquire the lock, attempt to fix the Redis connection if there is a problem, raise an event when the connection has been re-established, then release the lock.

In this case, there was an issue with the Redis connection, so we acquired _redisClientLock, fixed the connection, and then raised the OnRedisConnectionReestablished event while still holding _redisClientLock.

000000fc0071cf28 00007ff9b0540c6a [HelperMethodFrame: 000000fc0071cf28] System.Threading.Monitor.Enter(System.Object)    // we are waiting for redisMessageBusLock to be released
000000fc0071d020 00007ff9491de236 Phreesia.Redis.MessageBus.RedisMessageBus.OnRedisConnectionReestablished(System.Object, System.EventArgs)
000000fc0071d110 00007ff947942cd3 [MulticastFrame: 000000fc0071d110] System.EventHandler`1[[System.__Canon, mscorlib]].Invoke(System.Object, System.__Canon)
000000fc0071d170 00007ff9489c6485 Phreesia.Redis.Client.RedisClient.InitializeConnection()
000000fc0071d1c0 00007ff948ea06e7 Phreesia.Redis.Client.RedisClient.CheckForConnect()     // we acquired redisClientLock here

Since the event handling is synchronous, the same thread which raises the event handles it.  However, part of handling the event requires acquiring _redisMessageBusLock, which was already acquired by thread 0x548.

Based on the above, our lock state looks like this:

Thread       LockAcquired           LockWaitingFor          
0xad8        _redisClientLock        _redisMessageBusLock     
0x548        _redisMessageBusLock    _redisClientLock

which clearly demonstrates our deadlock.
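The lock-order inversion can be reduced to a minimal sketch. The class, method, and lock names below are our illustrative stand-ins for the call paths seen in the dump, not the actual production code:

```csharp
// Thread 0x548's path takes the message-bus lock, then needs the client lock.
// Thread 0xad8's path takes the client lock, then needs the message-bus lock.
// Two threads running these paths concurrently can deadlock.
public class RedisDeadlockSketch
{
    private readonly object _redisMessageBusLock = new object();
    private readonly object _redisClientLock = new object();

    // Path of thread 0x548: Subscribe -> CheckForConnect
    public void Subscribe()
    {
        lock (_redisMessageBusLock)
        {
            lock (_redisClientLock)
            {
                // CheckForConnect()
            }
        }
    }

    // Path of thread 0xad8: CheckForConnect -> connection-reestablished event
    public void CheckForConnect()
    {
        lock (_redisClientLock)
        {
            lock (_redisMessageBusLock)
            {
                // OnRedisConnectionReestablished handler touches the message bus
            }
        }
    }
}
```

If each thread acquires its first lock before the other releases, both block forever on the second acquisition. Acquiring the two locks in a single, consistent order in both paths removes the deadlock.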

Happy debugging.

C# volatile demystified - my take

According to MSDN, a thread is the basic operating system entity used to execute a given task defined by a set of instructions. A thread therefore has its own context, which includes all the information it needs to resume execution, such as a set of CPU registers and a stack of local variables, all in the address space of the thread’s host process.

Consider the following class:

public class EncryptionService
{
    public static readonly EncryptionService Instance = new EncryptionService();
    private EncryptionService() { }
    private int _counter = 0;

    public int Encrypt(string input)
    {
        // run some encryption algorithm
        return ++_counter;
    }
}

This class is a singleton because it has a private constructor and you can only access the single Instance. Since there is only a single instance of EncryptionService, there is a chance that multiple threads could be running Encrypt and updating the instance field _counter concurrently.  In other words, the class is not thread safe.

Now, recall the thread context we talked about.  When a thread is created, the CLR allocates it a 1 MB stack, at least on x86 CPU architectures.

The stack space is used for passing arguments to a method and for holding local variables defined in the method. Before executing a method, the CLR executes prologue code [CLR via C#] that initializes the method, such as establishing its execution and return addresses and allocating memory for its local variables on the thread’s stack, which is part of the thread’s execution context. As the thread executes more methods, its stack fills up with local variables from these methods, until each method’s epilogue code runs and clears the variables that have gone out of scope.

While the unit of execution is a thread, the physical entity which executes code, which has been translated to machine instructions, is the CPU.  Naturally, this means that the variables defined on the thread’s stack may end up in the CPU cache.  When a thread is executing code on a specific CPU, its variables are therefore cached in this particular CPU’s cache.

So what exactly happens when multiple threads execute a method on a single instance of our EncryptionService?  We first need to understand how the above C# program is converted to the machine instructions which the CPU executes.

The Microsoft C# compiler generates Microsoft Intermediate Language (MSIL), which is what is stored in the assembly.  When the CLR loads this assembly and attempts to invoke the Encrypt method, it Just-in-Time (JIT) compiles the code to CPU-specific machine instructions, storing these at a location in memory, at which point execution transfers to the memory location where the compiled Encrypt method is saved.

When the executing thread invokes Encrypt, it encounters the field _counter stored at a certain memory location.  This value has to be loaded from its memory location into a CPU register.  If this memory location was not already in any of the CPU’s caches, a cache miss occurs, and this data, along with surrounding bytes as defined by the cache line size, is retrieved from memory and saved in the CPU cache, then loaded into the CPU register.  The CPU then updates the register value, as part of the add instruction, and stores the changed value in the respective cache. For this new value to become visible to other processors, or threads, it needs to be written back to main memory.  Exactly when this happens is defined by what is known as the CPU write policy.

If you have two threads doing the same thing, it could happen that the values of _counter in each CPU’s cache are not consistent, since each CPU was working on its own local copy of the _counter variable in its registers/cache. Depending on your use case, this may not be a desirable outcome.
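This lost-update behavior is easy to observe with a small standalone sketch (not from the original post). On a multi-core machine, the final count is frequently below the expected total because concurrent read-modify-write increments overwrite each other:

```csharp
using System;
using System.Threading.Tasks;

public static class LostUpdateDemo
{
    public static int Run()
    {
        var counter = 0;
        // Four threads each perform 100,000 unsynchronized increments
        // of the shared variable.
        Parallel.For(0, 4, _ =>
        {
            for (var i = 0; i < 100_000; i++)
                counter++; // load, add, store: not atomic
        });
        return counter;
    }

    public static void Main()
    {
        // Expected 400000, but typically prints less, since increments
        // performed concurrently can clobber each other's results.
        Console.WriteLine(Run());
    }
}
```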

More on Static Variables

When you decorate a variable as static, interesting things start to happen.

A static field is declared as part of the type definition. What this effectively means is that a static field lives in the header block of the type object.

This means that the bytes backing these static fields are part of the type’s header block. And since there is just one type header block, there is only one copy of each static field in the entire application domain.

Since a static field is backed by a memory block defined as part of the type’s header, which is at a fixed location on the managed heap, several threads can potentially overwrite each other’s work.

This is a similar effect to having a single instance object being operated upon by multiple threads without any synchronization mechanism.

On Chip Cache Memory
As noted before, a static member is defined at a fixed location, in the type’s header block in memory.  This means that instructions executing on the CPU need to fetch this static variable from memory to write to or read from it.

To improve performance, today’s CPUs have cache memory: small, fast memory banks located on or very close to the CPU chip.  These are called level caches, based on how close they are to the CPU: there is an L1 cache, an L2 cache and an L3 cache, with the L1 cache resident on the CPU core itself, “L” standing for level.   This means that instead of accessing RAM via the memory bus, the CPU can simply refer to its cache to retrieve the value of a variable.

The first time a thread needs to read a value from memory, such as our static variable, the CPU will read this value and various surrounding bytes, also known as the cache line, and store this cache line in its on-chip cache.  You can imagine an on-chip cache as simply a dictionary of key/value pairs of cache lines, keyed on memory address or address range.  These address keys are known as tags, and each value of a key-value pair is a cache entry comprising the actual data and some sort of flag indicating whether this field has been updated since the last time it was read from memory.


When the value needs to be updated, the CPU updates its cache instead of writing directly to the static field in memory. Periodically, the CPU then flushes modified cache content out to memory (RAM).

Static Variables Concurrency
With all of this in mind, let us examine what happens with our static variable within a multi-threaded, multi-core execution context.

public class EncryptionService
{
    public static readonly EncryptionService Instance = new EncryptionService();
    private EncryptionService() { }
    private int _counter = 0;

    public int Encrypt(string input)
    {
        // run some encryption algorithm
        return ++_counter;
    }

    public int Decrypt(string input)
    {
        // run some decryption algorithm
        return ++_counter;
    }
}

Say thread A, running on CPU A, is about to execute Encrypt, so it reads the bytes in memory for the _counter variable. This variable is now in CPU A’s cache.

Now a second thread, thread B, running on CPU B, attempts to execute Encrypt and copies the value of _counter from memory just before CPU A has had a chance to flush its update out to memory (recall this is a singleton). The content of CPU B’s cache is now stale: it does not reflect CPU A’s pending update to _counter.

This is a classic cache concurrency problem, and it happens because CPU flushes occur at an unpredictable time in the future, although some CPUs provide the ability to control this, such as the IA64 with its volatile read and write acquire/release semantics.

Volatile Read/Write
Recall that a cache concurrency problem happens when one CPU updates a value in its on-chip cache and, before it has flushed this change to memory, another CPU reads the previous value from memory.  So the newly flushed value is not reflected in the second CPU’s cache.

A volatile write means that a CPU writes a value from a register out to its cache and then flushes the entire content of the CPU cache to memory.   Interesting choice of word, volatile.

A volatile read means that a CPU reads a value from memory and invalidates its cache.  Invalidating the cache means that the cached value is no longer current and, therefore, the next subsequent read of this value will come from memory.  Hence the word volatile, which means liable to change rapidly or unpredictably.  The fact that these reads and writes cause rapid flushing and invalidation of the CPU cache is the reason why they are referred to as volatile operations, since CPU caches are meant to do otherwise.

There is also a memory fence, which flushes the cache content to memory and then invalidates the cache.

How does this prevent cache concurrency?

public class EncryptionService
{
    public static readonly EncryptionService Instance = new EncryptionService();
    private EncryptionService() { }
    private int _counter = 0;

    public int Encrypt(string input)
    {
        // run some encryption algorithm
        var current = Thread.VolatileRead(ref _counter); // 1
        Thread.VolatileWrite(ref _counter, current + 1); // 2
        return _counter;
    }

    public int Decrypt(string input)
    {
        // run some decryption algorithm
        var current = _counter;
        _counter = current + 1;
        return _counter;
    }
}

With these changes, this is what happens now.

When thread A on CPU A executes line 1, it performs a volatile read, meaning that it reads the current value of _counter from memory and invalidates its cache, so that the next subsequent read of this value comes from memory.  If thread B now modifies that same memory location, then, since the write is volatile, CPU B will also flush the cache line containing this variable out to memory.

In C#, this is what the methods Thread.VolatileRead and Thread.VolatileWrite do.

C# volatile To The Rescue

All of the above is nice but cumbersome to use, so the C# team provided a simple keyword, volatile, that simply wraps reading and writing of a variable in a volatile read/write.

public class EncryptionService
{
    public static readonly EncryptionService Instance = new EncryptionService();
    private EncryptionService() { }
    private volatile int _counter = 0;

    public int Encrypt(string input)
    {
        // run some encryption algorithm
        var current = _counter;  // 1
        _counter = current + 1;  // 2
        return _counter;
    }

    public int Decrypt(string input)
    {
        // run some decryption algorithm
        var current = _counter;
        _counter = current + 1;
        return _counter;
    }
}

This means that a CPU does not cache this variable and reads/writes directly to memory. So while this solves the cache concurrency problem, there is still a problem, since multiple threads can attempt to read, modify and write this memory location at the same time.

To prevent this you have to use C# thread synchronization constructs, such as the lock statement or the Interlocked class.
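As one illustrative sketch (this is our example, not code from the original discussion), the counter increment can be made atomic with Interlocked, which performs the read-modify-write as a single operation with full memory-barrier semantics:

```csharp
using System.Threading;
using System.Threading.Tasks;

public class SafeCounter
{
    private int _counter;

    // The entire read-modify-write happens atomically; no updates are lost
    // and the new value is visible to all CPUs.
    public int Increment() => Interlocked.Increment(ref _counter);

    public int Value => Volatile.Read(ref _counter);
}

public static class Demo
{
    public static int Run()
    {
        var counter = new SafeCounter();
        // Four threads each increment 100,000 times.
        Parallel.For(0, 4, _ =>
        {
            for (var i = 0; i < 100_000; i++)
                counter.Increment();
        });
        return counter.Value; // always 400000
    }
}
```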

Note that this problem, while not the same as two threads writing to/reading from the same memory location, manifests in the same way to the application. It is a problem resulting from running on a multi-core machine.




Sample Quiz on Delayed Execution of IEnumerable

A sample quiz to test your knowledge of delayed execution of IEnumerable<T>.

Assume you have a method as follows:

public IEnumerable<AppointmentSlot> GetOpenSlots(Dictionary<string, string> request)
{
    return _openSlots;
}

The above method returns a list of open slots, which could simply be a collection prepopulated when the class was constructed.

Say somewhere else in the code you call this method as follows:

public RelayCommand GetOpenAppointmentSlotsCmd
{
    get
    {
        return new RelayCommand(o => true, o =>
            OpenAppointmentSlots = _resourcesServiceClient
                .GetOpenSlots(request)
                .Select(s => new AppointmentSlotViewModel(s)));
    }
}

Where OpenAppointmentSlots is a property of type IEnumerable<T>, and T is of type AppointmentSlotViewModel, which in simple form could look something like this:

public class AppointmentSlotViewModel : ViewModelBase
{
    public AppointmentSlot AppointmentSlot { get; }

    public AppointmentSlotViewModel(AppointmentSlot slot)
    {
        AppointmentSlot = slot;
    }

    public DateTime StartsAt => AppointmentSlot.StartsAt;
    public int DurationInMinutes => AppointmentSlot.DurationInMinutes;
    public string ProviderId => AppointmentSlot.ProviderId;

    public RelayCommand BookAppointmentSlot
    {
        get { return new RelayCommand(o => NotBooked, o => NotBooked = false); }
    }

    private bool _notBooked = true;
    public bool NotBooked
    {
        get { return _notBooked; }
        set { SetField(ref _notBooked, value, "NotBooked"); }
    }
}

Further assume you had the list of OpenAppointmentSlots bound to a ListView, with a DataTemplate which manipulated the NotBooked property of AppointmentSlotViewModel.

Now, say somewhere else in the code, perhaps in the same view model which gets the list of open slots, you attempt to submit these selected open slots for booking as follows:

public RelayCommand BookAppointmentCmd
{
    get
    {
        return new RelayCommand(o => true, o =>
        {
            var booked = OpenAppointmentSlots.Where(s => !s.NotBooked).Select(nb => nb.AppointmentSlot);
            BookedAppointmentStatus = _resourcesServiceClient.BookAppointments(booked);
        });
    }
}

After manipulating the UI such that you now have two slots booked, what do you suppose will be the value of the variable called booked in the above snippet, and why?
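The principle the quiz is probing can be seen in a minimal, standalone sketch (our own example, separate from the quiz code): a LINQ query over an IEnumerable is not executed where it is defined, but where it is enumerated.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredExecutionDemo
{
    public static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // Where() only defines the query; nothing is evaluated yet.
        var evens = numbers.Where(n => n % 2 == 0);

        numbers.Add(4); // mutate the source AFTER defining the query

        // The query executes now, against the current state of the list.
        Console.WriteLine(evens.Count()); // prints 2 (for 2 and 4)
    }
}
```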


Yet another practical pocket guide on writing clean code

Being a big proponent of crafting beautiful, robust and maintainable code, I have read several books and articles on the subject.  One of my favorite resources is the book “Clean Code” by Robert C. Martin.

I would propose that such a resource be readily available in the company library, and would even go as far as advocating that every software engineer joining the company either have read this book or be required to read it as part of the on-boarding process.

In summary, what is clean code?  In my own words: code which mostly abides by the SOLID principles of software design.

Methods should have a single responsibility.  It is better to have a class with many small methods than a class with a small number of large methods.

Methods should not be overly long. A well-known and acceptable measure is that a method should not span the entire code editor space in Visual Studio when viewed on a 15″ laptop monitor.  For example, this is a long method:

On the other hand, this is a nice, short and terse method:

Methods should return early if possible.  This avoids deeply nested ifs.  For example, consider the following method:

   private void InitializeActionMethods()
   {
       if (_configurationManager.Configuration == null)
       {
           Logger.Warn("Some configuration is not defined.");
           return;
       }
       // continue initialization
   }

We fail fast and early.  In contrast, we could have written the code like this:

   private void InitializeActionMethods()
   {
       if (_configurationManager.Configuration != null)
       {
           // continue initialization, one level deeper
       }
   }

This creates a code base with too many nested ifs, which is hard to read and maintain.

Method names and variables should clearly indicate purpose.  I often say code is a story; write code as if you are writing a story.  Books with shorter paragraphs are more engaging than books with longer paragraphs.  I often see developers naming variables using acronyms instead of taking the time to craft descriptive variable names. Again, if code is a story, we need to clearly identify the characters.

Entities themselves should have a single responsibility.  This one is also easy to violate.  I have seen some very large and weird-looking classes over the years.  I have also seen classes that are almost impossible to refactor and unit test, composed of collections of large, deeply nested methods with a large number of inter-dependencies.  Keep classes small.  I have told my devs that it is better to have a code base with thousands of small entities than one with a small number of large entities.  The former system, if well organized, is easier to reason about and maintain, and is more modular and robust.

Entities should have dependencies passed to them.  The terms coined for this are dependency injection and inversion of control.  I always get confused here, but the idea is that for a factory to construct a car, it needs to have all of its dependent bits, such as the assembly line, etc.  These must be explicit and defined up front.

Unit tests, unit tests and more unit tests. I cannot emphasize this enough, but every component in the system should have an associated, concise unit test.  There are well-documented strategies for crafting awesome unit tests, but they should abide by the AAA principle of Arrange, Act and Assert.  Google this. Also, make these tests very easy to follow.  All dependencies should be arranged or created up front.  If you are resolving entities from some container somewhere, which includes configuring some sort of logger, which requires some additional piece of configuration somewhere, you probably need to step back and rethink your tests and class design.
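An AAA-style test can be sketched as follows. The PriceCalculator class here is a hypothetical example of ours, invented purely to illustrate the Arrange, Act, Assert structure:

```csharp
using NUnit.Framework;

// Hypothetical class under test, for illustration only.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent) =>
        price - price * percent / 100m;
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_ReducesPriceByPercentage()
    {
        // Arrange: create the class under test and its inputs up front.
        var calculator = new PriceCalculator();

        // Act: perform exactly one action.
        var result = calculator.ApplyDiscount(200m, 10m);

        // Assert: verify the single observable outcome.
        Assert.AreEqual(180m, result);
    }
}
```

Notice there is no container, no logger and no configuration to set up: all dependencies are visible in the Arrange step.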

Code should be closed for modification and open for extension.  As stated in the Open/Closed Principle, code should be easy to extend but closed for modification.  This is a tough one, but think of it this way: if you find yourself writing code with long switch statements, then it is time to sit back and think of some patterns to use.

Code should be robust against anomalies, but at the same time need not be overly micro-optimized.  Beautiful code is easy on the eye, easy on the mind, free-flowing, yet robust against extremities.  This includes excellent exception handling and logging.

Happy Coding.