
Race Condition and Thread Synchronization in .NET

In my previous article I discussed the basics of threads and thread pooling in C#. In this article I want to discuss race conditions and thread synchronization when working with multiple threads on the .NET platform using C#.

In the first part of the article I discuss what a race condition is and how it happens; in the later part I show how we can prevent it by synchronizing threads with the Monitor class and the lock keyword.

Race Condition

A race condition is a scenario in which multiple threads compete to execute the same piece of code, producing inconsistent and undesirable results. Please have a look at the code below.

    using System;
    using System.Threading;

    class Program
    {
        static void Main(string[] args)
        {
            // Start ten threads that all call the same static method.
            Thread[] localThreads = new Thread[10];
            for (int i = 0; i < localThreads.Length; i++)
            {
                localThreads[i] = new Thread(SharedResource.Sum);
                localThreads[i].Start();
            }

            // Wait for every thread to finish before reading the result.
            for (int i = 0; i < localThreads.Length; i++)
            {
                localThreads[i].Join();
            }

            Console.WriteLine("Total Sum " + SharedResource.SumField);
            Console.Read();
        }
    }

    public class SharedResource
    {
        public static int SumField { get; set; }

        public static void Sum()
        {
            // Increment the shared field; this is the code every thread runs.
            SumField++;
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " output is " + SumField);
        }
    }

In the above code example I am accessing a shared resource from multiple threads. Each call to the Sum() method increments the value of the SumField property. The expected outcome looks simple: if we execute the Sum() method 10 times using 10 threads, the value of SumField should be 10.

Let’s execute the above code; the result is shown in the figure below.

As we can see in the above figure, the output of the program is not at all consistent. Why did this happen?

As we know, on a single core the threads do not really run in parallel; the CPU executes them one after another using a time-slicing mechanism, which gives the false impression that the threads are executing in parallel. Only one thread executes at a time on a core.

When we compile the above code, it is first compiled into IL instructions by the C# compiler, and the IL instructions are in turn compiled into machine-specific instructions by the JIT compiler.

The following figure shows the JIT-compiled code for the Sum() function where it executes SumField++.

In the above figure we can see that in step 1 the current value of the field is copied into a CPU register, in step 2 the value in the register is incremented by one, and in step 3 the value in the register is copied back to the field.
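
In other words, SumField++ is not a single atomic operation. A rough equivalent of those three steps, written as plain C# rather than the actual JIT output, looks like this:

    // SumField++ behaves roughly like these three separate steps:
    int temp = SharedResource.SumField;   // step 1: read the current value
    temp = temp + 1;                      // step 2: increment the copy
    SharedResource.SumField = temp;       // step 3: write the new value back

A context switch can occur between any two of these steps, which is exactly what causes the problem described next.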

Now suppose thread 1 is executing the above code and has completed execution up to step 2 when, due to the CPU's time-slicing mechanism, execution is handed over to thread 2 and thread 1 is suspended. The value has been incremented in thread 1's execution context, but it has not yet been copied back to SumField. Since every thread has its own share of stack memory and its own copy of such intermediate values, thread 2 starts its execution with the original value, i.e. 0, because the first thread's result has not been written back, and carries out the same increment operation.

Meanwhile the first thread resumes execution and copies its incremented value into SumField, but thread 2 has already picked up the value of the field as 0.

Now both threads complete their operation and write the same value, i.e. 1, back to SumField.

From the previous discussion we can see that even after both threads have executed, the value of SumField is still 1, and one increment has been lost.
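
To make the lost update concrete, here is the interleaving described above written out step by step (only one of many possible schedules):

    // Thread 1: step 1 - reads SumField (0) into its register
    // Thread 1: step 2 - increments its register to 1
    // --- context switch: thread 1 is suspended before step 3 ---
    // Thread 2: step 1 - reads SumField, which is still 0
    // --- thread 1 resumes ---
    // Thread 1: step 3 - writes 1 back to SumField
    // Thread 2: step 2 - increments its register to 1
    // Thread 2: step 3 - writes 1 back to SumField
    // Result: SumField == 1, even though Sum() ran twice.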

This scenario depends entirely on the CPU's context switching and time slicing. There is a chance that the result matches our expectation, if the context switches happen to fall in favorable places, but that is not in the developer's hands. To prevent our program from producing wrong results, we should coordinate the threads using the thread synchronization techniques I will discuss next.

Thread Synchronization in .NET

The above race condition can be prevented using the thread synchronization support provided by the .NET Framework, starting with the Monitor.Enter() and Monitor.Exit() methods.

The code for the SharedResource class can be changed as shown below to acquire an exclusive lock.

    public class SharedResource
    {
        public static int SumField { get; set; }
        private static object _locker = new object();

        public static void Sum()
        {
            try
            {
                // Acquire an exclusive lock on _locker; any other thread
                // calling Sum() blocks here until the lock is released.
                Monitor.Enter(_locker);
                SumField++;
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " output is " + SumField);
            }
            finally
            {
                // Release the lock even if an exception was thrown above.
                Monitor.Exit(_locker);
            }
        }
    }

If we execute the above program now, we will consistently get the desired result, i.e. 10, in the output.

What the Monitor class does here is create gated access to the section of code it protects. Only a single thread at a time can execute the code inside the monitor's gate, which prevents multiple threads from working on the same resource at the same time.

The Monitor class can be used only with a reference type, because reference types have a sync block that lets threads check whether the protected portion of code is currently held by another thread. If a thread is operating inside that code, the other threads wait for the monitor to be exited; once it is free, another thread can enter the same code block by acquiring the lock.

Monitor.Enter also has an overload, Monitor.Enter(_locker, ref isLockTaken), which takes a bool parameter by reference. It lets us check whether the lock was actually acquired, for example when an exception such as OutOfMemoryException is thrown inside Enter or the thread is aborted. In that case isLockTaken remains false and the following finally block will not call Monitor.Exit:

            finally
            {
                if(isLockTaken)
                    Monitor.Exit(_locker);
            }
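
Putting this together, the Sum() method can be rewritten to use that overload. The following is a sketch of the usual pattern, reusing the _locker field and the isLockTaken variable name from above:

    public static void Sum()
    {
        bool isLockTaken = false;
        try
        {
            // This overload sets isLockTaken to true only if the lock
            // was really acquired.
            Monitor.Enter(_locker, ref isLockTaken);
            SumField++;
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " output is " + SumField);
        }
        finally
        {
            // Exit only when the lock was actually taken.
            if (isLockTaken)
                Monitor.Exit(_locker);
        }
    }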

Thread Synchronization using lock Keyword

In place of Monitor.Enter() and Monitor.Exit() we can simply use the lock keyword, as shown in the code below.

    public static void Sum()
    {
        lock (_locker)
        {
            SumField++;
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " output is " + SumField);
        }
    }

The above code is a syntactic shortcut for the previous code we wrote using the Monitor class.

If an exception is thrown inside the lock block, the lock is still released automatically, because the compiler expands the lock statement into a try/finally block that calls Monitor.Exit.
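
In fact, a lock (_locker) { ... } block compiles into roughly the same pattern we wrote by hand with the Monitor overload. The sketch below approximates what the compiler generates, not the exact emitted code:

    // Approximate expansion of lock (_locker) { SumField++; ... }
    bool lockTaken = false;
    try
    {
        Monitor.Enter(_locker, ref lockTaken);
        SumField++;
        Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " output is " + SumField);
    }
    finally
    {
        if (lockTaken)
            Monitor.Exit(_locker);
    }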

Conclusion

In this article I have discussed the race condition and how to prevent it using thread synchronization on the .NET platform with the Monitor class and the lock keyword in C#.

I hope this helps you understand these concepts in C#.
