C++11 Multithreading - How to avoid race conditions

This article presents how to avoid the race condition problem described in the previous article.

As we already know, race conditions can occur when two or more threads try to execute the same part of code or use the same resource (e.g. a variable) at the same time. That can cause unexpected results. We saw that problem in the 'Race conditions' article, where two threads were able to modify the same variable, which caused a different program output every time we ran it.

In order to avoid that problem we should synchronize the operations of our threads using a multithreading mechanism called a mutex (short for mutual exclusion). A mutex is a program flow control lock which ensures that a part of code locked by it can be executed by only one thread at a time. Such a locked part of code is called a critical section.

When another thread tries to enter the critical section, it is blocked and has to wait until the previous thread unlocks the mutex of that critical section.

For a better understanding of that mechanism, let's take a look at the 'Race conditions' example code extended with mutexes to avoid the race condition problem. You can run that code a few times and you will see that its output is now always the same. That is because of the synchronization mechanism. Let's analyze our synchronization mechanism using mutexes.

In point I we declare the variable increment_mutex, which is our mutex used for synchronization.

Now take a look at points II and III. As you know from the previous article, our race condition is caused by incrementing the variable's value in two threads. We should put that incrementation into a critical section locked by our mutex increment_mutex.

The mutex type has the functions lock() and unlock(), which can mark the start point and end point of a critical section. We can call increment_mutex.lock() to start the critical section and increment_mutex.unlock() to finish it. That is enough to synchronize our incrementation. Notice the commented-out critical sections in points II and III.

However, a better way of creating a critical section in C++11 is the lock_guard mechanism. It uses the RAII idiom: the critical section starts where the lock_guard variable is created, and ends at the end of the scope in which that variable is defined (most often the nearest closing curly bracket '}', as in our case). The lock_guard object calls mutex.lock() in its constructor and mutex.unlock() in its destructor. Thanks to that we do not need to remember to close a critical section opened by locking a mutex. A critical section that is never closed could cause a deadlock, which will be described in one of the next articles.

Right now we have a correctly synchronized multithreaded program. The output is always the same, as expected. However, notice that our program now runs slower. Let's compare the running time of the previous application (containing race conditions) and the current, synchronized one using Linux's time command. Why is the synchronized application slower? The answer is simple: there is a high probability that two threads working in parallel will have to wait for each other before entering the critical section. As described above, when one thread tries to enter a critical section (mutex.lock()) that is locked by another thread, it needs to wait until that thread exits the critical section (mutex.unlock()). The sum of all these waits makes the application slower.

The code of the above application can be found, as usual, on our GitHub account here: https://github.com/xmementoit/CppAdventureExamples/tree/master/multithreading/raceConditionsAvoidance
