
C++11 Multithreading - How to avoid race conditions

This article shows how to avoid the race condition problem described in the previous article.

As we already know, a race condition can occur when two or more threads invoke the same part of code or use the same resource (e.g. a variable) at the same time, which can lead to unexpected results. We saw that problem in the 'Race conditions' article, where two threads were able to modify the same variable, causing the program to produce different output every time it was run.

In order to avoid that problem we should synchronize the operations of our threads using a multithreading mechanism called a mutex (short for mutual exclusion). A mutex is a kind of program flow control lock which ensures that a part of code locked by it can be executed by only one thread at a time. Such a locked part of code is called a critical section.

When another thread tries to enter the critical section, it is blocked and has to wait until the previous thread unlocks the mutex guarding that section.

For a better understanding of that mechanism, let's take the 'Race conditions' example code and add mutexes to avoid the race condition problem. You can run that code a few times and you will see that its output is now always the same. That is thanks to the synchronization mechanism. Let's analyze our synchronization mechanism using mutexes.

In point I we declare the variable increment_mutex, which is the mutex used for synchronization.

Now take a look at points II and III. As you know from the previous article, our race condition is caused by incrementing a variable from two threads. We should put that incrementation into a critical section locked by our mutex increment_mutex.

The mutex type has the functions lock() and unlock(), which can mark the start and end of a critical section: we invoke increment_mutex.lock() to enter the critical section and increment_mutex.unlock() to leave it. That alone is enough to synchronize our incrementation. Notice the commented critical sections in points II and III.

However, a better way of creating a critical section in C++11 is the lock_guard mechanism. It uses the RAII idiom: the critical section starts where the lock_guard variable is created and ends at the end of the scope in which that variable is defined (most often the nearest closing curly bracket '}', as in our case). The lock_guard object invokes mutex.lock() in its constructor and mutex.unlock() in its destructor. Thanks to that, we do not need to remember to close a critical section opened on the mutex ourselves. A critical section that is never closed can cause a deadlock, which will be described in one of the next articles.

We now have a correctly synchronized multithreaded program whose output is always the same, as expected. However, notice that our program runs slower now. Let's compare the running time of the previous (racy) application and the current (synchronized) one using Linux's time command. Why is the synchronized application slower? The answer is simple: there is a high probability that two threads working in parallel will have to wait for each other before entering the critical section. As described above, when one thread tries to enter a critical section (mutex.lock()) that is locked by another thread, it has to wait for that thread to exit the section (mutex.unlock()). The sum of those waits makes the application slower.

The code of the above application can be found, as usual, on our GitHub account here: https://github.com/xmementoit/CppAdventureExamples/tree/master/multithreading/raceConditionsAvoidance
