
C++11 Multithreading - How to avoid race conditions

This article presents how to avoid the problem of race conditions described in the previous article.

As we already know, a race condition can occur when two or more threads try to execute the same part of code or use the same resource (e.g. a variable) at the same time. That can lead to unexpected results. We saw that problem in the 'Race conditions' article, where two threads were able to modify the same variable, which caused a different program output every time we ran it.

In order to avoid that problem we should synchronize the operations of our threads using a multithreading mechanism called a mutex (short for 'mutual exclusion'). A mutex is a kind of program flow control lock which ensures that a part of code locked by it can be executed by only one thread at a time. Such a locked part of code is called a critical section.

When another thread tries to enter the critical section, it is blocked and has to wait until the previous thread unlocks the mutex of that critical section.

For a better understanding of that mechanism, let's take a look at the 'Race conditions' example code, extended with mutexes to avoid the race condition problem. The output of that code is now always the same: you can run the example a few times and you will see that it no longer changes between runs. That is thanks to the synchronization mechanism. Let's analyze our synchronization mechanism using mutexes.

In point I we declare the variable increment_mutex, which is the mutex used for synchronization.

Now take a look at points II and III. As you know from the previous article, our race condition is caused by incrementing a variable's value in two threads. We should put that incrementation into a critical section locked by our mutex increment_mutex.

The mutex type has the functions lock() and unlock(), which can mark the start and end points of a critical section. We can invoke increment_mutex.lock() to start the critical section and increment_mutex.unlock() to finish it. That is enough to synchronize our incrementation mechanism. Notice the commented-out critical sections in points II and III.

However, a better method of creating a critical section in C++11 is the lock_guard mechanism. It uses the RAII idiom: the critical section starts at the place where the lock_guard variable is created, and ends at the end of the scope where that variable is defined (most often the nearest closing curly bracket '}', as in our case). The lock_guard object invokes mutex.lock() in its constructor and mutex.unlock() in its destructor. Thanks to that, we do not need to remember to close a critical section opened on a mutex. Not closing a critical section can cause a deadlock, which will be described in one of the next articles.

Right now we have a correctly synchronized multithreaded program. Its output is always the same, as expected. However, notice that the program now runs slower. Let's analyze the running time of our application using Linux's time command, for both the previous (race-prone) version and the current (synchronized) one. Why is the synchronized application slower? The answer is simple: there is a high probability that two threads working in parallel will have to wait for each other before entering the critical section. As described above, when one thread is about to enter a critical section (mutex.lock()) that is locked by another thread, it has to wait for that thread to exit the critical section (mutex.unlock()). The sum of all this waiting makes the application slower.

The code of the above application can be found, as usual, on our GitHub account here: https://github.com/xmementoit/CppAdventureExamples/tree/master/multithreading/raceConditionsAvoidance
