Lab 2 - Synchronization Elements in Pthreads
Introduction
In parallel programming, we can encounter situations where multiple parallel threads want to access the same resources simultaneously. For example, consider the following scenario. We have two threads (T0 and T1) with shared access to an integer variable initialized to 0. In the thread function, both T0 and T1 increment the variable by 2. Ideally, we would expect the value of the variable after our program's execution to be 4, since each of the two threads increments it once.
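A minimal sketch of this scenario in C with Pthreads is shown below (the variable name a and the function name increment are illustrative, not prescribed by the lab):

```c
#include <pthread.h>
#include <stdio.h>

int a = 0;                     /* shared variable, initialized to 0 */

/* Both T0 and T1 run this function: increment the shared variable by 2. */
void *increment(void *arg)
{
    a += 2;                    /* not atomic: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, increment, NULL);
    pthread_create(&t1, NULL, increment, NULL);

    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("a = %d\n", a);     /* we expect 4, but this is not guaranteed */
    return 0;
}
```

Compile with gcc -pthread; most runs will print 4, but, as explained below, 2 is also a possible outcome.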
In reality, the situation is not always that simple. If we were to translate the increment of an integer variable into assembly code, this operation might look like the following (in the example below, eax0 represents the eax register of thread T0, and eax1 represents the eax register of thread T1):
| T0 | T1 |
|---|---|
| load(a, eax0) | load(a, eax1) |
| eax0 = eax0 + 2 | eax1 = eax1 + 2 |
| write(a, eax0) | write(a, eax1) |
We can have the following scenario:
- T0 reads the value of a (0) into its own register eax0; at the same time, T1 reads the value of a (also 0) into its own register eax1
- T0 increments the value of eax0, which becomes 2
- T1 does the same, and eax1 also becomes 2
- T0 writes the value from eax0 into a, which becomes 2
- T1 writes the value from eax1 into a, which remains 2.
It can be observed that, depending on how threads T0 and T1 are scheduled, the result of the sequence above can be either 2 or 4. This is called a race condition: the result of the computation depends on the relative order of events that we do not control. The operation of incrementing a by 2 is not atomic, as it is composed of multiple operations that can interleave when the code runs on multiple threads.
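In practice, with only one increment per thread the unlucky interleaving is rarely observed. One way to make the lost updates visible is to repeat the increment many times in each thread; in the sketch below, the ITERATIONS value is an arbitrary choice, not part of the original example:

```c
#define ITERATIONS 1000000

/* Replacing the thread function from the sketch above with a loop:
   without synchronization, the final value of a is usually well
   below the expected 2 * 2 * ITERATIONS. */
void *increment(void *arg)
{
    for (int i = 0; i < ITERATIONS; i++)
        a += 2;                /* still a racy load-add-store sequence */
    return NULL;
}
```

Each lost update corresponds exactly to the interleaving described in the list above, repeated enough times for at least one to occur.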