Cost Evaluation of Synchronization Algorithms for Multicore Architectures

Masoud Hemmatpour (Politecnico di Torino, Italy), Renato Ferrero (Politecnico di Torino, Italy), Filippo Gandino (Politecnico di Torino, Italy), Bartolomeo Montrucchio (Politecnico di Torino, Italy) and Maurizio Rebaudengo (Politecnico di Torino, Italy)
Copyright: © 2018 |Pages: 15
DOI: 10.4018/978-1-5225-2255-3.ch346


In a multicore environment, the synchronization among threads and processes is a major concern. Since synchronization mechanisms strongly affect the performance of multithreaded algorithms, the selection of an effective synchronization approach is critical for multicore environments. In this chapter, the cost of the main existing synchronization techniques is estimated. The investigation covers both hardware and software solutions. A comparative analysis highlights the benefits and drawbacks of the considered approaches. The results are intended as a useful aid for researchers and practitioners interested in the optimization of parallel algorithms.
Chapter Preview


When threads work simultaneously on a shared object, their synchronization must be managed properly; otherwise, the instructions of the different threads may interleave on the shared object incorrectly. For example, Figure 1 shows the program order of two threads working on the shared object counter (Silberschatz, 2006). Since one thread increments the counter and the other decrements it, the counter is expected to end with its initial value. However, as Figure 1 illustrates, there is a possible execution order of the instructions that leads to an incorrect result.

Figure 1. Incorrect execution order of the instructions

Synchronization mechanisms are used to avoid such problematic interleavings of instructions. The part of the code that accesses the shared object is called the critical section. The critical section should be protected by synchronization primitives to avoid concurrent access to the shared object.

Key Terms in this Chapter

Memory Barrier: An operation that prevents the processor or compiler from reordering memory instructions across it.

Multicore Architecture: A processor with two or more cores, i.e., independent processing units.

Performance: Number of operations completed per unit of time.

Race Condition: A situation in which more than one thread attempts to read and write a shared object concurrently, so that the result depends on the unpredictable order of execution.

Synchronization: A technique for coordinating threads or processes so that they execute in an appropriate order.

Spinning: The act of repeatedly querying (or in some cases modifying) an object until it reaches the desired state, before entering the critical section.

Critical Section: The part of the code that accesses the shared object.
