Design Space Exploration for Implementing a Software-Based Speculative Memory System

Kohei Fujisawa, Atsushi Nunome, Kiyoshi Shibayama, Hiroaki Hirata
Copyright: © 2018 | Pages: 13
DOI: 10.4018/IJSI.2018040104


To enlarge the opportunities for parallelizing a sequentially coded program, the authors have previously proposed speculative memory (SM). With SM, the parallel execution of a program can be started on the assumption that it does not violate the data dependencies in the program. When the SM system detects a violation, it recovers the computational state of the program and restarts the execution. In this article, the authors explore the design space for implementing a software-based SM system. They compare the possible choices from three viewpoints: (1) whether a waiting thread should suspend or busy-wait, (2) when a speculative thread should commit, and (3) which version of data a speculative thread should read. The results show that a busy-waiting system in which speculative threads commit early and read non-speculative values outperforms the alternatives.

1. Introduction

Shared-memory multiprocessors are commonplace in current computers, and multi-core processor chips are used not only in high-end servers but also in desktop, portable, and even embedded computers. To benefit from core multiplicity, application programmers have to write parallel programs. They need to expose the inherent parallelism in an algorithm and express it explicitly, for example, by using a multithreading library.

Many techniques for parallelizing sequentially coded programs have been developed (Zima, 1990). Most of them analyze the dependencies among the iterations of a loop and execute iterations in parallel only when it is assured that they have no dependencies on each other. In some simple cases, a compiler can automatically generate efficient code that executes a loop in parallel. However, when inter-iteration dependencies exist in only a small subset of the iterations and the remaining iterations are independent, a compiler cannot parallelize the loop. To parallelize such loops, we proposed a concept called speculative memory (SM) (Hirata, 2016).

Conventionally, speculative execution (Kaeli, 2005; Hirata, 1992; Akkary, 1998; Marcuello, 1999; Hammond, 2000; Vijaykumar, 2001; Steffan, 2005; Ohsawa, 2005; Hertzberg, 2011; Odaira, 2014; Shoji, 2016) of a small segment of a program is provided mainly by hardware mechanisms, with partial compiler support, and is invisible to programmers. With SM, in contrast, programmers can explicitly specify the speculative execution of loop iterations in their programs. The SM system creates a thread to execute each iteration of a loop. The results of a thread's execution are committed if the thread does not violate a dependency on other threads executing earlier iterations. If it does, the SM system forces the thread to abort and restart the iteration. With this abort capability, SM does not require an assurance that the loop is parallelizable. Consequently, SM gives programmers more opportunities to extract the parallelism of their programs.
