A Buffered Dual-Access-Mode Scheme Designed for Low-Power Highly-Associative Caches

Yul Chu, Marven Calagos
DOI: 10.4018/jertcs.2013040103

Abstract

This paper proposes a buffered dual-access-mode cache to reduce power consumption in highly-associative caches for modern embedded systems. The proposed scheme combines an MRU (most recently used) buffer table with a single cache structure to implement two access modes: phased mode and way-prediction mode. The scheme achieves better access time and lower power consumption than two popular low-power designs, the phased cache and the way-prediction cache. The authors evaluated the proposed scheme with the CACTI and SimpleScalar simulators, using SPEC benchmark programs. The experimental results show that the proposed scheme improves EDP (energy-delay product) by up to 40% for the instruction cache and up to 42% for the data cache compared to the way-prediction cache, which in turn performs better than the phased cache.
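For reference, EDP denotes the standard energy-delay product metric, which penalizes both energy and latency so that lower is better:

\mathrm{EDP} = E \times D

where \(E\) is the energy consumed and \(D\) is the corresponding delay; whether these are measured per access or over a whole benchmark run depends on the evaluation setup.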
Article Preview

On-chip power consumption in memories becomes a challenge with the use of deep submicron technologies. Alipour et al. (2011) explore the design space for memory architecture, helping chip designers find cache sizes that yield optimum power consumption and performance for embedded processors.

Set-associative caches are used to improve the cache hit rate, but they consume more energy than a direct-mapped cache because of wasted energy dissipation (Powell et al., 2001): regardless of the number of banks in a set, all banks are read in parallel even though only one bank holds the desired data on a cache hit.
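As a rough first-order illustration (our own notation, not the cited papers'): if a single bank costs \(E_{\mathrm{tag}}\) for a tag read and \(E_{\mathrm{data}}\) for a data read, a conventional \(n\)-way access reads every bank in parallel,

E_{\mathrm{conv}} \approx n\,(E_{\mathrm{tag}} + E_{\mathrm{data}})

even though a hit ultimately needs at most \(n\) tag reads and one data read, roughly \(n\,E_{\mathrm{tag}} + E_{\mathrm{data}}\). Closing this gap is exactly what the schemes below target.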

To resolve the energy issue, Hasegawa et al. (1995) proposed a low-power set-associative cache scheme, now commonly referred to as the phased cache. In a phased cache, all the tags are accessed in the first phase; if one tag matches the reference address, only the data block in the matching bank is accessed in the second phase. The basic idea is to avoid unnecessary data-array accesses and thereby reduce power consumption. The disadvantage of a phased cache is poor performance: it needs more clock cycles to reach the desired data than other conventional caches.
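The following minimal C sketch illustrates the two-phase idea (the structure and names are our illustrative assumptions, not the authors' implementation):

```c
/* Phased n-way lookup: read all tags first, then at most one data block. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_WAYS  4
#define LINE_SIZE 64

typedef struct {
    uint32_t tag[NUM_WAYS];
    bool     valid[NUM_WAYS];
    uint8_t  data[NUM_WAYS][LINE_SIZE];
} cache_set_t;

const uint8_t *phased_lookup(const cache_set_t *set, uint32_t tag)
{
    /* Phase 1: compare all tags in the set. */
    for (int way = 0; way < NUM_WAYS; way++) {
        if (set->valid[way] && set->tag[way] == tag)
            /* Phase 2: read the single matching data block,
             * avoiding the n-1 wasted data-array reads. */
            return set->data[way];
    }
    return NULL;  /* cache miss */
}
```

The serialization of the two phases is what costs the extra clock cycles noted above.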

To retain the power savings of the phased cache while improving its latency, Inoue et al. (1999) proposed a low-power set-associative cache scheme called the way-prediction cache. An MRU (most recently used) policy is commonly used to predict which one of the n banks to access. If the prediction is correct, the tag and data block are accessed in a single cycle, with speed and power savings comparable to a direct-mapped cache. If the prediction is wrong, the remaining banks are accessed in parallel during the next cycle. The performance and power efficiency of a way-prediction cache therefore depend heavily on the accuracy of the way-prediction algorithm used.
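A minimal sketch of MRU-based way prediction, reusing cache_set_t and NUM_WAYS from the phased-lookup sketch above (again illustrative, not the authors' code):

```c
/* Per-set state for way prediction: the cache set plus an MRU way index. */
typedef struct {
    cache_set_t set;
    int         mru_way;  /* way that hit most recently in this set */
} predicted_set_t;

const uint8_t *way_predict_lookup(predicted_set_t *ps, uint32_t tag)
{
    /* Cycle 1: probe only the predicted (MRU) way, tag and data together,
     * like a direct-mapped access. */
    int w = ps->mru_way;
    if (ps->set.valid[w] && ps->set.tag[w] == tag)
        return ps->set.data[w];

    /* Cycle 2 (mispredict): probe the remaining ways; in hardware these
     * probes happen in parallel, modeled here as a loop. */
    for (int way = 0; way < NUM_WAYS; way++) {
        if (way == w)
            continue;
        if (ps->set.valid[way] && ps->set.tag[way] == tag) {
            ps->mru_way = way;  /* train the predictor */
            return ps->set.data[way];
        }
    }
    return NULL;  /* cache miss */
}
```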

Chung et al. (2008) propose a pipeline change for way determination: an early tag lookup stage, placed between the branch prediction and fetch stages, determines the next way to be accessed. This method preserves the prediction accuracy and hit rate of the original way-prediction cache while reducing power consumption. Chung et al. (2008) did not evaluate the scheme for data caches, since the early tag lookup stage was proposed for instruction caches only.
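One way to picture the early tag lookup stage, again reusing the types above (a speculative sketch of the mechanism as described, not Chung et al.'s design): the tag comparison for the next fetch address is performed ahead of the fetch stage, so the fetch itself reads a single, already-known way.

```c
/* Result of the early tag lookup performed before the fetch stage. */
typedef struct {
    int  way;  /* way resolved for the next fetch address, -1 if none */
    bool hit;
} early_lookup_t;

early_lookup_t early_tag_lookup(const cache_set_t *set, uint32_t next_tag)
{
    early_lookup_t r = { .way = -1, .hit = false };
    for (int way = 0; way < NUM_WAYS; way++) {
        if (set->valid[way] && set->tag[way] == next_tag) {
            r.way = way;
            r.hit = true;
            break;
        }
    }
    /* On a hit, the fetch stage later reads only data[r.way]. */
    return r;
}
```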
