Software-Based Self-Test of Embedded Microprocessors

Paolo Bernardi, Michelangelo Grosso, Ernesto Sánchez, Matteo Sonza Reorda
Copyright: © 2011 | Pages: 22
DOI: 10.4018/978-1-60960-212-3.ch015

Abstract

In recent years, the use of embedded microprocessors in complex SoCs has become common practice. Their test is often a challenging task, due to their complexity, to the strict constraints imposed by the environment and the application, and to the typical SoC design paradigm, in which cores (including microprocessors) are often provided by third parties and must therefore be treated as black boxes. An increasingly popular solution to this challenge is based on developing a suitable test program, forcing the processor to execute it, and then checking the produced results (Software-Based Self-Test, or SBST). The SBST methodology is particularly suitable for application both at the end of manufacturing and in the field, to detect the occurrence of faults caused by environmental stresses and intrinsic aging (e.g., negative bias temperature instability, hot carrier injection) in embedded systems. This chapter provides an overview of the main techniques proposed so far in the literature to effectively generate test programs, ranging from manual ad hoc techniques to automated and general ones. The chapter also gives some details about specific hardware modules that can be fruitfully included in a SoC to ease the test of the processor when the SBST technique is adopted.
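As an illustration of the SBST flow summarized above (develop a test program, execute it on the processor, check the produced results), the following is a minimal sketch in C, not taken from the chapter: it exercises the ALU with a few hand-picked operand pairs, folds every result into a running signature, and writes the signature together with a pass/fail flag to memory locations that an external tester or on-chip checker is assumed to monitor. The addresses, the operand patterns and the expected signature are purely hypothetical placeholders.

#include <stdint.h>

/* Hypothetical memory-mapped locations (placeholders, not from the chapter)
 * where the routine leaves its results for an external checker to read.     */
#define RESULT_ADDR  ((volatile uint32_t *)0x2000F000u)
#define STATUS_ADDR  ((volatile uint32_t *)0x2000F004u)

/* Placeholder value: in practice it is precomputed on a fault-free model.   */
#define EXPECTED_SIGNATURE 0x5A3C7E19u

/* Operand pairs chosen to toggle carries, borrows and sign bits in the ALU. */
static const uint32_t patterns[][2] = {
    {0x00000000u, 0xFFFFFFFFu},
    {0xAAAAAAAAu, 0x55555555u},
    {0x7FFFFFFFu, 0x00000001u},
    {0x80000000u, 0x80000000u},
};

int main(void)
{
    uint32_t sig = 0u;

    /* Exercise add, subtract, xor, shift and compare operations; fold every
     * result into the signature so that a single faulty bit propagates to
     * the final value observed at RESULT_ADDR.                              */
    for (unsigned i = 0; i < sizeof(patterns) / sizeof(patterns[0]); i++) {
        uint32_t a = patterns[i][0];
        uint32_t b = patterns[i][1];

        sig ^= a + b;
        sig ^= (a ^ b) << 1;
        sig ^= (a > b) ? (a - b) : (b - a);
        sig  = (sig << 3) | (sig >> 29);   /* rotate to reduce aliasing */
    }

    *RESULT_ADDR = sig;
    *STATUS_ADDR = (sig == EXPECTED_SIGNATURE) ? 1u : 0u;

    while (1) { }   /* test done: idle until the tester reads the results */
}

In a realistic flow, such a routine would be extended with sequences targeting the control unit, the register file and the pipeline, and its generation would rely on the manual or automated techniques surveyed in the chapter.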
Chapter Preview

Introduction

In recent years, market demand for higher computational performance in embedded devices has continuously increased across a wide range of application areas, from entertainment (smart phones, portable game consoles) to professional equipment (palmtops, digital cameras) to control systems in various fields (automotive, industry, telecommunications). Most of today's Systems-on-Chip (SoCs) include at least one processor core. Companies have been pushing design houses and semiconductor manufacturers to increase microprocessor speed and computational power while reducing costs and power consumption. The performance of processor and microprocessor cores has increased impressively thanks to both technological and architectural advances. Microprocessor cores follow the same trend as high-end microprocessors, and quite complex units can easily be found in modern SoCs.

From the technological point of view, process miniaturization allows logic densities of about 100 million transistors per square centimeter, and, considering the increasing pace of technology advances, which provide at least a 10% reduction in feature size every year, transistor densities are likely to exceed 140 million transistors per square centimeter in the near future. Additionally, VLSI circuits achieve clock rates beyond 1 GHz, and their power consumption decreases thanks to operating voltages below 1 volt. However, all these technology advancements impose new challenges on microprocessor testing: as device geometries shrink, deep-submicron delay defects become more prominent (Mak, 2004), increasing the need for at-speed tests; as core operating frequencies and/or the speed of I/O interfaces rise, more expensive external test equipment is required.
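As a back-of-the-envelope check, not taken from the chapter, assume transistor density scales inversely with the square of the feature size; a 10% yearly shrink (factor 0.9) then compounds as

\[
  D_n = \frac{D_0}{(0.9)^{2n}}
  \qquad\Longrightarrow\qquad
  D_2 \approx \frac{100 \text{ million/cm}^2}{0.9^{4}} \approx 152 \text{ million/cm}^2,
\]

which is consistent with the projection of densities beyond 140 million transistors per square centimeter within a couple of technology generations.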

The evolution of processor architectures has followed a fairly regular path. From the initial Von Neumann machines up to today's speculative or even hyper-threaded processors, new processor features have been enabled by advances in technology. Early processors were characterized by in-order sequential execution of instructions: the preliminary instruction fetch phase was very rigid, and parallel execution of instructions was not possible. Soon after, instruction sets evolved to allow extremely complex operations performing a series of multifaceted functions, such as the LOOP instruction in the x86 architecture. Further evolution led to processor architectures with a high level of parallelism in instruction execution. Moving from RISC processors to superscalar ones, the level of parallelism increased, providing significant performance advantages.

The increasing size and complexity of microprocessor architectures directly translates into more demanding test generation and application strategies. These problems are especially critical in the case of embedded microprocessors, whose widespread diffusion is raising the stakes in the test arena. Modern designs contain complex architectures that increase test complexity; indeed, pipelined and superscalar designs have been shown to be random-pattern resistant. The use of scan chains, although well established in industry for digital integrated circuits, has often proven inadequate, for a number of reasons. First of all, full scan may introduce excessive overhead in highly optimized, high-performance circuit areas such as data-flow pipelines (Bushard, 2006). In addition, scan shifting may introduce excessive power dissipation during test, which may impair test effectiveness (Wang, 1997). Scan test does not excite fault conditions as they occur in a real-life environment (power and ground stress, noise). At-speed delay testing is severely constrained by the features of the Automatic Test Equipment (ATE) employed, which is frequently outpaced by newly manufactured products (Speek, 2000). Conversely, due to the increased controllability achieved on the circuit, at-speed scan-based delay testing may identify as faulty some resources that would never affect the system's behavior (false paths) (Chen, 1993), thereby leading to yield loss.
