Reliability Aware AMS/RF Performance Optimization

Pietro M. Ferreira, Hao Cai, Lirida Naviner
DOI: 10.4018/978-1-4666-6627-6.ch002

Abstract

Reliability has become an important issue with the continuous scaling down of CMOS technology. The exploration of technology limits using classic performance optimization techniques leads to the best trade-off among area, power consumption, and speed. Nevertheless, such key characteristics are degraded under continuous use and stressful environments. Thus, circuit reliability emerges as a design criterion for AMS/RF performance optimization. Aiming at design for reliability, this chapter presents an overview of CMOS unreliability phenomena. Reliability-aware methodologies for circuit design, simulation, and optimization are reviewed. The authors focus in particular on large and complex systems, providing circuit design insights to achieve a reliability specification from system level to transistor level. They highlight the most sensitive building blocks in a continuous-time ΣΔ (CT-ΣΔ) modulator and demonstrate how performance is affected by unreliability phenomena. A system-level direct-conversion RF front-end design is described in a top-down approach. Electrical simulations are presented using a 65 nm CMOS technology.

Introduction

As modern integrated circuit (IC) technology moves toward the nanoscale, new reliability challenges have emerged. The next generation of analog mixed-signal (AMS) and radio frequency (RF) circuits will face an increasing failure rate throughout the circuit operation time. Such a drawback is responsible for a reduced circuit lifetime as the pay-off for increased IC performance. A new design challenge thus emerges and motivates the research field of reliability-aware AMS/RF performance optimization.

Exploiting the technology limits, classic design methodologies target the basic design criteria: die area, power consumption, and speed. The optimum is the design point at which the specified performance is achieved. Since elusive physical phenomena have emerged, designers have started to establish design margins in order to guarantee the specified performance. Such design techniques lead to a non-optimal circuit through exaggerated redundancy, overdesigned margins, and postulated design recommendations without any insight into the physical phenomena.

Increasing IC variability has proved large enough that many chip samples exhibit performance far from the specification, a phenomenon known as yield reduction. The yield can be defined as the ratio of the number of chip samples that meet the design specifications to the total number of chip samples in a complete production process. However, the yield concept cannot measure the number of chip samples that still meet the design specifications after continuous use under a known environmental condition.
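As a purely illustrative sketch (not taken from the chapter), the yield definition above can be turned into a Monte Carlo estimate over process variability; the amplifier-gain performance model, its spread, and the specification window below are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical performance model: amplifier gain spread around its
# nominal design point by process variability (all values assumed).
N_SAMPLES = 10_000
gain = rng.normal(loc=20.0, scale=1.5, size=N_SAMPLES)  # dB

# Assumed specification window: 18 dB <= gain <= 22 dB.
meets_spec = (gain >= 18.0) & (gain <= 22.0)

# Yield = chip samples meeting the specification / all chip samples.
print(f"Estimated yield: {meets_spec.mean():.1%}")
```

Note that this ratio is evaluated only at the end of the production process; it says nothing about how many samples still meet the specification after ageing, which is exactly the limitation discussed above.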

During IC operation, a number of physical phenomena may affect the circuit performance, generating transient faults. Harsh environments present a threat to ICs, because transistors are highly sensitive to alpha- and neutron-particle strikes, crosstalk, electrostatic discharge (ESD), and temperature variation. Nevertheless, the transient-fault concept applies only when performance degradation has a very low probability. If the performance degradation increases over time through a cumulative effect, we may define the circuit lifetime as the moment after which the circuit is no longer useful, since the specified performance can no longer be guaranteed.

IC ageing (also named wearout) is a cause of performance degradation under stressful environmental conditions over a period of time. The specified period of time introduces a time-varying notion into IC performance quality. If this period of time is zero, i.e. the moment the production process is completed, we measure the circuit yield. If the circuit performance quality drops out of the specification, the time at which this occurs is defined as the circuit lifetime. Combining the stressful environmental conditions and the circuit lifetime, reliability is defined as the ability of a circuit to conform to its specifications over a specified period of time and under specified conditions.

In order to evaluate the reliability of a circuit, we assume that the circuit is composed of statistically identical and independent components that were put into operation at the same time (t = 0). The empirical reliability of an IC can be defined according to

\hat{R}(t) = \frac{u(t)}{N} \qquad (1)

where u(t) represents how many of the N parts have not yet failed at time t. It can be noted that u(t) behaves as a decreasing step function. A direct application of the law of large numbers for N → ∞ yields that \hat{R}(t) converges to the reliability function R(t) (Birolini, 1994).
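As an illustration rather than chapter material, the following Python sketch estimates the empirical reliability of Equation (1) from simulated failure times; the exponential lifetime model with constant failure rate and all numerical values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Assumed lifetime model: exponential failures with constant rate LAM,
# so the true reliability is R(t) = exp(-LAM * t).
LAM = 1e-3          # failures per hour (hypothetical)
N = 5_000           # identical, independent parts put in operation at t = 0
failure_times = rng.exponential(scale=1.0 / LAM, size=N)

def empirical_reliability(t: float, failure_times: np.ndarray) -> float:
    """Equation (1): R_hat(t) = u(t) / N, with u(t) the parts still alive at t."""
    u_t = np.count_nonzero(failure_times > t)
    return u_t / failure_times.size

for t in (100.0, 500.0, 1000.0):
    print(f"t = {t:6.0f} h   R_hat = {empirical_reliability(t, failure_times):.3f}"
          f"   R = {np.exp(-LAM * t):.3f}")
```

Increasing N tightens the estimate around R(t), mirroring the law-of-large-numbers argument above.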

The empirical failure rate, commonly expressed in failures-in-time (FIT), is given by

\hat{\lambda}(t) = \frac{u(t) - u(t + \delta t)}{u(t)\, \delta t} \qquad (2)

which converges to the failure rate expressed by

\lambda(t) = -\frac{1}{R(t)} \frac{\mathrm{d}R(t)}{\mathrm{d}t} \qquad (3)

for N → ∞, δt → 0, and R(t) > 0 (Birolini, 1994). Considering that at t = 0 the circuit executes its functions perfectly, R(0) = 1. In this case, the reliability function can be defined as

R(t) = \exp\left( -\int_0^t \lambda(x)\, \mathrm{d}x \right) \qquad (4)
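Again as an illustration (not from the chapter), the sketch below applies Equation (2) to simulated failure times; for the assumed constant-rate exponential model, Equation (3) gives a constant λ(t), so the estimate should hover around that rate at any t.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Same assumed exponential lifetime model: constant failure rate LAM,
# so Equation (3) gives lambda(t) = LAM for all t.
LAM = 1e-3
N = 200_000
failure_times = rng.exponential(scale=1.0 / LAM, size=N)

def empirical_failure_rate(t: float, dt: float,
                           failure_times: np.ndarray) -> float:
    """Equation (2): lambda_hat(t) = (u(t) - u(t + dt)) / (u(t) * dt)."""
    u_t = np.count_nonzero(failure_times > t)
    u_t_dt = np.count_nonzero(failure_times > t + dt)
    return (u_t - u_t_dt) / (u_t * dt)

for t in (100.0, 500.0, 1000.0):
    lam_hat = empirical_failure_rate(t, dt=10.0, failure_times=failure_times)
    print(f"t = {t:6.0f} h   lambda_hat = {lam_hat:.2e}   lambda = {LAM:.2e}")
```

Shrinking δt while growing N drives the estimate toward λ(t), which is the convergence stated for Equations (2) and (3); integrating the recovered λ(t) and applying R(0) = 1 returns Equation (4).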
