Optimization of Advanced Signal Processing Architectures for Detection of Signals Immersed in Noise

Marcos A. Funes, Matías N. Hadad, Patricio G. Donato, Daniel O. Carrica
DOI: 10.4018/978-1-5225-0299-9.ch008

Abstract

The use of Field Programmable Gate Array (FPGA) devices in the signal processing field has been on a constant rise since the beginning of the last decade. Within this field, the implementation of methods and techniques for the detection of coded signals immersed in noise deserves particular attention. This chapter focuses on a special type of coding known as Complementary Sequences, and on some of the coding schemes derived from them. These sequences have been employed in many different application fields, ranging from safety sensors and radar systems to communications and material characterization. Specifically, this chapter deals with issues related to algorithm improvement and to implementation on FPGA platforms, with particular emphasis on hardware resource efficiency and on the reliability of the whole processing scheme.

Introduction

During the last decade, the use of Field Programmable Gate Array (FPGA) technologies has grown from process control applications, mainly based on combinational and sequential schemes, to very complex architectures for advanced signal processing. Nowadays, FPGAs are used in a wide range of applications and markets that demand high-performance signal processing (García, 2014), including industrial (Momanson, 2011; Antoszczuk, 2014), medical (Zhou, 2014) and security fields, as well as communications (Sobaihi, 2012; Zhang, 2014), sensors (Pérez, 2015) and video, among others. These applications involve techniques ranging from digital filtering to nonlinear processes, through embedded processors and mathematical operations. One of the most frequent operations in signal processing, regardless of the application itself, is the detection and validation of digital signals immersed in noise and/or interfered with by other signal sources. In those cases, a typical solution is to combine coding techniques with the correlation function to improve detection performance. The codes used should, at least, be easily distinguishable from a time-shifted version of themselves and/or from a time-shifted version of any other code in the set. The first property is important for applications such as ranging systems, radar systems and spread-spectrum communication systems, while the second is key for simultaneous ranging to several targets, multiple-terminal system identification, and code-division multiple-access (CDMA) communication systems (Sarwate, 1980).
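The correlation-based detection just described can be illustrated with a minimal sketch: a known binary code is buried in noise at an unknown position, and cross-correlation with the code recovers that position. The particular code, noise level and offset below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known binary (+1/-1) code transmitted by the system; a 13-bit Barker
# code is used here purely as an illustrative choice.
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Received signal: the code embedded at an unknown offset in white noise.
offset = 40
signal = rng.normal(0.0, 0.5, size=120)
signal[offset:offset + code.size] += code

# Matched-filter detection: cross-correlate the received signal with the
# known code and take the position of the correlation peak.
corr = np.correlate(signal, code, mode="valid")
detected = int(np.argmax(corr))
```

The correlation peak at `detected` marks where the code starts; in practice the peak is compared against a threshold to decide whether the code is present at all.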

Many algorithms have been designed to generate codes, but not all of them perfectly meet the properties enunciated above. For example, the main constraint of Barker codes (Golomb, 1965) is that they only exist for certain lengths (none longer than thirteen), which limits the maximum signal-to-noise ratio (SNR) attainable by the system. Besides, there are no uncorrelated Barker codes (understanding uncorrelation as the property that allows two different codes to be identified even when they overlap in time), which restricts their use to single-user systems. Regarding binary pseudorandom sequences, their periodic cross-correlation is very low, but their aperiodic characteristics are not suitable, making them practical only for large lengths (De Marziani, 2007). The same applies to Walsh–Hadamard sequences, whose main advantage is being perfectly orthogonal (useful for multi-user systems). Yet the non-zero time-shifted versions of these codes are strongly correlated, requiring good synchronization between transmitter and receiver when different codes are used.
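The complementary sequences on which this chapter focuses avoid the sidelobe problem of the codes above by construction: the out-of-phase terms of the aperiodic autocorrelations of the two sequences in a pair cancel exactly when summed. A minimal sketch of this property, using a standard length-4 Golay complementary pair and NumPy (both illustrative choices, not taken from the chapter):

```python
import numpy as np

# A standard length-4 Golay complementary pair.
a = np.array([1, 1, 1, -1], dtype=float)
b = np.array([1, 1, -1, 1], dtype=float)

# Aperiodic autocorrelation of each sequence over all lags.
ra = np.correlate(a, a, mode="full")
rb = np.correlate(b, b, mode="full")

# Defining property: the sum is zero at every non-zero lag and
# equals 2*N at zero lag (the center of the "full" output).
total = ra + rb
```

Here `total` is `2 * a.size` at the zero-lag position and exactly zero everywhere else, which is what makes correlation peaks from these codes free of sidelobes.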
