Introduction
The recently emerged, data-oriented cellular standard Long Term Evolution Advanced (LTE-A), the most promising candidate for next-generation mobile communication networks, has been driven by the demand for higher capacity and data rates in mobile networks. Orthogonal Frequency Division Multiple Access (OFDMA) has been proposed as the technology to achieve these goals. It divides the available spectrum into a number of parallel orthogonal narrowband subcarriers through Orthogonal Frequency Division Multiplexing (OFDM) and allows the subcarriers to be dynamically assigned to different users at different time instances, thus providing higher data rates and increased spectral efficiency.
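To make the idea of dynamic subcarrier assignment concrete, the following minimal Python sketch assigns each orthogonal subcarrier, at each scheduling instant, to the user with the momentarily best channel quality. All parameters are hypothetical, and the best-quality scheduling rule is an illustrative choice, not something prescribed by the LTE-A specification:

```python
# Illustrative OFDMA scheduling sketch (hypothetical parameters, not
# LTE-A compliant): the band is split into orthogonal subcarriers, and
# at each scheduling instant every subcarrier is assigned to the user
# with the best channel quality on it.
import random

NUM_SUBCARRIERS = 12   # hypothetical narrowband subcarriers
NUM_USERS = 3
NUM_SLOTS = 4          # scheduling instants

random.seed(0)

for slot in range(NUM_SLOTS):
    allocation = []
    for sc in range(NUM_SUBCARRIERS):
        # Per-user channel quality on this subcarrier (random stand-in
        # for real channel-state feedback).
        quality = [random.random() for _ in range(NUM_USERS)]
        best_user = max(range(NUM_USERS), key=lambda u: quality[u])
        allocation.append(best_user)
    print(f"slot {slot}: subcarrier -> user map {allocation}")
```

Because each subcarrier is orthogonal to the others, reassigning it between users from one slot to the next costs no intra-cell interference, which is what makes this kind of fine-grained scheduling possible.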
The pursuit of spectral efficiency, however, can easily turn into a performance liability because of Co-Channel Interference (CCI), the interference that arises mainly between neighbouring areas using the same frequency. The situation is worse for users located near the cell edges: the combination of a weakened signal from the serving Base Station (BS) and the simultaneous use of the same subcarriers by users in adjacent cells can lead to major performance degradation.
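A back-of-the-envelope SINR calculation illustrates the cell-edge problem. The path-loss exponent, transmit powers and distances below are assumed values chosen only for illustration:

```python
import math

def sinr_db(signal_mw, interference_mw, noise_mw=1e-9):
    """SINR in dB: desired power over co-channel interference plus noise."""
    return 10 * math.log10(signal_mw / (sum(interference_mw) + noise_mw))

# Hypothetical received powers (mW) from a simple path-loss model
# P_rx = P_tx / d**alpha, with alpha = 3.5 and P_tx = 1000 mW.
def rx_power(p_tx_mw, distance_m, alpha=3.5):
    return p_tx_mw / distance_m ** alpha

# Cell-centre user: close to its own BS, far from co-channel BSs.
centre = sinr_db(rx_power(1000, 100),
                 [rx_power(1000, 900), rx_power(1000, 1100)])
# Cell-edge user: weak desired signal, strong co-channel interference.
edge = sinr_db(rx_power(1000, 480),
               [rx_power(1000, 520), rx_power(1000, 600)])

print(f"cell-centre SINR ~ {centre:.1f} dB, cell-edge SINR ~ {edge:.1f} dB")
```

With these assumed numbers the cell-centre user enjoys an SINR of roughly 30 dB, while the cell-edge user drops to around 0 dB, since the desired and interfering BSs are at comparable distances.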
Several frequency reuse schemes have been proposed for OFDMA-based networks, each exhibiting different efficiency and complexity, in order to balance the conflicting goals of utilizing the limited radio spectrum efficiently and mitigating the effects of interference on network performance. Resource partitioning schemes are a common means of overcoming CCI problems, i.e., interference from transmitters located in neighbouring cells that use the same frequency bands as the reference cell of interest. The most common techniques are Soft Frequency Reuse (SFR) and Fractional Frequency Reuse (FFR). In all these schemes the whole frequency band is divided into several sub-bands, and the network is divided into areas that may utilize all or part of the sub-bands, depending on the scheme implemented.
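As an illustration of such partitioning, the sketch below implements one common strict-FFR layout; the four-sub-band split and the reuse-3 edge pattern are assumptions made for the example, not details given in the text. Every cell centre shares one sub-band with reuse factor 1, while cell edges rotate among the remaining three sub-bands so that adjacent edges do not collide:

```python
# Minimal sketch of a Fractional Frequency Reuse (FFR) partition
# (hypothetical sub-band layout): the band is split into sub-bands A-D.
# Every cell centre uses sub-band A with reuse factor 1; cell edges take
# one of B, C, D according to the cell's position in a reuse-3 pattern,
# so adjacent cell edges never share a sub-band.
SUB_BANDS = ["A", "B", "C", "D"]

def ffr_allocation(cell_id):
    """Return (centre_band, edge_band) for a cell in a reuse-3 cluster."""
    centre_band = "A"                       # shared by all cell centres
    edge_band = SUB_BANDS[1 + cell_id % 3]  # B, C or D, rotating per cell
    return centre_band, edge_band

for cell in range(6):
    centre, edge = ffr_allocation(cell)
    print(f"cell {cell}: centre -> {centre}, edge -> {edge}")
```

The trade-off appears directly in the code: cell-centre users, which are well protected by path loss, enjoy full reuse of sub-band A, while the vulnerable cell-edge users sacrifice spectrum (one sub-band out of three) in exchange for freedom from adjacent-cell CCI.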
Two-tier femtocell/macrocell environments consist of a conventional cellular network overlaid with short-range, self-configurable BSs called femtocells (Chandrasekhar & Andrews, 2008). Femtocells, or home BSs, use antennas with low transmission power and are typically connected to high-speed backhauls. They are an emerging cost-effective solution for both operators and users, aiming to expand mobile coverage and support higher data rates by bridging mobile handsets with broadband wired networks. Femtocells constitute an economical way to improve indoor coverage and achieve high network capacity, mainly because they are deployed in an unplanned manner with minimal radio-frequency (RF) planning, which results in cost savings for the operator. To deploy femtocells in a pre-existing macrocellular network, intelligent frequency band allocation that accounts for CCI between femtocells and conventional macrocells is required so that the two tiers can operate harmoniously in the same network (Lopez-Perez, Valcarce, Roche & Zhang, 2009).
The limiting factor in two-tier femtocell/macrocell networks is the interference between macrocells and femtocells, which can throttle capacity due to the near-far problem. A macrocell BS may receive interference from femtocell BSs, whereas a femtocell BS may receive interference from both macrocells and other femtocells; this can impair the spatial frequency reuse gain. Since femtocell BSs are deployed by users, and are therefore installed in an unplanned, random manner, interference management becomes more difficult (Son, Lee, Yi & Chong, 2011). Consequently, femtocells should use a frequency channel different from the one used by potentially nearby high-power macrocell users (MUs).
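The following sketch expresses that channel-separation rule directly; the function name, the measurement interface and all values are hypothetical stand-ins for whatever sensing mechanism a real femtocell would use:

```python
# Sketch of the channel-selection rule stated above (all names and
# values hypothetical): a newly installed femtocell measures the power
# it senses on each channel, excludes channels occupied by nearby
# macrocell users, and picks the quietest remaining one.
def pick_femto_channel(sensed_power_mw, macro_channels):
    """sensed_power_mw: dict channel -> measured power (mW);
    macro_channels: set of channels in use by nearby macrocell users."""
    candidates = {ch: p for ch, p in sensed_power_mw.items()
                  if ch not in macro_channels}
    if not candidates:
        # No macro-free channel exists: fall back to the
        # least-interfered channel overall.
        return min(sensed_power_mw, key=sensed_power_mw.get)
    return min(candidates, key=candidates.get)

sensed = {1: 2.5e-6, 2: 4.0e-8, 3: 9.1e-7, 4: 6.3e-8}
print(pick_femto_channel(sensed, macro_channels={2}))  # channel 4
```

Note that channel 2 is the quietest overall but is excluded because a nearby MU occupies it; choosing it would expose the low-power femtocell to exactly the near-far problem described above.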