Managing Spectral and Energy Efficiency for Optimal 5G Networks

Jan 18, 2021

As 5G is deployed, two key factors that are usually in conflict with one another drive the process: system capacity (spectral efficiency) and system cost (energy efficiency). Spectral efficiency is the amount of data capacity that can be provided, typically measured in bits per second (bps) per Hertz of spectrum. Energy efficiency, on the other hand, describes the cost of operating the network for a given data capacity.

In previous generations of mobile technology, higher capacity was almost directly proportional to cost, because it required either building more base stations or adding spectral bandwidth to the network. While this was sustainable in the past, 5G (and 6G) requires 10 to 100 times the capacity of 4G networks, making the approach economically unfeasible: consumers of the increased capacity are unlikely to accept a 10- to 100-fold increase in their expenses. As shown in Figure 1, to advance mobile networks the industry must solve the problem of increasing capacity across the entire network while simultaneously decreasing the cost of running it.

Figure 1: 5G Business Case

What are the costs of operating a cellular network?

While the evolution from 2G to 4G has seen base stations become more energy efficient, the costs of increasing capacity through network densification have grown significantly (Figure 2). The biggest costs of setting up and running a cellular network come from the remote provision of air conditioning and site rental for the base stations (References 1 and 2). Air conditioning accounts for over 50% of the initial capital expenditures, with the remainder generated by base station equipment. The recurring operating expenditures (OPEX) show a similar trend, with electricity taking up almost 50% of the OPEX. Most of this electricity runs the remotely distributed air conditioning that cools the baseband processors, since the radio units are usually mounted near the antenna and are air-cooled. Only 7% of the OPEX goes to the actual transmission of energy. Site rent, roughly 30% of OPEX, scales with every additional base station, making it unattractive to simply deploy more of them. This is potentially a problem for 5G FR2, with its reduced cell sizes compared to 5G FR1.

Figure 2: Power Consumption in a Cellular Network

Analyzing power consumption makes it obvious that the distributed, remote deployment of air conditioning for the baseband processing portion of a base station causes the majority of the costs. China Mobile, for example, has suggested centralizing the baseband processing in the same way that data storage facilities are centralized for the internet. This has led to the development of C-RAN, which then evolved into O-RAN, where portions of the base station baseband become virtual machines within a centralized cloud. Even network appliances that have traditionally been separate, such as gateways, can be integrated into the cloud as virtual machines. Centralizing the baseband processing also centralizes the remote air conditioning, significantly decreasing both operating and capital expenditures. This architecture is independent of the radio access network (RAN) and can be used to control a mixed-generation cellular network.

The energy efficiency of the network is also influenced by the type of information being transmitted. Figure 2 shows that different types of data have different data packet to signaling packet ratios (DSR). A high DSR represents more efficient use of the channel for transmitting data. For example, text messages make up 60% of all network traffic but have a DSR between 1 and 3, while photos and videos are more energy efficient because they require fewer signaling packets. 5G FR1 addresses this problem with adjustable subcarrier spacing, which lets different types of data use the available channel capacity more efficiently.

What determines the spectral capacity of a network and how to increase it?  

Working independently, Ralph Hartley (1928) and later Claude Shannon (1948) developed a relatively simple equation that serves as a Moore's Law for the wireless industry: the Shannon-Hartley theorem. It defines the maximum amount of information that can be transmitted over a wireless channel, where the individual channel capacity depends on only two parameters: the channel bandwidth (BW) and the signal to noise ratio (SNR). While the capacity scales linearly with the channel bandwidth, it scales only as log2 of the signal to noise ratio:

C = Σj BWj · log2(1 + SNRj)

where the sum runs over the individual channels Cj in the network.

Based on the Shannon-Hartley theorem, there are four basic methods for increasing network capacity:

1. Increase the channel bandwidth (BW): 4G and 5G FR1 use carrier aggregation to increase the available signal bandwidth. In 5G FR2, wide-bandwidth mmWave frequencies are used to obtain larger data capacity.

2. Increase the number of channels (Cj): MIMO transmits on several channels at the same time by exploiting the multipath scattering inside the network. Like channel bandwidth, network capacity scales linearly with the number of channels. However, the upper limit is set by the correlation (or similarity) of the multipath within the network. 5G FR1 increases data rates by scaling MIMO up to 64 layers at the base station.

3. Increase the output power of the network (SNR): The SNR method has its limits, due to the presence of noise, the asymptotic log scaling of the SNR, and certain health and safety concerns related to high electromagnetic energy. This is why deploying femtocells in areas of poor coverage is a safer way to increase the SNR throughout the network. However, if too many omnidirectional femtocells are deployed in a single area, the interference between them creates an upper limit to the network's capacity gain.

4. Target the transmitted energy at a specific user: Rather than raising total output power, the energy efficiency of the network can be increased by directing energy to a specific user. This method is called "beamforming" and is a key technology for both 5G FR1 and FR2 base stations.
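These scaling behaviors can be checked numerically. The sketch below (plain Python with illustrative values, not figures from the article) compares doubling the bandwidth, the channel count, and the transmit power for a single link:

```python
import math

def shannon_capacity(bw_hz: float, snr_linear: float, channels: int = 1) -> float:
    """Shannon-Hartley capacity in bits per second: C = n * BW * log2(1 + SNR)."""
    return channels * bw_hz * math.log2(1 + snr_linear)

# Illustrative baseline: a 20 MHz channel at 20 dB SNR (linear SNR = 100).
baseline = shannon_capacity(bw_hz=20e6, snr_linear=100)
double_bw = shannon_capacity(bw_hz=40e6, snr_linear=100)              # linear gain
double_ch = shannon_capacity(bw_hz=20e6, snr_linear=100, channels=2)  # linear gain
double_pw = shannon_capacity(bw_hz=20e6, snr_linear=200)              # log2 gain only

print(f"baseline:        {baseline / 1e6:.1f} Mbps")
print(f"2x bandwidth:    {double_bw / baseline:.2f}x")
print(f"2x channels:     {double_ch / baseline:.2f}x")
print(f"2x power (SNR):  {double_pw / baseline:.2f}x")
```

Doubling bandwidth or channel count doubles capacity, while doubling transmit power only raises it by about 15% at this SNR, which is why methods 1 and 2 dominate in practice.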

Improving Energy Efficiency Through Beamforming

Traditionally, a cell in a cellular network is associated with a base station transmitting energy across a wide area, usually a 120-degree arc in front of the base station. Part of this energy is received by users within the base station's cell, but the vast majority is absorbed by the surroundings: buildings, people, trees, cars, and so on. This wasted power reduces energy efficiency and raises the operating expenditure of the network (Figure 3). If, using beamforming technology, the single base station antenna is replaced by 120 antennas directing the energy to individual users, the base station requires only 0.1% of the original output power (Reference 3). This reduction is theoretical, however; in practice the output power for the same capacity decreases only to 30% of the original, due to the efficiency and losses of the RF components within the base station.

Figure 3: Beamforming and Energy Efficiency

A beam can be formed in any direction by changing the phase differences between a set of periodically spaced antennas. An antenna array is usually spaced at half wavelengths, which relates the beam angle (θ) directly to the phase difference Δφ between adjacent antennas: θ = sin⁻¹(Δφ/π), with Δφ in radians. Although the beam focuses its energy in the desired direction, some energy also radiates in other directions (sidelobes and backlobes). This additional energy causes interference to other users inside the base station cell. The effects of this interference can be reduced either by placing adjacent users in the nulls of the main beam or by lowering the energy in the sidelobes by weighting the individual antennas with an amplitude distribution (tapering).
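The steering relation and the effect of amplitude tapering can be sketched numerically. The example below (NumPy; the 8-element half-wavelength array and the Hann taper are assumptions chosen purely for illustration) steers a beam to 30° and computes the resulting array factor:

```python
import numpy as np

N = 8                  # number of array elements (illustrative)
d_over_lambda = 0.5    # half-wavelength element spacing

# For d = lambda/2, the inter-element phase shift that steers the main
# beam to angle theta is dphi = pi * sin(theta), so theta = asin(dphi / pi).
dphi = np.pi * np.sin(np.deg2rad(30))            # steer to +30 degrees
theta_steer = np.rad2deg(np.arcsin(dphi / np.pi))

# Array factor over all observation angles, uniform vs. tapered amplitudes.
theta = np.deg2rad(np.linspace(-90, 90, 721))
n = np.arange(N)
phase = 2 * np.pi * d_over_lambda * np.sin(theta)[:, None] * n - dphi * n
uniform = np.abs(np.exp(1j * phase).sum(axis=1)) / N
taper = np.hanning(N)                            # illustrative amplitude weighting
tapered = np.abs((taper * np.exp(1j * phase)).sum(axis=1)) / taper.sum()

peak_angle = np.rad2deg(theta[np.argmax(uniform)])
print(f"steering angle: {theta_steer:.1f} deg, array-factor peak: {peak_angle:.1f} deg")
```

Plotting `uniform` and `tapered` against `theta` would show the taper trading a slightly wider main beam for much lower sidelobes, which is exactly the interference-reduction mechanism described above.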

There are three beamforming architectures, each with a direct impact on the energy efficiency of the base station and the UE (Reference 6):

Analog beamforming (ABF): Traditionally, beams are formed using attenuators and phase shifters in the analog RF circuit, where a single data stream is divided into separate paths. This method is convenient because only one RF chain (PA, LNA, filters, switch/circulator) is required. The disadvantage, however, is the loss caused by the cascaded phase shifters at high power.

Digital beamforming (DBF): This method uses a separate RF chain for each antenna element, after which the beam is "formed" by matrix-type operations in the baseband, where artificial amplitude and phase weighting is applied. This is the preferred method for frequencies below 7 GHz in 5G FR1, because the RF chain components are relatively inexpensive and MIMO and beamforming can be combined in a single antenna array. At higher frequencies, from 24 GHz upwards, the power amplifiers and ADCs consume considerable energy when built from standard CMOS components. The losses can be reduced by using compound semiconductors such as gallium arsenide and gallium nitride, but at higher cost.

Hybrid beamforming (HBF): By combining digital and analog beamforming, hybrid beamforming allows the flexibility of multiple radio transceivers as well as beamforming, while reducing the cost and losses of the beamforming unit (BFU). In HBF, each data stream has its own separate analog BFU with a set of M antennas; with N data streams there are N×M antennas. Using a selective beamformer such as a Butler matrix or a Rotman lens can reduce the analog BFU loss caused by phase shifters. One suggested architecture uses the digital BFU to steer the main beam direction, while the beam within the digital envelope is steered by the analog BFU (Reference 4).
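The "matrix-type operations" of digital beamforming can be sketched in a few lines. In the example below (NumPy; the array size, user angles, and steering-vector weights are illustrative assumptions, not values from the article), K data streams are mapped onto M antennas through a weight matrix with one beam per user:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 4  # antennas, simultaneous data streams (illustrative sizes)

# One steering-vector column per stream: each beam points at its own user.
angles = np.deg2rad([-40, -10, 15, 45])
n = np.arange(M)
W = np.exp(1j * np.pi * np.outer(n, np.sin(angles))) / np.sqrt(M)  # M x K weights

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # one symbol per stream
x = W @ s  # per-antenna baseband signal: each antenna sends a weighted sum

# The linear phase progression down each column of W forms one beam per user;
# in DBF this multiply happens in baseband, so each antenna needs its own RF chain.
print(x.shape)
```

In a hybrid architecture, part of this matrix (the per-stream phase progression) would instead be realized by the analog BFU, leaving only an N-column digital weighting in baseband.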

Theory of Optimal Networks: Spectral and Energy Efficiency

Combining C-RAN, MIMO, new spectrum, and beamforming increases 5G capacity while reducing costs compared to traditional and current cellular networks. The Shannon-Hartley theorem can be expanded to include the energy efficiency of the channel (Reference 2). Using the performance constraints of the base stations and the network, the jointly optimal spectral-energy efficiency can be calculated as 4 bps/Hz for 2G (GSM) and 8 bps/Hz for 4G (LTE). It is worth noting, though, that in real networks these values are often lower (around 4 bps/Hz for LTE).

Combining MIMO and digital beamforming in 5G FR1 can, theoretically, deliver over three times the capacity of an LTE network while reducing costs by a factor of 10, assuming eight transceivers per user with beamforming. While 5G FR1 has limited frequency spectrum available, 5G FR2 spectrum extends far above 24 GHz. The spectral efficiency of 5G FR2, assuming hybrid beamforming with eight transceivers per antenna array, is comparable to LTE at 10 bps/Hz, but with higher energy efficiency than LTE (Reference 4).

Figure 4: Optimized Networks: Spectral and Energy Efficiency

Results from Deployed Networks: Spectral and Energy Efficiency

A recently deployed heterogeneous network in China, based on both 4G and 5G FR1 macro base stations (Reference 5), illustrates that operator networks can achieve both spectral and energy efficiency using 5G massive MIMO, boosting data rates while lowering the energy cost per data packet compared to a traditional 4G network.

Type   | Transmit Power | Signal Bandwidth | Baseband Power | Radio Unit Power | MIMO Layers | Throughput
4G LTE | 40 W           | 20 MHz           | 150 W          | 950 W            | 8           | 120 Mbps
5G FR1 | 240 W          | 100 MHz          | 220 W          | 4077 W           | 64          | ~2 Gbps

Table 1: 4G LTE vs. 5G FR1 Macro Base Station Comparison

As shown in Table 1, comparing base station power consumption with throughput, the 5G FR1 energy consumption is nearly 4 times that of a 4G base station, due to the 64 separate RF chains with separate power amplifiers, and still falls well below the theoretical energy efficiency predictions in Figure 4. The data capacity (throughput), however, has increased nearly 16 times. This reflects the rise in spectral efficiency from 6 bps/Hz for a 4G LTE base station to 20 bps/Hz for a 5G FR1 base station (Figure 4). The resulting 5G FR1 network uses less power per transmitted bit than the 4G LTE network, delivering cost savings together with higher network capacity.
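The ratios quoted above follow directly from Table 1. The short sketch below (plain Python, taking total power as baseband plus radio unit power) recomputes the power ratio, throughput ratio, and the spectral and energy efficiency of each system:

```python
# Values from Table 1; total power = baseband power + radio unit power.
lte = {"bw_mhz": 20, "baseband_w": 150, "radio_w": 950, "throughput_mbps": 120}
nr = {"bw_mhz": 100, "baseband_w": 220, "radio_w": 4077, "throughput_mbps": 2000}

for name, bs in (("4G LTE", lte), ("5G FR1", nr)):
    total_w = bs["baseband_w"] + bs["radio_w"]
    spectral = bs["throughput_mbps"] / bs["bw_mhz"]  # Mbps/MHz = bps/Hz
    energy = total_w / bs["throughput_mbps"]         # watts per Mbps served
    print(f"{name}: {total_w} W total, {spectral:.0f} bps/Hz, {energy:.2f} W/Mbps")

power_ratio = (nr["baseband_w"] + nr["radio_w"]) / (lte["baseband_w"] + lte["radio_w"])
tput_ratio = nr["throughput_mbps"] / lte["throughput_mbps"]
print(f"power ratio ~{power_ratio:.1f}x, throughput ratio ~{tput_ratio:.1f}x")
```

The output reproduces the figures in the text: roughly 4 times the power for roughly 16 times the throughput, i.e. about a quarter of the power per transmitted bit.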


In conclusion, combining spectral and energy efficiency enables operators to deploy new networks that simultaneously boost capacity and lower operating expenditures. Future networks will combine the different solutions of FR1 and FR2 into single heterogeneous networks, with FR1 delivering high data rates in a wide-area network with in-building penetration while FR2 serves as the basis for data off-loading, hotspots, and extreme network densification. This deployment will not only affect consumers and equipment vendors but also has a fundamental impact on the entire test and measurement (T&M) industry.


  1. CMRI, "C-RAN: The Road Towards Green RAN," White Paper, Dec. 2013.
  2. C.-L. I, C. Rowell, et al., "Toward Green and Soft: A 5G Perspective," IEEE Communications Magazine, Feb. 2014.
  3. F. Rusek, et al., "Scaling Up MIMO: Opportunities and Challenges with Very Large Arrays," IEEE Signal Processing Magazine, Jan. 2013.
  4. S. Han, et al., "Large-Scale Antenna Systems with Hybrid Analog and Digital Beamforming for Millimeter Wave 5G," IEEE Communications Magazine, Jan. 2015.
  5. C.-L. I, et al., "Energy-Efficient 5G for a Greener Future," Nature Electronics, Vol. 3, Apr. 2020.
  6. "Antenna Array Testing" White Paper, Rohde & Schwarz 1MA286, 2016.

Contributed by

Rohde & Schwarz

Country: Germany