The problem of testing amplifiers when close to compression

Oct 12, 2019

Signal and spectrum analyzers are vital tools for testing a wide range of electronic components and products. However, achieving accurate results for modulation quality measurements can be challenging. This is especially true for amplifiers, which are usually operated close to their compression limits for efficiency reasons. Fortunately, the cause of these challenges is relatively simple, and it can be overcome.

5G is everywhere right now and is set to be the next big thing, enabling faster data rates than ever before. Component designers are building their solutions and trying to extract the best possible performance from their devices. The key performance metrics in this pursuit of signal quality are error vector magnitude (EVM) and adjacent channel power (ACP), also called adjacent channel leakage ratio (ACLR).

The problem with amplifiers, and especially transmitters, is that for reasons of efficiency they are almost always operated close to their compression limits. At this point the amplifier is already entering its nonlinear region, which raises EVM above the desired level and consequently causes more bit errors.

Power amplifier designers need values to put on their component data sheets, preferably the best figures available, to imply the device is of high quality and to impress customers into choosing it for an OEM design. Consequently, designers often test under conditions chosen to maximize the EVM and efficiency figures, and the resulting measurements do not correspond to real operating conditions, which may include applying signals with digital predistortion or other linearization methods.

Reference vector quality is key for EVM accuracy

When measuring EVM in the device's nonlinear region with a vector signal analyzer, the reported EVM value is likely to be inaccurate or even outright wrong. The reason for the incorrect results is simple.

A reference vector is needed in order to calculate the error vector. Any vector signal analyzer, whether it is measuring a 5G signal, DVB-S or another standard, must calculate its reference vector from the signal applied to the analyzer. If the applied signal is degraded by noise or distortion, the quality of the reference vector will be poor, and so will the accuracy of the measured EVM.
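This effect can be reproduced numerically. The following sketch is purely illustrative and makes several simplifying assumptions: 16QAM stands in for the 256QAM discussed below, a tanh curve models the amplifier's compression, and the analyzer's reference estimation is reduced to nearest-point symbol decisions (a real analyzer's reference calculation is considerably more sophisticated).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 16QAM constellation, normalized to unit average symbol power
levels = np.array([-3.0, -1.0, 1.0, 3.0])
const = np.array([i + 1j * q for i in levels for q in levels])
const /= np.sqrt(np.mean(np.abs(const) ** 2))

tx = rng.choice(const, size=10_000)            # "transmitted" symbols

def soft_clip(x, sat=1.2):
    """Crude AM/AM compression model: output magnitude saturates at `sat`."""
    mag = np.maximum(np.abs(x), 1e-12)
    return x / mag * sat * np.tanh(mag / sat)

noise = (rng.normal(scale=0.15, size=tx.size)
         + 1j * rng.normal(scale=0.15, size=tx.size)) / np.sqrt(2)
rx = soft_clip(tx) + noise                     # distorted, noisy "received" signal

def evm_percent(meas, ref):
    return 100.0 * np.sqrt(np.mean(np.abs(meas - ref) ** 2)
                           / np.mean(np.abs(ref) ** 2))

# (a) Reference estimated from the degraded signal itself: each received
# symbol is mapped to its *nearest* constellation point (decision-directed)
decided = const[np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)]
evm_estimated_ref = evm_percent(rx, decided)

# (b) Reference taken from the actually transmitted data (known data)
evm_known_ref = evm_percent(rx, tx)

print(f"EVM, reference estimated from signal: {evm_estimated_ref:.2f} %")
print(f"EVM, known-data reference:            {evm_known_ref:.2f} %")
```

Because a wrong symbol decision always picks the constellation point *closest* to the received symbol, the estimated reference can only understate the error, so the decision-directed EVM is never higher than the known-data EVM.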


A suitable demonstration of this is a 5G NR signal, a test model of the kind used in new 5G base station testing. The test case uses a full 400 MHz bandwidth carrier with 256QAM modulation. As can be seen from the screenshot, it is impossible to tell where exactly the constellation points should be.

According to the results, the reported EVM is 4.69 %. Given that the test limit for a 5G base station operating in FR2 (mmWave) is 4.5 % (3GPP TS 38.141-2), the question is whether this device would pass or fail, and whether the measurement result is accurate at all.


Determining the correct symbol positions, and hence the data they represent, is impossible, so the signal analyzer cannot calculate a correct reference. This poses a problem for the user, since it means that measurement personalities for 5G NR and other signal types can deliver misleading results. A VSA measurement personality for 5G follows the standard's definition of the measurement to the letter, but the 5G standard does not account for this behavior in how measurements are made.

Known data demodulation

The only way to get a "correct" EVM is to compare against a perfect reference signal rather than one derived from the noisy or distorted input. With this method, the EVM is always correct relative to the ideal signal that has been provided. Software options for Rohde & Schwarz signal analyzers such as the R&S FSV3000 and R&S FSW platforms can make use of this technique. One example is the R&S FSW-K18 software option, used here to measure the same 5G NR signal as tested above, but with a perfect reference signal.


Here the EVM measured is 5.149 %, which is clearly higher than the result obtained from the 5G measurement personality. This gives the user a more accurate and realistic measurement of how the device is performing under its true operating conditions.

If a mobile communications standard is in use, it is important that the decoder is active, i.e. that forward error correction (FEC) is being applied. This ensures accurate measurements even when interference and distortion are present. As long as the FEC can correct the raw symbol errors, the decoded data can be used as a known data reference for the demodulator. However, this requires a standard-specific decoder and a signal quality within the FEC limits.
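The idea of using FEC-corrected data as a known reference can be sketched with a toy code. In this illustrative example, a rate-1/3 repetition code with majority-vote decoding stands in for a real standard-specific decoder: the raw hard decisions contain errors, but the corrected bits, re-encoded, reconstruct the transmitted symbol sequence far more faithfully.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a real FEC: rate-1/3 repetition code over BPSK
bits = rng.integers(0, 2, 2000)
coded = np.repeat(bits, 3)                      # repetition encoding
tx = 2.0 * coded - 1.0                          # BPSK mapping {0,1} -> {-1,+1}
rx = tx + rng.normal(scale=0.8, size=tx.size)   # noisy channel

hard = (rx > 0).astype(int)                     # raw hard decisions (some wrong)
decoded = (hard.reshape(-1, 3).sum(axis=1) >= 2).astype(int)  # majority vote

# Re-encode the corrected bits to reconstruct the transmitted symbol sequence,
# which can then serve as the demodulator's known-data reference
ref = 2.0 * np.repeat(decoded, 3) - 1.0

raw_errors = int(np.count_nonzero(hard != coded))
ref_errors = int(np.count_nonzero(ref != tx))
print(f"raw symbol decision errors:        {raw_errors}")
print(f"errors left in re-encoded reference: {ref_errors}")
```

As long as the channel stays within what the code can correct, the re-encoded reference matches the transmitted signal almost everywhere, which is exactly the condition stated above for using decoded data as a known data reference.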

Conclusion

The RF engineer tasked with making measurements must of course test to the standards relevant to the device, but should also be mindful that the results may not give a true picture of its performance under real-world operating conditions.

Contributed by

Rohde & Schwarz

Country: Germany