Software Defined Radio Use Case for Low Latency

Mar 10, 2022

Software defined radio (SDR) systems have a broad array of applications in today's industries. This article explores the use of these systems in latency-sensitive applications. The next article in this series will explore how these devices address interoperability issues in the satellite industry.

Low latency networks are used in a broad spectrum of applications including high speed train communication systems, high frequency trading systems, autonomous car networks... the list is endless. Before we focus on some applications, we will familiarize ourselves with latency and explore its causes in SDR systems. Next, we will focus on two examples of networks that demand low latency. Lastly, we will explore the benefits of using SDR systems in low latency systems.

What is an SDR?

As shown in Figure 1 below, a typical SDR system consists of two sections: an analog radio front end and a digital backend. The radio front end processes signals in analog form, while the backend processes digitized signals. The front end of an SDR system features the transmit (Tx) and receive (Rx) functions and is designed to handle signals over a broad range of frequencies, typically 0-18 GHz. When a wider tuning range is required, an SDR system can be upgraded to operate at up to 40 GHz. These systems are also designed to deliver high instantaneous bandwidth, typically 1 GHz to 3 GHz, with the highest-performance SDR systems engineered to offer even wider instantaneous bandwidth.

Figure 1: System architecture of a typical SDR

The digital backend of an SDR system features a field-programmable gate array (FPGA) with onboard digital signal processing (DSP) capabilities such as upconversion, downconversion, modulation, and demodulation. The reconfigurability of its software-based modules allows protocols, DSP algorithms, and low latency modems to be implemented with ease and without any hardware modifications. SDR systems feature multiple independent transmit and receive channels, each with dedicated analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), making them ideal for applications involving multiple simultaneous transmit and receive functions. This flexibility allows SDR systems to be easily integrated into low latency RF systems.

Latency in SDR-Based Networks

Latency is a measure of the delay encountered in sending a signal from one point to another. In an SDR system, latency refers to the time required for a signal to travel from the antenna to the digital interfacing equipment of the radio chain. There are two types of latency: transmit/receive (one-way) latency and round trip latency; the latter also includes the time required to process the signal at the far end. Lower latency means a signal arrives sooner, but the total delay also depends on the distance the signal travels through Ethernet cables, optical fiber cables, or any other communication link. High performance SDR systems are optimized to deliver low latency, making them ideal for a wide range of latency-sensitive applications.
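As a rough illustration of how these two latency types relate (a simplified model, not a formula from the article), the one-way link delay counts twice in a round trip, plus whatever processing happens at the far end; both figures below are assumed example values:

```python
def round_trip_us(one_way_us: float, processing_us: float) -> float:
    """Round trip latency: the signal travels out, is processed at the
    far end, and travels back, so the one-way delay counts twice."""
    return 2 * one_way_us + processing_us

# 10 us of one-way link delay plus 5 us of far-end processing:
print(round_trip_us(10.0, 5.0))  # 25.0
```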

Both the radio front end and the digital backend contribute to the overall latency of an SDR system. Sources of latency in the front end include amplifiers, filters, mixers, ADCs, and DACs. The group delay of the radio front end is approximately constant and does not vary with changes in sampling rate or frequency.

In the digital backend, latency is introduced by the FPGA hardware as well as the DSP functions. The FPGA has a parallel architecture and contributes minimal latency. DSP functions that can introduce significant latency include filtering, interpolation, decimation, and DDC/DUC. Other sources include sampling and the framing and deframing of data across qSFP+ transceiver ports connected to the host system's network interface cards (NICs).
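To see why a DSP function such as filtering adds latency, consider the group delay of a linear-phase FIR filter: a symmetric N-tap filter delays every frequency component by (N - 1) / 2 samples. The sketch below is illustrative only; the tap count and sampling rate are assumed values, not figures from the article:

```python
def fir_group_delay_us(num_taps: int, sample_rate_hz: float) -> float:
    """Group delay of a linear-phase (symmetric) FIR filter, in microseconds.

    The filter delays the signal by (num_taps - 1) / 2 samples; dividing
    by the sampling rate converts that to seconds, then to microseconds.
    """
    return (num_taps - 1) / 2 / sample_rate_hz * 1e6

# A hypothetical 129-tap channel filter at a 100 MSPS sampling rate
# delays the signal by 64 samples, i.e. about 0.64 us:
print(fir_group_delay_us(129, 100e6))
```

Longer filters give sharper responses but proportionally more delay, which is the trade-off latency-sensitive designs must manage.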

NICs, system load, operating systems, and other components of the network and host system also contribute to the overall latency of an RF system. This latency depends significantly on the sampling rate used as well as the number of channels. Systems that produce a consistent output for a given initial state are said to be deterministic. This property allows the behavior of digital systems to be modelled accurately, making it possible to realize complex systems with precise task and time scheduling.
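One reason latency depends on the sampling rate is buffering: a buffer of samples must fill before it can be handed to the host, so larger buffers or lower rates mean more delay. A minimal sketch of this relationship, with assumed buffer sizes and rate:

```python
def buffering_latency_us(samples_per_buffer: int, sample_rate_hz: float) -> float:
    """Time to fill one buffer of samples, in microseconds.

    The buffer cannot be processed until it is full, so this fill
    time is a floor on this component of the latency.
    """
    return samples_per_buffer / sample_rate_hz * 1e6

# Halving the buffer size halves this latency component (values assumed):
print(buffering_latency_us(4096, 100e6))  # about 40.96 us
print(buffering_latency_us(2048, 100e6))  # about 20.48 us
```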

ADCs and DACs play critical data processing roles in SDR systems. Systems involving high speed sampling and processing use arrays of converters, and for such systems it is critical to ensure that the latency across the converters is matched. JESD204B interfaces are commonly used to synchronize latency in applications where converters are used in arrays. JESD204B is a standardized serial interface for converters and logic devices such as FPGAs and ASICs. It is designed to meet the sampling rates and data rates of today's converters, is well suited to high density systems, and allows the implementation of scalable systems. More broadly, the need to build reliable and predictable network services has triggered the emergence of several standards, including ultra-reliable and low latency communication (URLLC), time sensitive networking (TSN), and IETF deterministic networking (DetNet).
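The idea behind matching latency across a converter array can be sketched simply: every path is padded up to the slowest one so that all converters present the same total latency. This illustrates only the concept; it is not an implementation of the JESD204B mechanism, and the latency figures are assumed:

```python
def alignment_delays_ns(latencies_ns: list[int]) -> list[int]:
    """Delay to add to each converter path so that every path ends up
    with the same total latency as the slowest one."""
    target = max(latencies_ns)
    return [target - latency for latency in latencies_ns]

# Four converters with slightly different path latencies (assumed values):
print(alignment_delays_ns([120, 118, 121, 119]))  # [1, 3, 0, 2]
```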

To ensure that ADCs and DACs remain properly synchronized, it is critical to maintain the integrity of the clock system. Common causes of clock errors include noise, clock frequency variation, and process variation. When such an error occurs in a system with a common reference clock, it propagates to the entire system. While most causes of clock errors are predictable and relatively easy to prevent, some sources, such as stochastic noise, are impossible to predict. Accumulation of this noise can significantly degrade the performance of the clock system, leading to loss of synchronization.
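Stochastic clock noise accumulates rather than averaging out, which is why it eventually threatens synchronization. A minimal random-walk model of this effect (step size, units, and seed are assumed purely for illustration):

```python
import random

def peak_clock_offset_ps(steps: int, step_ps: float = 0.5, seed: int = 1) -> float:
    """Model stochastic clock noise as a random walk of the clock edge
    and return the worst absolute offset seen, in picoseconds."""
    rng = random.Random(seed)
    offset = 0.0
    peak = 0.0
    for _ in range(steps):
        offset += rng.gauss(0.0, step_ps)  # per-cycle stochastic jitter
        peak = max(peak, abs(offset))
    return peak

# The longer the clock runs uncorrected, the larger the worst-case
# offset tends to grow, until it exceeds the synchronization budget:
print(peak_clock_offset_ps(100))
print(peak_clock_offset_ps(10_000))
```

This is why practical systems periodically discipline their clocks against a common reference instead of letting errors accumulate.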

The time that an RF signal takes to travel from one point to another is greatly dependent on the refractive index of the medium. The speed of light varies as it propagates through media with different refractive indices, as shown in Figure 2 below. A signal propagating between such media can be either refracted or reflected depending on its angle of incidence. The angle beyond which a light signal is reflected and confined within one medium is known as the critical angle, and the principle is known as total internal reflection. One of the most common applications of this principle is in optical fiber cables: Figure 3 shows how total internal reflection confines light within the core of an optical fiber cable. Multi-mode optical fiber cables are designed to support multiple light modes; although these cables offer higher bandwidth, their use is limited to short distance applications.

Signal conversion and medium-dependent propagation delay are the main contributors to latency in optical fiber links. Conversion latency depends largely on the design of the system, while transmission medium latency depends mainly on the length of the cable. Optimizing the design of an optical fiber system helps to minimize the overall latency. For applications in which data integrity is more critical than speed, bit error rate and bit rate are given priority over latency.
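The medium-dependent component can be estimated directly from the refractive index: light slows to c / n inside a medium with index n, so the delay over a length L is n * L / c. The index below is a typical figure for single-mode fiber and the route length is hypothetical, both assumed for illustration:

```python
C_KM_PER_MS = 299.792458  # speed of light in vacuum, km per millisecond

def fiber_delay_ms(length_km: float, refractive_index: float = 1.468) -> float:
    """One-way propagation delay through an optical fiber, in milliseconds.

    The default refractive index is a typical value for single-mode
    fiber (an assumed figure, not from the article).
    """
    return refractive_index * length_km / C_KM_PER_MS

# A hypothetical 1200 km route: roughly 5.9 ms one way through fiber,
# versus roughly 4.0 ms for a line-of-sight path through air (n ~ 1.0003):
print(fiber_delay_ms(1200))
print(fiber_delay_ms(1200, refractive_index=1.0003))
```

This gap between fiber and near-vacuum propagation is one reason some latency-sensitive operators favor microwave links over fiber for certain routes.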

Figure 2: The speed of light decreases as it propagates from a vacuum into media with higher refractive indices

Figure 3: Total internal reflection confines light within the core of an optical fiber cable

Application #1: High Frequency Trading (HFT)

Today's stock market is characterized by a high number of speculators and short stock holding times, which means that timing is critical and even a slight delay can make a huge difference. High frequency trading entails the use of powerful, sophisticated algorithms capable of analyzing market data and executing orders within a fraction of a second.

In order to minimize latency, high frequency trading firms look for places that are as close as possible to the stock exchanges. Minimizing the distance helps to reduce the time required to send or receive signals. Using low latency links to connect to stock exchanges also helps to cut latency. Some trading firms use HF band radio links in order to get signals slightly earlier than other traders. Integrating FPGA-based SDRs into your firm’s infrastructure can also help to reduce latency and give your firm an edge over other firms. 

The speed at which data is transferred from one point to another is greatly dependent on the type of communication link used. Optical fiber cables and microwave antennas are commonly used in HFT applications. Unlike optical fiber cables, microwave links are highly susceptible to environmental conditions. Buffering, queuing and congestion of data in network components such as Ethernet switches further increases the time that a signal takes to reach its intended destination.

For latency-sensitive applications such as high frequency trading, it is critical to ensure that the latency is deterministic. With inconsistent and unknown latency, some traders can receive signals ahead of others, exposing the latter to latency arbitrage. Ensuring that the latency is consistent and known helps to combat this. The speed at which trading algorithms analyze data and execute orders also matters a great deal: firms with the fastest trading algorithms have a competitive advantage over those relying on slower ones.

When choosing a communication link for your HFT system, it is necessary to consider a broad array of factors including distance, tolerance to weather changes, and susceptibility to attenuation. One of the methods of tracking latency in communication networks is time stamping of data. Once unacceptable levels of latency are detected, the system should be adjusted accordingly to optimize performance.
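Time stamping of data can be sketched as follows. This example uses a single host's monotonic clock; measuring latency between separate machines would require clocks synchronized by a protocol such as PTP, and the latency budget is an assumed figure for illustration:

```python
import time

LATENCY_BUDGET_NS = 50_000  # assumed 50 us budget for this example

def stamp(payload: bytes) -> tuple[int, bytes]:
    """Attach a monotonic timestamp (in nanoseconds) at the send side."""
    return time.monotonic_ns(), payload

def check(stamped: tuple[int, bytes]) -> tuple[int, bool]:
    """At the receive side, measure elapsed time and flag budget overruns."""
    sent_ns, _payload = stamped
    elapsed_ns = time.monotonic_ns() - sent_ns
    return elapsed_ns, elapsed_ns <= LATENCY_BUDGET_NS

msg = stamp(b"market data frame")
elapsed_ns, within_budget = check(msg)
print(elapsed_ns, within_budget)
```

When `within_budget` is false, the system would be adjusted (or the link re-evaluated) to restore acceptable performance.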

The parallel architecture of the FPGAs used in SDR systems helps to minimize round trip latencies. The lowest latency radio systems offer faster data transfers and shorter order execution times, thereby maximizing the profitability of HFT systems. This architecture also makes SDR-based systems suitable for testing the advanced trading algorithms used in modern HFT systems.

The ultra low latency of FPGA-based SDR systems makes them ideal for use with transatlantic links, microwave antennas, and other types of links that are commonly used in HFT systems. SDR platforms for use in lowest latency radio systems are optimized to deliver minimal delay. Some are even capable of executing complex trading algorithms on the FPGA for optimum performance and efficiency. 

Artificial intelligence and machine learning algorithms are becoming increasingly popular in stock trading. These algorithms are used for a broad range of tasks, including classification and regression. SDR systems support a broad array of machine learning and statistical algorithms used for predicting trends, optimization, decision theory, and data fitting.

Application #2: 5G Wireless Edge Computing

The Internet of Things (IoT) has seen the number of connected devices increase exponentially, leading to an explosion in the amount of collected data. Edge computing is an advanced computing paradigm that is highly optimized to handle such high volumes of data. Unlike traditional computing models, this distributed model stores and processes data closer to its sources, thereby reducing the distance travelled by the data over the internet. There is no doubt that this distributed model will be a core component of most new, emerging, and future technologies, including autonomous vehicles, augmented and virtual reality, and Industry 4.0. Figure 4 shows the layers of a basic edge computing architecture.

Figure 4: Layers of a simplified edge computing architecture

5G networking and wireless edge computing have introduced complex networking challenges. Some of these challenges stem from the differences in the data flow and control flow operations of the wireless physical (PHY) and medium access control (MAC) layers. The IoT paradigm has also triggered an explosion in the number of standards and protocols, further complicating the wireless communication environment and making it even harder to achieve standardization. One of the main consequences of this lack of standardization is that it undermines the overall security of the communication environment.

To meet the performance requirements of modern networks, SDR systems are paired with software defined networking (SDN). This allows SDRs to integrate seamlessly with new and emerging networking protocols such as 5G wireless and edge computing systems. This pairing also makes it possible to realize highly flexible multi-frequency, multi-mode, and multi-function wireless equipment. 

The architecture of SDR systems makes them suitable for performing a wide range of complex functions such as automatic modulation classification (AMC). This enables realization of various cognitive radio (CR) techniques required for optimizing throughput and reliability of RF systems. In addition, the SDR paradigm allows building of secure encryption and decryption algorithms and multi-level verification systems.


Some applications are latency-sensitive and require networks that are optimized to deliver ultra low latency. The architecture and flexibility of SDR systems make them ideal for applications that demand low latency, such as HFT systems and 5G/edge computing systems. The design of SDR systems also makes it easy to integrate them with a wide range of RF systems. In the next article in this series, we will explore the suitability of SDR systems for addressing interoperability issues in the satellite industry.

Click here to see Software Defined Radios on everything RF.


Contributed by

Per Vices
