Introduction

The new connected world requires wireless connectivity everywhere you go – at home, on the go, or at work. The rapid increase in the volume of data, plus the need to process that data in multiple places – at the endpoint, the edge, and in the cloud – has created a challenge and made moving data faster a priority. We have discussed previously how Open RAN gives Communications Service Providers (CSPs) more choice and flexibility to deploy their radio networks efficiently and cost-effectively; let's now explore the capabilities of accelerator cards.

Hardware accelerator cards speed up data processing and relieve the CPU of part of its workload, enabling operators and infrastructure vendors to maximize the benefits of high-performance, low-latency, power-efficient 5G while accelerating the cellular ecosystem's transition toward virtualized radio access networks.

Benefits of Accelerator Cards

The PCIe card is designed to plug seamlessly into standard Commercial-Off-The-Shelf (COTS) servers and offload the CPU from latency-sensitive, compute-intensive 5G baseband functions such as demodulation, beamforming, channel coding, and the Massive MIMO computation needed for high-capacity deployments. Especially in compute-heavy scenarios, hardware acceleration can increase raw performance, reduce the number of processors required, and reduce power consumption. An accelerator card frees up the server CPU to focus on Layer 2 workloads and simplifies overall vDU deployments.

Inline vs. Look-aside Acceleration

For the compute-intensive Layer 1 functionality of the Open RAN architecture, there are two fundamental approaches: look-aside acceleration and inline acceleration. With look-aside acceleration, a small subset of 5G L1 functions, such as forward error correction (FEC), is offloaded from the host to an external FPGA-based accelerator and processed outside the main data path. This look-aside (offline) processing of time-critical L1 functions adds latency that degrades system performance. Look-aside acceleration also requires massive data transfer between the CPU and the accelerator, so only selected functions are sent to the accelerator.
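
To make the look-aside round trip concrete, here is a minimal conceptual sketch in C. It is not a real driver or vendor API: the accelerator is stubbed out in software, and the function names (fec_accel_submit, fec_accel_poll) are hypothetical stand-ins for whatever offload interface a given card exposes. The point is the flow: the CPU runs most of L1, copies one time-critical block (FEC decode) out to the card, spins until the result comes back, then continues.

```c
/*
 * Conceptual sketch of the look-aside model (hypothetical API, stubbed in
 * software): the CPU hands one time-critical function -- here, FEC decode --
 * to the accelerator over PCIe, waits for the result, and copies it back
 * before continuing with the rest of L1. The round trip is the added latency.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TB_BYTES 64   /* toy transport-block size for the demo */

/* Hypothetical accelerator handle; a real card would sit behind a driver. */
typedef struct { uint8_t result[TB_BYTES]; int ready; } accel_handle;

/* "Submit" soft bits to the card: on real hardware this is a DMA across PCIe. */
static void fec_accel_submit(accel_handle *h, const uint8_t *llrs, size_t len) {
    memcpy(h->result, llrs, len < TB_BYTES ? len : TB_BYTES); /* stub decode */
    h->ready = 1;
}

/* "Poll" for the decoded block: a second copy back into host memory. */
static int fec_accel_poll(accel_handle *h, uint8_t *decoded, size_t len) {
    if (!h->ready) return 0;
    memcpy(decoded, h->result, len < TB_BYTES ? len : TB_BYTES);
    h->ready = 0;
    return 1;
}

/* Per-TTI uplink processing: only FEC leaves the CPU. */
static void process_uplink_tb(accel_handle *h, const uint8_t *llrs, uint8_t *pdu) {
    /* 1. CPU: channel estimation, equalization, demodulation -> LLR buffer. */
    fec_accel_submit(h, llrs, TB_BYTES);       /* 2. copy out to the card    */
    while (!fec_accel_poll(h, pdu, TB_BYTES))  /* 3. wait for the round trip */
        ;
    /* 4. CPU: CRC check and HARQ handling, then hand the PDU to Layer 2.    */
}

int main(void) {
    accel_handle h = {0};
    uint8_t llrs[TB_BYTES] = {0xAB}, pdu[TB_BYTES] = {0};
    process_uplink_tb(&h, llrs, pdu);
    printf("decoded first byte: 0x%02X\n", pdu[0]);
    return 0;
}
```

In this pattern the host CPU pays twice per offloaded function: once to copy data out and once to copy it back, which is why only selected functions are worth sending to the card.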

With inline acceleration, part of or the entire Layer 1 can be offloaded to the accelerator, allowing a less data-heavy interface between the CPU and the accelerator. An inline acceleration card offers better performance and efficiency, a more effective total system cost, lower power consumption, and a mix of programmable and hard blocks; it also eliminates memory copies back and forth, freeing up x86 CPU cores and mitigating scheduling conflicts.

The principal difference between them is that with look-aside acceleration, only selected functions are sent to the accelerator and back to the CPU, while with inline acceleration, part or all of the data flow and its functions pass through the accelerator. For higher-bandwidth Open RAN applications, full L1 processing offload may be required; in this case, inline hardware DU acceleration delivers better results with the lowest latency and the fewest cores required.
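
For contrast, here is the same toy setup rewritten for the inline model, again with a purely hypothetical, software-stubbed interface (accel_run_slot, accel_dequeue_pdu are illustrative names, not a real API). The fronthaul stream enters the accelerator directly and the full L1 pipeline runs there, so the host never touches IQ samples or LLRs; its only per-slot work is dequeuing finished MAC PDUs for Layer 2.

```c
/*
 * Conceptual sketch of the inline model (hypothetical API, stubbed in
 * software): the entire L1 pipeline runs on the card, so there is no
 * per-function copy-out/copy-back. The host CPU only collects the
 * finished MAC PDU, leaving its cores free for Layer 2 workloads.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TB_BYTES 64

typedef struct { uint8_t pdu[TB_BYTES]; int ready; } inline_accel;

/* Stub standing in for the card: fronthaul in -> full L1 -> decoded MAC PDU. */
static void accel_run_slot(inline_accel *a, const uint8_t *fronthaul, size_t len) {
    memcpy(a->pdu, fronthaul, len < TB_BYTES ? len : TB_BYTES);
    a->ready = 1;
}

/* The only host-side work per slot: pick up the finished PDU for Layer 2. */
static int accel_dequeue_pdu(inline_accel *a, uint8_t *pdu, size_t len) {
    if (!a->ready) return 0;
    memcpy(pdu, a->pdu, len < TB_BYTES ? len : TB_BYTES);
    a->ready = 0;
    return 1;
}

int main(void) {
    inline_accel a = {0};
    uint8_t fronthaul[TB_BYTES] = {0xCD}, pdu[TB_BYTES] = {0};

    accel_run_slot(&a, fronthaul, TB_BYTES);   /* entire L1 stays on the card */
    while (!accel_dequeue_pdu(&a, pdu, TB_BYTES))
        ;                                      /* CPU cores remain free for L2 */
    printf("MAC PDU first byte: 0x%02X\n", pdu[0]);
    return 0;
}
```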

Conclusion

Hardware acceleration can significantly enhance any Open RAN implementation, increasing raw performance, reducing the number of CPU cores required, and reducing power consumption.

Lanner’s Open DU/CU network appliance ECA-4027, equipped with Compal’s inline acceleration card, is designed to offload the CPU at reduced power, resulting in improved performance and power efficiency. According to the latest benchmark testing, the ECA-4027 integrated with Compal’s inline acceleration card frees up 44% of CPU computing resources while consuming 17% less energy, making it an ideal platform for Kubernetes-containerized 5G RAN applications.

Featured Product


ECA-4027

Short Depth Chassis Edge Computing Appliance with Intel Xeon® D-2100 Multi-core Processor (Codenamed Skylake-DE)

CPU: Intel® Xeon® D-2100, 12/16 cores
Chipset: N/A
