The low-end use case of the implemented algorithm easily fits within one 1 GHz 512-bit vector processor core, requiring 1.01–5.39 MHz of the processor clock budget and therefore enabling seamless time multiplexing with other software (SW) kernels running on the core. The carrier aggregation (CA) high-end use case requires 921–5,505 MHz, which allows only the clock-efficient version to fit on a single 1 GHz 512-bit vector processor core. This sets the stage for the system designer to choose between a generalized vector processor and a more specialized HW accelerator engine, such as a dedicated (application-specific) HW accelerator or an application-specific instruction set processor (ASIP). Finally, the implemented algorithm running the multiple-input, multiple-output (MIMO) CA high-end use case would require a budget of 7.37–44.04 GHz, the theoretical equivalent of eight to forty-five 1 GHz 512-bit vector processor cores, making the high-end use case better suited for execution on the dedicated HW accelerator or the ASIP, which would otherwise be powered down. These use case corners demonstrate a highly variable HW requirement, which makes heterogeneous multi-processor system-on-chip (MPSoC) solutions ideal future-proof HW for beyond 5G. In this chapter, we present our key findings that connect the dots from vision to future HW in wireless communications.
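As a quick sanity check of the core counts quoted above, the following sketch converts a required clock budget into an equivalent number of 1 GHz vector processor cores. It is a simplified, illustrative calculation that assumes the workload partitions perfectly across cores; the detailed analysis appears later in the chapter, and the function name is purely illustrative.

```python
import math

def equivalent_cores(required_clock_ghz: float, core_clock_ghz: float = 1.0) -> int:
    """Theoretical number of cores needed to supply a given clock budget.

    Idealized sketch: assumes the workload splits perfectly across cores.
    """
    return math.ceil(required_clock_ghz / core_clock_ghz)

# MIMO CA high-end use case: 7.37-44.04 GHz of a 1 GHz 512-bit vector core
print(equivalent_cores(7.37))   # -> 8
print(equivalent_cores(44.04))  # -> 45
```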
1.1. Introduction
As we are writing this chapter in early 2020, it is obvious that there is a gap between the conventional 5G vision (Fettweis 2012; NGMN Alliance 2015; Qualcomm 2016) and the deployed 5G. We do not have coordinated unmanned aerial vehicle (UAV) groups humming over our cities like busy worker bees around hives (ARIB et al. 2016). We do not have critical vehicle-to-everything (V2X) communication with 1 ms end-to-end latency coordinating “emergency trajectory adjustment” (3GPP 2018d) to keep us safe, nor the smart infrastructure that would make traffic lights obsolete (Fettweis 2012). We do not have cellular virtual reality (VR) and augmented reality (AR) (NGMN Alliance 2015) helping us to acquire and select important information about our environment when we explore new places. Let us investigate why that is so. How did we get into this situation, where are we now and what should we do to traverse this rift?
1G was a great step that created the vision of ubiquitous voice telephony, but we needed 2G to deliver on the expectations it created, such as national and international roaming. The 3G standards were a great step towards ubiquitous cellular data, but we needed 4G to fix the problems. Now 5G should be an inflection point in bringing cellular data to new applications. However, do we first need to use the 5G system to understand what is really needed, and then wait for 6G to fix the issues? And are these fixes required to make the Tactile Internet a reality?
As we see 5G unfold, expectations of its economic and societal impact are very high. Many new business opportunities will emerge with the new communications epoch. Besides Gb/s data rates, the Tactile Internet is the most highlighted promise of 5G, enabling remote control applications over cellular data. We will review these opportunities and their technical requirements, which helps us build an understanding and detect the missing pieces. It is of particular interest to see the broad set of opportunities in semiconductors unfolding before us, making the pathway to 6G perhaps the next wild and open opportunity for entrepreneurship and change in company economics.
During the first half of the 2010s, different sectors of industry, research and, later, politicians as well as regulatory bodies overwhelmingly and unanimously agreed that 5G was needed. The basic driver for creating the Tactile Internet vision for 5G was published in early 2014 (Fettweis 2014) and later acknowledged in (ARIB et al. 2016; 3GPP 2018d). The overall consensus on the need for 5G resulted in the typical standardization iterations of the 5G standard in 3GPP (2017, 2018c), landing firmly on the 5G stand-alone specifications (3GPP 2019e, c, d). The main innovation, next to higher data rates, is the introduction of ultra-reliable low-latency communications (URLLC) around the idea of the Tactile Internet, which again addresses a new domain not served by cellular data so far. As we slowly start to understand the true requirements and impact of URLLC, it turns out that 5G will not fully deliver the solution that is required.
Although the 3GPP 5G NR handset specifications (3GPP 2019c, d) for the frequency range 1 (FR1, 0.45–6 GHz) and frequency range 2 (FR2, 24.25–52.6 GHz) operating ranges already exist, MPSoCs that can be scaled in data rate as well as latency are still to be designed.
For the contemporary reader, this chapter offers an incremental contribution to the fulfillment of that vision, spanning the gap between 5G and 6G. For the future reader, it offers a methodology for translating workloads, via a specific algorithm, into HW.
Section 1.2 shows the trends and analyzes the workloads defined by the standards. In addition, it gives the lower and upper requirements to be considered when implementing a communications modem. In section 1.3, we give the reader background on the 6G candidate waveform modulation, generalized frequency division multiplexing (GFDM), and develop a GFDM-associated dataflow processing graph and pseudo-code. Section 1.4 covers precision requirements, in which we explore the bit-lengths required to represent data while satisfying the 3GPP LTE/NR error requirements. Section 1.5 presents the implementation: GFDM vectorization variants, loop order variants and the many properties that arise from loop and vectorization arrangements, such as the possibility of minimizing cycle counts for maximum throughput or minimizing the number of memory accesses for low-power operation.
1.2. Breadth of workloads
The first step towards estimating the HW requirements for a typical beyond 3GPP 5th Generation New Radio (5G) algorithm is sizing the span of workloads in terms of throughput and the deadlines under which data processing has to finish. To study the matter, we go over the holistic vision of and trends in what we expect from 6G, followed by an analysis of the key deployed 5G standard specifications. From the latter, we identify the far corners that stretch the workload requirement space, see how these corners fit into the vision and trends, and provide numerical values for the HW requirements of a future MPSoC. Finally, throughout the workload analysis, the emerging theme is the requirement for high flexibility, which leads us to pick a vector digital signal processor (vDSP) as a middle ground that provides both flexibility and good data throughput for single instruction, multiple data (SIMD) processing.
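To make the sizing step concrete, the following sketch turns a (throughput, cycles-per-sample) pair into the clock budget a kernel would consume on a 1 GHz vDSP. The sample rates and per-sample cycle costs below are hypothetical placeholders chosen for illustration, not values taken from the standards analysis that follows.

```python
def required_clock_mhz(sample_rate_msps: float, cycles_per_sample: float) -> float:
    """Clock budget (MHz) needed to sustain a given sample rate in real time.

    required clock = throughput (Msamples/s) x processing cost (cycles/sample)
    """
    return sample_rate_msps * cycles_per_sample

core_mhz = 1000.0  # one 1 GHz vDSP core

# Hypothetical workload corners (placeholder numbers for illustration only).
low_end = required_clock_mhz(sample_rate_msps=1.92, cycles_per_sample=2.0)
high_end = required_clock_mhz(sample_rate_msps=491.52, cycles_per_sample=8.0)

print(f"low end:  {low_end:.2f} MHz -> {low_end / core_mhz:.1%} of one core")
print(f"high end: {high_end:.2f} MHz -> needs {high_end / core_mhz:.2f} cores")
```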
1.2.1. Vision, trends and applications
The trend in cellular communications over the past generations has been to pioneer a technology in one generation and to optimize it in the next. As outlined in the introduction, 1G and 2G introduced and optimized voice, 3G and 4G introduced and optimized broadband data streaming and, lastly, 5G and 6G introduce and should optimize the Tactile Internet. Furthermore, the generations also had their killer applications that captured the mass market: for 3G, it was the video call; for 4G, it was social media and streaming; for 5G and 6G, we cannot say for certain, but we can make informed predictions.
Even as of 4G, cellular networks are not built for low latency, and introducing a low-latency connection between network end points as a key requirement is a challenge for cellular network operators. With the current network infrastructure, it is hard to provide both massive data rates and low latency, and early solutions revolve around trading off one for the other. Hence, in the early adoption stages, we can expect a killer application that requires low end-to-end latency but does not require high data rates, or vice versa. Likely candidates for low-rate, low-latency (LRLL) applications are factory automation, remote control, and the trajectory alignment and emergency stop aspects of self-driving for cars, UAVs and robots across a broad range of industries, from construction sites to warehouses. On the other hand, a high-rate, high-latency (HRHL) killer application would likely be 8K streaming or other massive data transfers, where latency is not a constraint. Drawing on experience, we can expect 6G