both technologies fill a role, and both technologies are clearly needed.
There are several Wireless Local Area Network (WLAN) standards that have evolved over time, including Institute of Electrical and Electronics Engineers (IEEE) standards 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, and 802.11ax. The new standards have been developed to accommodate the evolving requirements for higher speeds. Some protocols and wireless routers provide backward compatibility with older Wi‐Fi systems. The Wi‐Fi Alliance (an industry group) has announced a branding “generation” designation, as follows:
Wi‐Fi 4 is 802.11n, released in 2009
Wi‐Fi 5 is 802.11ac, released in 2014
Wi‐Fi 6 is the new version, also known as 802.11ax (scheduled for release in 2019)
Earlier versions of Wi‐Fi have not been officially branded, but one could label the previous generations as follows:
Wi‐Fi 1: 802.11b, released in 1999
Wi‐Fi 2: 802.11a, released in 1999
Wi‐Fi 3: 802.11g, released in 2003
Radio technologies in cellular communications have grown rapidly. They have evolved since the launch of analog cellular systems in the 1980s, starting from the First Generation (1G) in the 1980s, Second Generation (2G) in the 1990s, Third Generation (3G) in the 2000s, and Fourth Generation (4G) in the 2010s (including LTE and variants of LTE). Fifth Generation (5G) access networks, which can also be referred to as New Radio (NR) access networks, are currently being deployed; they are expected to address the demand for exponentially increasing data traffic and to handle an extensive range of use cases and requirements. Basic use cases include, among others, Mobile Broadband (MBB) and Machine‐Type Communications (MTC), for example, involving IoT devices – Machine‐to‐Machine (M2M) communication is a specific IoT niche. The IoT refers to the network of physical objects with Internet connectivity (connected devices) and the communication between them; these connected devices and systems collect and exchange data. The IoT has been defined as “the infrastructure of the information society”; it extends Internet connectivity beyond traditional devices such as desktop and laptop computers and smartphones to a range of devices and everyday entities that use embedded technology to communicate and interact with the external environment [1]. Massive Multiple‐Input Multiple‐Output (MIMO) designs, new multiple access methods, and novel channel coding approaches are being assessed for use in 5G and HDC environments [2–7].
The upcoming 5G access networks may utilize higher frequencies (i.e. > 6 GHz) to support increasing capacity by allocating larger operating channels and bands, although some lower frequencies can also be used. Millimeter wave (mmWave) signals, occupying the band of spectrum between 30 and 300 GHz, have shorter wavelengths that range from 10 mm down to 1 mm. Currently, much of the mmWave spectrum is underutilized; thus, it can be used to facilitate the deployment of new high‐speed services. While it is known that mmWave signals experience severe path loss, penetration loss, and fading, the shorter wavelength at mmWave frequencies also allows more antennas to be packed into the same physical dimension, which allows for large‐scale spatial multiplexing and highly directional beamforming [8].
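To put these numbers in perspective, the short sketch below computes the wavelength λ = c/f at the edges of the mmWave band and compares free‐space path loss at an assumed 28 GHz 5G NR carrier against 2.4 GHz Wi‐Fi over the same distance. It is a back‐of‐the‐envelope illustration only; the specific frequencies, the 100 m distance, and the function names are assumptions for the example, not figures from the text.

```python
import math

C = 3.0e8  # speed of light in m/s


def wavelength_mm(freq_hz: float) -> float:
    """Wavelength in millimeters for a given carrier frequency: lambda = c / f."""
    return C / freq_hz * 1000.0


def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)


# Wavelengths at the edges of the 30-300 GHz mmWave band
print(f"30 GHz  -> {wavelength_mm(30e9):.1f} mm")    # ~10 mm
print(f"300 GHz -> {wavelength_mm(300e9):.1f} mm")   # ~1 mm

# Free-space path loss over 100 m: assumed 28 GHz mmWave carrier vs. 2.4 GHz Wi-Fi
print(f"28 GHz  over 100 m: {fspl_db(100, 28e9):.1f} dB")   # ~101 dB
print(f"2.4 GHz over 100 m: {fspl_db(100, 2.4e9):.1f} dB")  # ~80 dB
```

The roughly 21 dB gap that comes from frequency alone (before adding penetration loss and fading) is one reason mmWave deployments lean on dense small cells and highly directional beamforming.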
Some observers have predicted the “death of Wi‐Fi” at various points in the recent past. To quote Mark Twain (as told by his biographer Albert Bigelow Paine), “the report of my death has been grossly exaggerated.” Ignoring the ALOHAnet of the late 1960s/early 1970s, wireless LANs started to appear in the late 1980s/early 1990s (e.g. with the WaveLAN system originally designed by NCR Systems Engineering/Wireless Communication and Networking Division, available commercially in 1990 and for several years thereafter, with some of its concepts eventually making their way into the 1997 IEEE 802.11 standard). The generic technology has thus been around for 30 years. When (some form of) 3G/4G/LTE was starting to be deployed, some predicted that it would be the death knell of (public hotspot) Wi‐Fi, but that did not happen. In fact, many devices developed the capability of transferring connectivity and roaming seamlessly between local Wi‐Fi (corporate, public, residential) and cellular service – some users even use their cellular‐based smartphone to create a small local hotspot to support traditional Wi‐Fi elements in their environment. Now, with 5G on the horizon, some are offering the same (questionable) prediction about the future of Wi‐Fi [9]. As is the case with many pairs of technologies, one technology moves ahead while the other lags; then at some point the second technology makes a quantum leap forward and the original one lags; then the original technology makes a new advance and leapfrogs the other, and so on. One can apply this idea to cellular and Wi‐Fi in terms of speed/throughput as well as cost and end‐device capabilities. In broad terms, Wi‐Fi generally offers higher data rates and service can be cheaper; however, large‐geography coverage and roaming are more “natural” in the cellular context. Another observation is that 5G will often require small cells, implying both a similarity with a Wi‐Fi hotspot and increased infrastructure and deployment cost. 5G is promoted on the promise of higher speeds, higher density, and reliable connectivity; however, it remains to be seen whether these features can be achieved at large scale (i.e. over a large regional, national, or international geography) and in a cost‐effective manner. The global standard could in theory benefit dispersed IoT sensor support, in a smart city setting, for example, but until recently the cost of the cellular interface for a sensor tended to be fairly high (e.g. in the $20–40 range); thus, other Low Power Wide Area Network (LPWAN) technologies such as LoRa or Sigfox have taken hold. This interface cost must decrease substantially if the use of 5G cellular in IoT applications is to become ubiquitous.
1.2 REQUIREMENTS FOR HIGH‐DENSITY COMMUNICATIONS
HDC can be characterized by several (requirement) metrics. Basic metrics include, but are not limited to, user connection density, traffic volume density, experienced data rate, and peak data rate. Many venues require ultra‐high connection density and ultra‐high traffic volume density; applications that entail M2M communication may typically (but not always) also require very low end‐to‐end latency. For example, 5G systems aim at the following key performance indicators: (i) connection density: one million connections per square kilometer; (ii) traffic volume density: tens of Gbps per square kilometer; (iii) user experienced data rate: 0.1–1 Gbps; (iv) peak data rate: tens of Gbps; and (v) end‐to‐end latency: 1–10 ms. See Figure 1.1. In addition, there is a need for scalability: it is one thing to have high density in a small area (say, a classroom), and it is another matter to sustain that density over a large venue (for example, a stadium or airport). For this discussion, mobility speed is not treated as a factor: pedestrian rates (≤10 km/h) are assumed.
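To make these targets concrete, the following minimal sketch encodes the five KPI targets listed above and checks a hypothetical venue measurement against them. The VenueProfile structure, the interpretation of “tens of Gbps” as a 10 Gbps threshold, and the sample stadium figures are all illustrative assumptions rather than values from the text.

```python
from dataclasses import dataclass


@dataclass
class VenueProfile:
    """A hypothetical set of measured figures for one venue."""
    connections_per_km2: float
    traffic_gbps_per_km2: float
    experienced_rate_gbps: float
    peak_rate_gbps: float
    latency_ms: float


def meets_5g_hdc_targets(v: VenueProfile) -> dict:
    """Compare a venue profile against the KPI targets listed in the text."""
    return {
        "connection density (>= 1e6 /km^2)": v.connections_per_km2 >= 1_000_000,
        "traffic volume density (>= 10 Gbps/km^2)": v.traffic_gbps_per_km2 >= 10,
        "experienced data rate (>= 0.1 Gbps)": v.experienced_rate_gbps >= 0.1,
        "peak data rate (>= 10 Gbps)": v.peak_rate_gbps >= 10,
        "end-to-end latency (<= 10 ms)": v.latency_ms <= 10,
    }


# Assumed sample numbers for a stadium-class venue
stadium = VenueProfile(1_200_000, 25, 0.3, 15, 8)
print(meets_5g_hdc_targets(stadium))
```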
One million connections per square kilometer (also definable as 1 connection per m²) equates to approximately one connection every 10 ft² (1 km² = 10 763 910 ft²); this is considerably higher than the connectivity goals in an office environment, where typically one has an allocated space of 130–150 ft² per worker, with one or two connections per worker; this is also higher than the connectivity in a classroom (say a 40 × 40 ft locale and 32 students, or one connection every 50 ft²). Another example could be train cars with 200 users (perhaps not all simultaneously active) in 1000 ft², or one connection every 10 ft² if only 50% of the passengers are active at any one point in time.
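The area arithmetic in the preceding paragraph can be reproduced with a few lines of code. This is only a back‐of‐the‐envelope sketch; the helper name is an assumption, while the room sizes, head counts, and 50% activity factor are taken from the examples above.

```python
SQFT_PER_KM2 = 10_763_910.0   # 1 km^2 expressed in square feet


def sqft_per_connection(area_sqft: float, active_connections: float) -> float:
    """Average floor area (ft^2) served per simultaneously active connection."""
    return area_sqft / active_connections


# 5G target: one million connections per km^2 (i.e. 1 connection per m^2)
print(sqft_per_connection(SQFT_PER_KM2, 1_000_000))   # ~10.8 ft^2 per connection

# Classroom example: 40 ft x 40 ft room with 32 students
print(sqft_per_connection(40 * 40, 32))               # 50 ft^2 per connection

# Train car example: 1000 ft^2, 200 passengers, 50% simultaneously active
print(sqft_per_connection(1000, 200 * 0.5))           # 10 ft^2 per connection
```

The comparison shows that the 5G connection‐density target sits roughly at the level of a heavily loaded train car and is about five times denser than the classroom example.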
FIGURE 1.1 Requirements bouquet.
TABLE 1.1 HDC Key Performance Indicators (KPIs)

Key Performance Indicators | Description
---|---