priority can be accentuated by distributing any excess resources to the priority networks, which will then always have a token available to handle a packet. Of course, isolation requires other characteristics of the hypervisors and the virtualization techniques, which we will not discuss in this book.
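To make the token mechanism concrete, here is a minimal sketch (not the book's algorithm; the function name and the proportional-quota rule are illustrative assumptions): each virtual network receives a guaranteed quota of tokens, and any surplus tokens are handed to the priority networks so that they always have a token available to handle a packet.

```python
# Minimal sketch (not from the book) of token-based isolation between
# virtual networks sharing one physical node: each network gets a quota
# of tokens per cycle, and any excess goes to the priority networks so
# that they always have a token available to handle a packet.

def allocate_tokens(total_tokens, shares, priority=()):
    """shares: {network_name: weight}; priority: names that absorb the surplus."""
    weight_sum = sum(shares.values())
    # Guaranteed quota, proportional to each network's weight.
    quota = {net: (total_tokens * w) // weight_sum for net, w in shares.items()}
    surplus = total_tokens - sum(quota.values())
    # Hand the remaining tokens to the priority networks, round-robin.
    for i in range(surplus):
        if priority:
            quota[priority[i % len(priority)]] += 1
    return quota

if __name__ == "__main__":
    # Three virtual networks on one physical node; "voice" is prioritized.
    print(allocate_tokens(100, {"voice": 3, "video": 3, "data": 3}, priority=("voice",)))
    # -> {'voice': 34, 'video': 33, 'data': 33}
```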
Virtualization needs to be linked to other features in order to fully make sense. SDN (Software-Defined Networking) is one of the paradigms strongly linked to virtualization, because it involves uncoupling the physical part from the control part. The control part can be virtualized and deported onto another machine, which enables us, for example, to have both far greater processing power than that of the original machine and a much larger memory available.
1.5. Virtual devices
All devices can be virtualized, with the exception of those which handle the reception of terrestrial and wireless signals, such as electromagnetic signals or atmospheric pressure. For example, an antenna or a thermometer cannot be replaced by a piece of software. However, the signal received by that antenna or thermometer can be processed by a virtual machine. A sensor picking up a signal can select an appropriate virtual processing machine in order to produce a result suited to the demand. A single antenna might, for example, receive signals from a Wi-Fi terminal as well as signals from a 4G terminal. On the basis of the type of signal, an initial virtual machine determines which technology is being used and sends the signal to the virtual machine needed for its processing. This is known as SDR (Software-Defined Radio), which is becoming increasingly widely used, and enables us to delocalize the processing operation to a datacenter.
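As an illustration of this dispatching, the following hedged sketch (the function names and the tag-based classifier are assumptions, not a real SDR API) shows a first virtual function identifying the technology of a received signal and handing it to the virtual machine dedicated to that technology.

```python
# Hedged sketch (names are illustrative, not a real SDR API): a first
# virtual function classifies the received signal, then hands it to the
# virtual machine dedicated to that radio technology for processing.

def classify_signal(samples):
    # Placeholder classifier: a real SDR chain would inspect the waveform
    # (bandwidth, preamble, ...); here we simply read a tag.
    return samples.get("technology", "unknown")

def process_wifi(samples):
    return f"Wi-Fi frame decoded from {len(samples['iq'])} I/Q samples"

def process_4g(samples):
    return f"4G subframe decoded from {len(samples['iq'])} I/Q samples"

PROCESSORS = {"wifi": process_wifi, "4g": process_4g}

def dispatch(samples):
    technology = classify_signal(samples)
    processor = PROCESSORS.get(technology)
    if processor is None:
        raise ValueError(f"no virtual processing machine for {technology!r}")
    return processor(samples)

if __name__ == "__main__":
    print(dispatch({"technology": "wifi", "iq": [0.1, -0.2, 0.3]}))
```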
The networking machines that we know can always be virtualized, either completely or at least partially. A partial virtualization can correspond to the processing part, the control part or the management part. Thus, today, we can split a physical machine that, in the past, was a single unit into several different machines – one of them physical (e.g. a transceiver broadcasting along a metal cable) and the others virtual. One of the advantages of this uncoupling is that we can deport the virtual parts onto other physical machines for execution. This means that we can adapt the power of the resources to the results we wish to obtain. Operations originating on different physical machines can be multiplexed onto the same software machine executing on a single physical server. This solution helps us to economize on the overall cost of the system, as well as on the energy expended, by grouping together the necessary power on a single machine that is much more powerful and more economical.
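The following sketch illustrates this multiplexing under simple assumptions (the class and method names are invented for the example): the control operations of several physical devices are submitted to one shared software machine running on a single, more powerful server.

```python
# Illustrative sketch (not the book's design): control operations coming
# from several physical devices are multiplexed onto one software machine
# running on a single physical server.

import queue

class SharedControlMachine:
    """One software instance serving the control part of many devices."""

    def __init__(self):
        self.requests = queue.Queue()

    def submit(self, device_id, operation):
        # Each physical device deports its control operation to the shared server.
        self.requests.put((device_id, operation))

    def run_once(self):
        device_id, operation = self.requests.get()
        # In a real deployment the operation would update the device's state;
        # here we only report which device the shared machine is serving.
        return f"executed {operation!r} on behalf of {device_id}"

if __name__ == "__main__":
    ctrl = SharedControlMachine()
    ctrl.submit("switch-A", "recompute forwarding table")
    ctrl.submit("switch-B", "apply QoS policy")
    print(ctrl.run_once())
    print(ctrl.run_once())
```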
Today, all legacy machines in the world of networking have either been virtualized already or are in the process of being virtualized – Nodes-B for processing the signals from 3G, 4G and 5G mobile networks, HLRs and VLRs, routers, switches, different types of routers/switches such as those of MPLS, firewalls, authentication or identity management servers, etc. In addition, these virtual machines can be partitioned so they execute on several physical machines in parallel.
We can appreciate the importance of the Cloud and the associated datacenters, because they are placed where processing power is available at a relatively low cost, as is the memory space needed to store the virtual machines and a whole range of information pertaining to the networks, the clients and the processing algorithms. For the past few years, with server virtualization, the tendency has been to focus on huge datacenters, but with the help of distribution, the size of datacenters is decreasing. Datacenter sizes now vary more and more, and some are becoming much smaller: skin datacenters or femto-datacenters, or Fog and MEC (Mobile Edge Computing) datacenters.
Another interesting application of virtualization is expanding: digital twins. A piece of hardware is associated with a virtual machine executed in a datacenter located either near to or far from the hardware. The virtual machine executes exactly what the hardware does. Obviously, the hardware must feed the virtual machine with data whenever its parameters change. The virtual machine should produce the same results as the hardware. If the results are not similar, this indicates a malfunction in the hardware, and this malfunction can be studied in real time on the virtual machine. This solution makes it possible to spot malfunctions in real time and, in most cases, to correct them.
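A minimal sketch of this comparison logic, under assumptions not stated in the book (the twin's internal model below is a trivial placeholder), shows how a divergence between the hardware's measured output and the twin's expected output can be flagged as a possible malfunction.

```python
# Minimal sketch, under assumptions not stated in the book: the hardware
# streams its parameters and outputs to the twin; the twin recomputes the
# expected output and flags any divergence as a possible malfunction.

def twin_model(parameters):
    # Stand-in for the twin's internal model of the hardware
    # (a trivial linear relation chosen for illustration only).
    return parameters["gain"] * parameters["input"]

def check_against_twin(parameters, measured_output, tolerance=1e-3):
    expected = twin_model(parameters)
    diverges = abs(expected - measured_output) > tolerance
    return expected, diverges

if __name__ == "__main__":
    # Normal behaviour: hardware and twin agree.
    print(check_against_twin({"gain": 2.0, "input": 5.0}, measured_output=10.0))
    # Divergence: the hardware reports 11.2, the twin expects 10.0 -> malfunction.
    print(check_against_twin({"gain": 2.0, "input": 5.0}, measured_output=11.2))
```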
Examples of digital twins are already in use or under development, such as the twin of an aircraft engine executed in a datacenter. Similarly, vehicles will soon have a twin, allowing us to detect malfunctions or to understand an accident. Manufacturers are also developing digital twins for objects; in this case, the digital twin's processing power can be much greater, and it can perform actions which the object itself is not powerful enough to perform.
Scientists dream of human digital twins which could keep working while the human sleeps.
1.6. Conclusion
Virtualization is the fundamental property of the new generation of networks, where we make the move from hardware to software. While there is a noticeable reduction in performance at the outset, it is compensated for by more powerful, less costly physical machines. Nonetheless, the opposite move to virtualization is also crucial: that of concretization, i.e. enabling the software to be executed on reconfigurable machines so that the properties of the software are retained and top-of-the-range performance can again be achieved.
Software networks form the backbone of the new means of data transport. They are agile, simple to implement and not costly. They can be modified or changed at will. Virtualization also enables us to uncouple functions and to use shared machines to host algorithms, which offers substantial savings in terms of resources and of qualified personnel.
2
SDN (Software-Defined Networking)
SDN (Software-Defined Networking) technology is at the heart of this book. It was introduced with strong control centralization and virtualization, enabling physical networking devices to be transformed into software. A new architecture has been defined around this idea: it decouples the data plane from the control plane. Up until now, forwarding tables have been computed in a distributed manner by each router or switch. In the new architecture, the computations for optimal control are performed by a separate device, called the controller. Generally, the controller is centralized, but it could perfectly well be distributed. Before taking a closer look at this architecture, let us examine the reasons for this new paradigm.
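To illustrate the principle, here is a hedged sketch (not a real controller protocol such as OpenFlow): a centralized controller holds a global view of the topology, computes each switch's forwarding table with a shortest-path algorithm, and would then push the result to the switch, which only forwards packets.

```python
# Hedged sketch of the SDN principle described above: a centralized
# controller computes forwarding tables from a global view of the
# topology and pushes them to the switches, which only forward packets.

import heapq

def shortest_paths(topology, source):
    """Dijkstra over {node: {neighbour: cost}}; returns the next hop per destination."""
    dist = {source: 0}
    next_hop = {}
    heap = [(0, source, None)]
    while heap:
        d, node, first_hop = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        if first_hop is not None:
            next_hop[node] = first_hop
        for neigh, cost in topology[node].items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh, neigh if node == source else first_hop))
    return next_hop

if __name__ == "__main__":
    topology = {
        "s1": {"s2": 1, "s3": 4},
        "s2": {"s1": 1, "s3": 1},
        "s3": {"s1": 4, "s2": 1},
    }
    # The controller computes s1's forwarding table and would push it to s1.
    print(shortest_paths(topology, "s1"))  # -> {'s2': 's2', 's3': 's2'}
```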
The limitations of traditional architectures are becoming significant: at present, networks no longer optimize costs at all (i.e. CAPEX and OPEX). The networks are not agile. The time to market is much too long, and the provisioning techniques are not fast enough. In addition, the networks are completely disconnected from the services. The following points need to be taken into account in the new SDN paradigm:
– overall needs analysis;
– dynamic, rather than static, configuration;
– dynamic, rather than static, policies used;
– much greater information feedback than is the case at present;
– precise knowledge of the client and of his/her applications, and, more generally, his/her requirements.
2.1. The objective
The objective of SDN (Software-Defined Networking) is to reduce costs by virtualization, automation and simplification. For this purpose, SDN facilitates the customization of the networks, a very short set-up time and a network deployment with the right quality of service rather than a general quality of service.
The architecture of SDN can be summarized by three fundamental principles, as shown in Figure 2.1. The first is the decoupling of the physical and virtual layers (hardware and software). This enables virtual devices to be loaded onto hardware machines provided, of course, that these hardware machines can host a hypervisor or containers. The second principle is the move from a hardware view to a logical view. This new environment enables us to spontaneously change the network by adding a new network or by taking