The SaaS model outsources almost all of the company's IT and networks.
Figure I.7 shows the functions of the different types of Cloud in comparison with the classical model in operation today.
Figure I.7. The different types of Clouds
The main issue for a company using a Cloud is security. Indeed, there is nothing to prevent the Cloud provider from scrutinizing the data, or – as happens much more commonly – the data from being requisitioned by the authorities of the countries in which the physical servers are located, requests with which the providers must comply. The rise of sovereign Clouds is also noteworthy: here, the data are not allowed to pass beyond the country's geographical borders. Most states insist on this for their own data.
The advantage of the Cloud lies in the power of the datacenters, which are able to handle a great many virtual machines and provide the power necessary for their execution. Multiplexing between a large number of users greatly decreases costs. Datacenters may also serve as hubs for software networks and host virtual machines to create such networks. For this reason, numerous telecommunications operators have set up companies that provide Cloud services for the operators themselves and also for their customers.
Among the techniques that we will examine in detail hereafter is SDN (Software-Defined Networking), whereby multiple forwarding tables are defined, and only datacenters have sufficient processing power to perform all the operations necessary to manage these tables. One of the problems is determining the necessary size of the datacenters, and where to build them. Very roughly, there is a whole range of sizes, from absolutely enormous datacenters, with a million servers, to femto-datacenters, with the equivalent of only a few servers, and everything in between.
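To make the idea of centrally computed forwarding tables more concrete, here is a minimal sketch in Python, assuming hypothetical field and function names; it illustrates the match-action principle only and is not the OpenFlow data model.

# Minimal sketch of an SDN-style flow table: a list of match/action rules
# computed centrally and pushed to a switch. Field names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowRule:
    dst_prefix: str          # e.g. "10.1.0.0/16"
    out_port: int            # port to forward matching packets to
    priority: int = 0        # higher priority wins when several rules match

def prefix_matches(dst_ip: str, prefix: str) -> bool:
    """Very simplified prefix test, limited to /0, /8, /16 and /24 prefixes."""
    net, length = prefix.split("/")
    keep = int(length) // 8
    return dst_ip.split(".")[:keep] == net.split(".")[:keep]

def lookup(table: list[FlowRule], dst_ip: str) -> Optional[int]:
    """Return the output port of the highest-priority matching rule."""
    candidates = [r for r in table if prefix_matches(dst_ip, r.dst_prefix)]
    return max(candidates, key=lambda r: r.priority).out_port if candidates else None

# A controller could compute such a table per switch and install it:
table = [FlowRule("10.1.0.0/16", out_port=2, priority=10),
         FlowRule("0.0.0.0/0", out_port=1, priority=1)]   # default route
print(lookup(table, "10.1.3.7"))   # -> 2
print(lookup(table, "192.0.2.1"))  # -> 1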
I.3. “Cloudification” of networks
Figure I.8 shows the rise of infrastructure costs over time. We can see that an increase in speed implies a rise in infrastructure costs, whereas the income of telecommunications operators stagnates, partly due to very strong competition to win new markets. It is therefore absolutely necessary to find ways to reduce the gap between costs and income. Among other reasons, two aspects are essential to start a new generation of networks: network automation using an autopilot, and the choice of open source software in order to decrease the number of network engineers required and to avoid license costs for commercial software. Let us examine these two aspects before studying the reasons to turn to this new software network solution.
The automation of the network pilot is the very first reason for the new generation. The concept of the autopilot created here is similar to that of an aircraft's autopilot. However, unlike an aircraft, a network is very much a distributed system. To achieve an autopilot, we must gather all knowledge about the network – that is, contextualized information – either in all nodes, if we want to distribute the autopilot, or in a single node, if we want to centralize it. Centralization was chosen for obvious reasons: simplicity, and avoiding the network congestion that distributing this knowledge to every node would create. This is the most important paradigm of this new generation of networks: centralization. The network is no longer a decentralized system; it becomes centralized. It will then be necessary to pay attention to the security of this center by doubling or tripling the controller, which is the name given to this central system.
The controller is the control device that must contain all knowledge about users, applications, nodes and network connections. From there, smart systems will be able to pilot packets through the infrastructure to provide the best possible quality of service for all the clients using the network. As we will see later on, the promising autopilot for the 2020s is being finalized: the open source ONAP (Open Network Automation Platform).
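As a purely schematic illustration of this centralization principle – not of the actual ONAP architecture – the short sketch below shows a controller that builds a global view of the topology from link reports and computes paths from it; all names are hypothetical.

# Illustrative sketch of a centralized controller holding the global network
# view and computing paths from it (hypothetical names, not ONAP itself).

from collections import deque

class Controller:
    def __init__(self):
        self.links = {}                      # node -> set of neighbouring nodes

    def report_link(self, a: str, b: str):
        """Nodes report their links; the controller builds the global view."""
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def compute_path(self, src: str, dst: str):
        """Shortest path (in hops) computed centrally, then pushed to switches."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in self.links.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

ctrl = Controller()
for a, b in [("s1", "s2"), ("s2", "s3"), ("s1", "s4"), ("s4", "s3")]:
    ctrl.report_link(a, b)
print(ctrl.compute_path("s1", "s3"))   # -> ['s1', 's2', 's3']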
The second important aspect of the new generation of networks is open source software. The rise of this open source software always stems from a need to reduce costs, and also to implement standards that companies can easily follow. The Linux Foundation is one of the major organizations in this area, and most of the software shaping future networks comes from this Foundation, including the OPNFV (Open Platform for NFV) platform. This is the most important one, since it gathers the open source software that will act as a basic framework.
This tendency towards open source software raises questions such as: what will become of network and telecom suppliers if everything comes from open source software? Is security ensured with these millions of lines of code, in which bugs will inevitably occur? And so on. We will answer these questions in Chapter 4, on open source software.
The rise of this new generation of networks, based on datacenters, has an impact on energy consumption in the world of ICT. In 2019, this consumption was estimated to account for 7% of the total carbon footprint. However, this proportion is increasing very quickly with the rapid rollout of datacenters and of antennas for mobile networks. By way of example, a datacenter containing a million servers consumes approximately 100 MW. A Cloud provider with 10 such datacenters would consume 1 GW, the equivalent of one unit of a nuclear power plant. This number of servers has already been reached or surpassed by some 10 well-known major companies. Similarly, there are already more than 10 million 2G/3G/4G antennas in the world. Given an average consumption of 1,500 W per antenna (2,000 W for 3G/4G antennas but significantly less for 2G antennas), this represents around 15 GW worldwide.
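The orders of magnitude quoted above can be checked with a few lines of arithmetic; the snippet below simply reproduces the figures given in the text (roughly 100 W per server, 1,500 W per antenna on average).

# Orders of magnitude quoted above, reproduced as simple arithmetic.

servers_per_datacenter = 1_000_000
power_per_server_w = 100                          # ~100 MW for 1 M servers
datacenter_power_mw = servers_per_datacenter * power_per_server_w / 1e6
print(datacenter_power_mw)                        # -> 100.0 MW per datacenter
print(10 * datacenter_power_mw / 1000)            # -> 1.0 GW for ten datacenters

antennas = 10_000_000                             # 2G/3G/4G antennas worldwide
avg_power_per_antenna_w = 1500
print(antennas * avg_power_per_antenna_w / 1e9)   # -> 15.0 GW worldwide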
Continuing in the same vein, the carbon footprint produced by energy consumption in the world of ICT is projected to reach 20% of the total by 2025 if nothing is done to control the current growth. It is therefore absolutely crucial to find solutions to offset this rise. We will come back to this in the last chapter of this book, but solutions already exist and are beginning to be used. Virtualization is a good example: multiple virtual machines are hosted on a common physical machine, and a large number of servers are placed in standby (low-power) mode when not in use. Processors also need to be able to drop to very low operating speeds whenever possible, since power consumption rises steeply with processor speed. When the processor has nothing to do, it should almost stop, and speed up again when the workload increases.
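To give an idea of the potential gain, the sketch below uses the classical dynamic-power relation for CMOS circuits (power roughly proportional to capacitance x voltage squared x frequency, with the supply voltage scaled crudely in proportion to frequency); the constants are illustrative assumptions, not measurements.

# Illustrative dynamic-power model: P ~ C * V^2 * f, with supply voltage
# scaled roughly in proportion to clock frequency (so P grows roughly as f^3).
# All constants are made-up values for illustration only.

def dynamic_power_w(freq_ghz: float, c_eff: float = 10.0, v_at_1ghz: float = 0.9) -> float:
    voltage = v_at_1ghz * freq_ghz          # crude linear voltage/frequency scaling
    return c_eff * voltage ** 2 * freq_ghz

full_speed = dynamic_power_w(3.0)           # server processor at 3 GHz
throttled = dynamic_power_w(0.5)            # same processor throttled to 0.5 GHz
print(f"{full_speed:.1f} W vs {throttled:.1f} W "
      f"({full_speed / throttled:.0f}x reduction when nearly idle)")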
Mobility is another argument in favor of adopting a new form of network architecture. Figure I.8 shows that, by 2020, the average speed of wireless solutions reaches several tens of Mbit/s. We therefore need to manage the mobility problem. The first order of business is the management of multi-homing – i.e. the ability to connect a terminal to several networks simultaneously. The word “multi-homing” stems from the fact that the terminal receives several IP addresses, assigned by the different networks to which it is connected. These multiple addresses are complex to manage, and the task requires specific functionalities. Mobility must also make it possible to handle simultaneous connections to several networks: on the basis of certain criteria (to be determined), the packets of the same message can be separated and sent via different networks. They then need to be re-ordered when they arrive at their destination, which can cause numerous problems.
Figure I.8. Speed of terminals based on the network used
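The re-ordering problem just mentioned can be illustrated very simply: if the packets of one message are sent over two different networks, the receiver must buffer them and deliver them in sequence order. The sketch below assumes explicit sequence numbers and hypothetical interface names.

# Illustration of re-ordering packets of one message that arrive out of
# order because they travelled over two different networks (multi-homing).
# Sequence numbers and interface names are hypothetical.

def reassemble(packets):
    """packets: list of (seq_number, arrival_interface, payload) tuples."""
    # Buffer everything, then deliver in sequence-number order.
    return b"".join(payload for _, _, payload in sorted(packets))

arrived = [(2, "wifi", b"lo "), (0, "lte", b"he"),
           (3, "lte", b"world"), (1, "wifi", b"l")]
print(reassemble(arrived))   # -> b'hello world'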
Mobility also raises the issues of addressing and identification. An IP address can be interpreted in two different ways: for identification, to determine who the user is, and for localization, to determine where the user is. The difficulty lies in dealing with these two functionalities simultaneously. Thus, when a customer moves far enough to leave the sub-network with which he/she is registered, a new IP address must be assigned to the device, which is fairly complex from the point of view of identification. One possible solution, as we shall see, is to give two IP addresses to the same user: one reflecting his/her identity and the other his/her location.
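One way to picture this separation is a mapping table that translates a stable identifier into the current locator address, in the spirit of identifier/locator split approaches; the sketch below is purely schematic, with hypothetical names and documentation IP addresses.

# Schematic identifier/locator split: a stable identifier maps to the
# current, topologically meaningful locator address. Names are hypothetical.

class MappingSystem:
    def __init__(self):
        self._loc = {}                     # identifier -> current locator

    def register(self, identifier: str, locator: str):
        """Called when the device attaches to a (new) sub-network."""
        self._loc[identifier] = locator

    def resolve(self, identifier: str) -> str:
        """Correspondents address the identifier; packets go to the locator."""
        return self._loc[identifier]

mapping = MappingSystem()
mapping.register("user-42", "192.0.2.17")      # initial attachment
print(mapping.resolve("user-42"))              # -> 192.0.2.17
mapping.register("user-42", "203.0.113.5")     # after moving to another sub-network
print(mapping.resolve("user-42"))              # -> 203.0.113.5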
Another revolution currently under way pertains to the “Internet of Things” (IoT): billions of things will be connected within the next few years. The prediction is that 50 billion objects will be connected to the IoT by 2020.