OpenStack has seen a steady stream of releases in only nine years (start of 2011 to start of 2019). The latest version is Stein, released at the start of 2019. OpenStack is open source software distributed under an Apache license.
The OpenStack architecture is shown in Figure 2.13. It is modular, comprising numerous components, such as Nova for computation, Swift for storage, Glance for the image service, Dashboard for settings and the control panel, etc.
Figure 2.13. The OpenStack system
The part of most interest to us in this book is Neutron, the networking module. In this context, OpenStack provides flexible network models to serve the needs of the applications. In particular, OpenStack Neutron manages IP addresses, allowing either static assignment or the use of DHCP. Users can create their own networks, in which SDN technology is used. OpenStack also has numerous extensions, such as IDSs (Intrusion Detection Systems), load balancing, and options to deploy firewalls and VPNs.
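As a brief illustration of what Neutron exposes, the minimal sketch below uses the Python openstacksdk library to create a tenant network and attach an IPv4 subnet to it. The cloud name, network name and address range are placeholder values, and the exact behavior may vary between OpenStack releases; DHCP is left at its default (enabled) setting.

import openstack

# Connect using credentials defined in clouds.yaml; "my-cloud" is a placeholder name.
conn = openstack.connect(cloud="my-cloud")

# Create a tenant network managed by Neutron.
network = conn.network.create_network(name="demo-net")

# Attach an IPv4 subnet; Neutron hands out addresses in this range (DHCP is on by default).
subnet = conn.network.create_subnet(
    name="demo-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="10.0.0.0/24",
)

print("network:", network.id, "subnet:", subnet.cidr)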
To conclude this section and summarize the SDN architectures in place, Figure 2.14 shows the different components that have been put in place to achieve overall operation. The top and bottom parts represent the Cloud and the physical/logical networks. Between these two points, management and control of the network and of the applications need to take place. On the business-application side, we find sets of software modules, mostly open source, used to deploy cloud-computing infrastructures, and more specifically IaaS (Infrastructure as a Service). On the network side, we find the applications needed to establish a virtualized network structure, with the commands necessary to handle the business applications.
Figure 2.14. The overall architecture of SDN solutions
2.9. Urbanization
We have already mentioned the issue of the urbanization of virtual machines in a network. Let us now look at this concept in a little more detail. It involves placing the virtual machines in the network, i.e. in the Cloud, so that optimum performance is attained. While performance is obviously very important, in today's world the cost of datacenters, and therefore of networks, is driven mainly by energy expenditure. To clarify the issue, Figure 2.15 shows the cost of a datacenter, split between infrastructure and maintenance costs.
Figure 2.15. The cost of a datacenter environment. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
Urbanization is a first response to this demand to economize on energy: it attempts to group the virtual machines together on common servers so that a large number of servers, having become idle, can be put on standby. This solution is very useful at night and at low-demand times of the day. During peak hours, the servers need to be woken up again, with a wider distribution of the virtual machines between the different servers.
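To make the idea concrete, here is a minimal Python sketch (our own illustration, not an algorithm from this chapter): a first-fit-decreasing heuristic packs virtual machine loads onto as few servers as possible, and every server left empty can be put on standby. The loads and the server capacity are invented values.

def consolidate(vm_loads, server_capacity):
    """Return a list of servers, each entry being the list of VM loads it hosts."""
    servers = []  # each element is [remaining_capacity, [hosted loads]]
    for load in sorted(vm_loads, reverse=True):      # place the biggest VMs first
        for srv in servers:
            if srv[0] >= load:                       # first server with enough room
                srv[0] -= load
                srv[1].append(load)
                break
        else:                                        # no room anywhere: wake a new server
            servers.append([server_capacity - load, [load]])
    return [srv[1] for srv in servers]

if __name__ == "__main__":
    vms = [0.5, 0.2, 0.7, 0.1, 0.4, 0.3]             # CPU loads as fractions of one server
    placement = consolidate(vms, server_capacity=1.0)
    print(len(placement), "active servers:", placement)

With the sample loads above, the six virtual machines fit on three servers, leaving the other machines free to sleep; waking servers back up for peak hours corresponds to re-running the placement with a spreading objective instead.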
Urbanization also takes account of cost factors, leading us to use the physical machines located where it is night, migrating the virtual machines around the world over the course of 24 hours so that they always remain in night mode. Evidently, this solution is viable only for lightweight virtual machines that can be moved without difficulty, and is absolutely not appropriate for virtual storage machines processing "Big Data".
Urbanization may also serve other criteria, such as network availability, by facilitating access to backup paths and to virtual machines distributed so that multiple access paths exist. Reliability raises the same sensitive points, and solutions to reliability issues may likewise be found through virtualization and urbanization.
Security considerations may also motivate urbanization. For example, certain sensitive machines may regularly change places so as to escape DDoS (Distributed Denial of Service) attacks. Similarly, a network may be cloned and, in the event of an attack, strongly authenticated clients are switched over to the clone, while the original network is gradually stripped of its resources, effectively becoming a honeypot for the attacker.
Figure 2.16 shows a set of software networks, populated by diverse virtual machines (servers, IP-PBXs, Wi-Fi access points, etc.), which must obey an urbanization algorithm that optimizes a set of criteria. Later, we will revisit the question of the intelligence to be introduced into networks to optimize the urbanization process.
Figure 2.16. The urbanization of a network environment. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
2.10. Conclusion
The new generation of networks is presented in Figure 2.17. It takes the form of datacenters of varying size, from gargantuan datacenters to femto-datacenters located in the user's pocket. These datacenters contain virtualized networking devices or networking functions that are decoupled from the hardware. This ensemble is controlled by pilot, orchestrator or controller machines. Figure 2.17 shows only one controller but, in fact, given the centralized vision of control, the number of devices handled by a single controller can be no greater than a few hundred. For a fairly large network, we therefore need a great many controllers, which will be more or less mutually compatible. The eastbound and westbound interfaces are likely to play an increasingly important role in interconnecting the different sub-networks. It is clear that many new proposals are likely to emerge in this direction, with extensions of protocols already in use, such as BGP.
Figure 2.17. Vision of tomorrow’s SDN networks. For a color version of the figure, see www.iste.co.uk/pujolle/software2.zip
One difficult problem that is beginning to be considered is the urbanization of the virtual machines held in the datacenters. Depending on the optimization criterion, the time of day or the performance requirements, the virtual machines migrate so as to be located in the best possible place. Most typically, urbanization is performed either to save energy, which tends to mean grouping the virtual machines together on common servers, or to optimize performance, which involves the opposite operation: spreading the virtual machines across as many servers as possible.
It should be noted that, in terms of networks, the optimum placement of the virtual machines is no longer an even distribution across all the physical machines but, on the contrary, the filling of a subset of physical machines with the maximum possible number of virtual machines, obviously without degrading performance. The other physical machines are placed on standby to save energy. In the network itself, this means sending the flows along common paths rather than dispersing them across the infrastructure, as sketched below.
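The toy sketch below (an assumed example using the networkx Python library, with an invented five-link topology) illustrates this idea: links that are already carrying traffic are given a much lower routing weight, so successive flows are drawn onto common paths instead of waking up idle links and switches.

import networkx as nx

# Small example topology; in practice this would come from the infrastructure network.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("B", "D")])
active_links = set()          # links already carrying traffic

def route(src, dst):
    # Prefer links that are already awake by giving them a much lower weight.
    for u, v in G.edges():
        G[u][v]["w"] = 1 if (u, v) in active_links or (v, u) in active_links else 10
    path = nx.shortest_path(G, src, dst, weight="w")
    active_links.update(zip(path, path[1:]))
    return path

print(route("A", "C"))        # the first flow takes one of the shortest paths
print(route("A", "C"))        # a second flow reuses the links that are now active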
The SDN market is fairly difficult to predict, because while demand is high, so too are the costs involved in making the move to this new architecture. These high costs are due to novelty and to the difficulty in integrating SDN into the existing networks. In particular, operators who possess a significant physical infrastructure are wondering about the integration of SDN: how can it be introduced without an additional costly structure? Indeed, it is not an option to demolish everything and rebuild from scratch in this new context. Therefore, we need to introduce SDN in specific environments such as the renovation of a part of the network or the construction of a new localized infrastructure.