Specialized Cloud Architectures (Part II of II)
Thomas Erl
This article comprises a chapter excerpt from the newly released book "Cloud Computing: Concepts, Technology, Security, and Architecture - Second Edition" (by Thomas Erl and Eric Barceló, ISBN 9780138052256) and has been published with permission by Pearson Education.
This is Part II of a two-part article, covering the latter 8 cloud architecture models listed below. The first 8 models were covered in Part I.
Specialized cloud architecture models cover a broad range of functional areas that explore creative combinations of cloud mechanisms and specialized components.
The following architectures are covered:
• Direct I/O Access (Part I)
• Direct LUN Access (Part I)
• Dynamic Data Normalization (Part I)
• Elastic Network Capacity (Part I)
• Cross-Storage Device Vertical Tiering (Part I)
• Intra-Storage Device Vertical Data Tiering (Part I)
• Load-Balanced Virtual Switches (Part I)
• Multipath Resource Access (Part I)
• Persistent Virtual Network Configuration (Part II)
• Redundant Physical Connection for Virtual Servers (Part II)
• Storage Maintenance Window (Part II)
• Edge Computing (Part II)
• Fog Computing (Part II)
• Virtual Data Abstraction (Part II)
• Metacloud (Part II)
• Federated Cloud Application (Part II)
Where applicable, the involvement of related cloud mechanisms is described.
15.9 Persistent Virtual Network Configuration Architecture
Network configurations and port assignments for virtual servers are generated during the creation of the virtual switch on the host physical server and the hypervisor hosting the virtual server. These configurations and assignments reside in the virtual server’s immediate hosting environment, meaning a virtual server that is moved or migrated to another host will lose network connectivity because destination hosting environments do not have the required port assignments and network configuration information (Figure 15.24).
In the persistent virtual network configuration architecture, network configuration information is stored in a centralized location and replicated to physical server hosts. This allows the destination host to access the configuration information when a virtual server is moved from one host to another.
Figure 15.24 - Part A shows Virtual Server A connected to the network through Virtual Switch A, which was created on Physical Server A. In Part B, Virtual Server A is connected to Virtual Switch B after being moved to Physical Server B. The virtual server cannot connect to the network because its configuration settings are missing.
The system established with this architecture includes a centralized virtual switch, VIM, and configuration replication technology. The centralized virtual switch is shared by physical servers and configured via the VIM, which initiates replication of the configuration settings to the physical servers (Figure 15.25).
Figure 15.25 - A virtual switch’s configuration settings are maintained by the VIM, which ensures that these settings are replicated to other physical servers. The centralized virtual switch is published, and each host physical server is assigned some of its ports. Virtual Server A is moved to Physical Server B when Physical Server A fails. The virtual server’s network settings are retrievable, since they are stored on a centralized virtual switch that is shared by both physical servers. Virtual Server A maintains network connectivity on its new host, Physical Server B.
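The replication behavior described above can be illustrated with a minimal sketch. All class and setting names here are invented for the example; real VIM products expose their own interfaces:

```python
# Sketch of a centralized virtual switch whose port settings are replicated
# to every registered host, so a migrated virtual server keeps its network
# configuration. Names and the settings format are illustrative only.

class CentralizedVirtualSwitch:
    def __init__(self):
        self._port_config = {}   # virtual server id -> network settings
        self._hosts = []         # host replicas kept in sync by the VIM

    def register_host(self, host):
        self._hosts.append(host)
        host.local_config = dict(self._port_config)   # initial sync

    def assign_port(self, server_id, settings):
        self._port_config[server_id] = settings
        for host in self._hosts:                      # VIM-driven replication
            host.local_config[server_id] = settings

class PhysicalHost:
    def __init__(self, name):
        self.name = name
        self.local_config = {}

    def attach(self, server_id):
        # The destination host already holds the replicated settings, so a
        # migrated virtual server regains connectivity immediately.
        return self.local_config[server_id]

switch = CentralizedVirtualSwitch()
host_a, host_b = PhysicalHost("A"), PhysicalHost("B")
switch.register_host(host_a)
switch.register_host(host_b)
switch.assign_port("vs-a", {"vlan": 10, "port": 3})

# Virtual Server A is moved from Host A to Host B after a failure:
settings = host_b.attach("vs-a")
print(settings)   # the same settings are available on the new host
```

Because every host holds a replica of the centralized switch configuration, the migration scenario in Figure 15.25 succeeds without manual reconfiguration.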
In addition to the virtual server mechanism for which this architecture provides a migration system, the following mechanisms can be included:
15.10 Redundant Physical Connection for Virtual Servers Architecture
A virtual server is connected to an external network via a virtual switch uplink port, meaning the virtual server will become isolated and disconnected from the external network if the uplink fails (Figure 15.26).
Figure 15.26 - A physical network adapter installed on the host physical server is connected to the physical switch on the network (1). A virtual switch is created for use by two virtual servers. The physical network adapter is attached to the virtual switch to act as an uplink, since it requires access to the physical (external) network (2). The virtual servers communicate with the external network via the attached physical uplink network card (3). A connection failure occurs, either because of a physical link connectivity issue between the physical adapter and the physical switch (4.1), or because of a physical network card failure (4.2). The virtual servers lose access to the physical external network and are no longer accessible to their cloud consumers (5).
The redundant physical connection for virtual servers architecture establishes one or more redundant uplink connections and positions them in standby mode. This architecture ensures that a redundant uplink connection is available to take over as the active uplink whenever the primary uplink connection becomes unavailable (Figure 15.27).
Figure 15.27 - Redundant uplinks are installed on a physical server that is hosting several virtual servers. When an uplink fails, another uplink takes over to maintain the virtual servers’ active network connections.
In a process that is transparent to both virtual servers and their users, a standby uplink automatically becomes the active uplink as soon as the main uplink fails, and the virtual servers use the newly active uplink to send packets externally.
The second NIC does not forward any traffic while the primary uplink is alive, even though it receives the virtual server's packets. However, the secondary uplink starts forwarding packets immediately if the primary uplink fails (Figures 15.28 to 15.30). The failed uplink becomes the primary uplink again after it returns to operation, and the second NIC returns to standby mode.
Figure 15.28 - A new network adapter is added to support a redundant uplink (1). Both network cards are connected to the physical external switch (2), and both physical network adapters are configured to be used as uplink adapters for the virtual switch (3).
Figure 15.29 - One physical network adapter is designated as the primary adapter (4), while the other is designated as the secondary adapter providing the standby uplink. The secondary adapter does not forward any packets.
Figure 15.30 - The primary uplink becomes unavailable (5). The secondary standby uplink automatically takes over and uses the virtual switch to forward the virtual servers’ packets to the external network (6). The virtual servers do not experience interruptions and remain connected to the external network (7).
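The active/standby behavior walked through in Figures 15.28 to 15.30 can be sketched as follows. The NIC names and the packet representation are assumptions for illustration:

```python
# Minimal sketch of active/standby uplink failover for a virtual switch.
# The standby NIC forwards nothing while the primary is alive; on failure
# it takes over, and the repaired primary reclaims the active role.

class Uplink:
    def __init__(self, name):
        self.name = name
        self.alive = True

class RedundantUplinkGroup:
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def active_uplink(self):
        # Select the forwarding NIC: primary when healthy, else standby.
        return self.primary if self.primary.alive else self.standby

    def forward(self, packet):
        return (self.active_uplink().name, packet)

primary, standby = Uplink("nic0"), Uplink("nic1")
group = RedundantUplinkGroup(primary, standby)

print(group.forward("p1"))   # ('nic0', 'p1') while the primary is alive
primary.alive = False        # primary uplink fails
print(group.forward("p2"))   # ('nic1', 'p2') the standby takes over
primary.alive = True         # primary returns to operation
print(group.forward("p3"))   # ('nic0', 'p3') primary is active again
```

The virtual servers never call the uplinks directly; they only see the group, which is why the failover is transparent to them and their users.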
The following mechanisms are commonly part of this architecture, in addition to the virtual server:
15.11 Storage Maintenance Window Architecture
Cloud storage devices that are subject to maintenance and administrative tasks sometimes need to be temporarily shut down, meaning cloud service consumers and IT resources consequently lose access to these devices and their stored data (Figure 15.31).
The data of a cloud storage device that is about to undergo a maintenance outage can be temporarily moved to a secondary duplicate cloud storage device. The storage maintenance window architecture enables cloud service consumers to be automatically and transparently redirected to the secondary cloud storage device, without becoming aware that their primary storage device has been taken offline.
NOTE: The live storage migration program is a sophisticated system that utilizes the LUN migration component to reliably move LUNs by enabling the original copy to remain active until after the destination copy has been verified as being fully functional.
Figure 15.31 - A prescheduled maintenance task carried out by a cloud resource administrator causes an outage for the cloud storage device, which becomes unavailable to cloud service consumers. Because cloud service consumers were previously notified of the outage, they do not attempt any data access.
This architecture uses a live storage migration program, as demonstrated in Figures 15.32 to 15.37.
Figure 15.32 - The cloud storage device is scheduled to undergo a maintenance outage, but unlike the scenario depicted in Figure 15.31, the cloud service consumers were not notified of the outage and continue to access the cloud storage device.
Figure 15.33 - Live storage migration moves the LUNs from the primary storage device to a secondary storage device.
Figure 15.34 - Requests for the data are forwarded to the duplicate LUNs on the secondary storage device once the LUN’s data has been migrated.
Figure 15.35 - The primary storage is powered off for maintenance.
Figure 15.36 - The primary storage is brought back online after the maintenance task is finished. Live storage migration restores the LUN data from the secondary storage device to the primary storage device.
Figure 15.37 - The live storage migration process is completed and all the data access requests are forwarded back to the primary cloud storage device.
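The sequence shown in Figures 15.32 to 15.37 can be summarized as an orchestration sketch. The device, router, and LUN representations below are invented for illustration; real live storage migration tooling is considerably more involved:

```python
# Sketch of the storage maintenance window sequence: LUNs are live-migrated
# to a secondary device, consumer requests are redirected, the primary is
# serviced, and data and routing are then restored. Names are illustrative.

class StorageDevice:
    def __init__(self, name):
        self.name = name
        self.luns = {}
        self.online = True

class StorageRouter:
    """Directs consumer data requests to the currently active device."""
    def __init__(self, device):
        self.active = device

    def read(self, lun):
        assert self.active.online, "active device must be online"
        return self.active.luns[lun]

def maintenance_window(primary, secondary, router, service_task):
    # 1. Live-migrate LUNs; source copies stay active until verified.
    secondary.luns.update(primary.luns)
    # 2. Redirect consumer requests to the duplicate LUNs.
    router.active = secondary
    # 3. Take the primary offline and run the maintenance task.
    primary.online = False
    service_task(primary)
    primary.online = True
    # 4. Restore LUN data and redirect requests back to the primary.
    primary.luns.update(secondary.luns)
    router.active = primary

primary = StorageDevice("primary")
secondary = StorageDevice("secondary")
primary.luns["lun0"] = "data"
router = StorageRouter(primary)

# Consumers can keep reading throughout the window, unaware of the outage.
maintenance_window(primary, secondary, router, lambda dev: None)
print(router.read("lun0"))   # 'data', served again by the primary
```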
In addition to the cloud storage device mechanism that is principal to this architecture, the resource replication mechanism is used to keep the primary and secondary storage devices synchronized. Both manually and automatically initiated failover can also be incorporated into this cloud architecture via the failover system mechanism, even though the migration is often prescheduled.
NOTE
Edge and fog computing architectures establish environments outside of clouds, but they are covered here because these environments still relate to clouds and are primarily created to offload processing responsibilities from clouds, so as to improve the performance, responsiveness, and scalability of consumer organization solutions.
Edge and fog computing architectures offer data processing and storage capacity closer to end user devices to streamline the processing and storage of data that will eventually be processed and stored in the cloud.
Edge and fog architectures are commonly used for IoT solutions in support of geographically distributed IoT devices. However, both architectures can be utilized to improve the effectiveness of standard business automation solutions for organizations, especially those with end users in multiple physical locations.
15.12 Edge Computing Architecture
An edge computing architecture introduces an intermediate processing layer that is physically positioned between the cloud and the cloud consumer. The edge environment is intentionally designed and located to be more accessible and performant for the consumer organization.
Portions of the cloud-based solution are moved to the edge environment, where they can be supported with dedicated infrastructure that enables them to perform faster, more responsively, and with greater scalability. Typically, the heavier processing responsibilities will remain with the cloud, while the parts of a solution with lower-end processing responsibilities are moved to the edge layer.
Edge architectures are typically utilized by consumer organizations with multiple, distributed physical locations. For each such location, a separate edge environment can be established (Figure 15.38). Edge computing environments can be implemented in suitable third-party locations that have the necessary resources, such as internet service providers and telecommunication providers.
Figure 15.38 - An edge computing architecture with a set of edge environments, each of which accommodates users or devices in a separate physical location.
Edge computing can benefit application architectures by reducing bandwidth requirements, optimizing resource utilization, improving security (by encrypting data closer to its origin), and even reducing power consumption.
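The division of labor between cloud and edge described above can be sketched as a simple dispatcher. The site names, task names, and the heavy/light classification are all assumptions made for this example:

```python
# Illustrative sketch of edge dispatching: lightweight processing runs in
# the edge environment nearest the user, while heavier work is forwarded
# to the cloud. Sites and task categories are invented for the example.

EDGE_SITES = {"nyc": "edge-nyc", "sfo": "edge-sfo"}   # one edge per location
HEAVY_TASKS = {"train_model", "batch_analytics"}      # stay in the cloud

def dispatch(task, user_location):
    """Return the environment that should run the given task."""
    if task in HEAVY_TASKS:
        return "cloud"   # heavier processing responsibilities remain in the cloud
    # Lighter processing moves to the edge layer closest to the user;
    # users without a nearby edge environment fall back to the cloud.
    return EDGE_SITES.get(user_location, "cloud")

print(dispatch("resize_image", "nyc"))   # edge-nyc
print(dispatch("train_model", "nyc"))    # cloud
print(dispatch("resize_image", "lhr"))   # cloud (no nearby edge site)
```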
15.13 Fog Computing Architecture
A fog computing architecture adds an additional processing layer in between edge environments and a cloud (Figure 15.39). This allows intermediate-level processing responsibilities to be moved from the cloud to fog environments, each of which can support and facilitate multiple edge environments.
Fog computing pushes data processing capacity from the cloud to the fog layer, where gateways may exist to effectively relay data back and forth between the edge environments and the cloud. When edge environments need to send massive volumes of data to the cloud, the fog environment can first determine which data carries more value in order to optimize the data transfers. The gateways in the fog then send critical data to the cloud to be stored and processed, while the remaining data relayed by edge devices may then be processed locally by resources in the fog environment.
Figure 15.39 - The use of the fog computing architecture inserts an intermediary processing layer between the cloud and the edge environments.
As with edge computing, fog computing is also commonly used to support IoT solutions. The use of fog computing for a business automation solution is generally warranted when the solution needs to support many users across highly distributed user bases.
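The gateway-level prioritization described above can be sketched as a simple filter. The record shape and the "critical" flag are assumptions for illustration; a real gateway would apply richer policies:

```python
# Sketch of a fog gateway that prioritizes data relayed from edge
# environments: critical records are forwarded to the cloud first, while
# lower-value records are processed locally in the fog layer.

def fog_gateway(records):
    """Split edge records into cloud-bound and locally processed sets."""
    to_cloud = [r for r in records if r.get("critical")]   # sent to the cloud
    local = [r for r in records if not r.get("critical")]  # handled in the fog
    return to_cloud, local

records = [
    {"sensor": "temp-1", "value": 98, "critical": True},
    {"sensor": "temp-2", "value": 21, "critical": False},
]
to_cloud, local = fog_gateway(records)
print(len(to_cloud), len(local))   # 1 1
```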
NOTE
The remaining three architectures in this chapter originated from content published in An Insider's Guide to Cloud Computing (Pearson Education, ISBN: 9780137935697), authored by David Linthicum.
15.14 Virtual Data Abstraction Architecture
Cloud applications that require access to data sources that supply data in different formats, structures, and schemas will be burdened with the additional responsibility to transform and consolidate disparate data into relevant, uniform datasets. A further negative consequence is the tight coupling that the cloud applications need to form with data sources that may be subject to change, replacement, or retirement in the future.
The virtual data abstraction architecture alleviates these concerns by introducing a data virtualization layer that acts as the connection point for cloud applications that require access to disparate data sources (Figure 15.40). Within this layer, the data exists virtually in data virtualization software, which is configured to resolve the data structure differences to provide a single, uniform data API for cloud applications to access.
Figure 15.40 - The data virtualization layer introduced by this architecture sits in between disparate data sources and cloud applications.
The use of the data virtualization layer enables cloud applications to establish a loosely coupled relationship with disparate data sources. Should those data sources change over time, the data virtualization layer can be updated, ideally without changes to the APIs it exposes to the cloud applications.
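A minimal sketch of this adapter-style layer follows. The two source formats and the uniform record schema are invented examples; commercial data virtualization software resolves far more elaborate structural differences:

```python
# Sketch of a data virtualization layer: source-specific adapters resolve
# format differences behind one uniform API, keeping cloud applications
# loosely coupled to the underlying sources. Schemas are invented examples.

class SqlSource:
    def rows(self):
        return [("1001", "Ana", "MX")]   # tuple-shaped records, string ids

class JsonSource:
    def documents(self):
        return [{"id": 1002, "fullName": "Raj", "country": "IN"}]

class DataVirtualizationLayer:
    """Exposes one uniform record format regardless of the source."""
    def __init__(self, sql, json_src):
        self.sql, self.json_src = sql, json_src

    def customers(self):
        uniform = []
        for cid, name, country in self.sql.rows():
            uniform.append({"id": int(cid), "name": name, "country": country})
        for doc in self.json_src.documents():
            uniform.append({"id": doc["id"], "name": doc["fullName"],
                            "country": doc["country"]})
        return uniform

layer = DataVirtualizationLayer(SqlSource(), JsonSource())
print(layer.customers())
# Both sources appear as identical {'id', 'name', 'country'} records.
```

If either source changes its schema, only its adapter inside the layer needs updating; the `customers()` API seen by cloud applications stays stable.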
15.15 Metacloud Architecture
While the multicloud architecture empowers cloud consumers with the flexibility to utilize diverse clouds to best fulfill business requirements, it can also introduce complexity when it comes to managing heterogeneity: operating and governing multiple clouds, each with potentially different administration requirements, proprietary features, and security controls.
The metacloud architecture (Figure 15.41) abstracts these management, operational, and governance controls into a single logical domain that provides a central administration access point for the cloud consumer. This architecture is ideally established prior to proceeding with a multicloud architecture so that the centralized administration layer can be put in place from the start.
Figure 15.41 - A metacloud architecture, in which a layer is introduced to abstract operational, management, security, and governance control.
The meta layer can be physically located wherever the cloud consumer chooses. It can be based in a specific cloud, distributed across multiple clouds, or even placed on premises. By abstracting management, operational, and governance controls into a central location, the cloud consumer can evolve its multicloud architecture more easily over time, which can significantly improve the organization’s overall agility and responsiveness to business change.
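The central administration point can be sketched as a thin translation layer. The per-cloud classes and method names below are invented stand-ins for proprietary management APIs, not real vendor interfaces:

```python
# Sketch of a metacloud administration layer: one logical access point that
# delegates the same operation to each underlying cloud's own management
# interface. All classes and method names here are invented illustrations.

class CloudAStyle:
    def run_instance(self, image):
        return f"cloud-a:{image}"

class CloudBStyle:
    def create_vm(self, image):
        return f"cloud-b:{image}"

class MetacloudAdmin:
    """Central administration point hiding per-cloud differences."""
    def __init__(self):
        self.clouds = {"a": CloudAStyle(), "b": CloudBStyle()}

    def launch(self, cloud, image):
        # Translate one uniform call into each cloud's proprietary API.
        if cloud == "a":
            return self.clouds["a"].run_instance(image)
        return self.clouds["b"].create_vm(image)

admin = MetacloudAdmin()
print(admin.launch("a", "ubuntu"))   # cloud-a:ubuntu
print(admin.launch("b", "ubuntu"))   # cloud-b:ubuntu
```

Because consumers only ever call `launch()`, clouds can be added, replaced, or retired behind the meta layer without disrupting administrative workflows.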
15.16 Federated Cloud Application Architecture
A common limitation of distributed cloud applications is that their components or services are typically located within a single cloud environment. This limits the performance and functioning of those distributed application parts to the capacity and feature set of a single cloud’s infrastructure.
When using a multicloud architecture, there is an opportunity to leverage the distributed nature of a cloud application by placing individual application components or services in different cloud environments to maximize the benefits each may have to offer. For example, for a given application service, one cloud may offer better high-performance compute power, another more resiliency, and another perhaps more favorable usage costs.
In a federated cloud application architecture (Figure 15.42), application components and services are distributed among available clouds, so that each is deployed in the most advantageous and beneficial location. This can result in a variety of improvements to the cloud application but will also introduce significant architectural complexity.
Figure 15.42 - In a federated cloud application architecture, the distributed parts of the application can end up residing in different hosting environments, including different clouds and on-premises environments. Each part of the application is placed in a location that best supports its distinct requirements.
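The placement decision at the heart of this architecture can be sketched as a scoring function. The cloud names, strength scores, and component requirements are all invented for the example:

```python
# Sketch of placement logic for a federated cloud application: each
# component is deployed to the cloud that best matches its dominant
# requirement. Cloud names and scores are invented for illustration.

CLOUD_STRENGTHS = {
    "cloud-a": {"compute": 9, "resiliency": 5, "cost": 4},
    "cloud-b": {"compute": 5, "resiliency": 9, "cost": 6},
    "cloud-c": {"compute": 4, "resiliency": 6, "cost": 9},
}

def place(components):
    """Map each component to the cloud scoring highest on its requirement."""
    return {
        name: max(CLOUD_STRENGTHS, key=lambda c: CLOUD_STRENGTHS[c][need])
        for name, need in components.items()
    }

app = {"ml-service": "compute", "ledger": "resiliency", "archiver": "cost"}
print(place(app))
# {'ml-service': 'cloud-a', 'ledger': 'cloud-b', 'archiver': 'cloud-c'}
```

Each component lands where its requirement is best served, but, as noted above, the application now spans three environments, which is the source of the added architectural complexity.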
About the Book
Cloud Computing: Concepts, Technology, Security, and Architecture - Second Edition by Thomas Erl and Eric Barceló (Pearson Education, ISBN 9780138052256).
This article contains excerpts from Chapter 15: Specialized Cloud Architectures, which is one of three chapters dedicated to covering common and distinct cloud architecture models (located in Part III, as per the book TOC below).
Table of Contents
Foreword by David Linthicum
Chapter 1: Introduction
Chapter 2: Case Study Background
Part I: Fundamental Cloud Computing
Chapter 3: Understanding Cloud Computing
Chapter 4: Fundamental Concepts and Models
Chapter 5: Cloud-Enabling Technology
Chapter 6: Understanding Containerization
Chapter 7: Understanding Cloud Security and Cybersecurity
Part II: Cloud Computing Mechanisms
Chapter 8: Cloud Infrastructure Mechanisms
Chapter 9: Specialized Cloud Mechanisms
Chapter 10: Cloud Security and Cybersecurity Access-Oriented Mechanisms
Chapter 11: Cloud Security and Cybersecurity Data-Oriented Mechanisms
Chapter 12: Cloud Management Mechanisms
Part III: Cloud Computing Architecture
Chapter 13: Fundamental Cloud Architectures
Chapter 14: Advanced Cloud Architectures
Chapter 15: Specialized Cloud Architectures
Part IV: Working with Clouds
Chapter 16: Cloud Delivery Model Considerations
Chapter 17: Cost Metrics and Pricing Models
Chapter 18: Service Quality Metrics and SLAs
Part V: Appendices
Appendix A: Case Study Conclusions
Appendix B: Common Containerization Technologies
The book is available via Amazon.com, InformIT, and most other book outlets.
Chapter descriptions, a detailed TOC, and symbol legend downloads are here.
About the Authors
Eric Barceló Monroy is an IT professional with extensive experience in cloud computing, data science, IT strategic planning, operational and administrative process re-engineering, system implementation project management, and IT operations.
Follow Eric: www.dhirubhai.net/in/ericbarcelo
Thomas Erl is a best-selling author and oversees the Pearson Digital Enterprise Series from Thomas Erl. He is the CEO and Director of Learning and Development for Arcitura Education Inc., the Founder and Senior Advisor for Transformative Digital Solutions Inc. and a LinkedIn Learning author and contributor.
Follow Thomas: www.dhirubhai.net/in/thomaserl
Subscribe to Thomas on YouTube: www.youtube.com/@terl
Books by Thomas: www.informit.com/authors/bio/f8d115ad-20cc-4d42-ad99-7e349d34f90d and www.thomaserl.com