Understanding 5G, A Practical Guide to Deploying and Operating 5G Networks, The 5G Evolution Story
(Part 1)


1. It’s All About the Architecture

Cellular mobile radio is an incredible story of exponential growth in subscriber numbers and data volume. As shown in Figure 1, subscriber growth slowed as saturation of the addressable part of the global population was approached around 2010, then accelerated again as new drivers for subscriptions appeared. These new drivers included multiple devices per individual and machine-type communications.

However, operator revenue has not kept up with the growth in subscriptions and data volume. A significant reason is that the advent of smartphones, and their supporting ecosystem of apps, facilitated the emergence of the Over the Top (OTT) players like Google, Netflix, YouTube, Facebook, and Apple, shifting economic power away from the operators. This revolution in market behavior was supported by the availability of open platforms that cultivated innovation in the application space, leading to an explosion of applications for a huge range of functions from the frivolous to the time-saving. It was further shored up by the introduction of the open-source Android platform, which made smartphones available for all budgets. How this situation arose is a great topic of discussion and debate, but this tutorial is not the place to posit it; instead, we will look at its consequences and how they lead to the need for a new game-changer in both the mobile network RAN (Radio Access Network) and core architectures, and in the monetization aspects of such architectures.

To understand why 5G will be that game-changer, we need to go back in time and review the evolution of the mobile network architectures, defining their radio, access, and core elements and functions.

Figure 1. Global mobile radio subscribers, data and revenue growth


1.1. Naming of Parts

Before we embark on our journey into the past, this section introduces some terminology; it can be skipped or revisited at the reader’s leisure. Telecommunication aficionados love acronyms and sometimes like an acronym so much that they use it for multiple purposes. For example, inter-operability testing was known exclusively as “IOT” until the Internet of Things “IoT” became a thing. Thus, naming the parts may help us talk about the architectural revolution that 5G is introducing.

Figure 2 is a 10,000-foot view of the architecture of a cellular mobile network, comprising core network (CN), radio access network (RAN), and the user equipment (UE).

Figure 2. Simple mobile network architecture


The data exchanged between the elements in a communication system, and how these data are interpreted, are precisely defined by standardized protocol layers to ensure correct operation and interoperability between vendors. Figure 3 provides a protocol view of the architecture that is more or less generic across all generations of the cellular network, although the actual processing carried out, along with the structure of the radio link between the antenna unit and the user equipment (known as the air interface), may differ markedly between generations. The protocol architecture has undergone more significant changes in the CN, so the figure only shows a generic CN; each major CN element is described below as it is introduced with its associated generation.

Figure 3. Protocol layers traversing the base station


The principal services provided by each layer are briefly introduced. The control plane (CP) sets up and tears down connections; protocol layers that handle CP are highlighted in blue. The user plane (UP) exchanges data between the UE and the CN; protocol layers that handle UP are highlighted in green.

The non-access-stratum (NAS) manages direct signaling between the UE and the CN to establish and maintain communication sessions with the UE as it moves through the network.

Radio resource control (RRC) manages the broadcast of system information; contacting UEs (paging); establishment, modification, and release of active RRC connections; handover between cells; selection of cells when not connected; and measurement and reporting of the strengths at which the transmissions from different cells are received at the UE.

Service data adaptation protocol (SDAP), new for 5G, manages the mapping of quality-of-service (QoS) flows to radio bearers and provides QoS marking on data packets in the RAN so that packets can be prioritized appropriately. The SDAP communicates with the user-plane entity in the CN.

Packet data convergence protocol (PDCP) manages ciphering, packet header compression, and sequence numbering.

Radio link control (RLC) manages packet segmentation and error correction with Automatic Repeat reQuest (ARQ). Before 5G, this layer also performed packet concatenation and reordering to maximize utilization of the air interface at the expense of increased latency.

Medium access control (MAC) manages multiplexing data from different logical channels into/from transport blocks for delivery on the radio physical layer, and error correction through Hybrid-ARQ (HARQ).

Physical (PHY) functions are dependent on the air interface, but generically include rate matching, modulation, resource element (RE) mapping, and mapping to antennas.
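
The layered stack above can be pictured as successive encapsulation: each layer wraps the data handed down from the layer above. The sketch below is purely illustrative (not from any 3GPP specification); the layer names are real, but the bracketed “headers” are invented stand-ins for the real protocol headers.

```python
# Illustrative sketch: the 5G user-plane stack of Figure 3 as a chain of
# layers, each wrapping the payload from the layer above. Layer names are
# real; the header contents are invented placeholders.

LAYERS = ["SDAP", "PDCP", "RLC", "MAC", "PHY"]  # user plane, top to bottom

def encapsulate(payload: str) -> str:
    """Wrap an IP payload with one nominal header per layer, top-down."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Strip the headers bottom-up, as the receiving peer stack would."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"expected {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("ip-packet")
print(frame)  # [PHY][MAC][RLC][PDCP][SDAP]ip-packet
```

The key property this models is that each layer only talks to its immediate neighbors, which is what lets individual layers (such as RLC) change between generations without disturbing the rest of the stack.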



1.2. Once Upon a Time

If we were to travel four decades into the past, we would find that the drivers for the first-generation mobile network were quite simple: the need for good-quality analog voice services anywhere at any time. With each subsequent generation, however, demand for the service grew, and mobile users wanted, for example, to send and read emails on the go. These new drivers led to major innovations, which in turn added complexity as the architecture adapted to never-ending mobile broadband usage expansion and rising user expectations.

To add another wrinkle, these new-generation architectures must carry the burden of supporting old generations operating simultaneously in the field.


1.2.1. The 1980s and Early 1990s: Here Cometh “GSM”

The GSM (global system for mobile communications) was a digital system using Time Division Multiple Access (TDMA) to share radio resources. It was developed primarily in Europe starting in the mid-1980s, replacing the existing “first generation” analog mobile radio systems, and hence is referred to as a second-generation system, “2G”. It became such a global success story after its initial roll-out in 1992 that the term GSM is now almost synonymous with 2G. There are other 2G systems, such as code division multiple access (CDMA) IS-95 in the US and personal digital cellular (PDC) in Japan. Like the VHS and Betamax conundrum, both CDMA and GSM were fighting for global dominance; however, this is all history and it doesn’t really matter which technology was better. Let us stick here with 2G (aka GSM) as it is relevant to our story of the 5G evolution.

The drivers for 2G were initially to provide a voice service while introducing a moderate data service that could replace the legacy 1G systems. Another driver was to introduce interoperability between mobiles and network elements produced by different vendors.

The GSM network RAN elements include the base transceiver station (BTS), which contains the radio equipment needed to serve each cell in the network. A group of BTSs is controlled by a base station controller (BSC) that manages all the radio-related functions of a GSM network, such as handover, radio channel assignment, and the collection of cell configuration data. The GSM CN initially included the mobile switching center (MSC), which controls several BSCs, performs the telephony switching functions between mobile networks, and connects to the Public Switched Telephone Network (PSTN); see Figure 4.

GSM was based on a circuit-switched technology that later had to be re-engineered to provide a packet-based data approach, the so-called General Packet Radio Service (GPRS). GPRS introduced new CN nodes to enable data transport and connect the mobile network to the Internet: the serving GPRS support node (SGSN) and the gateway GPRS support node (GGSN) formed what is called the GPRS CN. High-speed circuit-switched data (HSCSD) was a better fit to the existing GSM architecture; however, the market required a packet-switched solution to efficiently support simultaneous data and voice services. The changes addressed the MAC, to allow air interface resources to be allocated in small chunks with successful delivery managed by HARQ, and the CN, to allow the connection of a packet data user to be dealt with as a series of temporary block flows (TBFs) rather than as a continuous connection. The system was later improved with enhanced data rates for GSM evolution (EDGE), which introduced 8-phase shift keying (8PSK) modulation to the air interface. It can be argued that EDGE was the innovation that created the environment in which smartphones could evolve.

In fact, IBM introduced what is considered the first concept of a “smart” phone, a product called the Simon Personal Communicator, in 1992. We also know the story of the Blackberry phone that stormed the market in the late 1990s, to the extent that it seemed everyone who was anyone in business had one.

As GPRS was introduced and deployed, the network elements at that time were not dimensioned to support a mobility management scenario where all GPRS-capable mobiles on the system attached to the GPRS CN even though actual usage of GPRS data was low. A further problem with GPRS related to managing the latency for data services that could, for example, enhance the user experience when browsing the web. Latency could be minimized by extending the duration of the TBF in case new data arrived in close succession. However, this was at the expense of capacity for other users to access the system, along with increased battery consumption.

Figure 4. Simplified view of the 2G and 2.5G architecture


It is also interesting to note that the initial GSM system was delivered with a control channel mechanism to deliver short text messages over the air: the Short Message Service (SMS). Originally envisioned as a diagnostic tool, it became a mass-market success, delivering messages in the billions and shaping a new generation of texters and shorthand languages (LOL).

Other features were added to the GSM standard during its life of active development and adoption into the 3rd Generation Partnership Project (3GPP). For example, capacity was increased by applying cell-splitting to go from omni-directional cells to sectored cells and concentric cells, then to micro-cells in street canyons and pico-cells inside buildings.

Frequency hopping was also used to introduce so-called fractional reuse, where “classical” frequency reuse was applied to the frequency layer that carries the broadcast control channel (BCCH), while frequency reuse of one was applied to the other frequency channels, but with a restricted occupancy of the channels.

We will discuss in a later Part the emergence of machine-type communications (MTC), but it is worth noting here that the need for authentication with point-of-sale devices, an early type of MTC service, was discovered to be well-suited to GSM/GPRS (sometimes called 2.5G) due to its ubiquity and low cost.

Such uses fed into the creation of the overarching concept of the Internet of Things (IoT). It was apparent that the requirements for IoT/MTC differed from those of cell-phone subscribers. There were more stringent requirements for lightweight control signaling, to allow many more IoT devices than cell-phone subscribers to be connected in a given area; for very low power consumption, to support sensor-type use cases with no access to external power; and for increased tolerance to path loss, to support deployment in difficult radio propagation situations such as basements. On the other hand, requirements for latency were generally greatly relaxed.

Updates were made to GSM to better support IoT that sought to reduce control channel overhead and to ration random access channel (RACH) usage in the uplink for MTC devices, but the scope to retrofit them was limited and they were largely not taken up by the market, which retained use of legacy devices. Better technical solutions were available with other standards; nonetheless, GSM-based IoT solutions are available in the market at the time of this writing.

Other features, such as the previously mentioned high-speed circuit-switched data (HSCSD), along with voice group call service (VGCS) and multimedia broadcast multicast service (MBMS), were added to the standard but not widely adopted.

Other 2G systems that are relevant when considering 5G system requirements are those directed at specialized applications. For example, the TETRA system provided a group push-to-talk (PTT) service with a very fast (sub-second) call setup time, direct communication between UEs that bypasses the infrastructure, and high-power mobiles, which are required to support emergency services applications. The TETRA system did not keep pace with the changing needs of its users, and the requirements were merged into long-term evolution (LTE), which is now in the process of replacing these legacy systems in some territories around the world.



1.2.2. The Late 1990s to Early 2000s: The Almighty UMTS

The universal mobile telecommunication system (UMTS) was a third-generation system developed across Europe, the US, and Japan starting in the mid-1990s to replace the existing 2G system. Again, as the Betamax vs. VHS story continued, there were other 3G systems, such as CDMA2000 in the US and TD-SCDMA in China. We will focus here on UMTS as the proxy for 3G.

Based on a wideband CDMA (WCDMA) radio physical layer, the UMTS Terrestrial Radio Access Network (UTRAN) was composed of a new element, the Node B, that replaced the BTS elements in the GSM/GPRS network. The Node B streamlined the radio-function interaction with mobile devices. These Node Bs are controlled by the radio network controller (RNC), which carries out radio resource and mobility management functions.

The UMTS CN is similar to that for GSM: the RNC switches the data services plane through the SGSN and GGSN CN subnetwork and the voice services through the MSC CN subnetwork; see Figure 5.

The driver for 3G at the design stage was quite simply extra voice capacity to support the growth in subscriber numbers, combined with an ability to support higher data rates for both Circuit Switched (CS) and Packet Switched (PS) domain services.

Additionally, there was a perception that GSM had very inflexible service definitions; consequently, there was a desire to introduce a very flexible methodology, resulting in the Radio Access Bearer (RAB)/Signaling Radio Bearer (SRB).

Finally, an additional driver was to minimize the latency for call setup, at least for some classes of service. Flow establishment latency badly affects subscriber experience with web browsing. As described above, the solution adopted for this in GSM, which was to extend TBF duration, was inefficient. In contrast, UMTS introduced three additional connected-mode states, intermediate between the fully idle and fully connected states, which provided a more efficient, albeit complex, solution.

Figure 5. Simplified view of 3G and 3.5G architecture


The fully connected state is known as CELL_DCH, for the dedicated channel (DCH) on which resources for a given UE’s connections are reserved for a period. The intermediate states are CELL_FACH, CELL_PCH, and URA_PCH. CELL_FACH works by keeping the network updated about which cell the UE is in and supporting limited data transfer on the shared forward access channel (FACH). Similarly, the CELL_PCH state also updates the network about which cell the UE is in, but it uses the paging channel (PCH) rather than the FACH for downlink data. Finally, URA_PCH [1] also uses the PCH for data, but it differs from CELL_PCH in that it updates the network about the UE’s location with the granularity of a UMTS routing area. These states allow a signaling connection to be maintained to speed the setup of a dedicated connection. Consequently, they can limit congestion on control channels for social media and messaging type applications where many users simultaneously attach to the system and exchange small volumes of data with frequent updates. However, the design stage did not foresee the “smartphone revolution,” which created demand for “always-on” type applications with low average rates but occasional high peak data rates and low latency. The UMTS system had several shortcomings in supporting this use case and was re-engineered to address it. In particular, a fast shared-access channel was added, and the WCDMA approach was made more TDMA-like, by time-slicing allocations to higher modulation channels with lower spreading factor, to address the peak-user-rate limitation by exclusively allocating resources to a few mobiles. Additionally, while the 10ms frame-based HARQ period was maintained, an optional short 2ms HARQ period was introduced to reduce latency.
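
The relationship between these RRC states can be sketched as a small state machine. The state names below are the real UMTS ones, but the transition set is a simplified assumption for illustration and should not be read as the full standardized behavior.

```python
# Illustrative sketch of the UMTS RRC states described above.
# State names are real; the transition set is simplified (assumed),
# not the complete standardized state machine.

TRANSITIONS = {
    "IDLE":      {"CELL_FACH", "CELL_DCH"},   # RRC connection establishment
    "CELL_DCH":  {"CELL_FACH", "CELL_PCH", "URA_PCH", "IDLE"},
    "CELL_FACH": {"CELL_DCH", "CELL_PCH", "URA_PCH", "IDLE"},
    "CELL_PCH":  {"CELL_FACH", "IDLE"},       # goes via FACH to move data
    "URA_PCH":   {"CELL_FACH", "IDLE"},
}

def can_transition(src: str, dst: str) -> bool:
    """True if the (simplified) state machine allows src -> dst."""
    return dst in TRANSITIONS.get(src, set())

# A UE parked in CELL_PCH with new data first moves to CELL_FACH,
# which is why setup is faster than starting from IDLE:
assert can_transition("CELL_PCH", "CELL_FACH")
```

The point the sketch makes is that the paging states keep a signaling context alive, so reaching a data-capable state is a short hop rather than a full connection establishment from IDLE.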

These enhancements started with high-speed downlink packet access (HSDPA) in 3GPP Rel-5 and high-speed uplink packet access (HSUPA) in Rel-6, ushering in the 3.5G era. Later upgrades to these features, known collectively as HSPA+ (in Rel-7 and beyond), were characterized by the best-in-class vendors being able to make them available as software upgrades.

One of the distinguishing features of the WCDMA-based UMTS system that was deprecated with 3.5G was soft handover. This feature creates a robust radio channel by enabling a mobile to simultaneously connect with transceivers from widely separated base stations, considerably suppressing the effects of multi-path propagation and shadow fading. This provided more resilient coverage for voice and circuit-switched data services. However, it consumed extra network and air interface resources. In HSPA it was replaced with fast handover to maintain the connection to the current best local base station, although soft handover was retained for the shared control channel. Later, fractional DPCH was introduced to conserve and share downlink code tree resources among a larger number of connected users.

Other features were added to the UMTS standard during its life of active development and adoption into the 3GPP. Updates were made to UMTS to better meet the requirements for IoT/MTC, but the scope to retrofit them was limited. Those adaptations did not address the issue of reducing mobile complexity and power consumption, and better technical solutions were available with other standards. Nonetheless, UMTS-based IoT solutions are available in the market at the time of this writing.


1.2.3. The 2010s: The Rise of LTE

As mobile operators were busy expanding their 3G and 3.5G networks, a competing technology was rising, and in 2010 it made a huge debut in the communications market. This technology was WiMAX (worldwide interoperability for microwave access), based on the advancement of the IEEE 802.16 standard and refreshed from earlier versions of the standard to satisfy the requirements for a fourth-generation (4G) system defined by the International Telecommunication Union (ITU). WiMAX promised to replace the last mile of the communications link for both residential and enterprise broadband and provide speeds up to 100 megabits per second.

This WiMAX frenzy accelerated the emergence of the 3GPP fourth-generation network and, in particular, the LTE (Long Term Evolution) standard. Also, the expansion of GSM and the need to support standardized worldwide roaming was one motivation for carriers like Verizon, who had bet on CDMA during the 3G phase, to quickly adopt the LTE standard. The fact that it took less than ten years from 3G for the LTE standard to be ready for implementation was an indication of how smartphone take-up, and the higher data rates needed for the new apps frenzy, exceeded initial planning and market predictions.

The central driver for LTE was to achieve higher data rates in the packet domain and to simplify the network by eliminating the need to support separate circuit-switched connectivity for voice. The LTE advantage was the creation of a “flatter” network architecture without the need for a base station controller (BSC/RNC). This was also accelerated by increases in microprocessor processing capability and cheaper RAM. So, what used to be a system of Node Bs and RNCs was repackaged into a system of single elements, the evolved Node Bs (eNodeBs). This simplification also reduced the number of different User Equipment (UE) activation states that had been introduced with UMTS, with the aim of reducing data latency. A totally new CN, the Evolved Packet Core (EPC), was introduced to support LTE. This included the Mobility Management Entity (MME), the Serving Gateway (S-GW), and the Packet Gateway (P-GW); see Figure 6. Other key elements of the EPC were the Home Subscriber Server (HSS) to manage subscriptions, the Policy and Charging Rules Function (PCRF), and the Authentication, Authorization, and Accounting (AAA) server for security.

Figure 6. Simplified view of the 4G architecture



While LTE Rel-8 supported voice over IP [2], an acceptable solution to provide handover to legacy CS voice on 2G or 3G was not available when LTE was first deployed. This arose because the EPC did not support the CS domain, and PS-to-CS handover was not available. Initially, networks had to rely on circuit-switched fallback (CSFB). Consequently, Voice over LTE (VoLTE) deployment was delayed until after 2012, when the Single-Radio Voice Call Continuity (SRVCC) feature, based on an upgraded core network, became sufficiently available. To support this feature, the EPC CN is supplemented by the IP Multimedia Subsystem (IMS) CN to provide an “anchor” for the signaling that permits seamless handover between the PS and CS domains. The IMS CN includes the Call Session Control Function (CSCF), the Subscriber Location Function (SLF), the Breakout Gateway Control Function (BGCF), the Media Gateway Control Function (MGCF), and the Media Gateway (MGW).

To make a more straightforward distinction from a marketing perspective, LTE Rel-10 was dubbed “LTE-Advanced.” This increased peak data rates through the introduction of carrier aggregation of up to five carriers (100MHz total bandwidth) and the enhancement of multi-antenna techniques (8x8 MIMO downlink and 4x4 MIMO uplink). This is the LTE release that was submitted to the ITU to meet the international mobile telecommunication (IMT)-Advanced requirements [6]. The LTE air interface, which will be introduced fully in Part 2, sends multiple data transmissions in parallel using Orthogonal Frequency Division Multiplexing (OFDM) and delivers data to and from multiple users at the same time using Orthogonal Frequency Division Multiple Access (OFDMA). This scheme results in a multitude of discrete units of communication that can be allocated flexibly. The ability to allocate resources in a very granular way on the LTE air interface also meant that adaptations could not only address possible control channel congestion for delay-tolerant devices, in a similar manner to UMTS, but also enable low-complexity, low-power MTC devices. In particular, the Narrowband Internet of Things (NB-IoT) feature of Rel-13 allowed devices to use a single OFDM subcarrier, simplifying implementation, and used significant repetition coding, increasing path-loss resilience by 20dB and enabling low-power and in-building devices.
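
Where the 20dB figure for NB-IoT comes from can be seen with a back-of-envelope calculation. Assuming ideal coherent combining (an idealization; real gains are somewhat lower due to channel-estimation losses), N repetitions of a transmission buy roughly 10·log10(N) dB of link budget:

```python
import math

# Back-of-envelope sketch (assumption: ideal coherent combining, where
# N repetitions yield 10*log10(N) dB of gain). The ~20 dB coverage
# extension quoted for NB-IoT then needs on the order of 100 repetitions.

def repetition_gain_db(n_repetitions: int) -> float:
    return 10 * math.log10(n_repetitions)

for n in (1, 8, 64, 128):
    print(n, round(repetition_gain_db(n), 1))
# 128 repetitions -> ~21.1 dB under the ideal-combining assumption
```

This is also why NB-IoT trades latency for coverage: a device in a deep basement may spend hundreds of milliseconds repeating a single small transport block.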

LTE Advanced Pro (LTE-A Pro, also known as 4.5G) was introduced with 3GPP Releases 13 and 14 and in 2016 was considered a precursor to the 5G evolution. 4.5G significantly increased the data speeds and bandwidth available. This was achieved using several different technologies, including carrier aggregation, which increased the number of simultaneous carriers supported from five to thirty-two; license assisted access (LAA) to include carriers in the unlicensed spectrum, using a listen-before-talk (LBT) mechanism to enable co-existence with existing users of the spectrum; advances in antenna systems, with full-dimension (massive) MIMO increasing the supported antennas from sixteen to sixty-four to support two-dimensional beamforming; and higher-order modulation up to 256 quadrature amplitude modulation (256QAM) [3].
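
A rough sense of how these features compound can be had from a first-order scaling sketch. The assumption here (stated loudly: it ignores coding rate, control overhead, and practical scheduler limits) is that peak rate scales linearly with the number of aggregated carriers, the bits per symbol (log2 of the modulation order), and the number of MIMO layers:

```python
import math

# First-order scaling sketch (assumption: peak rate ~ carriers x
# bits-per-symbol x MIMO layers; coding rate and overhead ignored).

def relative_peak(carriers: int, qam: int, layers: int) -> float:
    return carriers * math.log2(qam) * layers

lte_a = relative_peak(carriers=5,  qam=64,  layers=8)   # Rel-10 baseline
pro   = relative_peak(carriers=32, qam=256, layers=8)
print(pro / lte_a)   # ~8.5x from more carriers and 256QAM alone
```

Even under these crude assumptions, going from five to thirty-two carriers and from 64QAM to 256QAM multiplies the peak by roughly 8.5x, which is why 4.5G marketing could credibly promise gigabit-class rates.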

The geographical distribution of traffic demand across a mobile network is very uneven. The coverage area of macrocells is large enough to provide spatial averaging. That is, one macrocell in a given location will generally experience a similar traffic demand to its neighbors. As cells get smaller, this no longer holds, and some cells will experience high traffic and adjacent ones considerably less. This sets the minimum useful macrocell inter-site distance at about 200m. To further “densify” the network to provide extra capacity in “hot-spots” and in-fill coverage in localized “not-spots,” LTE small cells and heterogeneous network (HetNet) access were introduced. This development led to an exponential increase in the number of remote radio heads (RRHs); it was not economical to attach a single eNodeB to each RRH. The centralized RAN (C-RAN, sometimes called the cloud-RAN if virtualized) allowed the efficient management of thousands of RRHs with a central eNodeB pool (called a BBU hotel); see Figure 7.

Figure 7. Simplified view of 4.5G architecture


1.2.4. Virtualization and the Telecommunications Lifecycle

Telecommunication standards have traditionally taken up to fifteen years to make a major step forward to the next generation and delivery of a complete end-to-end standard. Innovations in radio communication are ultimately analog games. Progress is held back by the limitations of what can be achieved with technology that must span the digital and analog worlds while being high-performing, requiring a reasonably low amount of power, and, for the UE, fitting into a handheld device. Then there is the Shannon limit, which places an upper limit on how much information can be transferred over a given channel bandwidth of spectrum with a given level of interference and noise. This is an ever-present specter that, until the next innovation, appears to make progress in ever-diminishing steps.
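
The Shannon limit mentioned above has a compact form: C = B·log2(1 + SNR), the upper bound on error-free throughput for a channel of bandwidth B hertz at a linear signal-to-noise ratio SNR. The example numbers below are illustrative, not taken from any standard:

```python
import math

# Shannon capacity: C = B * log2(1 + SNR), the hard upper bound on
# error-free throughput for bandwidth B (Hz) and linear SNR.
# Example figures are illustrative assumptions, not standard values.

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz LTE-like channel at 20 dB SNR:
c = shannon_capacity_bps(20e6, 20.0)
print(f"{c / 1e6:.0f} Mbit/s")   # ~133 Mbit/s upper bound
```

Because the bound grows only logarithmically with SNR, each generation has leaned on the linear levers instead: more bandwidth (carrier aggregation) and more parallel spatial channels (MIMO).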

However, this is changing. LTE took less than ten years. Whether or not Androids dream of electric sheep, there has been a steady convergence between the worlds of information technology and telecommunication technology, leading to an increasing “softwarization” and “virtualization” of telecommunication infrastructure.

It is easier to virtualize CN functions, as they are less processor-intensive than the RAN, so it is here that virtualization had its first significant effect with the introduction of the bearer-independent CN in 2004 in Rel-4 of the UMTS/GSM standard. This saw the division of the MSC into an MSC server (MSC-S) and a media gateway (MGW), which allowed the separation of the control-plane functions from the user plane. This was followed by the standardization of the IP Multimedia CN Subsystem (IMS) in Rel-6 in 2006, which had the vision, if not the actuality at the time, of transitioning support of all voice-call-related services into the IP domain. As noted above, this didn’t really have an impact until 2012, when there was a market imperative to support voice service seamlessly across LTE and legacy CS networks.

The introduction in 2008 of the EPC in Rel-8, starting with a blank sheet of paper, had more freedom to disaggregate the CN into separate server-based entities including, as mentioned above, the HSS to manage subscriptions, the PCRF to administer service and admission rules, and the AAA for security.

3GPP Rel-14 in 2017 saw the introduction of separation between control plane (CP) and user plane (UP) entities in the LTE EPC, known as CUPS (control and user plane separation). This addresses the kind of issue that had occurred with the introduction of GPRS, where the mobility management function in the CN had to deal with all the users attached to the system while there was actually very little user data. Separation of CP and UP allows independent scaling, location, and upgrading of the functions.

Historically, telecommunication investment cycles have been slow, with network infrastructure having an expected lifespan of ten to fifteen years and a payback time typically of multiple years. Mobile devices have historically experienced replacement cycles of at least two to three years, with associated mobile device subsidies forming a relatively small part of the overall CapEx of the network operator. These factors have supported the tendency for the RAN to be deployed on bespoke, high-performance hardware that is complex for network equipment manufacturers (NEMs) to develop and is also inflexible, being purposed to a limited set of tasks, even while softwarization was starting in the CN. However, there have been considerable advances in generic computer server platform technology and in functional abstraction technologies, such as DPDK [4] and real-time operating systems (RTOS), that are increasingly enabling common off-the-shelf (COTS) servers to “softwarize” more aspects of the RAN, thereby eliminating the need for a bespoke hardware platform on which to execute. Additionally, technology developments are now making the power of devices such as application-specific integrated circuits (ASICs) and central processing units (CPUs) sufficient to address the complexity of the OFDMA air interface. In concert, the evolution of transceiver/power amplifier (PA) design has enabled high-power transmission of higher modulation schemes with enough fidelity (low error vector magnitude (EVM)) to make them practical.

Virtualization is supported by disaggregation of the RAN, that is, dividing the functionality so that it can be scaled and located independently. However, it should be noted that, to a limited extent, RAN disaggregation has been available since GSM. For example, some implementations used a proprietary PHY-level split to allow the RF functionality to be located separately, on an antenna mast, from the baseband processing at the base of the tower. This was developed further in 3G by the introduction of the common public radio interface (CPRI) [5], which is closer to being an open interface. This allows a centralized RAN to be deployed where all base station functionality is centralized, and optical fiber is used to distribute in-phase and quadrature (IQ) samples of RF baseband to remote “dumb” radio units.
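
Shipping raw IQ samples is what makes CPRI fronthaul so bandwidth-hungry. The sketch below works through the arithmetic for one 20 MHz LTE antenna-carrier with 15-bit I and Q samples (a common configuration, though exact sample widths vary by deployment):

```python
# Sketch of CPRI fronthaul arithmetic: the link carries raw IQ samples,
# plus one control word per 16-word basic frame and 8b/10b line coding.
# Figures are for one 20 MHz LTE antenna-carrier, 15-bit I/Q (a common
# but not universal configuration).

def cpri_rate_bps(sample_rate_hz: float, bits_per_component: int,
                  antenna_carriers: int = 1) -> float:
    iq_rate = sample_rate_hz * 2 * bits_per_component * antenna_carriers
    iq_rate *= 16 / 15        # 1 control word per 16-word basic frame
    iq_rate *= 10 / 8         # 8b/10b line coding overhead
    return iq_rate

rate = cpri_rate_bps(sample_rate_hz=30.72e6, bits_per_component=15)
print(f"{rate / 1e9:.4f} Gbit/s")   # 1.2288 Gbit/s for a single carrier
```

A single 20 MHz antenna-carrier thus consumes over a gigabit per second regardless of actual user traffic, which is why later RAN splits moved the fronthaul interface up the protocol stack.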

The faster-than-expected introduction of LTE demonstrates the evolution of the economics driving the business cycle. Furthermore, these developments in technology offer the freedom to accelerate this trend, enabling a more agile, opportunity-focused investment cycle by reducing the prevalence of monolithic, forklift-replaceable network elements based on bespoke hardware. Potentially this offers significant savings in CapEx and operational expenditure (OpEx), as RAN functions may be flexibly orchestrated, i.e., created, configured, and deleted, as software functions. The topic of virtualization and its supporting ecosystem is addressed in more depth in Part 6.


1.2.5. 2017: The Game-Changer 5G

Before we jump into what drove the acceleration behind the first 3GPP 5G Standard release in 2017, it is worth noting that other global drivers in the world economies may have influenced such development, including the rise of China as a fast-growing economic leader; the need for the European and North American markets to reverse the GDP growth stagnation seen since 2007; and the unstoppable expansion of non-traditional communication players like Google, Amazon, Facebook, Apple, Netflix, and Uber offering new, innovative services. All these factors are precursors of the anticipated “Artificial Intelligence Age,” with automation and everything connected to everything, as explained in the introduction to this tutorial. It is not hard to see why a new network generation design is needed at the heart of such a technological revolution, with demanding features like unlimited bandwidth and Ultra-Reliable Low Latency.

It seems apparent that the designers of previous generations of mobile telecommunication standards created systems that efficiently delivered the one or two services that they were designed for. However, the systems were not designed to be easily adaptable to the needs of new services. After deployment, the systems had to be substantially re-engineered to meet the evolving service needs of the subscriber, or wait until elements of the core network had been upgraded to allow service continuity. GSM supported voice and low-data-rate circuit-switched data but had to be re-engineered to support packet data. UMTS enhanced capacity to support voice and increased the available data rate. However, it did not efficiently support “always-on” type services with occasional high peak rates and low latency requirements; it was subsequently re-engineered with HSPA+ to do this. Additionally, both GSM and UMTS air interfaces were restricted in their flexibility to adapt to the demands of MTC and were limited to introducing mechanisms to limit network congestion.

In contrast, LTE supported efficient packet data transport for both high-bandwidth services and small-packet voice-like services. However, adaptations to the core network were required to allow a seamless voice service across LTE and legacy networks, and these were not available when LTE was first deployed.

Additionally, problems were encountered when the control traffic and associated processing did not scale at the same rate as the user traffic. This occurred with the adoption of GPRS due to the prevalence of always-on type services, and with UMTS, where many adaptations were required to circumvent the control-heavy nature of the UMTS DCH channel. Adapting the systems to serve both high-bandwidth and small-packet users with a minimum of control channel overhead and latency was challenging, and arguably non-optimal solutions were developed.

Finally, the economics of the telecommunication industry has changed. The straightforward model of wholesale replacement of the earlier standard as a forklift exercise has become a less attractive proposition, as evidenced by the little-to-no growth in operator revenue after LTE deployment in some markets. Technological and ecosystem trends mean that this is no longer the only show in town.

An overarching trend against the backdrop of the respective waves of technology, depicted in Figure 1, has been ever-reducing cell sizes, creating a “densification” of the network. This enables the available frequency resources to be reused over shorter distances to provide extra capacity. Furthermore, on the one hand, handsets are constrained by battery power, which limits the available transmit power; on the other, increasing data rates mean more bits must be exchanged. The shorter cell range reduces propagation loss and helps the system meet energy-per-bit/noise-floor requirements.

However, densification increases the number of elements required to be deployed and managed, which, all things being equal, increases CapEx and OpEx. This expectation is supported by an economic analysis by Frisiani, et al. (McKinsey) that, viewed globally, showed that CapEx and OpEx are consuming a greater share of operator revenue, leading to a reduction in cash flow. Figure 8 shows a simple extrapolation [6] of Frisiani's model over the timespan used in Figure 1, which illustrates that the new fifth-generation Standard must enable operators to “reinvent” the way they manage network CapEx and OpEx.

Figure 8. Global mobile operator revenue and cash flow

The overriding driver from the analysis above is that 5G needs to be flexible to allow the system to:

- Adapt to unforeseen service requirements and thereby be better able to exploit new revenue opportunities. Potential candidates are higher-definition on-demand video, smart cities, and IoT for vertical markets such as automotive and emergency services networks.

- Facilitate incremental deployments overlaid on legacy networks to tailor CapEx spend to emerging revenue opportunity and avoid the “forklift” problem.

For this to happen, 5G has adopted several strategies to design in flexibility. These include:

- Independence of RAN from CN
  - Aiding overlay deployment on legacy systems.

- Control and user plane separation in CN and RAN
  - Aiding flexible deployment and scaling of processing capability depending on emerging service need, and helping support virtualization and “softwarization.”

- Service-based architecture in CN
  - Moving from a point-interface-based architecture to a service-based architecture that utilizes capability discovery and exposure, accelerates deployment of new services, and circumvents the 3GPP standards release cycle time for interface updates.

- Virtualization and orchestration
  - Facilitating rapid reconfiguration of network functionality and low-cost network operation, given that the complexity of the network, as represented by the number of network sub-functions that need to be instantiated and interconnected, is increased.

- RAN disaggregation and open interfaces
  - Aiding flexible deployment and scaling of processing capability, and helping support virtualization and “softwarization.”
  - Open interfaces facilitating innovation and “best-in-class” sourcing at a disaggregated RAN sub-function level.

- Analytics and artificial intelligence
  - Facilitating optimization of network operations and generating new revenue streams through monetization of subscriber data.

- Network slicing
  - Facilitating flexible service definition and support for new vertical services such as IoT and emergency services networks (ESNs).

Some of these factors are directly addressed by the 3GPP standard, while others are addressed by industry groups that effectively “fill in the gaps” between the standard and its physical or virtual embodiment.


1.3. 5G Use-Case-Based Service Drivers

It is clear from the history of the various generations of mobile technology set out above that there is inherent uncertainty in how the network will be operated and what it will be used for. Furthermore, some of the requirements presented by these service needs are contradictory. For example, IoT needs very low control overhead for minimized battery consumption and is delay tolerant. Conversely, ESNs demand very low latency and can tolerate relatively high battery consumption. However, previous systems focused on the delivery of peak data rate and minimum call setup latency, which tended to “bake in” a heavyweight control channel that subsequently limited the ability to add services optimized for minimum power, such as MTC services.

In the absence of a means to increase prescience, the 5G designers have approached this problem by defining an as-diverse-as-possible set of potential use cases that exposes upfront many of the contradictions in system design that caused problems with earlier generations of technology. This allows the designers to adopt a flexible system design that avoids a service-specific, hard-wired approach and facilitates subsequent re-engineering to accommodate unforeseen services. Moreover, it supports the ITU design premise: “Depending on the circumstances and the different needs in different countries, future IMT systems should be designed in a highly modular manner so that not all features have to be implemented in all networks.”

Figure 9. The main 5G use-case scenarios (ITU IMT Vision)


The primary usage scenarios are enhanced Mobile Broadband (eMBB), which follows the historic trend to provide ever greater bandwidth; massive Machine Type Communications (mMTC), which demands very low overheads to enable battery lifetimes of over ten years; and Ultra-Reliable Low Latency Communications (URLLC), which demands resilience along with sub-millisecond service latency. These were designed by the ITU to support IMT-2020; they were not designed to be exhaustive, and additional unforeseen use cases are expected to emerge. Figure 9 summarizes these scenarios and highlights some of the potential applications that could be supported: eMBB is toward the right-hand plane of the figure, mMTC is toward the top plane, and URLLC is toward the front plane. 5G is important for operators because it allows the greatest possible freedom of action to respond to market demands for new services with a single, flexible approach.

The approach highlights that these use cases, and the new revenue streams that they represent, are not standalone activities that fall under the conventional umbrella of mobile radio services; rather, they are associated with new industry verticals that have the potential to revolutionize how whole sectors of the global economy may operate. What is being envisaged is a wholesale re-engineering and rebooting of the global economy. The intent is to address how people see and experience reality; how cities operate; how goods are manufactured, distributed, and sold; how crops are grown; how disasters are prevented or managed; and how society is protected and served.

The challenges that must be overcome by the single flexible standard to meet the diverse and conflicting service requirements represented by the IMT vision are made more apparent when the key capabilities required to support the use case scenarios are illustrated on a radar plot; see Figure 10. The mMTC use case scenario prioritizes connection density and, to a lesser extent, network energy efficiency, and is relatively insensitive to the other requirements. In contrast, URLLC prioritizes latency and mobility, while eMBB broadly prioritizes the whole range of key capabilities. This means, for example, that the building blocks of the standard, such as those that define how calls are set up and torn down and how the connection between the mobile and the network is maintained, must offer radically different modes of operation. So must the building block that determines the granularity with which chunks of resource are allocated to mobiles, to enable both tiny and massive allocations to be made with efficient signaling. For example, massive allocations may be made on up to thirty-two aggregated carriers to support eMBB, whereas single-symbol allocations must be made on a contention basis to support URLLC, as described in Part 2 on NR.

Figure 10. The importance of key capabilities in different usage scenarios (From ITU IMT Vision)


1.4. Independence of RAN from CN

This is quite a pragmatic objective. The initial focus of the 3GPP standardization activities has been on completing the so-called non-standalone (NSA) variant of the network architecture in Rel-15. This architecture enables the 5G NR to be supported by the existing RAN and EPC [7], which is the expected initial deployment, as 5G is likely to be targeted at dense urban areas that have an existing LTE deployment. Additionally, it helps avoid dependencies of RAN features on CN capability. It is not an exact parallel, but it is interesting to compare this with the example of VoLTE introduction, where the seamless handover feature, SRVCC, required a CN supporting IMS to be available. The standalone (SA) architecture cedes control from the EPC to the 5G core network (5G CN).

Figure 11. The NG-RAN NSA option 3/3a. Option 3x was added later (ex 3GPP 38.801)


The terminology for the potential NG architecture options was defined in the NR TR 3GPP 38.801, along with the possible NSA architectures for an NR RAN connected to the legacy EPC. In NSA Option 3, both the CP and UP of the gNB are managed by an LTE eNB that connects to the EPC using the S1-C and S1-U interfaces. In NSA Options 3a and 3x, the control signaling for the gNB is still managed by the LTE eNB that connects to the EPC, but the gNB has a direct S1-U connection to the EPC for the UP. The difference between 3a and 3x lies in the user-plane handling at the gNB: with Option 3x the LTE eNB can also be used for user-plane transmission, which is not possible with Option 3a. This is illustrated in Figure 11.
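The CP/UP anchoring rules for Options 3, 3a, and 3x described above can be summarized as a simple lookup. The following is an illustrative sketch only (the option table and path labels are our own shorthand, not 3GPP-defined data structures):

```python
# Illustrative model of NSA bearer anchoring for Options 3, 3a, and 3x.
# In all three options the eNB anchors control signaling toward the EPC (S1-C);
# the options differ in how the user plane reaches the EPC and the gNB.

NSA_OPTIONS = {
    # option: S1-U termination point(s), plus flags for cross-node UP routing
    "3":  {"cp": "eNB->S1-C", "up": ["eNB->S1-U"],               # gNB UP routed via the eNB
           "enb_up_via_gnb": False},
    "3a": {"cp": "eNB->S1-C", "up": ["eNB->S1-U", "gNB->S1-U"],  # gNB has its own S1-U
           "enb_up_via_gnb": False},
    "3x": {"cp": "eNB->S1-C", "up": ["gNB->S1-U"],               # gNB anchors UP and may
           "enb_up_via_gnb": True},                              # forward UP via the eNB
}

def user_plane_anchors(option: str) -> list:
    """Return the S1-U termination points for a given NSA option."""
    return NSA_OPTIONS[option]["up"]
```

The `enb_up_via_gnb` flag captures the key 3a/3x distinction from the text: only in Option 3x can the gNB route user-plane traffic through the LTE eNB for transmission.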

1.5. Control and User Plane Separation (CUPS)

As previously mentioned, separation of CP and UP, introduced to the CN with the CUPS feature in Rel-14, has the benefit of facilitating the scaling, and even the location, of UP and CP functionality independently. This concept has been developed significantly in the new 5G CN, as we touch on below. However, with 5G, this concept has been extended into the RAN. 3GPP has defined the most central part of the gNB as the central unit (CU) and standardized the interface between the CU-CP and CU-UP as the E1 interface. This allows a single CU-CP entity to manage multiple distributed CU-UP entities. A CU-CP may manage the CP for multiple distributed unit (DU) entities, and a CU-UP may manage the UP for multiple DU entities. A DU is managed by a single CU-UP but may have connections to others to provide redundancy. This is illustrated in Figure 12.
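The cardinality rules just described (one CU-CP controlling many CU-UPs over E1; a DU with a single serving CU-UP plus optional redundancy links) can be sketched as a small data model. This is purely illustrative; the class and method names are our own, not a 3GPP-defined API:

```python
# Minimal sketch of the CU-CP / CU-UP / DU relationships described above.

class GNB:
    def __init__(self):
        self.e1_links = {}    # CU-UP -> controlling CU-CP (one CU-CP, many CU-UPs)
        self.f1u_links = {}   # DU -> serving CU-UP (a single CU-UP per DU)
        self.f1u_backup = {}  # DU -> redundant CU-UP connections

    def attach_cu_up(self, cu_up, cu_cp):
        """Place a CU-UP under a CU-CP; a CU-CP may control many CU-UPs."""
        self.e1_links[cu_up] = cu_cp

    def attach_du(self, du, cu_up, backups=()):
        """Give a DU its single serving CU-UP, plus optional redundancy links."""
        assert cu_up in self.e1_links, "CU-UP must be under a CU-CP first"
        self.f1u_links[du] = cu_up
        self.f1u_backup[du] = list(backups)

g = GNB()
g.attach_cu_up("cu-up-1", "cu-cp-1")
g.attach_cu_up("cu-up-2", "cu-cp-1")                 # one CU-CP, two CU-UPs
g.attach_du("du-1", "cu-up-1", backups=["cu-up-2"])  # single serving CU-UP + redundancy
```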

Figure 12. CP and UP separation in NG-RAN [from 3GPP TS 38.401]



1.6. Service-Based Architecture in 5G CN

The 5G CN has extended the principle of CP-UP separation to incorporate the approach used to construct typical web services, that is, to define the functions as services. Rather than defining point interfaces between the different functions, which require up-front definitions of protocol messages and make the definition of new services subject to the long cycle time of 3GPP standardization, the process of service discovery and utilization is effectively self-defining and extensible. A point-interface-based architecture is available for Rel-15 and Rel-16, but the evolution path is toward the Service-Based Architecture (SBA), as illustrated in Figure 13. Rather than showing interface names connecting the functions, the figure shows the service exposed by each function.
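The discover-then-consume pattern described above can be sketched as a toy service registry. In the 5G CN this registry role is played by the NRF; the class and method names below are illustrative, not a 3GPP-defined API, though the example service names (`Namf_Communication`, `Nsmf_PDUSession`) follow the 3GPP naming style:

```python
# Toy sketch of SBA-style service discovery: network functions register the
# services they produce, and consumers discover producers at run time rather
# than over fixed point interfaces defined up front.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> set of producer NF instances

    def register(self, nf_instance: str, service: str) -> None:
        """A producer NF advertises a service it exposes."""
        self._services.setdefault(service, set()).add(nf_instance)

    def discover(self, service: str) -> set:
        """A consumer NF asks which instances currently offer a service."""
        return self._services.get(service, set())

registry = ServiceRegistry()
registry.register("amf-1", "Namf_Communication")
registry.register("smf-1", "Nsmf_PDUSession")
registry.register("smf-2", "Nsmf_PDUSession")
```

Because the registry is queried at run time, adding a new service needs no new point interface definition, which is the extensibility argument made in the text.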


1.7. Virtualization and Orchestration

Virtualization of the 5G CN and RAN has been introduced above. Orchestration is the process of instantiating the set of virtualized network functions (VNFs) and physical network functions (PNFs) across the requisite pools of virtualization infrastructure and physical hardware to meet the required performance profile for each of the sets of network slices. Virtualization and orchestration promise to automate a great many of the processes involved in deploying and operating cellular networks. The various approaches to the topic, and the players involved in each, are considered in more depth in Part 6.
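The core orchestration step, placing a slice's required VNFs onto available infrastructure, can be sketched in a few lines. This is a deliberately naive first-fit placement under assumed profile and pool shapes; real orchestrators (e.g. ETSI NFV MANO implementations) are far richer:

```python
# Naive first-fit sketch of VNF placement for a slice's performance profile.

def orchestrate(slice_profile: dict, infra_pool: list) -> dict:
    """Place each required VNF on the first infra node with spare CPU capacity."""
    placement = {}
    capacity = {node["name"]: node["cpus"] for node in infra_pool}
    for vnf, cpus_needed in slice_profile["vnfs"].items():
        for node, free in capacity.items():
            if free >= cpus_needed:
                placement[vnf] = node
                capacity[node] -= cpus_needed  # consume the node's capacity
                break
        else:
            raise RuntimeError(f"no capacity for {vnf}")
    return placement

# Hypothetical URLLC slice needing an edge-located CU-UP and UPF
profile = {"name": "urllc-slice", "vnfs": {"cu-up": 4, "upf": 8}}
pool = [{"name": "edge-1", "cpus": 8}, {"name": "edge-2", "cpus": 16}]
```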



1.8. Disaggregation and Open Interfaces

Disaggregation of the RAN increases the flexibility of the available deployment options. It creates the prospect of allowing infrastructure to be constructed from the best available global services and products, reducing time-to-market, limiting vendor lock-in, and enabling more cost-effective roll-out solutions. In fact, some use cases are likely not achievable with monolithic base stations due to the inability to separately scale or locate functions.

Figure 13. 3GPP 5G Core Network Service-Based Architecture (SBA)


Potentially, RAN disaggregation makes delivery of the RAN more amenable to “agile” style development processes, in the sense of being able to divide the technology into elements and deliver one minimal capability at a time. The Telecom Infrastructure Project (TIP) is promoting such agile-style development, where requests for information (RFIs) to potential vendors replace overarching standardization to develop new telecom products. In principle, this may help address the issues discussed above in connection with the mobile telecommunications business lifecycle.

In addition, by demarcating the RAN into separate functional elements, RAN disaggregation facilitates the extension of network function virtualization (NFV) into the RAN. This creates a virtualized deployment environment that enables more agile delivery of incremental (and disaggregated) 5G functionality, as the vendor is no longer tied to the delivery of a fixed-size hardware platform.

Figure 14 summarizes the functional elements of each protocol layer of a 4G/5G RAN for both uplink and downlink; the functions are essentially generic at this level of abstraction. The figure illustrates the more popular split points within the RAN. The split between PDCP and RLC, referred to as the higher-layer split (HLS), has been standardized by 3GPP as the F1 interface for 5G in the 38.470 series specifications, and as the W1 interface for 4G. The splits between the MAC and the PHY, and within the PHY, are referred to as the lower-layer split (LLS). 3GPP studied the LLS, but standardization has been left to the market, with the Small Cell Forum and the Telecom Infrastructure Project (TIP) promoting Split 6, and the operator-led O-RAN forum promoting Split 7-2x. The figure shows the names [8] of the split parts of the RAN: the central unit (CU), the distributed unit (DU), and the radio unit (RU). The shading indicates functions residing in the RU and DU depending on the chosen split, which may differ for UL and DL.

Previous generations of the RAN have adopted elements of RAN disaggregation. For example, CPRI, introduced in the early 2000s, would now be classified as an Option 8 split between the PHY and RF that transports digital samples of the baseband analog waveform at about a 20x line rate. It enabled the RF to be mounted at the masthead, reducing energy consumption as cable loss is eliminated and heating, ventilation, and air conditioning (HVAC) requirements are reduced. It eases many elements of deployment, troubleshooting, and even RF interference analysis (apart from climbing the mast to inspect the RF). Additionally, it provided the flexibility to implement a C-RAN. However, the explosion in data rates and numbers of antenna elements makes it prohibitively expensive to scale to 5G.
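A back-of-envelope calculation shows why Option 8 IQ transport does not scale to 5G: the fronthaul rate grows with sample rate and antenna count, regardless of actual user traffic. The sample rates, bit widths, and line-coding overhead below are nominal assumptions for illustration, not values taken from the CPRI specification:

```python
# Why CPRI-style (Option 8) fronthaul scales badly: bit rate is proportional
# to sample rate x 2 (I and Q) x bits per component x antennas x line overhead.

def iq_fronthaul_gbps(sample_rate_msps: float, bits_per_iq: int,
                      antennas: int, line_overhead: float = 1.25) -> float:
    """Approximate IQ fronthaul rate in Gbps (overhead models e.g. 8B/10B coding)."""
    bps = sample_rate_msps * 1e6 * 2 * bits_per_iq * antennas * line_overhead
    return bps / 1e9

# Assumed figures: 20 MHz LTE carrier (30.72 Msps, 15-bit IQ, 2 antennas)
# versus a 100 MHz 5G NR carrier (122.88 Msps) with 64 antenna elements.
lte_gbps = iq_fronthaul_gbps(30.72, 15, 2)    # a few Gbps: manageable on fiber
nr_gbps = iq_fronthaul_gbps(122.88, 15, 64)   # hundreds of Gbps: prohibitive
```

The two orders of magnitude between the LTE and NR figures is the economic argument for moving the LLS up into the PHY (e.g. Split 7-2x), where frequency-domain data rather than raw time-domain IQ is transported.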

The options for disaggregation and the resultant interfaces are addressed in more detail in Part 3.

Figure 14. Telecom functions of the RAN showing functional split options (PDCP also provides its services to the RRC layer, not shown here)


1.8.1. O-RAN

RAN disaggregation and open interfaces, as mentioned in the previous section, facilitate flexibility and deployment options. However, they also bring more complexity.

A principal consideration for 5G is the scale and flexibility of deployment, optimization, management, and orchestration of the network, and this is only made more pressing by the use of open RAN. Delivering new services and managing RAN capacity will no longer be practical if managed manually. Intelligence and automation must be integrated into all aspects of the network lifecycle to reduce both CapEx and OpEx. Like RAN disaggregation, intelligence in every layer of the RAN architecture is at the core of open RAN technology.

This will allow operators to deploy a truly self-managed, zero-touch automated network. Consider an example where baseband capacity becomes a bottleneck during an unplanned network event: with artificial intelligence and machine learning, the event can be detected and characterized in a short amount of time, and additional capacity can be introduced quickly and efficiently on a white-box platform to overcome the challenge.

To achieve the above-mentioned goals of an open radio access network, operators founded the O-RAN Alliance to clearly define requirements and help build a supply-chain ecosystem that can foster an environment for existing and new vendors to drive innovation.

As per the charter of the O-RAN Alliance, its members and contributors have committed to evolving radio access networks around the world. Future RANs will be built on a foundation of virtualized network elements, white-box hardware, and standardized interfaces that fully embrace O-RAN's core principles of intelligence and openness.

The key principles of the O-RAN Alliance include:

- Lead the industry towards open, interoperable interfaces, RAN virtualization, and big-data-enabled RAN intelligence.

- Specify APIs and interfaces, driving standards to adopt them as appropriate and exploring open source where appropriate.

- Maximize the use of common off-the-shelf hardware and merchant silicon, thus minimizing proprietary hardware.



1.9. Network Slicing

Network slicing is not a new concept within the mobile telecommunications world. For example, mobile virtual network operators (MVNOs) exploit slicing in legacy networks. Typically, this is accomplished by reserving a set of subscriber IMSIs [9] for the MVNO and slicing at the subscriber management/billing layer. Network sharing can also be considered a precursor to slicing. For example, MOCN [10] shares a single RAN between different operators' CNs; MORAN [11] shares a single RAN with a separate frequency allocation per operator; and GWCN [12] extends the sharing into core network elements.

However, the control and user plane separation in 5G, particularly with the 5G CN SBA, allows a much finer granularity of slicing. The functions in the network become logical functions that may be instantiated in physical locations as service requirements and capabilities demand. This is further enhanced by Network Function Virtualization (NFV), which permits the logical functions to be instantiated on a virtualization abstraction layer supported on COTS hardware.

In this new context, a network slice is defined as “a logical network that provides specific network capabilities and network characteristics,” and a network slice instance is defined as “a set of Network Function instances and the required resources (e.g. compute, storage, and networking resources) which form a deployed network slice.” Consequently, the slice instance will also determine the preferred control/user plane splits, function locations, and required telemetry to assure the SLA.

Example slice types include slices for emergency services networks (ESNs), mMTC, and enterprises. Moreover, an MVNO may be provisioned as having access to a subset of slices of the required types.
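The two definitions quoted above, a slice (logical network with given capabilities) versus a slice instance (deployed NF instances plus resources), can be captured in a small data model. The SST values follow the standardized slice/service types (1 = eMBB, 2 = URLLC, 3 = mMTC); the class and field names themselves are illustrative, not 3GPP-defined:

```python
# Illustrative model of slice vs. slice instance, plus an MVNO as a tenant
# granted a subset of slice instances.

from dataclasses import dataclass, field

@dataclass
class NetworkSlice:
    name: str
    sst: int  # slice/service type: 1=eMBB, 2=URLLC, 3=mMTC (standardized values)

@dataclass
class SliceInstance:
    slice: NetworkSlice
    nf_instances: list                              # deployed NF instances
    resources: dict = field(default_factory=dict)   # compute/storage/networking

embb = SliceInstance(NetworkSlice("consumer-embb", sst=1),
                     ["amf-1", "smf-1", "upf-1"])
esn = SliceInstance(NetworkSlice("esn-urllc", sst=2),
                    ["amf-2", "smf-2", "upf-edge"],
                    {"compute": "edge-dc", "priority": "high"})

mvno_slices = [embb]  # an MVNO provisioned with access to a subset of slices
```

Note how the instance, not the slice type, carries the deployment decisions (which NFs, where the resources sit), matching the point in the text that the slice instance determines function locations and telemetry.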

Network slicing effectively requires disaggregation of the 5G CN and RAN in the service/tenant domain. Ideally, the slices are independent and isolated from the point of view of SLA assurance, as this simplifies resource management and meeting the SLA. However, this arrangement requires a sacrifice of efficiency. Additionally, there are limits to the isolation that is attainable, for example, when it comes to meeting stringent latency and bandwidth requirements on the air interface. Resource allocation to the slices is generally dynamic and potentially contingent on priority, e.g. for ESNs, and the concept of a brokering service to manage this contention has been proposed.



1.10. Navigating the 3GPP Standards for Architecture

TS 22.261: 5G service requirements

TS 23.501: System Architecture for the 5G System; describes the overall architecture, placing the NG-RAN in the context of the 5G core using either a reference-point-based architecture or the service-based architecture

TS 23.251: Network sharing architecture and functional description

TR 36.576: Study on architecture evolution for Evolved Universal Terrestrial Radio Access?Network: Discusses RAN LLS

TS 37.324: Service Data Adaptation Protocol (SDAP); specifies SDAP for a UE with?connection to the 5G CN.

TR 38.801: Study on new radio access technology: radio access architecture and interfaces; discusses URLLC; defines the 5G architecture options and the RAN functional split options

TR 38.816: Study on CU-DU lower layer split for NR

TS 38.300: NR; NR and NG-RAN overall description; Stage 2; defines dual connectivity (DC)

TS 38.401: NG-RAN; architecture description

TS 38.410: NG-RAN; general aspects and principles

TS 38.420: NG-RAN; Xn general aspects and principles

TS 38.460: NG-RAN; E1 general aspects and principles

TS 38.470: NG-RAN; F1 general aspects and principles

TS 28.500: Management concept and architecture for NFV


1.11. 5G Architecture Blueprint

Figure 15 illustrates a simplified SA 5G architecture overlaid on the legacy of previous generations. The technologies summarized in previous sections allow an operator to flexibly choose the appropriate level of aggregation/disaggregation, for example adopting one or another C-RAN approach to balance the benefit from sharing, and advanced antenna techniques, against the stringency of fronthaul bandwidth and latency requirements. Within that aggregation/disaggregation choice, separation of CP and UP allows the decision to be made separately for control plane and user plane processing, to allow optimal performance and processing capacity scaling. Further, with the addition of RAN virtualization and network slicing, such choices can be made on a per-slice basis. For example, a URLLC service slice may combine CN and RAN functionality at the edge of the network to achieve sub-millisecond latency, whereas an mMTC service slice may aggregate CP functionality on a national basis. Moreover, the combination of open interfaces, network function virtualization, and the ability to support edge computing allows new models of RAN operation and optimization to be explored, with the potential to deploy third-party RAN Intelligent Controller (RIC) algorithms into the RAN.

Figure 15. Simplified view of the 5G architecture evolution story


In conclusion, we quote the 3GPP “5G Stage 1 Service Requirements” to sum up the strategies that have been applied in the definition of 5G to enhance its flexibility:

“Flexible network operations are the mainstay of the new system. The capabilities to provide this flexibility include network slicing, network capability exposure, scalability, and diverse mobility. Other network operations requirements address the necessary control and data plane resource efficiencies, as well as network configurations that streamline service delivery by optimizing routing between end-users and application servers.”

At the beginning of this chapter, we set out to place the evolution of the 5G CN and RAN architecture in the context of the history of exponential growth in mobile subscriber volumes, the explosion in data usage, and revenue capture by the OTT players. It became apparent that conventional network design and deployment approaches were no longer sufficient to maintain the required levels of cash flow.

However, according to the analysis of Frisiani, et al. (McKinsey), the 5G approach of designing in flexibility offers a bright spot: the application of virtualization and softwarization techniques has the potential to dramatically reduce CapEx and OpEx.

Figure 16 shows the global revenue and cash flow (as shown in Figure 8), including an adjusted cash flow considering the CapEx and OpEx enhancements arising from softwarization and virtualization according to the prediction of Frisiani, et al. In their paper they suggest that the CapEx spend ratio can be reduced from 16-17% to less than 10% of revenue, and that OpEx can be reduced from 50% [13] to about 20% of revenue. In the figure, we assume these changes occur over a four-year period from initial deployment. The scope of the paper addresses all aspects of an operator's business, including customer care and sales, as well as the acquisition and operation of a virtualized RAN, so the figures should be regarded as indicative or even aspirational. However, notwithstanding these caveats, the graph, adjusted for the expected benefits of softwarization and virtualization, is encouraging in that it suggests that 5G will be able to meet its design targets, at least from an economic perspective. In the subsequent chapters, we address in more detail the potential and challenges of the 5G system.

Figure 16. Global revenue and cash flow (adjusted for “softwarization” benefits)


