Let’s talk about 6G

Do you have a minute? Great. Look, I didn’t want to have to bring this topic up right now. I would have much rather put it off for another few years. But as time relentlessly marches on, we are progressively being forced to have this conversation. The fact of the matter is that it’s nearly 2022 and global specifications groups have set a target of releasing a new G every decade. So even if the wireless technology lifecycle continues at no faster pace than the one standards bodies, network operators and equipment vendors have set to meet that goal, we can expect deployments of the next G iteration (6G) to start in just eight years. Admittedly, it could be argued that each new G exists simply to remain relevant in a world where mobile consumers are almost continuously inundated with shiny new Apple and Android toys. It has also historically been required to fix shortcomings when (as inevitably happens) the prior G is oversold. But there is also a new angle to 5G and future evolutions that has barely been given a thought before now: the enterprise market.

Although smaller advancements evolve within typical 3GPP release cycles under the same G umbrella - think LTE-Advanced Pro (release 13) and control-user plane separation, or CUPS (release 14) - there is evidence that things may be a little different the next go-around. Yes, operators have been known to take liberties with their marketing of these pre-G evolutions [cough - 5GE - cough], but there is a new level of urgency around 6G that we did not see in 4G or through the first 5G release.

While the 3GPP develops the technical specifications, it is the role of the ITU’s radiocommunication sector (ITU-R) to define the objectives of each G evolution. It did that for 5G back in September 2015 as ITU-R M.2083, “International Mobile Telecommunications (IMT) for 2020 and beyond,” or simply (and rather cutely) “IMT Vision 2020”. That was subsequently acted on in 3GPP release 15, which was released as an early drop in December 2017 – a full year ahead of schedule. Although contributions for proposals on the requirements for the IMT vision for 2030 and beyond started in May 2021, there are now new forces in play. These are making it increasingly unclear whether the industry will wait around for the ITU’s process to run its lengthy course.

A world divided… again

This uncertainty has been brought about in part by the mobile network supply chain assessments that kicked off around the globe in the late twenty-teens. These activities have resulted in the Chinese government, along with regional suppliers and operators, hunkering down to get a jump on the rest of the globe in the development of new wireless technologies. While the likes of Huawei remain some of the most active participants in global standards bodies such as the 3GPP and IETF, there seems to be no desire to wait for - or achieve - broad approval of their ideas and innovations. The “China Standards 2035” initiative began in January 2018 with an apparent goal of getting a jump start on the 6G (Vision 2030) work from the ITU. It could even have been a ploy to get ahead of 5G Advanced standardization, which is scheduled for 3GPP release 18 in the 2024 timeframe. Indeed, many believe (or even assume) we are witnessing a fundamental separation in standards track activities - something we’ve not seen since the 3G days - timed to occur even before that next major 5G update starts dropping. This would certainly align with the complementary “Made in China 2025” (or MIC 2025) initiative that preceded the China Standards 2035 initiative.

The rest of the world, however, is apparently not prepared to just sit back and watch this all go down. In North America, the Alliance for Telecommunications Industry Solutions (better known simply as ATIS) recently formed the Next G Alliance along with 65 founding and contributing members from across the telco, cloud, vendor and research community. In Europe, researchers have been working on 6G for years. Initiatives such as 6G Flagship, headed by the University of Oulu in Finland, have worked to bring together academics from across the globe. European operators, suppliers and universities have also joined to form the Hexa-X consortium, supposedly named for its mission to “…pave the way to the next generation of wireless networks (Hexa) by explorative research (X).”* While the European Union has a hand in funding Hexa-X as part of its “…beyond 5G” program within the 5G Infrastructure Public Private Partnership (5G-PPP), other governments are also financing 6G initiatives, such as the US National Science Foundation’s Resilient & Intelligent NextG Systems (RINGS) program.

Teetering towards terahertz

So, the 6G discussion is going gangbusters, but with the need to ultimately start standards work, what is most of the talk focused on? In the same way 5G centered around millimeter waves (mmWaves), almost all 6G chatter starts with terahertz (THz) – specifically the 300 GHz (so, strictly, sub-THz) to 3 THz band that 6G is expected to occupy. Most people would stop me right there, as that alone is rather astonishing given that 5G is being rolled out predominantly in the classic low and mid bands and not the promised mmWave spectrum. This is, in part, because the technologies built to overcome the high propagation loss experienced in those frequency ranges are hitting a few snags.
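
To put some rough numbers on that propagation problem, here is a minimal Python sketch using the standard free-space path loss formula. The frequencies and the 100 m distance are purely illustrative, and the calculation deliberately ignores atmospheric and oxygen absorption, which only gets worse as you head toward THz.

```python
import math

def free_space_path_loss_db(freq_hz: float, distance_m: float) -> float:
    """Free-space path loss (FSPL) in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a classic mid-band 5G carrier, a mmWave carrier and a sub-THz carrier
# over the same 100 m link (atmospheric absorption not included).
for label, f in [("3.5 GHz (mid-band)", 3.5e9),
                 ("28 GHz (mmWave)", 28e9),
                 ("300 GHz (sub-THz)", 300e9)]:
    print(f"{label:>20}: {free_space_path_loss_db(f, 100):6.1f} dB over 100 m")
```

Roughly 20 dB of extra loss for every tenfold increase in frequency is the tax that mmWave already pays, and that sub-THz pays all over again.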

Massive MIMO was supposed to be one such enabler but, perhaps not surprisingly in hindsight, doubling the number of antenna modules doubled the weight and size of the radio unit (RU). With tower weight and wind-loading limits being exceeded, architects are having to revert to installing smaller arrays. This has also hampered the plan to simply throw large antennas up on flag poles, lamp posts and roof tops to provide broad coverage. Moreover, these beasts are not exactly viewed as eco-friendly, drawing far more power than comparable LTE RUs – up to 70% more, by some estimates. This has led to reports of antennas being put on standby overnight when demand is low. That alone could explain why 6G is getting this much attention this early on: if the job is to fix the current deployment issues of 5G in the mmWave range, then why not fix them for even higher bands delivering higher bandwidths?

Moving to the sub-terahertz / terahertz band has another potential advantage. In an uncharacteristically forward-looking move, the FCC created a category of experimental licenses in the 95 GHz to 3 THz range for researchers to experiment with 6G technologies. This included a huge chunk (21.2 GHz) of spectrum for unlicensed devices. The sheer size of the frequency range in question, coupled with the poor propagation characteristics of radio frequencies in these ranges, could enable a large number of unlicensed devices to share this band. With a goal of spurring innovation and encouraging implementation of new *new* radio technologies, it is also possible that the FCC foregoes licensing revenues and opens up even more spectrum to unlicensed use. This is especially attractive in the higher ranges where, again, signal propagation is poor, so interference is naturally mitigated.

Escaping from our cell

With 6G - and maybe even public (consumer) 5G in the mmWave band - there is a need to reevaluate everything from the physical antennas to the RF technologies employed in new radio. While long talked about, it is likely time to finally adopt a new approach to antenna deployment. Cell-free massive MIMO (mMIMO) is one such option for replacing current RF design topologies. This proposal replaces the current concept of a small number of big antenna arrays servicing many subscribers in a pre-determined area with an architecture that favors a large number of small, highly distributed access points (APs) jointly serving a few users at a time. Clusters of these APs connect into central processing units (CPUs) which, depending on the model used (likely dictated by the availability of backhaul bandwidth), perform either some or all of the packet processing.
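
None of this is standardized yet, but a toy sketch helps illustrate the user-centric flavor of cell-free mMIMO: every user is jointly served by a handful of nearby APs rather than by whichever single cell it happens to fall into. The AP count, service area, cluster size and path-loss exponent below are arbitrary assumptions, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_APS, NUM_USERS, AREA_M = 64, 8, 500      # illustrative deployment
CLUSTER_SIZE = 4                             # APs jointly serving each user

# Drop APs and users uniformly at random over a square service area.
ap_xy = rng.uniform(0, AREA_M, size=(NUM_APS, 2))
ue_xy = rng.uniform(0, AREA_M, size=(NUM_USERS, 2))

# Simple distance-based large-scale fading (path-loss exponent 3.7, arbitrary).
dist = np.linalg.norm(ap_xy[:, None, :] - ue_xy[None, :, :], axis=2)
gain_db = -37.0 * np.log10(np.maximum(dist, 1.0))

# User-centric clustering: each user picks its CLUSTER_SIZE strongest APs,
# and those APs cooperate (via a CPU) to serve it.
for ue in range(NUM_USERS):
    serving = np.argsort(gain_db[:, ue])[-CLUSTER_SIZE:][::-1]
    print(f"user {ue}: served by APs {serving.tolist()}")
```

In a real system the clusters would be formed from channel estimates and the CPUs would coordinate precoding across them, but the principle is the same: the cluster follows the user, not the other way around.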

While this all sounds similar to current 3GPP functional split propositions, such as O-RAN, the difference is clear when we start to consider the physical design and deployment of the antennas. The same researchers at Linköping University in Sweden who defined cell-free massive MIMO have developed the concept of radio stripes, which Ericsson has worked to make (in prototype form, at least) a reality. Radio stripes represent a reimagining of the classic base station, replacing a typical base station array with antennas etched onto a tape-like strip that can be secured along the length of a building, in a hotel corridor, across a train platform or around a stadium. The application of more antenna elements will facilitate ultra-massive MIMO (UM-MIMO), which will enable far tighter beams resulting in higher gains. As previously noted, this becomes especially important when high band frequency ranges are employed and the effective signal range is diminished by… umm… oxygen, or blocked by obstacles like… err… windows. This post-cellular approach also mitigates the size and weight issues of centralized antenna designs, noted previously, and has been termed user-centric for its focus on better serving clusters of subscribers.
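
As a back-of-the-envelope illustration of why element count matters so much, here are the textbook rule-of-thumb figures for an untapered uniform linear array with half-wavelength element spacing: array gain of roughly 10·log10(N), and a half-power beamwidth of roughly 102/N degrees at broadside. Treat these as ballpark numbers, not a design tool.

```python
import math

# Rule-of-thumb numbers for a uniform linear array with half-wavelength spacing:
# gain grows as 10*log10(N), half-power beamwidth shrinks roughly as 102/N degrees.
for n in (8, 64, 256, 1024):
    gain_db = 10 * math.log10(n)
    hpbw_deg = 102 / n
    print(f"{n:5d} elements: ~{gain_db:4.1f} dB array gain, ~{hpbw_deg:5.2f} deg beamwidth")
```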

Emil Björnson holds up a radio stripe prototype while presenting his concept of cell-free massive MIMO.

Messing with modulation

Complementing these new antenna technologies and RAN deployment methodologies, the adoption of sub-THz / THz wavelengths will also demand new waveform and coding schemes. The cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) technology, employed on the downstream side of new radio, enabled the use of variable subcarrier spacing (termed numerology) and allowed 5G to efficiently employ different frequency ranges simultaneously from common antennas. Unfortunately, this multi-carrier waveform suffers from a high peak-to-average power ratio (PAPR)… which I won’t claim to understand deeply. I do know that the peaks occur when the many independently modulated subcarriers momentarily add up in phase, and that the resulting problem is exacerbated in the high frequency bands, where power amplifiers are already struggling for efficiency. That’s obviously an issue when we are talking about THz. There are PAPR mitigation mechanisms which can be used, but these increase the modulation complexity and require additional information to be carried by the waveform.
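
A short numpy sketch makes the PAPR point concrete: peaks appear when the independently modulated subcarriers happen to line up in phase, and the theoretical worst case for N equal-power subcarriers is a peak N times the average power, i.e. 10·log10(N) dB. The subcarrier count and oversampling are arbitrary illustration choices.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

N = 64                           # subcarriers
t = np.arange(0, 1, 1 / 1024)    # one symbol period, oversampled

rng = np.random.default_rng(3)
# Typical case: each subcarrier carries an independent (random) phase.
random_phases = sum(np.exp(2j * np.pi * (k * t + rng.uniform())) for k in range(N))
# Worst case: every subcarrier is aligned, so all N add constructively at t = 0.
aligned = sum(np.exp(2j * np.pi * k * t) for k in range(N))

print(f"random subcarrier phases:  PAPR = {papr_db(random_phases):4.1f} dB")
print(f"aligned subcarrier phases: PAPR = {papr_db(aligned):4.1f} dB  (= 10*log10(N))")
```

Amplifiers have to be backed off to accommodate those occasional peaks, which is exactly the efficiency hit the high bands can least afford.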

To resolve this, radio engineers are looking no further than the modulation technique employed on the upstream side of both LTE and new radio: discrete Fourier transform spread OFDM, commonly abbreviated as DFT-s-OFDM. Catchy, huh! DFT-s-OFDM has a low PAPR, which is good, but also lower spectral efficiency, which (as I assume you can guess) is bad. The possible fix for that comes in the form of a signal processing method first theorized in 1975 by James Mazo of Bell Labs, which again supports what I’ve said in numerous previous blogs: nothing about transmission technology is ever really new! Faster-than-Nyquist (FTN) signaling, as the name strongly suggests, enables quicker data transfers by increasing the symbol transmission rate beyond the Nyquist rate. Harry Nyquist’s theorem states that “…the maximum signaling rate for ISI-free reception at the matched-filter output samples, known as the Nyquist rate, is twice the channel bandwidth measured in Hz.”** FTN works by effectively multiplying the number of symbols that are transmitted within a given time interval. There is a price for this, as always, in the form of a more complex receiver to remove the interference FTN introduces, but easy access to small and cheap computing capacity is certainly not a problem these days.
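
Here’s a hedged sketch of both ideas. The first part builds a single symbol two ways – QPSK mapped straight onto the subcarriers versus the same block spread by an M-point DFT first – and compares their PAPR; the second part is just the FTN arithmetic, using Mazo’s oft-quoted compression factor of roughly 0.8 as an assumed example. The subcarrier count and FFT size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

M, N_FFT = 600, 4096   # occupied subcarriers and (oversampled) IFFT size

qpsk = (rng.choice([-1, 1], M) + 1j * rng.choice([-1, 1], M)) / np.sqrt(2)

# Plain CP-OFDM: data symbols mapped straight onto the subcarriers.
grid = np.zeros(N_FFT, dtype=complex)
grid[:M] = qpsk
cp_ofdm = np.fft.ifft(grid)

# DFT-s-OFDM: the same block is spread by an M-point DFT before subcarrier
# mapping, giving the waveform a more single-carrier-like (lower-PAPR) envelope.
grid = np.zeros(N_FFT, dtype=complex)
grid[:M] = np.fft.fft(qpsk)
dft_s_ofdm = np.fft.ifft(grid)

print(f"CP-OFDM PAPR:    {papr_db(cp_ofdm):4.1f} dB")
print(f"DFT-s-OFDM PAPR: {papr_db(dft_s_ofdm):4.1f} dB")

# Faster-than-Nyquist: pack symbols every tau*T seconds instead of every T.
tau = 0.8   # Mazo's compression factor, assumed here purely for illustration
print(f"FTN with tau = {tau}: {1 / tau:.2f}x more symbols in the same bandwidth")
```

The interference FTN introduces then becomes the receiver’s problem, which is exactly the compute-for-spectrum trade described above.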

Considering new coding

In the same way that we must reevaluate modulation, researchers are also taking another look at the channel coding schemes for use in 6G. These are the techniques that let a receiver detect and correct errors, allowing a link to operate reliably at a lower signal-to-noise ratio (SNR). This is referred to as forward error correction (FEC). 5G implements a variation of channel coding first devised in 1960 called block code low-density parity check (BC-LDPC) for data transmission channels, and polar codes for control signaling. Devised in 2009, the latter is a relative newcomer, but both have a fundamental issue which makes them problematic for 6G communications scenarios. They suffer from high decoding delays that are compounded by the short code lengths and low code rates necessary for ultra-reliable low latency communications (URLLC). With latency demands getting even stricter in 6G, the high reliability and low complexity of another LDPC variant – convolutional code CC-LDPC – holds promise, but it still exhibits performance problems at even moderate code lengths. Obviously, more work is required in this area. These road bumps are to be expected, however, as we march ever closer to Shannon's capacity limit of a communication channel: “…the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level.”***
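
Since Shannon’s limit comes up, it is worth seeing how cheaply it can be computed – and why the enormous channels available above 100 GHz are so tempting even at the modest SNRs that all that path loss leaves behind. The bandwidth/SNR pairs below are illustrative, not predictions.

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in Gbit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# A narrow mid-band channel at good SNR vs. a huge sub-THz channel at poor SNR.
for bw, snr in [(100e6, 20), (400e6, 20), (10e9, 0), (10e9, 10)]:
    print(f"B = {bw/1e9:5.2f} GHz, SNR = {snr:2.0f} dB -> C <= {shannon_capacity_gbps(bw, snr):6.2f} Gbit/s")
```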

A two-way street

Although every G iteration considers employing full duplex communications, 6G might be the one that finally embraces it as standard. Simultaneously transmitting and receiving on the same frequency has the obvious advantage of making incredibly efficient use of spectrum - effectively doubling it, of course - but mitigating the self-interference has always been an insurmountable challenge. Recently, however, both research establishments and commercial entities have developed analog and digital baseband self-interference cancellation (SIC) circuitry that essentially subtracts the transmitted signal from the received signal. While this sounds like a logical and promising answer to the problem, and may even have the added benefit of mitigating deficiencies in higher-level protocols like TCP, there will be real-life limitations that must be considered, such as inter-cell interference.
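
To make the “subtract the transmitted signal” idea a little less magic, here is a toy numpy sketch of the digital piece of SIC under an idealized, purely linear leakage model: the receiver knows exactly what it transmitted, estimates the leakage path by least squares, regenerates the self-interference and subtracts it. Real designs also need analog cancellation ahead of the ADC and must cope with power amplifier non-linearities, none of which is modeled here.

```python
import numpy as np

rng = np.random.default_rng(11)

N, TAPS = 10_000, 4   # samples and (unknown) self-interference channel length

def power_db(x):
    return 10 * np.log10(np.mean(np.abs(x) ** 2))

# Known transmitted signal, a weak far-end (desired) signal and a leakage path.
tx = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
desired = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
si_channel = rng.standard_normal(TAPS) + 1j * rng.standard_normal(TAPS)

# Received = strong self-interference (tx through the leakage path) + desired.
rx = np.convolve(tx, si_channel)[:N] + desired

# Digital SIC: regress rx on delayed copies of the known tx (least squares),
# rebuild the self-interference and subtract it.
X = np.zeros((N, TAPS), dtype=complex)
for k in range(TAPS):
    X[k:, k] = tx[:N - k]
h_hat, *_ = np.linalg.lstsq(X, rx, rcond=None)
rx_clean = rx - X @ h_hat

print(f"before SIC: received power is {power_db(rx) - power_db(desired):5.1f} dB above the desired signal")
print(f"after SIC:  received power is {power_db(rx_clean) - power_db(desired):5.1f} dB above the desired signal")
```

In this idealized setup essentially all of the self-interference disappears; the hard part in practice is getting anywhere near that once the analog front end, quantization and non-linear distortion enter the picture.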

6 plus G equals AI

With the establishment of distributed open radio access network architectures such as O-RAN and the introduction of complex mMIMO beamforming and steering, machine learning (ML)-driven artificial intelligence (AI) has already been making a name for itself in 5G. It is clear that whatever form 6G ultimately takes, it will be inherently more complex than the previous generation. Supervised, unsupervised and reinforcement machine learning will definitely be required in the physical domain to support the new antenna arrays, beamforming, and channel coding. But the data, network, and application layers will also need more advanced AI. This is to enable not only predictive resource and dynamic spectrum allocation at the physical (PHY) and medium access control (MAC) layers but also fault recovery in the packet domain and management automation of everything from network functions on down. Critically, AI/ML also holds the answer to the energy optimization and power management concerns that are currently hampering mmWave 5G rollouts.

All this new AI will require the standards community to lean into current trends around open interfaces, decoupled controllers, and the application of general-purpose compute platforms. In the same way that a network operator’s primary responsibility is to provide connectivity - however hard they try otherwise - it is fair to suggest the hyperscalers’ talent for deploying highly distributed cloud capacity will become increasingly valuable. As a platform offering, their edge compute capabilities have the advantage of being both incredibly flexible and fully managed, allowing operators to focus on the administration of their transport offerings rather than the underlying computing infrastructure on which the network functions are implemented. Hyperscalers also have a portfolio of services at the disposal of engineers and architects that can be employed when layering on AI/ML network automation or enhanced applications such as industrial IoT.

Good chat.

We should do this again – soon. I know it will annoy some people, but it’s clear that these are important conversations to have. Talking about 6G is not just a marketing ploy… although it is fair to say that it will not upset us marketeers! Hopefully you can now see why there’s an obvious need to start this discussion earlier than we have with any other wireless generation. Frankly, it’s the accelerated timescale I would expect at this stage of any technological evolution, especially as more custom hardware-centric functions are replaced with software running within (edge) clouds on general-purpose compute platforms. Add to that the pressure of newly competing standardization groups, throw in the fact that entire nations are now vying to one-up each other, and it’s clear that the stage is set for a race like never before.


Disclaimer:

None of the ideas expressed in this blog post are shared, supported, or endorsed in any manner by my employer. Or probably anyone else, for that matter.


*Hexa-X The European 6G flagship project | IEEE Conference Publication | IEEE Xplore

**Running Faster than Nyquist: An Idea Whose Time May Have Come | IEEE Communications Society (comsoc.org)

***Noisy-channel coding theorem - Wikipedia
