“En-caching” the RAN – the AI way
The Insight Research Corporation
Cutting-edge insight delivered with simplicity, clarity and integrity.
RAN caching is an intuitive use case for AI. Our report “AI and RAN – How fast will they run?” places caching third among the top AI applications in the RAN.
There is nothing new about caching; in computing, it is as old as computing itself. The reason caching and RAN are being uttered in the same breath is primarily MEC (multi-access edge computing). MEC is a practical concept: it attempts to leverage the distributed nature of RAN infrastructure in response to the explosion in mobile data generation and consumption.
Caching treats practically every point in the RAN as a possible caching destination – base stations, RRHs, BBUs, femtocells, macrocells, and even user equipment.
The caching dilemma is multipronged: what to cache, where to cache, and how much to cache.
In an ideal world, one could have infinite storage and processing capacity interconnected with infinite throughput at zero latency. In the real world, however, each of these aspects – storage, processing power, throughput, and latency – is finite.
If content is moved closer to the edge, latency can be reduced, but the pressure on the backhaul for replication and updates will be high.
Conversely, centralizing the content adds to the latency due to longer access paths.
Conventional solutions use a magical keyword – optimize.
Optimization has its comfort zones: traffic patterns are predictable, spatial diversity is static, and the number of parameters to be considered is finite. None of this holds in present-day networks.
Optimization is, however, a loaded and flexible word. It has rather glibly placed itself in the pantheon of ‘AI-accepted’ epithets.
‘Real’ 5G expects its RAN to be a dynamic beast, continually morphing in response to user behavior, device type, and network conditions. Add temporal and social content features, like views and likes, to that mix. 5G RAN caching needs to be commensurately supple.
Let us sample a few of the very specific suppleness demands on caching: which content will be popular next, which users behave alike, how much cache each cell deserves, and which items to evict when space runs out.
It is in the crosshairs of these questions that AI, ML and DL provide multiple pathways of salvation.
Let us see how.
AI algorithms, trained on historical user data, can predict which content is likely to be requested next and pre-fetch it to edge caches before demand peaks.
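As a minimal sketch of the idea, popularity can be predicted from historical request counts with an exponential moving average; the highest-scoring items become pre-caching candidates. All names and numbers below are hypothetical, and a production system would use a trained model over far richer features:

```python
# Sketch: score near-term content popularity from per-interval request
# counts using an exponential moving average (EMA). Hypothetical data.

def ema_popularity(history, alpha=0.5):
    """Return a popularity score per content ID from a list of
    per-interval request-count dicts (oldest interval first)."""
    scores = {}
    for interval in history:
        for cid in set(scores) | set(interval):
            scores[cid] = (alpha * interval.get(cid, 0)
                           + (1 - alpha) * scores.get(cid, 0.0))
    return scores

def top_k(scores, k):
    """IDs of the k highest-scoring contents - candidates to pre-cache."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: three measurement intervals of request counts per content ID.
history = [
    {"video_a": 90, "video_b": 10},
    {"video_a": 50, "video_b": 40, "clip_c": 5},
    {"video_a": 20, "video_b": 70, "clip_c": 60},
]
scores = ema_popularity(history)
```

The EMA weights recent intervals more heavily, so `video_b`, whose demand is rising, outranks `video_a`, whose demand is falling, even though `video_a` has more total requests.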
Not all users have the same data needs. Machine learning, especially clustering algorithms, can group users with similar consumption patterns, so each cache serves the content its local user clusters actually want.
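To make this concrete, here is a toy k-means clustering over hypothetical per-user traffic profiles. The feature vectors and the deterministic initialization are illustrative only; a real deployment would use richer features and a library implementation:

```python
# Sketch: group users by content-consumption profile with a tiny k-means.

def kmeans(points, k, iters=20):
    """Cluster points (tuples) into k groups; returns a label per point.
    Deterministic init: the first k points seed the centroids."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean).
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[c])))
                  for p in points]
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return labels

# Users described by (video_share, gaming_share) of their traffic.
users = [(0.9, 0.1), (0.85, 0.15), (0.1, 0.9), (0.2, 0.8)]
labels = kmeans(users, k=2)
```

The two video-heavy users land in one cluster and the two gaming-heavy users in the other; a cell serving mostly the first cluster would prioritize caching video content.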
The dynamic nature of 5G RAN, with varying user densities and data demands, necessitates adaptive cache allocation: cache capacity should follow predicted demand rather than stay fixed per cell.
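One simple form of adaptive allocation is to split a fixed edge-cache budget across cells in proportion to predicted demand, with a minimum floor per cell. The cell names, demand figures, and budget below are hypothetical:

```python
# Sketch: demand-proportional cache allocation with a per-cell floor.

def allocate_cache(budget_gb, demand, floor_gb=1.0):
    """Split budget_gb across cells proportionally to predicted demand,
    guaranteeing each cell at least floor_gb of cache."""
    cells = list(demand)
    remaining = budget_gb - floor_gb * len(cells)
    if remaining < 0:
        raise ValueError("budget too small for the per-cell floor")
    total = sum(demand.values())
    return {c: floor_gb + remaining * demand[c] / total for c in cells}

demand = {"cell_1": 600, "cell_2": 300, "cell_3": 100}  # predicted requests
alloc = allocate_cache(100.0, demand, floor_gb=5.0)
```

Re-running the allocation as demand predictions change is what makes the scheme adaptive; the floor keeps quiet cells from being starved entirely.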
Let us look at convolutional neural networks (CNNs). CNNs are famously inspired by the visual cortex of animals. Just like the cortex, CNNs excel at learning spatial hierarchies of features from input data. CNNs do this automatically, eliminating the need for manual feature engineering. As a corollary, CNNs are computationally intensive and require significant amounts of training data. Applied to the spatial patterns of network demand, CNNs too can be used to pinpoint caching locations.
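To illustrate the core operation a CNN is built from, here is a single 2D convolution (implemented as valid cross-correlation) sliding a 2x2 averaging filter over a hypothetical grid of per-cell demand to surface a spatial hotspot. A real CNN would stack many such layers and learn its filters from training data rather than use a hand-picked one:

```python
# Sketch: one 2D convolution over a spatial demand grid - the building
# block of a CNN - highlighting the densest region as a caching candidate.

def conv2d(grid, kernel):
    """Valid 2D cross-correlation of grid with kernel (lists of lists)."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(gh - kh + 1):
        row = []
        for j in range(gw - kw + 1):
            row.append(sum(grid[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# 4x4 demand grid (hypothetical) with a hotspot in the bottom-right corner.
demand = [
    [1, 1, 1, 1],
    [1, 1, 2, 2],
    [1, 2, 8, 9],
    [1, 2, 9, 9],
]
# An averaging filter; its peak output marks the densest 2x2 region.
heat = conv2d(demand, [[0.25, 0.25], [0.25, 0.25]])
```

The maximum of the output map falls on the bottom-right 2x2 block, which is exactly the region a cache-placement decision would target.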
Cache storage is finite, so deciding which data to retain and which to replace is crucial. Traditional caching mechanisms, like Least Recently Used (LRU), might not always be optimal for dynamic 5G environments. ML can optimize cache replacement by learning which items are likely to be requested again.
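A minimal sketch of score-based replacement: instead of evicting the least recently used item, evict the item with the lowest predicted popularity. The scores here are supplied directly; in practice an ML model (such as the EMA predictor above) would provide them:

```python
# Sketch: cache eviction driven by predicted popularity scores rather
# than recency. Content IDs, scores, and payloads are hypothetical.

class ScoredCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # content_id -> (score, payload)

    def put(self, cid, score, payload):
        if cid not in self.store and len(self.store) >= self.capacity:
            # Evict the entry the model deems least likely to be reused.
            victim = min(self.store, key=lambda c: self.store[c][0])
            del self.store[victim]
        self.store[cid] = (score, payload)

    def get(self, cid):
        entry = self.store.get(cid)
        return entry[1] if entry else None

cache = ScoredCache(capacity=2)
cache.put("a", 0.9, "payload_a")
cache.put("b", 0.2, "payload_b")
cache.put("c", 0.7, "payload_c")  # evicts "b", the lowest-scoring entry
```

Under LRU, inserting "c" would have evicted "a" (the oldest entry) even though the model rates "a" most likely to be requested again; score-based eviction keeps it.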
Do you have any more ideas that you can share about AI in RAN caching? Do share with us.