ML Framework in O-RAN

One of the key aspects of Open RAN is to embed intelligence into the RAN natively. To this end, Artificial Intelligence (AI)/Machine Learning (ML) plays a crucial role. Some of the goals for AI/ML within radio access networks are: reducing the manual effort of sifting through large amounts of data to diagnose issues and make decisions, or predicting future behavior to take proactive actions, thus saving time and cost. AI/ML-based algorithms may be used, e.g., in network security applications for anomaly detection, prediction of radio resource utilization, hardware failure prediction, parameter forecasting for energy-saving purposes, or conflict detection between xApps. This has been addressed within the O-RAN ALLIANCE from the beginning.

In this post, we discuss the overall framework for machine learning within O-RAN, touching upon the architectural aspects related to Open RAN.

Note: If you are interested in the Open RAN concept, check this post. If you are interested in the details of O-RAN architecture, nodes, and interfaces, here is the relevant post.


ML Framework within O-RAN Architecture

Fig. 1 below shows a simplified view of the ML framework and the general procedure for its operation within O-RAN.

Fig. 1. ML Framework in O-RAN – General Procedure (simplified figure based on [O-RAN-ML])

Data is collected through O-RAN interfaces (such as O1, E2, or A1) from virtually all O-RAN entities, including the O-RU, O-DU, O-CU, and the Near- and Non-RT RICs, but it can also come from a UE, the Core Network (CN), or Application Functions (AF). The collected data can be, e.g., regular Performance Measurements (PM), statistics, or Enrichment Information (EI).
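
To make this data model concrete, here is a minimal sketch of how one such collected sample could be represented. The class and field names below are hypothetical, invented for illustration, and are not taken from the O-RAN specifications:

```python
from dataclasses import dataclass

# Hypothetical sketch: a single collected data sample, tagged with the
# O-RAN interface and entity it came from. Field names are illustrative,
# not taken from the O-RAN specifications.
@dataclass
class CollectedSample:
    source_entity: str   # e.g. "O-DU", "O-CU", "Near-RT RIC", "UE"
    interface: str       # e.g. "O1", "E2", "A1"
    kind: str            # e.g. "PM", "statistics", "EI"
    timestamp: float     # collection time (epoch seconds)
    payload: dict        # the measurement itself, e.g. {"prb_util": 0.73}

sample = CollectedSample("O-DU", "O1", "PM", 1700000000.0, {"prb_util": 0.73})
print(sample)
```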

This data is used by the ML training and inference functions (a minimal sketch of this split follows the list):

  • ML Training Host (MTH) – a network function hosting the online and offline training of the model (typically the Non-RT RIC serves this purpose, but the Near-RT RIC can in some scenarios).
  • ML Inference Host (MIH) – a network function hosting the ML model during inference mode, including model execution and online learning (the Non- or Near-RT RIC can be utilized here).
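
A minimal sketch of this role split, with all class and method names invented purely for illustration (the trivial arithmetic stands in for a real ML pipeline):

```python
# Hypothetical sketch of the MTH/MIH role split. All names are illustrative.
class MLTrainingHost:
    """Trains a model on collected data (typically hosted in the Non-RT RIC)."""
    def train(self, dataset):
        # Stand-in for real (offline) training on cached data.
        return {"threshold": sum(dataset) / len(dataset)}

class MLInferenceHost:
    """Executes a trained model during operation (Non- or Near-RT RIC)."""
    def __init__(self, model):
        self.model = model
    def infer(self, observation):
        # Stand-in for real model execution.
        return "high" if observation > self.model["threshold"] else "low"

mth = MLTrainingHost()
model = mth.train([0.2, 0.5, 0.8])   # offline training phase
mih = MLInferenceHost(model)         # trained model handed to the inference host
print(mih.infer(0.9))                # -> "high"
```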

The inference host provides output to an Actor, i.e., an entity hosting an ML-assisted solution (in this case, it can be the O-DU, O-CU, or Non-/Near-RT RIC). The Actor utilizes the results of ML inference for RAN performance optimization. Based on the decision, an action is taken on a Subject, i.e., an entity or function that is configured, controlled, or informed by the action.

After the action is taken, the Subjects provide feedback serving as a data source for the next iteration. An important aspect of the whole framework is that any ML model needs to be trained and tested before being deployed in the network (i.e., a completely untrained model will not be deployed).

Based on the output of the ML model, an ML-assisted solution (i.e., a solution that addresses a specific use case using ML algorithms during operation) informs the Actor to take the necessary actions toward the Subject. These could include CM (Configuration Management) changes over O1, policy management over A1, or control actions or policies over E2, depending on the location of the ML inference host and the Actor.
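
One iteration of this closed loop can be sketched in a few lines; the functions below are self-contained stand-ins invented for illustration, not real O-RAN APIs:

```python
# Hypothetical, self-contained sketch of one closed-loop iteration.

def collect_measurements():
    """Subject side: report current state (stand-in for PM data over O1)."""
    return {"cell_load": 0.85}

def infer(measurements):
    """Inference host: stand-in for ML model execution."""
    return "offload" if measurements["cell_load"] > 0.8 else "keep"

def act(decision):
    """Actor: turn the inference output into an action on the Subject,
    e.g. a CM change over O1, a policy over A1, or a control action over E2."""
    print(f"applying action: {decision}")

act(infer(collect_measurements()))  # one iteration; feedback feeds the next
```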

... if you are interested in ML deployment scenarios, see the full blog post: ML Framework in O-RAN - RIMEDO Labs


Types of ML Algorithms and Actor Locations within O-RAN Architecture

ML algorithms fall into three main groups:

  • Supervised Learning (SL) – utilizes labeled datasets for either prediction of a given quantity based on the input (e.g., traffic load based on the time of day) or classification, i.e., assignment of the proper label to the input data (e.g., classification of a traffic load as low, medium, or high);
  • Unsupervised Learning (UL) – deals with unlabeled datasets to discover hidden patterns; most of these are clustering algorithms, which can be used, e.g., to analyze network coverage based on RSRP reports;
  • Reinforcement Learning (RL) – follows the concept of teaching an agent how to act through interaction with the environment; e.g., different traffic steering policies are tested under various traffic load conditions to teach the agent which one should be chosen (a toy sketch of this idea follows the list).
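
To make the RL case concrete, the toy sketch below uses an epsilon-greedy bandit to choose among traffic steering policies. The policy names and the reward model are invented purely for illustration:

```python
import random

# Hypothetical sketch: epsilon-greedy selection among traffic steering
# policies. Policy names and the reward model are made up for illustration.
POLICIES = ["steer_to_macro", "steer_to_small_cell", "load_balance"]
EPSILON = 0.1

def reward(policy, traffic_load):
    """Stand-in for real environment feedback (e.g. observed throughput)."""
    base = {"steer_to_macro": 0.4, "steer_to_small_cell": 0.6, "load_balance": 0.5}
    return base[policy] * (1.0 - traffic_load) + random.gauss(0, 0.05)

q = {p: 0.0 for p in POLICIES}   # running value estimate per policy
n = {p: 0 for p in POLICIES}     # number of times each policy was tried

for step in range(1000):
    load = random.random()                    # varying traffic load condition
    if random.random() < EPSILON:
        policy = random.choice(POLICIES)      # explore
    else:
        policy = max(q, key=q.get)            # exploit the best estimate so far
    r = reward(policy, load)
    n[policy] += 1
    q[policy] += (r - q[policy]) / n[policy]  # incremental mean update

print(max(q, key=q.get))  # the policy the agent learned to prefer
```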

The location of the ML model components, i.e., the ML training and ML inference hosts for a given use case, mostly depends on the tradeoff between the communication delay associated with Option 1 and the computational capabilities of the Near-RT RIC in Option 2, as well as on the control loop considered (Non-RT RIC, Near-RT RIC, or RT). Moreover, the availability and quantity of data accessible through the different O-RAN interfaces should also be taken into account.
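
As a rough illustration of that tradeoff, the hypothetical helper below maps a use case's control-loop latency budget to a placement, using the commonly cited O-RAN control-loop timescales (Non-RT at 1 s and above, Near-RT between 10 ms and 1 s, real-time below 10 ms):

```python
# Hypothetical sketch: choose an inference-host placement from a use case's
# control-loop latency budget, following the commonly cited O-RAN timescales.
def place_inference_host(latency_budget_s: float) -> str:
    if latency_budget_s >= 1.0:
        return "Non-RT RIC"      # Option 1: more compute, more communication delay
    if latency_budget_s >= 0.010:
        return "Near-RT RIC"     # Option 2: lower delay, limited compute
    return "O-DU / O-CU"         # sub-10 ms loops stay in the RAN nodes themselves

print(place_inference_host(0.5))  # -> "Near-RT RIC"
```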

... if you are interested in a summary of how different ML algorithms can be deployed within the O-RAN architecture, see the full blog post: ML Framework in O-RAN - RIMEDO Labs


ML Models in O-RAN Use Cases

There are various application types within the scope of RAN optimization and value prediction, and some ML algorithm types are better suited to certain problems in this area than others. This mapping is analyzed within the O-RAN ALLIANCE from the perspective of specific use cases, such as QoE Optimization, Traffic Steering, QoE-based Traffic Steering, or V2X Handover Management (detailed use case definitions can be found in [O-RAN-UC]).

... if you are interested in examples of use cases as analyzed within the O-RAN ALLIANCE, along with the relevant ML algorithm types, deployment options, and input and output data with functionality descriptions, see the full blog post: ML Framework in O-RAN - RIMEDO Labs

ML Model Lifecycle Implementation Example

Let’s now discuss an example of ML model lifecycle implementation within the O-RAN architecture [O-RAN-ML]. Below is a high-level overview of the typical steps of an AI/ML-based use case application within the O-RAN architecture, considering Supervised/Unsupervised Learning ML models (a compact code sketch follows the list).

  • First, the ML modeler uses a designer environment to create the initial ML model. The initial model is sent to the training hosts for training. In this example, appropriate datasets are collected from the Near-RT RIC, O-CU, and O-DU into a data lake (i.e., a centralized repository to store and process large amounts of structured and unstructured data) and passed to the ML training hosts. It is important that the first phase of training is conducted offline (based on cached data and, e.g., an accurate network simulator), even when considering the RL approach.
  • After successful offline training and extensive tests, the trained model is uploaded to the ML designer catalog; thus, the final ML model is composed.
  • Next, the ML model is published to the Non-RT RIC along with the associated license and metadata. In this example, the Non-RT RIC creates a containerized ML application containing the necessary model artifacts.
  • Following this, the Non-RT RIC deploys the ML application to the Near-RT RIC, O-DU, and O-CU using the O1 interface. Policies for the ML models are also set using the A1 interface.
  • Finally, PM (Performance Measurement) data is sent back to the ML training hosts from the Near-RT RIC, O-DU, and O-CU for retraining.
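
Stringing these steps together, the whole lifecycle can be sketched as a short orchestration sequence. Every function below is a trivial stub invented for illustration; none of them is a real O-RAN API:

```python
# Hypothetical end-to-end sketch of the lifecycle above; every function is a
# trivial stub standing in for a real step. None of these are O-RAN APIs.

def design_initial_model():          return {"weights": None}
def collect_to_data_lake(sources):   return [0.1, 0.5, 0.9]        # cached data
def train_offline(model, data):      return {"weights": sum(data) / len(data)}
def upload_to_catalog(model):        return {"model": model, "meta": "v1"}
def publish_to_non_rt_ric(entry):    return {"container": entry}   # ML application
def deploy_over_o1(app, targets):    print(f"O1 deploy -> {targets}")
def set_policies_over_a1(app):       print("A1 policies set")
def collect_pm_data():               return [0.2, 0.6]             # PM feedback

model = design_initial_model()                           # 1. designer environment
data = collect_to_data_lake(["Near-RT RIC", "O-CU", "O-DU"])
model = train_offline(model, data)                       # 2. offline training + tests
app = publish_to_non_rt_ric(upload_to_catalog(model))    # 3. containerized ML app
deploy_over_o1(app, ["Near-RT RIC", "O-DU", "O-CU"])     # 4. deployment over O1
set_policies_over_a1(app)
model = train_offline(model, collect_pm_data())          # 5. retraining on PM data
```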


Summary

Artificial Intelligence definitely plays an important role in Open RAN networks. Utilizing ML-based algorithms and ML-assisted solutions allows for reducing manual effort, predicting future behavior by observing trends (e.g., predicting low utilization of resources to switch off cells for energy saving), detecting anomalies (e.g., detecting network attacks), or improving the efficiency of the system (e.g., utilizing radio resources more efficiently). The O-RAN ALLIANCE has natively embedded AI/ML into its standardization work from the start. This includes the creation of a dedicated AI/ML framework and the definition of the respective entities and procedures for it; embedding the AI/ML-related functional blocks within the RAN Intelligent Controllers; defining the A1 interface elements dedicated to the provisioning of ML models and enrichment information; and defining specific use cases that utilize AI/ML-based solutions. The standard documents cover multiple options for the actual ML model training and inference deployment, with the Non-RT RIC and Near-RT RIC taking a significant role.


If you are interested in AI for O-RAN security, check out this blog post: AI for O-RAN Security – RIMEDO Labs.

To read the complete article, including details of ML deployment scenarios, ML models in O-RAN use cases, types of ML algorithms, and Actor locations within the O-RAN architecture, check the blog post at ML Framework in O-RAN - RIMEDO Labs

To check all our posts on 5G and OpenRAN-related topics see: Blog - RIMEDO Labs


References

[O-RAN-ML] O-RAN WG2, "AI/ML workflow description and requirements", O-RAN.WG2.AIML-v01.03

[O-RAN-UC] O-RAN WG1, "O-RAN Use Cases Detailed Specification", O-RAN.WG1.Use-Cases-Detailed-Specification-v09.00


Acknowledgment

Many thanks to Marcin Hoffmann and Pawel Kryszkiewicz for their valuable comments and suggestions for improvements to this post.
