Ralf Kreher explained the EPS Fallback mechanism in an earlier post, which is still quite popular. This post contains a couple of videos that also explain the procedure.
The first is a very short and simple tutorial from Mpirical, embedded below:
The second is a slightly more technical presentation explaining how the 5G system can redirect a VoNR-capable 5G device to the 4G system so that the voice call continues as IMS-based VoLTE.
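For anyone who prefers a quick code-style summary, here is a minimal Python sketch of the EPS Fallback flow; the step descriptions are my simplified labels, not exact 3GPP message names:

```python
# Illustrative sketch only (not taken from the videos): the high-level
# EPS Fallback sequence for a VoNR-capable UE camped on 5G SA. Step
# descriptions are simplified labels, not exact 3GPP message names.

EPS_FALLBACK_STEPS = [
    "UE registered on 5G SA starts an IMS voice call (SIP INVITE)",
    "Network tries to set up the voice QoS flow (5QI=1); gNB decides on EPS Fallback",
    "gNB moves the UE to LTE via release-with-redirect or inter-RAT handover",
    "UE performs a Tracking Area Update / attach on the EPS side",
    "Dedicated EPS bearer (QCI=1) is established; the call continues as VoLTE",
]

for number, step in enumerate(EPS_FALLBACK_STEPS, start=1):
    print(f"{number}. {step}")
```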
Over the last few months I have discussed the role of 5G in different industries as part of various projects. Some of these discussions are part of my blog posts while others aren’t.
5G is often promoted as a panacea for all industries, including healthcare. This presentation and video look not only at 5G but also at other connectivity options that can be used to provide solutions for healthcare. In addition, the presentation looks at the different components of the mobile network and explores the role of devices in healthcare.
At 'The Things Conference' in Amsterdam in September, Roman Nemish, Co-Founder & President of TEKTELIC presented a critical view of different IoT technologies and argued that LoRaWAN is the only technology that will eventually make mass IoT possible.
The following is the intro to the talk from the conference:
IoT technology has progressed from home to city-scale applications, making it a crucial part of any operational process. IoT sensors are becoming more affordable, reliable, and easy to deploy.
The Internet of Things has already brought advancement to healthcare, retail, city infrastructure, and manufacturing with many other opportunities still open.
We are ready to explain why IoT deployment has transformed from privilege to necessity, what benefits it can bring to your business, and how you win the competition using IoT.
2 of 2: "The IoT game... is better served by smaller-sized providers without regional licenses offering broadly-equivalent technologies in unlicensed bands" - https://t.co/9QcqoiIRh0
Enterprise IoT Insights have a good take on the talk here. The following is an extract:
Nemish, president at TEKTELIC, argued that new-wave cellular IoT – in the form of NB-IoT and LTE-M, primarily – is “too expensive” for consumers and too small-margin for mobile operators; that “most IoT opportunities are 10-25 times smaller [than the kinds of deals that would] attract operator attention”. Cellular IoT has “vast potential”, he concluded, but requires a “different approach”.
In other words, there is not enough profit in (low-power) cellular IoT for mobile operators to give it proper focus – and the deals are not big enough to make them really care. The IoT game – based on finely-calculated returns on volume-deals not going much higher than 100,000 units at a time – is better served by smaller-sized providers, without regional spectrum licences, offering broadly-equivalent technologies in unlicensed bands, he implied.
But experiences with Sigfox and LoRaWAN (in some formats) – the French-born IoT twin-tech that started the whole low-power wide-area (LPWA) movement, and forced the cellular community to come up with their own alternatives – have not been much better, necessarily, the story goes. Sigfox pumped $350 million over 10 years into its technology and network, only to go into receivership at the start of 2022 with fewer than 20 million devices under management.
The problem, said Nemish, is with the business model, and not the tech. (As an aside, a takeaway from The Things Conference last week, as from the LoRaWAN World Expo in Paris in the summer, and from any number of private discussions in between, is the IoT market is mature enough to let go of its closely-held tech differences, and acknowledge that customers don’t really care so long as it works – and so the blame switches to the business model, instead.)
Nemish blamed Sigfox’s ‘failure’ on exclusive single-market contracts and crippling licensing fees; these “killed most operator business plans”, he suggested. Of course, Sigfox lives to see another day – and, it might be noted, Taiwan-based IoT house Unabiz, its new owners, have just hosted the 0GUN Alliance of Sigfox operators in France to bash out a new operator model, and a collaborative approach to a “unified LPWAN world”.
And LoRaWAN is not exempt in the analysis, either. In Amsterdam, Nemish held up the madly-hyped Helium model for crypto-led community network building as another failed IoT business model. Again – and of course, with a critical appraisal of a LoRaWAN network by a LoRaWAN provider – the tech is not the problem, just the way it is being offered. Because Helium, he said, with $1 billion of public community funding, has “no use” after three years.
As per the slide, parent Nova Labs has “failed to sign customers, implement SLA(s), or plan network evolution”, he suggested. The community behind it, originally bedsit enthusiasts on to a good thing, are not motivated by “IoT adoption but [by] crypto-mania”, said Nemish. Just look on eBay, where 10,000 secondhand Helium miners (gateways) are being flogged, to see how its star has fallen, he said – along with its stock, with HNT trading up 12 percent at around $5 at writing, on the back of a deal for decentralised 5G with T-Mobile in the US, but down from a high of nearly $30 a few months ago.
Here's a belated apology from seasoned tech reporter @gigastacey, an early believer and promoter of @Helium.
Stacey has made $20,000 from a hotspot that served... $0.05 of IoT data.
The article highlights some heated discussions on the presentation and slides. You can read the whole article here.
The closing slide nicely summarises that IoT deployment is a marathon, not a sprint. End users are interested in solving real-world problems. The advice is to partner to develop complete IoT solutions that can be integrated simply with any IoT platform through clearly defined APIs, and to have a strong engineering team to support customer integration and early deployment.
Here is the video of the talk for anyone interested:
We have looked at different approaches to reducing power consumption on this blog and the 3G4G website (see related posts below). In a blog post some months back, Huawei highlighted how 5G can improve the battery life of the UE. The blog post mentioned four approaches; we have looked at three of them in various posts.
A UE can access network services only if it establishes a radio resource control (RRC) connection with the base station. In legacy RATs, a UE is either in the RRC_CONNECTED state (it has an RRC connection) or the RRC_IDLE state (it does not have an RRC connection). However, transitioning from the RRC_IDLE state to the RRC_CONNECTED state takes a long time, so it cannot meet the low latency requirement of some 5G services. But a UE cannot just stay in the RRC_CONNECTED state because this will consume much more UE power.
To solve this problem, 5G introduces the RRC_INACTIVE state, where the RRC connection is released but the UE context is retained (called RRC Release with Suspend), so an RRC connection can be quickly resumed when needed. This way, a UE in the RRC_INACTIVE state can access low-latency services whenever needed but consume the same amount of power as it does in the RRC_IDLE state.
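To make the state transitions concrete, here is a minimal state-machine sketch in Python (my own illustration, not from the Huawei post); the event names are informal labels for the corresponding RRC procedures:

```python
# Minimal sketch of the three NR RRC states and their transitions.
# Event names are informal labels, not 3GPP message names.

TRANSITIONS = {
    ("RRC_IDLE", "setup"): "RRC_CONNECTED",        # full setup: slow, high latency
    ("RRC_CONNECTED", "release"): "RRC_IDLE",      # UE context discarded
    ("RRC_CONNECTED", "suspend"): "RRC_INACTIVE",  # RRC Release with Suspend: context kept
    ("RRC_INACTIVE", "resume"): "RRC_CONNECTED",   # fast resume: low latency
    ("RRC_INACTIVE", "release"): "RRC_IDLE",
}

def next_state(state: str, event: str) -> str:
    """Return the new RRC state; stay put if the event is invalid here."""
    return TRANSITIONS.get((state, event), state)

state = "RRC_IDLE"
for event in ["setup", "suspend", "resume", "suspend", "release"]:
    state = next_state(state, event)
    print(f"{event:>7} -> {state}")
```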
DRX + WUS
Discontinuous reception (DRX) enables a UE in the RRC_CONNECTED state to periodically, instead of constantly, monitor the physical downlink control channel (PDCCH) to save power. To meet the requirements of different UE services, both short and long DRX cycles can be configured for a UE. However, when to wake up is determined by the predefined cycle, so the UE might wake up unnecessarily when there is no data scheduled.
Is there a way for a UE to wake up only when it needs to? Wake-up Signal (WUS) proposed in Release 16 is the answer. This signal can be sent before the next On Duration period (during which the UE monitors the PDCCH) so that the UE wakes up only when it receives this signal from the network. Because the length of a WUS is shorter than the On Duration Timer, using WUS to wake up a UE saves more power than using only DRX.
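The saving is easy to demonstrate with a toy simulation. All numbers in the sketch below (slot counts, scheduling probability) are invented purely for illustration:

```python
# Toy comparison of how many slots a UE spends awake with plain
# connected-mode DRX versus DRX plus WUS. All numbers are invented.

import random

random.seed(0)
CYCLES = 1000
ON_DURATION = 4   # slots monitored per DRX On Duration
WUS_LENGTH = 1    # slots to decode the wake-up signal (shorter than On Duration)
P_DATA = 0.2      # probability that data is actually scheduled in a cycle

drx_only = 0
drx_wus = 0
for _ in range(CYCLES):
    has_data = random.random() < P_DATA
    drx_only += ON_DURATION                          # always wakes for the On Duration
    drx_wus += WUS_LENGTH + (ON_DURATION if has_data else 0)  # full wake-up only on WUS

print(f"DRX only : {drx_only} awake slots")
print(f"DRX + WUS: {drx_wus} awake slots")
```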
BWP Adaptation
In theory, working on a larger bandwidth consumes more UE power. 5G provides large bandwidths, but it is unnecessary for a UE to always work on a large bandwidth. For example, if you play online mobile games on a UE, only 10 MHz of bandwidth is needed for 87% of the data transmission time. As such, Bandwidth Part (BWP) is proposed in 5G to enable UEs to work on narrower bandwidths without sacrificing user experience.
BWP adaptation enables the base station to dynamically switch between BWPs based on the UE’s traffic volume. When the traffic volume is large, a UE can work on a wide BWP, and when the traffic volume is small, the UE can work on a narrow one. BWP switching can be performed based on the downlink control information (DCI) and RRC reconfiguration messages. This ensures that a UE always works on a bandwidth that supports the traffic volume but does not consume too much power.
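As a rough illustration of the switching logic, here is a hedged Python sketch. The BWP sizes and thresholds are hypothetical, and a real network switches via DCI or RRC reconfiguration rather than a local function like this:

```python
# Rough sketch of the BWP-adaptation idea. BWP sizes and thresholds
# are hypothetical values chosen purely for illustration.

BWPS_MHZ = [10, 40, 100]       # configured bandwidth parts (hypothetical)
NARROW_LIMIT = 100_000         # queued bytes before widening (hypothetical)

def select_bwp(queued_bytes: int) -> int:
    """Pick the narrowest BWP that can drain the queue promptly."""
    if queued_bytes < NARROW_LIMIT:
        return BWPS_MHZ[0]     # light traffic (e.g. gaming): narrow BWP, less power
    if queued_bytes < 10 * NARROW_LIMIT:
        return BWPS_MHZ[1]
    return BWPS_MHZ[2]         # heavy burst: use the full bandwidth

for load in (5_000, 250_000, 5_000_000):
    print(f"{load:>9} bytes queued -> {select_bwp(load)} MHz BWP")
```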
Maximum MIMO Layers Reduction
According to 3GPP specifications, the number of receive and transmit antennas used by a UE cannot be fewer than the maximum number of MIMO layers in the downlink and uplink, respectively. For example, when a maximum of four downlink MIMO layers are configured for a UE, the UE must enable at least four receive antennas to receive data. Therefore, if the maximum number of MIMO layers can be reduced, the UE does not have to activate as many antennas, reducing power consumption.
This can be achieved in 5G because the number of MIMO layers can be re-configured based on assistance information from UEs. After receiving a request to reduce the number of MIMO layers from a UE, the base station configures fewer MIMO layers for the UE through an RRC reconfiguration message. In this way, the UE can deactivate some antennas to save power.
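A tiny sketch makes the antenna/layer relationship obvious; the power figure used below is invented for illustration only:

```python
# The rule described above: the UE needs at least one active receive
# antenna per configured downlink MIMO layer, so a lower maximum lets
# it switch antennas off. The power figure is invented, not measured.

POWER_PER_RX_ANTENNA_MW = 50   # hypothetical value for illustration only

def active_rx_antennas(max_dl_mimo_layers: int) -> int:
    """Minimum number of RX antennas the UE must keep enabled."""
    return max_dl_mimo_layers

for layers in (4, 2, 1):
    n = active_rx_antennas(layers)
    print(f"max {layers} DL layers -> {n} RX antennas on "
          f"(~{n * POWER_PER_RX_ANTENNA_MW} mW, hypothetical)")
```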
T-Mobile added more cell sleep profiles (or this is just the first time i noticed it). L2100 is now turned off at night when the cell load is low. After 6am it's back on everywhere. When you load the cell it will also come back online.
Power consumption in the networks and the devices is a real challenge. While battery capacity and charging speeds are increasing, it is also important to find ways to optimise the signalling parameters, etc. One such approach can be seen in the tweet above regarding T-Mobile in the Netherlands, which selectively switches off a carrier at night and switches it back on when the cell starts loading, or in the morning.
We will see a lot more innovations and optimisations that dynamically adapt technologies and parameters to ensure power savings wherever possible.
Ever since the industry realised what the 5G Network Architecture would look like, Network Slicing has been touted as the killer business case that will allow mobile operators to generate revenue from new sources.
According to global technology intelligence firm ABI Research, 5G slicing revenue is expected to grow from US$309 million in 2022 to approximately US$24 billion in 2028, at a Compound Annual Growth Rate (CAGR) of 106%.
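A quick sanity check of those figures in Python: compounding US$309 million at 106% a year for the six years from 2022 to 2028 does indeed land near US$24 billion.

```python
# Compound the 2022 figure forward six years at a 106% CAGR...
start_musd, years, cagr = 309, 6, 1.06
end_musd = start_musd * (1 + cagr) ** years
print(f"US${end_musd / 1000:.1f} billion by 2028")   # ~US$23.6B, i.e. roughly US$24B

# ...or derive the CAGR implied by the two endpoints.
implied = (24_000 / 309) ** (1 / 6) - 1
print(f"implied CAGR: {implied:.0%}")                # ~107%
```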
“5G slicing adoption falls into two main categories. One, there is no connectivity available. Two, there is connectivity, but there is not sufficient capacity, coverage, performance, or security. For the former, both private and public organizations are deploying private network slices on a permanent and ad hoc basis,” highlights Don Alusha, 5G Core and Edge Networks Senior Analyst at ABI Research. The second scenario is mostly catered to by private networks today, a market that ABI Research expects to grow from US$3.6 billion to US$109 billion by 2030, at a CAGR of 45.8%. Alusha continues, “A sizable part of this market can be converted to 5G slicing. But first, the industry should address challenges associated with technology and commercial models. On the latter, consumers’ and enterprises’ appetite to pay premium connectivity prices for deterministic and tailored connectivity services remains to be determined. Furthermore, there are ongoing industry discussions on whether the value that comes from 5G slicing can exceed the cost required to put together the underlying slicing ecosystem.”
I recently published IDC's first forecast on 5G network slicing services opportunity. Slicing should be an important tool for telcos to create new services, but it still remains many years away in most markets. A very complicated undertaking https://t.co/GNY4xiLFBV
Earlier this year, Daryl Schoolar, Research Director at IDC, tackled this topic in his blog post:
5G network slicing, part of the 3GPP standards developed for 5G, allows for the creation of multiple virtual networks across a single network infrastructure, enabling enterprises to connect with guaranteed low latency. Using principles behind software-defined networking and network virtualization, slicing allows the mobile operator to provide a differentiated network experience for different sets of end users. For example, one network slice could be configured to support low latency, while another slice is configured for high download speeds. Both slices would run across the same underlying network infrastructure, including base stations, transport network, and core network.
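Stepping outside the quoted post for a moment: in the 3GPP system each slice is identified by an S-NSSAI, which combines a Slice/Service Type (SST) with an optional Slice Differentiator (SD). In the sketch below, the SST values are the standardised ones from 3GPP TS 23.501, while the SD values and the two slice roles are invented to mirror the low-latency/high-speed example above:

```python
# Sketch of 5G slice identifiers. The SST values are the standardised
# ones from 3GPP TS 23.501; the SD values and slice roles are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SNSSAI:
    sst: int                   # Slice/Service Type
    sd: Optional[str] = None   # optional 24-bit Slice Differentiator (hex)

STANDARD_SST = {1: "eMBB", 2: "URLLC", 3: "MIoT", 4: "V2X"}

low_latency_slice = SNSSAI(sst=2, sd="0000A1")   # hypothetical URLLC slice
high_speed_slice = SNSSAI(sst=1, sd="0000B2")    # hypothetical eMBB slice

for s in (low_latency_slice, high_speed_slice):
    print(f"S-NSSAI(SST={s.sst} [{STANDARD_SST[s.sst]}], SD={s.sd})")
```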
Network slicing differs from private mobile networks, in that network slicing runs on the public wide area network. Private mobile networks, even when offered by the mobile operator, use infrastructure and spectrum dedicated to the end user to isolate the customer’s traffic from other users.
5G network slicing is a perfect candidate for future business connectivity needs. Slicing provides a differentiated network experience that can better match the customer's performance requirements than traditional mobile broadband. Until now, there has been limited mobile network performance customization outside of speeds. 5G network slicing is a good example of telco service offerings that meet the future of connectivity requirements. However, 5G network slicing also highlights the challenges mobile operators face with transformation in their pursuit of remaining relevant.
For 5G slicing to have broad commercial availability, and to provide a variety of performance options, several things need to happen first.
Operators need to deploy 5G Standalone (SA) using the new 5G mobile core network. Currently most operators use the 5G non-standalone (NSA) architecture that relies on the LTE mobile core. It might be the end of 2023 before the majority of commercial 5G networks are using the SA mode.
Spectrum is another hurdle that must be overcome. Operators still make most of their revenue from consumers, and do not want to compromise the consumer experience when they start offering network slicing. This means operators need more spectrum. In the U.S., among the three major mobile operators, only T-Mobile currently has a nationwide 5G mid-band spectrum deployment. AT&T and Verizon are currently deploying in mid-band, but that will not be completed until 2023.
5G slicing also requires changes to the operator's business and operational support systems (BSS/OSS). Current BSS/OSS solutions were not designed to support the increased number of parameters that come with slicing.
And finally, mobile operators still need to create the business propositions around commercial slicing services. Mobile operators need to educate businesses on the benefits of slicing and how slicing supports their different connectivity requirements. This could involve mobile operators developing industry specific partnerships to reach different business segments. All these things take time to be put into place.
Because of the enormity of the tasks needed to make 5G network slicing a commercial success, IDC currently has a very conservative outlook for this service through 2026. IDC believes it will be 2023 until there is general commercial availability of 5G network slicing. The exception is China, which is expected to have some commercial offerings in 2022, as it has the most mature 5G market. Even then, it will take until 2025 before global revenues from slicing exceed a billion U.S. dollars. In 2026, IDC forecasts slicing revenues will be approximately $3.2 billion. However, over 80% of those revenues will come out of China.
The 'Outspoken Industry Analyst' Dean Bubley believes that Network Slicing is one of the worst strategic errors made by the mobile industry, since the catastrophic choice of IMS for communications applications. In a LinkedIn post he explains:
At best, slicing is an internal toolset that might allow telco operations or product teams (or their vendors) to manage their network resources. For instance, it could be used to separate part of a cell's capacity for FWA, and dynamically adjust that according to demand. It might be used as an "ingredient" to create a higher class of service for enterprise customers, for instance for trucks on a highway, or as part of an "IoT service" sold by MNOs. Public safety users might have an expensive, artisanal "hand-carved" slice which is almost a separate network. Maybe next-gen MVNOs.
(I'm talking proper 3GPP slicing here - not rebranded QoS QCI classes, private APNs, or something that looks like a VLAN, which will probably get marketed as "slices")
But the idea that slicing is itself a *product*, or that application developers or enterprises will "buy a slice" is delusional.
Firstly, slices will be dependent on [good] coverage and network control. A URLLC slice likely won't work reliably indoors, underground, in remote areas, on a train, on a neutral-host network, or while roaming. This has been a basic failure of every differentiated-QoS monetisation concept for many years, and 5G's often-higher frequencies make it worse, not better.
Secondly, there is no mature machinery for buying, selling, testing, supporting, pricing, and monitoring slices. No, the 5G Network Exposure Function won't do it all. I haven't met a Slice salesperson yet, or a Slice-procurement team.
Thirdly, a "local slice" of a national 5G network will run headlong into a battle with the desire for separate private/dedicated local 5G networks, which may well be cheaper and easier. It also won't work well with the enterprise's IT/OT/IP domains, out of the box.
Also, there are many challenges in getting multi-operator slices, device OS links to slice APIs, slice "boundary controllers" between operators, aligning RAN and core slices, regulatory question marks and much more.
There is plenty of discussion in the comments section that may be of interest to you, here.
My belief is that we will see lots of interesting use cases for slicing in public networks, but it will be difficult to monetise. The best networks will manage to use it to create some plans with guaranteed rates and low latency. It remains to be seen whether they can monetise it well enough.
For technical people and newbies, there are lots of Network Slicing resources on this blog (see related posts 👇). Here is another recent video from Mpirical:
I looked at Control and User Plane Separation (CUPS) in a tutorial nearly five years back, here. Since then most of the focus has been on 5G, not just on my blogs but also from the industry.
Earlier this year, NTT Docomo's Technical Journal looked at CUPS for Flexible U-Plane Processing Based on Traffic Characteristics. The following is an extract from the article:
At the initial deployment phase of 5th Generation mobile communication systems (5G), the 5G Non-Stand-Alone (NSA) architecture was widely adopted to realize 5G services by connecting 5G base stations to the existing Evolved Packet Core (EPC). As applications based on 5G become more widespread, the need for the EPC to achieve higher-speed and higher-capacity communications, lower-latency communications and simultaneous connection of more terminals than ever has become urgent. Specifically, it is necessary to increase the number of high-capacity gateway devices capable of processing hundreds of Gbps to several Tbps to achieve high-speed, high-capacity communications, to distribute gateway devices near base station facilities to achieve even lower latency communications, and to improve session processing performance for connecting massive numbers of terminals simultaneously.
Conventional single gateway devices have both Control Plane (C-Plane) functions to manage communication sessions and control communications, and User Plane (U-Plane) functions to handle communications traffic. Therefore, if the previously assumed balance between the number of sessions and communications capacity is disrupted, either the C-Plane or the U-Plane will have excess processing capacity. In high-speed, high-capacity communications, the C-Plane has excess processing power, and in multiple terminal simultaneous connections, the U-Plane has excess processing power because the volume of communications is small compared to the number of sessions. If the C-Plane and U-Plane can be scaled independently, these issues can be resolved, and efficient facility design can be expected. In addition, low-latency communications require distributed deployment of the U-Plane function near the base station facilities to reduce propagation delay. However, in the distributed deployment of conventional devices with integrated C-Plane and U-Plane functions, the number of sessions and communication volume are unevenly distributed among the gateway devices, resulting in a decrease in the efficiency of facility utilization. Since there is no need for distributed deployment of C-Plane functions, if the C-Plane and U-Plane functions can be separated and the way they are deployed changed according to their characteristics, the loss of facility utilization efficiency related to C-Plane processing capacity could be greatly reduced.
CUPS is an architecture defined in 3GPP TS 23.214 that separates the Serving GateWay (SGW)/Packet data network GateWay (PGW) configuration of the EPC into the C-Plane and U-Plane. The CUPS architecture is designed so that there is no difference in the interface between the existing architecture and the CUPS architecture - even with the CUPS architecture deployed in SGW/PGW, opposing devices such as a Mobility Management Entity (MME), Policy and Charging Rules Function (PCRF), evolved NodeB (eNB)/next generation NodeB (gNB), and SGWs/PGWs of other networks such as Mobile Virtual Network Operator (MVNO) and roaming are not affected. For the C-Plane, the SGW Control plane function (SGW-C)/PGW Control plane function (PGW-C), and for the U-Plane, the SGW User plane function (SGW-U)/PGW User plane function (PGW-U) are equipped with call processing functions. By introducing CUPS, C-Plane/U-Plane capacities can be expanded individually as needed. Combined SGW-C/PGW-C and Combined SGW-U/PGW-U can handle the functions of SGW and PGW in common devices. In the standard specification, in addition to SGW/PGW, the Traffic Detection Function (TDF) can also be separated into TDF-C and TDF-U, but the details are omitted in this article.
From the above background, NTT DOCOMO has been planning to deploy the Control and User Plane Separation (CUPS) architecture to realize the separation of C-Plane and U-Plane functions as specified in 3rd Generation Partnership Project Technical Specification (3GPP TS) 23.214. Separating the C-Plane and U-Plane functions of gateway devices with the CUPS architecture makes it possible to scale the C-Plane and U-Plane independently and balance the centralized deployment of C-Plane functions with the distributed deployment of U-Plane functions, thereby enabling the deployment and development of a flexible and efficient core network. In addition to solving the aforementioned issues, CUPS will also enable independent equipment upgrades for C-Plane and U-Plane functions, and the adoption of U-Plane devices specialized for specific traffic characteristics.
From the user perspective, the introduction of CUPS can be expected to dramatically improve the user experience through the operation of facilities specialized for various requirements, and to enable further facility expansion and lower charges in pursuit of user benefits by improving the efficiency of core network facilities.
Regarding the CUPS architecture, a source of value for both operators and users, this article includes an overview of the architecture, additional control protocols, U-Plane control schemes based on traffic characteristics, and future developments toward a 5G Stand-Alone (5G SA) architecture.
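As a side note to the article, the C-Plane programs the U-Plane over the Sx interface using PFCP (3GPP TS 29.244). The Python sketch below is a highly simplified conceptual model of that split; the dictionaries merely stand in for PFCP's PDR/FAR information elements and nothing here is the real protocol encoding:

```python
# Highly simplified conceptual model of the CUPS split: the C-Plane node
# programs packet detection and forwarding rules into a U-Plane node over
# the Sx interface (PFCP, 3GPP TS 29.244). Dicts stand in for PDR/FAR IEs.

class UserPlaneFunction:
    """Stands in for an SGW-U/PGW-U: holds per-session forwarding state."""
    def __init__(self):
        self.sessions = {}

    def pfcp_session_establishment(self, seid, pdr, far):
        self.sessions[seid] = {"pdr": pdr, "far": far}   # install the rules
        return "accepted"

class ControlPlaneFunction:
    """Stands in for an SGW-C/PGW-C: decides rules, scaled independently."""
    def __init__(self, user_planes):
        self.user_planes = user_planes   # several distributed U-Plane nodes

    def setup_session(self, seid, ue_ip, up_index):
        up = self.user_planes[up_index]  # e.g. the U-Plane nearest the base station
        return up.pfcp_session_establishment(
            seid, pdr={"ue_ip": ue_ip}, far={"forward_to": "internet"})

cp = ControlPlaneFunction([UserPlaneFunction(), UserPlaneFunction()])
print(cp.setup_session(seid=1, ue_ip="10.0.0.1", up_index=1))   # accepted
```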
In the previous blog post I explained the concept of multi-band cells in LTE networks and promised to explain in more depth how such cells can be used in Multi-Operator RAN (MORAN) scenarios.
MORAN is characterized by the fact that all network resources except the radio carriers and the Home Subscriber Server (HSS) are shared between two or more operators.
What this means in detail can be seen in Step 1 of the figure below.
The yellow Band #1 spectrum of the multi-band cell is owned by Network Operator 1 while the blue spectrum of Band #2 and Band #3 belongs to Network Operator 2.
Band #1 is the default band. This means that when a UE enters the cell, it always has to establish the initial RRC signaling connection on Band #1, as shown in step 1.
The spectrum owned by Network Operator 2 comes into play as soon as a dedicated radio bearer (DRB), known in the core network as an E-RAB, is established in this RRC connection.
Then we see an inter-frequency (intra-cell) handover to Band #2, where the RRC signaling connection is continued. Band #3 is added for user plane transport as a secondary "cell" (the term refers to the 3GPP TS 36.331 RRC specification).
The reason for this behavior becomes clear when looking at the frequency bandwidths.
The default Band #1 is a low frequency band with a quite small bandwidth, e.g. 5 MHz, as is typically used for providing good coverage in rural areas. Band #2 is also a lower frequency band, but Band #3 is a high frequency band with a maximum bandwidth of 20 MHz. So Band #3 brings the highest capacity for user plane transport, and that is the reason for the handover to the spectrum owned by Network Operator 2 and the carrier aggregation used on these frequency bands.
However, due to the higher frequency, the footprint of Band #3 is smaller than that of the other two frequency bands.
For UEs at the cell edge (or located in buildings while being served by the outdoor cell) this quite often leads to situations where the radio coverage of Band #3 becomes insufficient. In such cases the UE typically sends an RRC measurement report for event A2 (meaning: "the RSRP of the serving cell is below a certain threshold").
If such an A2 event is received by the eNB, it stops the carrier aggregation transport and releases the Band #3 resources, so that all user plane transport continues on the limited Band #2 resources, as shown in step 3.
And now, in the particular eNB I observed, a nice algorithm starts that could be seen as a kind of zero-touch network operation, although it needs neither big data nor artificial intelligence.
10 seconds after the secondary frequency resources of Band #3 have been deleted, they are added to the connection again. If the UE is still at the same location, the next A2 event will soon be reported, carrier aggregation will be stopped again for 10 seconds, and then the next cycle starts.
This automation loop is carried out endlessly until the UE changes its location or the RRC connection is terminated.
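The loop is simple enough to capture in a few lines of Python. This is only my reconstruction of the observed behaviour; apart from the 10-second re-add timer, everything is simplified:

```python
# Reconstruction of the observed eNB loop; apart from the 10-second
# re-add timer everything here is simplified.

import time

def ca_retry_loop(rsrp_ok, max_cycles=3, retry_seconds=10):
    """Re-add the Band #3 SCell every retry_seconds until coverage recovers."""
    for _ in range(max_cycles):
        print("eNB: add Band #3 as secondary cell (carrier aggregation on)")
        if rsrp_ok():
            print("UE keeps carrier aggregation; loop ends")
            return
        print("UE : event A2 report (RSRP below threshold)")
        print("eNB: release Band #3; user plane stays on Band #2 only")
        time.sleep(retry_seconds)          # wait before the next attempt
    print("UE still at the cell edge; loop would continue endlessly")

# Example: a stationary cell-edge UE that keeps failing the RSRP check.
# (retry_seconds shortened so the demo finishes quickly)
ca_retry_loop(rsrp_ok=lambda: False, max_cycles=2, retry_seconds=0.01)
```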
In another new whitepaper on 5G-Advanced, Nokia has detailed DCCA (DC + CA) features and enhancements from Rel-15 until Rel-18. The following is an extract from the paper:
Mobility is one of the essential components of 5G-Advanced. 3GPP has already defined a set of functionalities and features that will be a part of the 5G-Advanced Release 18 package. These functionalities can be grouped into four areas: providing new levels of experience, network extension into new areas, mobile network expansion beyond connectivity, and providing operational support excellence. Mobility enhancements in Release 18 will be an important part of the 'Experience enhancements' block of features, with the goal of reducing interruption time and improving mobility robustness.
Fig. 2 shows a high-level schematic of mobility and dual connectivity (DC)/Carrier Aggregation (CA) related mechanisms that are introduced in the different 5G legacy releases towards 5G-Advanced in Release 18. Innovations such as Conditional Handover (CHO) and dual active protocol stack (DAPS) are introduced in Release 16. More efficient operation of carrier aggregation (CA), dual connectivity (DC), and the combination of those denoted as DCCA, as well as Multi-Radio Access Technology DC (MR-DC) are introduced through Releases 16 and 17.
To harvest the full benefits of CA/DC techniques, it is important to have an agile framework where secondary cell(s) are identified and configured to the UE in a timely manner when needed. This is of importance for non-standalone (NSA) deployments, where a carrier on NR should be quickly configured and activated to take advantage of 5G. Similarly, it is of importance for standalone (SA) cases where, e.g., a UE with its Primary Cell (PCell) on NR Frequency Range 1 (FR1) wants to take additional carriers, either on FR1 and/or FR2 bands, into use. Thus, there is a need to support cases where the aggregated carriers are from either the same or different sites. The management of such additional carriers for a UE must be highly agile, in line with user traffic and QoS demands: quickly enabling the use of additional carriers when needed, and quickly releasing them when no longer required, to avoid unnecessary processing at the UE and to reduce its energy consumption. This is of particular importance for users with time-varying traffic demands (i.e. bursty traffic conditions).
In the following, we describe how such carrier management is gradually improved by introducing enhancements for cell identification, RRM measurements and reduced reporting delays from UEs, as well as innovations related to Conditional PSCell Addition and Change (CPAC) and deactivation of secondary cell groups.
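The agility goal can be illustrated with a toy SCell activation policy. The thresholds below are invented, and a real gNB would act on buffer status reports, measurements and QoS targets rather than a bare byte count, but the hysteresis idea is the same:

```python
# Toy SCell activation policy with invented thresholds, illustrating the
# "activate quickly on a burst, release quickly when idle" goal above.

ACTIVATE_ABOVE = 500_000   # queued bytes (hypothetical)
RELEASE_BELOW = 10_000     # queued bytes (hypothetical)

def scell_decision(queued_bytes: int, active: bool) -> bool:
    """Hysteresis between activation and release avoids ping-ponging."""
    if not active and queued_bytes > ACTIVATE_ABOVE:
        return True            # burst arrived: bring the extra carrier in quickly
    if active and queued_bytes < RELEASE_BELOW:
        return False           # buffer drained: release to save UE processing/energy
    return active

active = False
for load in (1_000, 800_000, 300_000, 5_000):
    active = scell_decision(load, active)
    print(f"{load:>7} B queued -> SCell {'active' if active else 'released'}")
```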
The paper goes on to discuss the following scenarios in detail for DCCA enhancements:
Early measurement reporting
Secondary cell (SCell) activation time improvements
Direct SCell activation
Temporary RS (TRS)-based SCell Activation
Conditional Secondary Node (SN) addition and change for fast access
Activation of secondary cell group
The table below summarizes the DCCA features in 5G NR.
When we made our 5G Service Based Architecture (SBA) tutorial some four years back, it was based on Release-15 of the 3GPP standards. All Network Functions (NFs) simply sent discovery requests to the Network Repository Function (NRF). While this works great for trials and small-scale deployments, it can also lead to issues, as can be seen in the slide above.
In 3GPP Release-16 the Service Communication Proxy (SCP) has now been introduced to allow the Control Plane network to handle and prioritize massive numbers of requests in real time. The SCP becomes the control point that mediates all Signalling and Control Plane messages in the network core.
SCP routing directs the flow of millions of simultaneous 5G function requests and responses for network slicing, microservice instantiation or edge compute access. It also plays a critical role in optimizing floods of discovery requests to the NRF and in overall Control Plane load balancing, traffic prioritization and message management.
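To see the difference in practice, here is a small conceptual sketch (not a real SBI client). The discovery URI shape follows the Nnrf_NFDiscovery service of 3GPP TS 29.510 and the delegated-discovery headers come from 3GPP TS 29.500; the host names are invented:

```python
# Conceptual sketch, not a real SBI client. URI shape per Nnrf_NFDiscovery
# (3GPP TS 29.510); delegated-discovery headers per 3GPP TS 29.500.

import urllib.parse

NRF = "http://nrf.5gc.example"   # hypothetical NRF address
SCP = "http://scp.5gc.example"   # hypothetical SCP address

def discovery_uri(base: str, target_nf: str, requester_nf: str) -> str:
    """Build an Nnrf_NFDiscovery search URI."""
    query = urllib.parse.urlencode({
        "target-nf-type": target_nf,
        "requester-nf-type": requester_nf,
    })
    return f"{base}/nnrf-disc/v1/nf-instances?{query}"

# Release-15 style direct discovery: every NF queries the NRF itself,
# which is where the flood of discovery requests comes from.
print(discovery_uri(NRF, "SMF", "AMF"))

# Release-16 indirect communication: the consumer sends the request to the
# SCP with 3gpp-Sbi-Discovery-* headers, and the SCP performs discovery,
# load balancing and routing on its behalf.
print(f"{SCP}/<service-uri> + 3gpp-Sbi-Discovery-target-nf-type: SMF")
```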
A detailed whitepaper on '5G Signaling and Control Plane Traffic Depends on Service Communications Proxy (SCP)' by Strategy Analytics is available on Huawei's website here. This report was a follow on from the 'Signaling — The Critical Nerve Center of 5G Networks' webinar here.
Artificial Intelligence (AI) and Machine Learning (ML) have been touted as the way to automate the network and simplify the identification and debugging of issues that will arise with increasing network complexity. For this reason 3GPP has many different features that are already present in Release-17 and are expected to evolve further in Release-18.
I have already covered some of these topics in earlier posts. Ericsson's recent whitepaper '5G Advanced: Evolution towards 6G' also has a good summary on this topic. Here is an extract from that:
Intelligent network automation
With increasing complexity in network design, for example, many different deployment and usage options, conventional approaches will not be able to provide swift solutions in many cases. It is well understood that manually reconfiguring cellular communications systems could be inefficient and costly.
Artificial intelligence (AI) and machine learning (ML) have the capability to solve complex and unstructured network problems by using a large amount of data collected from wireless networks. Thus, there has been a lot of attention lately on utilizing AI/ML-based solutions to improve network performance, hence providing avenues for inserting intelligence into network operations.
AI model design, optimization, and life-cycle management rely heavily on data. A wireless network can collect a large amount of data as part of its normal operations. This provides a good base for designing intelligent network solutions. 5G Advanced addresses how to optimize the standardized interfaces for data collection while leaving the automation functionality, for example, training and inference up to the proprietary implementation to support full flexibility in the automation of the network.
AI/ML for RAN enhancements
Three use cases have been identified in the Release 17 study item related to RAN performance enhancement by using AI/ML techniques. Selected use cases from the Release 17 technical report will be taken into the normative phase in the next releases. The selected use cases are: 1) network energy saving; 2) load balancing; and 3) mobility optimization.
The selected use cases can be supported by enhancements to current NR interfaces, targeting performance improvements using AI/ML functionality in the RAN while maintaining the 5G NR architecture. One of the goals is to ensure vendor incentives in terms of innovation and competitiveness by keeping the AI model implementation specific. As shown in Fig. 2 (top), an intent-based management approach can be adopted for use cases involving RAN-OAM interactions. The intent will be received by the RAN, which will need to understand it and trigger certain functionalities as a result.
It is generally expected that AI/ML functionality can be used to improve the radio performance and/or reduce the complexity/overhead of the radio interface. 3GPP TSG RAN has selected three use cases to study the potential air interface performance improvements through AI/ML techniques: beam management, channel state information feedback enhancement, and positioning accuracy enhancements for different scenarios. The AI/ML-based methods may provide benefits compared to traditional methods in the radio interface. The challenge will be to define a unified AI/ML framework for the air interface through adequate AI/ML model characterization using various levels of collaboration between the gNB and UE.
AI/ML in 5G core
5G Advanced will provide further enhancements of the architecture for analytics and for ML model life-cycle management, for example, to improve the correctness of the models. The advancements in the architecture for analytics and data collection serve as a good foundation for AI/ML-based use cases within the different network functions (NFs). Additional use cases will be studied where NFs make use of analytics to support their decision making, for example, Network Data Analytics Function (NWDAF)-assisted generation of UE policy for network slicing.
If you are interested in studying this topic further, check out 3GPP TR 37.817: Study on enhancement for data collection for NR and EN-DC. Download the latest version from here.