Thursday 6 October 2022

Key enablers for mass IoT adoption

At 'The Things Conference' in Amsterdam in September, Roman Nemish, Co-Founder & President of TEKTELIC, presented a critical view of different IoT technologies and argued that LoRaWAN is the only technology that will eventually make mass IoT possible.

The following is the intro to the talk from the conference:

IoT technology has progressed from home to city-scale applications, making it a crucial part of any operational process. IoT sensors are becoming more affordable, reliable, and easy to deploy.

The Internet of Things has already brought advancement to healthcare, retail, city infrastructure, and manufacturing with many other opportunities still open.

We are ready to explain why IoT deployment has transformed from privilege to necessity, what benefits it can bring to your business, and how you win the competition using IoT.

Enterprise IoT Insights have a good take on the talk here. The following is an extract:

Nemish, president at TEKTELIC, argued that new-wave cellular IoT – in the form of NB-IoT and LTE-M, primarily – is “too expensive” for consumers and too small-margin for mobile operators; that “most IoT opportunities are 10-25 times smaller [than the kinds of deals that would] attract operator attention”. Cellular IoT has “vast potential”, he concluded, but requires a “different approach”.

In other words, there is not enough profit in (low-power) cellular IoT for mobile operators to give it proper focus – and the deals are not big enough to make them really care. The IoT game – based on finely-calculated returns on volume-deals not going much higher than 100,000 units at a time – is better served by smaller-sized providers, without regional spectrum licences, offering broadly-equivalent technologies in unlicensed bands, he implied.

But experiences with Sigfox and LoRaWAN (in some formats) – the French-born IoT twin-tech that started the whole low-power wide-area (LPWA) movement, and forced the cellular community to come up with their own alternatives – have not been much better, necessarily, the story goes. Sigfox pumped $350 million over 10 years into its technology and network, only to go into receivership at the start of 2022 with fewer than 20 million devices under management.

The problem, said Nemish, is with the business model, and not the tech. (As an aside, a takeaway from The Things Conference last week, as from the LoRaWAN World Expo in Paris in the summer, and from any number of private discussions in between, is the IoT market is mature enough to let go of its closely-held tech differences, and acknowledge that customers don’t really care so long as it works – and so the blame switches to the business model, instead.)

Nemish blamed Sigfox’s ‘failure’ on exclusive single-market contracts and crippling licensing fees; these “killed most operator business plans”, he suggested. Of course, Sigfox lives to see another day – and, it might be noted, Taiwan-based IoT house Unabiz, its new owners, have just hosted the 0GUN Alliance of Sigfox operators in France to bash-out a new operator model, and a collaborative approach to a “unified LPWAN world”.

And LoRaWAN is not exempt in the analysis, either. In Amsterdam, Nemish held up the madly-hyped Helium model for crypto-led community network building as another failed IoT business model. Again – and of course, with a critical appraisal of a LoRaWAN network by a LoRaWAN provider – the tech is not the problem, just the way it is being offered. Because Helium, he said, with $1 billion of public community funding, has “no use” after three years.

As per the slide, parent Nova Labs has “failed to sign customers, implement SLA(s), or plan network evolution”, he suggested. The community behind it, originally bedsit enthusiasts on to a good thing, are not motivated by “IoT adoption but [by] crypto-mania”, said Nemish. Just look on eBay, where 10,000 secondhand Helium miners (gateways) are being flogged, to see how its star has fallen, he said – along with its stock, with HNT trading up 12 percent at around $5 at writing, on the back of a deal for decentralised 5G with T-Mobile in the US, but down from a high of nearly $30 a few months ago.

The article highlights some heated discussions on the presentation and slides. You can read the whole article here.

The closing slide nicely summarises that IoT deployment is a marathon, not a sprint. End users are interested in solving real-world problems, so the advice is to partner to develop complete IoT solutions that can be integrated simply with any IoT platform through clearly defined APIs, and to have a strong engineering team to support customer integration and early deployment.

Here is the video of the talk for anyone interested:

Related Posts

Thursday 29 September 2022

Four Ways 5G Can Improve the Battery Life of User Equipment (UE)

We have looked at different approaches to reducing power consumption on this blog and the 3G4G website (see related posts below). In a blog post some months back, Huawei highlighted how 5G can improve the battery life of the UE. The blog post mentioned four approaches; we have looked at three of them on various blogs.

The following is from the blog post:

RRC_INACTIVE State

A UE can access network services only if it establishes a radio resource control (RRC) connection with the base station. In legacy RATs, a UE is either in the RRC_CONNECTED state (it has an RRC connection) or the RRC_IDLE state (it does not have an RRC connection). However, transitioning from the RRC_IDLE state to the RRC_CONNECTED state takes a long time, so it cannot meet the low latency requirement of some 5G services. But a UE cannot just stay in the RRC_CONNECTED state because this will consume much more UE power.

To solve this problem, 5G introduces the RRC_INACTIVE state, where the RRC connection is released but the UE context is retained (called RRC Release with Suspend), so an RRC connection can be quickly resumed when needed. This way, a UE in the RRC_INACTIVE state can access low-latency services whenever needed but consume the same amount of power as it does in the RRC_IDLE state.
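
As a rough, purely illustrative sketch (mine, not part of the Huawei post), the three RRC states and the transitions described above can be modelled like this:

```python
from enum import Enum

class RrcState(Enum):
    IDLE = "RRC_IDLE"
    INACTIVE = "RRC_INACTIVE"
    CONNECTED = "RRC_CONNECTED"

class UeRrc:
    """Toy model of the 5G RRC state machine described above."""
    def __init__(self):
        self.state = RrcState.IDLE
        self.context_stored = False   # UE context kept while in RRC_INACTIVE

    def setup(self):
        # Full setup from IDLE: slow path, context has to be (re)created
        self.state = RrcState.CONNECTED
        self.context_stored = True

    def release_with_suspend(self):
        # RRC Release with Suspend: connection released, context retained
        self.state = RrcState.INACTIVE

    def resume(self):
        # Quick resume is possible because the context was retained
        assert self.state == RrcState.INACTIVE and self.context_stored
        self.state = RrcState.CONNECTED

ue = UeRrc()
ue.setup()                  # IDLE -> CONNECTED (slow path)
ue.release_with_suspend()   # CONNECTED -> INACTIVE (low power, context kept)
ue.resume()                 # INACTIVE -> CONNECTED (fast path for low-latency services)
print(ue.state)
```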

DRX + WUS

Discontinuous reception (DRX) enables a UE in the RRC_CONNECTED state to periodically, instead of constantly, monitor the physical downlink control channel (PDCCH) to save power. To meet the requirements of different UE services, both short and long DRX cycles can be configured for a UE. However, when to wake up is determined by the predefined cycle, so the UE might wake up unnecessarily when there is no data scheduled.

Is there a way for a UE to wake up only when it needs to? Wake-up Signal (WUS) proposed in Release 16 is the answer. This signal can be sent before the next On Duration period (during which the UE monitors the PDCCH) so that the UE wakes up only when it receives this signal from the network. Because the length of a WUS is shorter than the On Duration Timer, using WUS to wake up a UE saves more power than using only DRX.
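
To see why WUS helps, here is a toy simulation (my own sketch, with invented cycle lengths and data probabilities) comparing how long a UE stays awake with plain DRX versus DRX plus WUS:

```python
import random

def drx_awake_time(cycles=1000, p_data=0.1, on_duration=8, wus_len=1, use_wus=True):
    """Count how many time units the UE spends awake over a number of DRX cycles.
    All numbers are arbitrary; the point is only to show why WUS saves power."""
    awake = 0
    for _ in range(cycles):
        data_scheduled = random.random() < p_data
        if use_wus:
            awake += wus_len              # briefly listen for the Wake-up Signal
            if data_scheduled:            # monitor the PDCCH only if a WUS was received
                awake += on_duration
        else:
            awake += on_duration          # plain DRX: wake up every cycle regardless
    return awake

random.seed(0)
print("DRX only :", drx_awake_time(use_wus=False))
print("DRX + WUS:", drx_awake_time(use_wus=True))
```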

BWP Adaptation

In theory, working on a larger bandwidth consumes more UE power. 5G provides large bandwidths, but it is unnecessary for a UE to always work on large bandwidth. For example, if you play online mobile games on a UE, only 10 MHz of bandwidth is needed for 87% of the data transmission time. As such, Bandwidth Part (BWP) is proposed in 5G to enable UEs to work on narrower bandwidths without sacrificing user experience.

BWP adaptation enables the base station to dynamically switch between BWPs based on the UE’s traffic volume. When the traffic volume is large, a UE can work on a wide BWP, and when the traffic volume is small, the UE can work on a narrow one. BWP switching can be performed based on the downlink control information (DCI) and RRC reconfiguration messages. This ensures that a UE always works on a bandwidth that supports the traffic volume but does not consume too much power.
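
A minimal sketch of the idea is shown below, with invented BWP sizes and traffic thresholds; in a real network the switch is signalled via DCI or an RRC reconfiguration rather than computed like this:

```python
def select_bwp(buffered_bytes, bwp_table=((10, 10_000), (40, 1_000_000), (100, None))):
    """Pick the narrowest BWP (in MHz) whose illustrative traffic threshold
    covers the current buffer occupancy. Thresholds are invented for this sketch."""
    for bandwidth_mhz, max_bytes in bwp_table:
        if max_bytes is None or buffered_bytes <= max_bytes:
            return bandwidth_mhz

# Small traffic (e.g. a mobile game) stays on a narrow BWP,
# a large download triggers a switch to a wide BWP.
for traffic in (2_000, 500_000, 50_000_000):
    print(traffic, "bytes ->", select_bwp(traffic), "MHz BWP")
```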

Maximum MIMO Layers Reduction

According to 3GPP specifications, the number of receive and transmit antennas used by a UE cannot be fewer than the maximum number of MIMO layers in the downlink and uplink, respectively. For example, when a maximum of four downlink MIMO layers are configured for a UE, the UE must enable at least four receive antennas to receive data. Therefore, if the maximum number of MIMO layers can be reduced, the UE does not have to activate as many antennas, reducing power consumption.

This can be achieved in 5G because the number of MIMO layers can be re-configured based on assistance information from UEs. After receiving a request to reduce the number of MIMO layers from a UE, the base station configures fewer MIMO layers for the UE through an RRC reconfiguration message. In this way, the UE can deactivate some antennas to save power.
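
The following toy snippet (again an illustrative sketch of my own, not Huawei's or 3GPP's logic) shows the gist of the gNB-side decision when UE assistance information asks for fewer MIMO layers:

```python
def antennas_needed(max_mimo_layers):
    # Per the text above: the number of active receive antennas cannot be fewer
    # than the configured maximum number of downlink MIMO layers.
    return max_mimo_layers

def handle_ue_assistance(current_layers, requested_layers):
    """Toy decision: on UE assistance information asking for fewer layers,
    reconfigure via RRC and let the UE switch off the spare antennas."""
    new_layers = min(current_layers, requested_layers)
    saved = antennas_needed(current_layers) - antennas_needed(new_layers)
    print(f"RRC reconfiguration: max MIMO layers {current_layers} -> {new_layers}, "
          f"UE can deactivate {saved} antenna(s)")
    return new_layers

handle_ue_assistance(current_layers=4, requested_layers=2)
```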

Power consumption in the networks and the devices is a real challenge. While battery capacity and charging speeds are increasing, it is also important to find ways to optimise signalling parameters and the like. One such approach can be seen in the tweet above regarding T-Mobile in The Netherlands, which selectively switches off a carrier at night and switches it back on when the cell starts loading up or in the morning.

We will see a lot more innovations and optimisations that dynamically adapt technologies and parameters to ensure power savings wherever possible.

Related Posts

Monday 19 September 2022

Is there a compelling Business Case for 5G Network Slicing in Public Networks?

Ever since the industry realised what the 5G Network Architecture would look like, Network Slicing has been touted as the killer business case that will allow mobile operators to generate revenue from new sources.

Last month ABI Research said in a press release:

According to global technology intelligence firm ABI Research, 5G slicing revenue is expected to grow from US$309 million in 2022 to approximately US$24 billion in 2028, at a Compound Annual Growth Rate (CAGR) of 106%. 

“5G slicing adoption falls into two main categories. One, there is no connectivity available. Two, there is connectivity, but there is not sufficient capacity, coverage, performance, or security. For the former, both private and public organizations are deploying private network slices on a permanent and ad hoc basis,” highlights Don Alusha, 5G Core and Edge Networks Senior Analyst at ABI Research. The second scenario is mostly catered by private networks today, a market that ABI Research expects to grow from US$3.6 billion to US$109 billion by 2030, at a CAGR of 45.8%. Alusha continues, “A sizable part of this market can be converted to 5G slicing. But first, the industry should address challenges associated with technology and commercial models. On the latter, consumers’ and enterprises’ appetite to pay premium connectivity prices for deterministic and tailored connectivity services remains to be determined. Furthermore, there are ongoing industry discussions on whether the value that comes from 5G slicing can exceed the cost required to put together the underlying slicing ecosystem.”

Earlier this year, Daryl Schoolar, Research Director at IDC, tackled this topic in his blog post:

5G network slicing, part of the 3GPP standards developed for 5G, allows for the creation of multiple virtual networks across a single network infrastructure, allowing enterprises to connect with guaranteed low latency. Using principles behind software-defined network and network virtualization, slicing allows the mobile operator to provide differentiated network experience for different sets of end users. For example, one network slice could be configured to support low latency, while another slice is configured for high download speeds. Both slices would run across the same underlying network infrastructure, including base stations, transport network, and core network.

Network slicing differs from private mobile networks, in that network slicing runs on the public wide area network. Private mobile networks, even when offered by the mobile operator, use infrastructure and spectrum dedicated to the end user to isolate the customer’s traffic from other users.

5G network slicing is a perfect candidate for future business connectivity needs. Slicing provides a differentiated network experience that can better match the customer’s performance requirements than traditional mobile broadband. Until now, there has been limited mobile network performance customization outside of speeds. 5G network slicing is a good example of telco service offerings that meet future of connectivity requirements. However, 5G network slicing also highlights the challenges mobile operators face with transformation in their pursuit of remaining relevant.

For 5G slicing to have broad commercial availability, and to provide a variety of performance options, several things need to happen first.

  • Operators need to deploy 5G Standalone (SA) using the new 5G mobile core network. Currently most operators use the 5G non-standalone (NSA) architecture that relies on the LTE mobile core. It might be the end of 2023 before the majority of commercial 5G networks are using the SA mode.
  • Spectrum is another hurdle that must be overcome. Operators still make most of their revenue from consumers, and do not want to compromise the consumer experience when they start offering network slicing. This means operators need more spectrum. In the U.S., among the three major mobile operators, only T-Mobile currently has a nationwide 5G mid-band spectrum deployment. AT&T and Verizon are currently deploying in mid-band, but that will not be completed until 2023.
  • 5G slicing also requires changes to the operator’s business and operational support systems (BSS/OSS). Current BSS/OSS solutions were not designed to support the increased number of parameters that network slicing introduces.
  • And finally, mobile operators still need to create the business propositions around commercial slicing services. Mobile operators need to educate businesses on the benefits of slicing and how slicing supports their different connectivity requirements. This could involve mobile operators developing industry specific partnerships to reach different business segments. All these things take time to be put into place.

Because of the enormity of the tasks needed to make 5G network slicing a commercial success, IDC currently has a very conservative outlook for this service through 2026. IDC believes it will be 2023 until there is general commercial availability of 5G network slicing. The exception is China, which is expected to have some commercial offerings in 2022 as it has the most mature 5G market. Even then, it will take until 2025 before global revenues from slicing exceeds a billion U.S. dollars. In 2026 IDC forecasts slicing revenues will be approximately $3.2 billion. However, over 80% of those revenues will come out of China.
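
For readers new to the terminology, a slice is identified by an S-NSSAI, i.e. a Slice/Service Type (SST) plus an optional Slice Differentiator (SD). The sketch below is a minimal, illustrative way to represent differentiated slices over shared infrastructure; the latency and throughput targets are invented numbers, not figures from IDC or ABI Research:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkSlice:
    """Toy representation of a slice identified by an S-NSSAI (SST plus optional SD).
    The performance targets are invented for illustration only."""
    sst: int                      # Slice/Service Type (1 = eMBB, 2 = URLLC, 3 = MIoT)
    sd: Optional[str]             # optional Slice Differentiator
    target_latency_ms: float
    target_downlink_mbps: float

# Two "virtual networks" sharing the same base stations, transport and core,
# but configured for different performance targets, as described above.
slices = [
    NetworkSlice(sst=1, sd=None,     target_latency_ms=50, target_downlink_mbps=300),
    NetworkSlice(sst=2, sd="0000A1", target_latency_ms=5,  target_downlink_mbps=20),
]

for s in slices:
    print(s)
```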

The 'Outspoken Industry Analyst' Dean Bubley believes that Network Slicing is one of the worst strategic errors made by the mobile industry, since the catastrophic choice of IMS for communications applications. In a LinkedIn post he explains:

At best, slicing is an internal toolset that might allow telco operations or product teams (or their vendors) to manage their network resources. For instance, it could be used to separate part of a cell's capacity for FWA, and dynamically adjust that according to demand. It might be used as an "ingredient" to create a higher class of service for enterprise customers, for instance for trucks on a highway, or as part of an "IoT service" sold by MNOs. Public safety users might have an expensive, artisanal "hand-carved" slice which is almost a separate network. Maybe next-gen MVNOs.

(I'm talking proper 3GPP slicing here - not rebranded QoS QCI classes, private APNs, or something that looks like a VLAN, which will probably get marketed as "slices")

But the idea that slicing is itself a *product*, or that application developers or enterprises will "buy a slice" is delusional.

Firstly, slices will be dependent on [good] coverage and network control. A URLLC slice likely won't work reliably indoors, underground, in remote areas, on a train, on a neutral-host network, or while roaming. This has been a basic failure of every differentiated-QoS monetisation concept for many years, and 5G's often-higher frequencies make it worse, not better.

Secondly, there is no mature machinery for buying, selling, testing, supporting, pricing and monitoring slices. No, the 5G Network Exposure Function won't do it all. I haven't met a Slice salesperson yet, or a Slice-procurement team.

Thirdly, a "local slice" of a national 5G network will run headlong into a battle with the desire for separate private/dedicated local 5G networks, which may well be cheaper and easier. It also won't work well with the enterprise's IT/OT/IP domains, out of the box.

Also there's many challenges getting multi-operator slices, device OS links to slice APIs, slice "boundary controllers" between operators, aligning RAN and core slices, regulatory questionmarks and much more.

There is a lot of discussion in the comments section that may be of interest to you, here.

My belief is that we will see lots of interesting use cases with slicing in public networks, but it will be difficult to monetise. The best networks will manage to use it to create some plans with guaranteed rates and low latency. It remains to be seen whether they can monetise it well enough.

For technical people and newbies, there are lots of Network Slicing resources on this blog (see related posts 👇). Here is another recent video from Mpirical:

Related Posts

Saturday 10 September 2022

CUPS for Flexible U-Plane Processing Based on Traffic Characteristics

I looked at Control and User Plane Separation (CUPS) in a tutorial nearly five years back, here. Since then most of the focus has been on 5G, not just on my blogs but also across the industry.

Earlier this year, NTT Docomo's Technical Journal looked at CUPS for Flexible U-Plane Processing Based on Traffic Characteristics. The following is an extract from the article:

At the initial deployment phase of 5th Generation mobile communication systems (5G), the 5G Non-Stand-Alone (NSA) architecture was widely adopted to realize 5G services by connecting 5G base stations to the existing Evolved Packet Core (EPC). As applications based on 5G become more widespread, the need for EPC to achieve higher speed and capacity communications, lower latency communications and simultaneous connection of many terminals than ever has become urgent. Specifically, it is necessary to increase the number of high-capacity gateway devices capable of processing hundreds of Gbps to several Tbps to achieve high-speed, high-capacity communications, to distribute gateway devices near base station facilities to achieve even lower latency communications, and to improve session processing performance for connecting massive numbers of terminals simultaneously.

Conventional single gateway devices have both Control Plane (C-Plane) functions to manage communication sessions and control communications, and User Plane (U-Plane) functions to handle communications traffic. Therefore, if the previously assumed balance between the number of sessions and communications capacity is disrupted, either the C-Plane or the U-Plane will have excess processing capacity. In high-speed, high-capacity communications, the C-Plane has excess processing power, and in multiple terminal simultaneous connections, the U-Plane has excess processing power because the volume of communications is small compared to the number of sessions. If the C-Plane and U-Plane can be scaled independently, these issues can be resolved, and efficient facility design can be expected. In addition, low-latency communications require distributed deployment of the U-Plane function near the base station facilities to reduce propagation delay. However, in the distributed deployment of conventional devices with integrated C-Plane and U-Plane functions, the number of sessions and communication volume are unevenly distributed among the gateway devices, resulting in a decrease in the efficiency of facility utilization. Since there is no need for distributed deployment of C-Plane functions, if the C-Plane and U-Plane functions can be separated and the way they are deployed changed according to their characteristics, the loss of facility utilization efficiency related to C-Plane processing capacity could be greatly reduced.

CUPS is an architecture defined in 3GPP TS 23.214 that separates the Serving GateWay (SGW)/Packet data network GateWay (PGW) configuration of the EPC into the C-Plane and U-Plane. The CUPS architecture is designed so that there is no difference in the interface between the existing architecture and the CUPS architecture - even with CUPS architecture deployed in SGW/PGW, opposing devices such as a Mobility Management Entity (MME), Policy and Charging Rules Function (PCRF), evolved NodeB (eNB)/next generation NodeB (gNB), and SGWs/PGWs of other networks such as Mobile Virtual Network Operator (MVNO) and roaming are not affected. For C-Plane, SGW Control plane function (SGW-C)/PGW Control plane function (PGW-C), and for U-Plane, SGW User plane function (SGW-U)/PGW User plane function (PGW-U) are equipped with call processing functions. By introducing CUPS, C-Plane/U-Plane capacities can be expanded individually as needed. Combined SGW-C/PGW-C and Combined SGW-U/PGW-U can handle the functions of SGW and PGW in common devices. In the standard specification, in addition to SGW/PGW, the Traffic Detection Function (TDF) can also be separated into TDF-C and TDF-U, but the details are omitted in this article.

From the above background, NTT DOCOMO has been planning to deploy Control and User Plane Separation (CUPS) architecture to realize the separation of C-Plane and U-Plane functions as specified in 3rd Generation Partnership Project Technical Specification (3GPP TS) 23.214. Separating the C-Plane and U-Plane functions of gateway devices with CUPS architecture makes it possible to scale the C-Plane and U-Plane independently and balance the centralized deployment of C-Plane functions with the distributed deployment of U-Plane functions, thereby enabling the deployment and development of a flexible and efficient core network. In addition to solving the aforementioned issues, CUPS will also enable independent equipment upgrades for C-Plane and U-Plane functions, and the adoption of U-Plane devices specialized for specific traffic characteristics.

From the user's perspective, the introduction of CUPS can be expected to dramatically improve the user experience through the operation of facilities specializing in various requirements, and to enable further increases in facilities and lower charges to pursue user benefits by improving the efficiency of core network facilities.

Regarding the CUPS architecture, a source of value for both operators and users, this article includes an overview of the architecture, additional control protocols, U-Plane control schemes based on traffic characteristics, and future developments toward a 5G Stand-Alone (5G SA) architecture.

The article is available here.
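
To make the independent-scaling argument from the article concrete, here is a back-of-the-envelope sketch; the per-node capacities are invented for illustration and are not DOCOMO figures:

```python
import math

def nodes_required(total, capacity_per_node):
    return math.ceil(total / capacity_per_node)

def dimension(sessions, throughput_gbps,
              cp_sessions_per_node=2_000_000,   # invented C-Plane capacity per node
              up_gbps_per_node=400):            # invented U-Plane capacity per node
    """With CUPS, SGW-C/PGW-C and SGW-U/PGW-U are dimensioned separately:
    the C-Plane scales with the number of sessions, the U-Plane with traffic volume."""
    return {
        "C-Plane nodes": nodes_required(sessions, cp_sessions_per_node),
        "U-Plane nodes": nodes_required(throughput_gbps, up_gbps_per_node),
    }

# Massive-IoT-like load: many sessions, little traffic -> mostly C-Plane nodes
print("mIoT :", dimension(sessions=20_000_000, throughput_gbps=200))
# eMBB-like load: fewer sessions, lots of traffic -> mostly U-Plane nodes
print("eMBB :", dimension(sessions=2_000_000, throughput_gbps=4_000))
```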

Related Posts

Friday 26 August 2022

How Multiband-Cells are used for MORAN RAN Sharing

In the previous blog post I explained the concept of multi-band cells in LTE networks and promised to explain in a bit more depth how such cells can be used in Multi-Operator RAN (MORAN) scenarios.

MORAN is characterized by the fact that all network resources except the radio carriers and the Home Subscriber Server (HSS) are shared between two or more operators. 

What this means in detail can be seen in Step 1 of the figure below.

The yellow Band #1 spectrum of the multi-band cell is owned by Network Operator 1 while the blue spectrum of Band #2 and Band #3 belongs to Network Operator 2.

Band #1 is the default band. This means that when a UE enters the cell it always has to establish the initial RRC signaling connection on Band #1, as shown in step 1.

The spectrum owned by Network Operator 2 comes into the game as soon as a dedicated radio bearer (DRB), known in the core network as an E-RAB, is established in this RRC connection.

Then we see an intra-cell (inter-frequency) handover to Band #2, where the RRC signaling connection is continued. Band #3 is added for user plane transport as a secondary "cell" (the term is used as in the 3GPP 36.331 RRC specification).

The reason for this behavior becomes clear when looking at the frequency bandwidths.

The default Band #1 is a low frequency band with quite a small bandwidth, e.g. 5 MHz, as it is typically used for providing good coverage in rural areas. Band #2 is also a lower frequency band, but Band #3 is a high frequency band with a maximum bandwidth of 20 MHz. So Band #3 brings the highest capacity for user plane transport, and that is the reason for the handover to the spectrum owned by Network Operator 2 and the carrier aggregation used on these frequency bands.

However, due to the higher frequency, the footprint of Band #3 is smaller than that of the other two frequency bands.

For UEs at the cell edge (or located in buildings while being served from the outdoor cell) this quite often leads to situations where the radio coverage of Band #3 becomes insufficient. In such cases the UE typically sends an RRC measurement event A2 (meaning: "The RSRP of the cell is below a certain threshold.").

If such an A2 event is received by the eNB, it stops the carrier aggregation transport and releases the Band #3 resources, so that all user plane transport continues to run on the limited Band #2 resources, as shown in step 3.

And now, in the particular eNB I observed, a nice algorithm starts that could be seen as a kind of zero-touch network operation, although it needs neither big data nor artificial intelligence.

Ten seconds after the secondary frequency resources of Band #3 have been released, they are added to the connection again. If the UE is still at the same location, the next A2 event will soon be reported, carrier aggregation will be stopped again for 10 seconds, and then the next cycle starts.

This automation loop is carried out endlessly until the UE changes its location or the RRC connection is terminated. 
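
As a rough sketch of the observed behaviour (the 10-second timer is the one I observed; everything else is simplified), the loop can be written down like this:

```python
RE_ADD_DELAY_S = 10   # the timer observed in this particular eNB

def scell_cycle(ue_still_at_cell_edge, cycles=3):
    """Toy version of the observed behaviour: an A2 report triggers release of the
    Band #3 SCell, and 10 seconds later the SCell is blindly added again."""
    for cycle in range(1, cycles + 1):
        if ue_still_at_cell_edge():
            print(f"cycle {cycle}: A2 received -> stop CA, release Band #3 SCell")
            print(f"cycle {cycle}: wait {RE_ADD_DELAY_S} s -> add Band #3 SCell again")
        else:
            print(f"cycle {cycle}: no A2 -> Band #3 SCell stays configured")
            break

# A UE that stays at the cell edge keeps reporting A2, so the loop repeats
# until it moves or the RRC connection is released.
scell_cycle(ue_still_at_cell_edge=lambda: True)
```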

Related Posts:

Monday 22 August 2022

DCCA Features and Enhancements in 5G New Radio

In another new whitepaper on 5G-Advanced, Nokia has detailed DCCA (DC + CA) features and enhancements from Rel-15 until Rel-18. The following is an extract from the paper:

Mobility is one of the essential components of 5G-Advanced. 3GPP has already defined a set of functionalities and features that will be a part of the 5G-Advanced Release 18 package. These functionalities can be grouped into four areas: providing new levels of experience, network extension into new areas, mobile network expansion beyond connectivity, and providing operational support excellence. Mobility enhancements in Release 18 will be an important part of the ‘Experience enhancements’ block of features, with the goal of reducing interruption time and improving mobility robustness.

Fig. 2 shows a high-level schematic of mobility and dual connectivity (DC)/Carrier Aggregation (CA) related mechanisms that are introduced in the different 5G legacy releases towards 5G-Advanced in Release 18. Innovations such as Conditional Handover (CHO) and dual active protocol stack (DAPS) are introduced in Release 16. More efficient operation of carrier aggregation (CA), dual connectivity (DC), and the combination of those denoted as DCCA, as well as Multi-Radio Access Technology DC (MR-DC) are introduced through Releases 16 and 17.

For harvesting the full benefits of CA/DC techniques, it is important to have an agile framework where secondary cell(s) are timely identified and configured to the UE when needed. This is of importance for non-standalone (NSA) deployments where a carrier on NR should be quickly configured and activated to take advantage of 5G. Similarly, it is of importance for standalone (SA) cases where e.g. a UE with its Primary Cell (PCell) on NR Frequency Range 1 (FR1) wants to take additional carriers, either on FR1 and/or FR2 bands, into use. Thus, there is a need to support cases where the aggregated carriers are either from the same or different sites. The management of such additional carriers for a UE shall be highly agile in line with the user traffic and QoS demands; quickly enabling usage of additional carriers when needed and again quickly released when no longer demanded to avoid unnecessary processing at the UE and to reduce its energy consumption. This is of particular importance for users with time-varying traffic demands (aka burst traffic conditions).

In the following, we describe how such carrier management is gradually improved by introducing enhancements for cell identification, RRM measurements and reduced reporting delays from UEs. Innovations related to Conditional PSCell Addition and Change (CPAC) and deactivation of secondary cell groups are also outlined.

The paper goes on to discuss the following scenarios in detail for DCCA enhancements:

  • Early measurement reporting
  • Secondary cell (SCell) activation time improvements
    • Direct SCell activation
    • Temporary RS (TRS)-based SCell Activation
  • Conditional Secondary Node (SN) addition and change for fast access
  • Activation of secondary cell group

The table below summarizes the DCCA features in 5G NR:

Related Posts

Tuesday 16 August 2022

Managing 5G Signalling Storms with Service Communication Proxy (SCP)

When we made our 5G Service Based Architecture (SBA) tutorial some four years back, it was based on Release-15 of the 3GPP standards. All Network Functions (NFs) simply sent discovery requests to the Network Repository Function (NRF). While this works well for trials and small-scale deployments, it can also lead to issues, as can be seen in the slide above.

In 3GPP Release-16, the Service Communication Proxy (SCP) was introduced to allow the Control Plane network to handle and prioritize massive numbers of requests in real time. The SCP becomes the control point that mediates all Signalling and Control Plane messages in the network core.

SCP routing directs the flow of millions of simultaneous 5G function requests and responses for network slicing, microservice instantiation or edge compute access. It also plays a critical role in optimizing floods of discovery requests to the NRF and in overall Control Plane load balancing, traffic prioritization and message management.
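
Conceptually, the SCP behaves a bit like a caching, load-balancing proxy sitting between consumer NFs, the NRF and the producer NFs. The sketch below is a deliberately simplified illustration of that idea, not of any real SCP implementation or 3GPP API:

```python
import itertools

class ToySCP:
    """Very rough sketch of what an SCP does conceptually: it sits between
    consumer NFs and the NRF, caches discovery results and spreads requests
    across producer NF instances instead of letting every NF flood the NRF."""
    def __init__(self, nrf_lookup):
        self.nrf_lookup = nrf_lookup          # function: NF type -> list of instances
        self.cache = {}                       # NF type -> round-robin iterator

    def route(self, nf_type):
        if nf_type not in self.cache:         # only the first request reaches the NRF
            instances = self.nrf_lookup(nf_type)
            self.cache[nf_type] = itertools.cycle(instances)
        return next(self.cache[nf_type])      # load-balance subsequent requests

def fake_nrf(nf_type):
    print(f"NRF queried for {nf_type}")
    return [f"{nf_type.lower()}-{i}" for i in range(3)]

scp = ToySCP(fake_nrf)
for _ in range(5):
    print("request routed to", scp.route("SMF"))   # the NRF is queried only once
```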

A detailed whitepaper on '5G Signaling and Control Plane Traffic Depends on Service Communications Proxy (SCP)' by Strategy Analytics is available on Huawei's website here. This report was a follow-on from the 'Signaling — The Critical Nerve Center of 5G Networks' webinar here.

Related Posts:

Wednesday 10 August 2022

AI/ML Enhancements in 5G-Advanced for Intelligent Network Automation

Artificial Intelligence (AI) and Machine Learning (ML) have been touted as the way to automate the network and simplify the identification and debugging of issues that will arise with increasing network complexity. For this reason 3GPP has many different features that are already present in Release-17 but are expected to evolve further in Release-18.

I have already covered some of these topics in earlier posts. Ericsson's recent whitepaper '5G Advanced: Evolution towards 6G' also has a good summary on this topic. Here is an extract from that:

Intelligent network automation

With increasing complexity in network design, for example, many different deployment and usage options, conventional approaches will not be able to provide swift solutions in many cases. It is well understood that manually reconfiguring cellular communications systems could be inefficient and costly.

Artificial intelligence (AI) and machine learning (ML) have the capability to solve complex and unstructured network problems by using a large amount of data collected from wireless networks. Thus, there has been a lot of attention lately on utilizing AI/ML-based solutions to improve network performance and hence providing avenues for inserting intelligence in network operations.

AI model design, optimization, and life-cycle management rely heavily on data. A wireless network can collect a large amount of data as part of its normal operations. This provides a good base for designing intelligent network solutions. 5G Advanced addresses how to optimize the standardized interfaces for data collection while leaving the automation functionality, for example, training and inference up to the proprietary implementation to support full flexibility in the automation of the network.

AI/ML for RAN enhancements

Three use cases have been identified in the Release 17 study item related to RAN performance enhancement by using AI/ML techniques. Selected use cases from the Release 17 technical report will be taken into the normative phase in the next releases. The selected use cases are: 1) network energy saving; 2) load balancing; and 3) mobility optimization.

The selected use cases can be supported by enhancements to current NR interfaces, targeting performance improvements using AI/ML functionality in the RAN while maintaining the 5G NR architecture. One of the goals is to ensure vendor incentives in terms of innovation and competitiveness by keeping the AI model implementation specific. As shown in Fig.2 (on the top) an intent-based management approach can be adopted for use cases involving RAN-OAM interactions. The intent will be received by the RAN. The RAN will need to understand the intent and trigger certain functionalities as a result.

AI/ML for physical layer enhancements

It is generally expected that AI/ML functionality can be used to improve the radio performance and/or reduce the complexity/overhead of the radio interface. 3GPP TSG RAN has selected three use cases to study the potential air interface performance improvements through AI/ML techniques, such as beam management, channel state information feedback enhancement, and positioning accuracy enhancements for different scenarios. The AI/ML-based methods may provide benefits compared to traditional methods in the radio interface. The challenge will be to define a unified AI/ML framework for the air interface by adequate AI/ML model characterization using various levels of collaboration between gNB and UE.

AI/ML in 5G core

5G Advanced will provide further enhancements of the architecture for analytics and on ML model life-cycle management, for example, to improve correctness of the models. The advancements in the architecture for analytics and data collection serve as a good foundation for AI/ML-based use cases within the different network functions (NFs). Additional use cases will be studied where NFs make use of analytics with the target to support in their decision making, for example, network data analytics functions (NWDAF)-assisted generation of UE policy for network slicing.

If you are interested in studying this topic further, check out 3GPP TR 37.817: Study on enhancement for data collection for NR and EN-DC. Download the latest version from here.

Related Posts

Tuesday 2 August 2022

GSMAi Webinar: Is the Industry Moving Fast Enough on Standalone 5G?

I recently participated in a webinar, discussing one of my favourite topics, 5G Standalone (5G SA). If you do not know about 5G SA, you may want to quickly watch my short and simple video on the topic here.

Last year I blogged about GSA's 5G Standalone webinar here. That time we discussed why 5G SA was taking time to deliver, and it was a similar story this time. Things are changing though, and you will see a lot more of these standalone networks later this year and early next year.

The slides of the webinar are available here and the video is embedded below:

Here are some of my thoughts on why 5G SA is taking much longer than most people anticipated:

  • 5G SA will force operators to move to the 5G core, which is a completely new architecture. This transition is taking much longer than expected, especially if there are a lot of legacy services that need to be supported.
  • Many operators are moving towards a converged core with 4G & 5G support to simplify the core. This transition is taking a long time.
  • To take complete advantage of the 5G architecture, a cloud-native implementation is required. Some operators have already started the transition to cloud native but others are lagging.
  • 5G SA speeds will be lower than NSA speeds, hence some operators who don't have a lot of mid-band spectrum are delaying their 5G SA rollouts.
  • Many operators have managed to reduce their latency as they start to move to edge datacentres, hence the urgency for 5G Standalone has reduced.
  • Most operators do not see any new revenue opportunities because of 5G SA, hence they want to be completely ready before rolling it out.
  • Finally, you may hear a lot about there not being enough devices supporting 5G SA, but that's not the device manufacturers' view. See this tweet from GSA 👇

Do you agree with my reasoning? If not, please let me know in the comments.

Related Posts

Monday 25 July 2022

Demystifying and Defining the Metaverse

There is no shortage of Metaverse papers and articles as it is the latest trend in the long list of technologies promising to change the world. A couple of months back I wrote a post about it on the 6G blog here.

IEEE hosted a Metaverse Congress with the Kickoff Session 'Demystifying and Defining the Metaverse' this month as can be seen in the Tweet above. The video embedded below covers the following talks:

  • 0:01:24 - Opening Remarks by Eva Kaili (Vice President, European Parliament)
  • 0:09:51 - Keynote - Metaverse Landscape and Outlook by Yu Yuan (President-Elect, IEEE Standards Association)
  • 0:29:30 - Keynote - Through the Store Window by Thomas Furness (“Grandfather of Virtual Reality”)
  • 0:52:30 - Keynote - XR: The origin of the Metaverse as Water-Human-Computer Interaction (WaterHCI) by Steve Mann (“Father of Wearable Computing”)
  • 1:22:17 - Keynote - A Vision of the Metaverse: AI Infused, Physically Accurate Virtual Worlds by Rev Lebaredian (VP of Omniverse & Simulation Technology, NVIDIA)

Some fantastic definitions, explanations, use cases and vision on the Metaverse. The final speaker nicely summarised the Metaverse, as shown in the slide below.

Worth highlighting is point 6, that the Metaverse is device independent. I argued something similar about the tendency to link everything to 6G (like we linked everything to 5G before). We are just in the beginning phase; a lot of updates and clarifications will come in the next few years before the Metaverse starts taking its final shape.

Related Posts

Monday 18 July 2022

APT 600 MHz Band Gets Approval from 3GPP

The current 600 MHz 5G band (n71) is getting an extension as 3GPP approves the plan for the APT 600 MHz band. Back in April, the 29th meeting of the APT Wireless Group (AWG-29), organized by the Asia Pacific Telecommunity (APT), concluded with the final approval of the new APT 600 MHz band plan, which is hoped to open up an additional 40+40 MHz of prime UHF spectrum. A similar approach back in 2013 resulted in the 45+45 MHz band plan in the 700 MHz band, known in 3GPP as n28.

3GPP TSG RAN 96 (all docs here) approved a new work item to standardize the APT 600 MHz band plan which was initially proposed by the ITU-APT Foundation of India (IAFI).

RP-221778 (revision of RP-221062), provides a detailed justification for this new band. Quoting from the document:

The 470-694 MHz frequency range is allocated to the broadcasting service and mobile service on a co-primary basis in ITU Region 3. The frequency band 470-698 MHz, or parts thereof, was identified by WRC-15 in 7 countries in Region 3 through new footnote No. 5.296A for use by those administrations as listed wishing to implement terrestrial IMT systems. In addition, there is interest from other significant markets to do the same. Elsewhere, USA, Mexico and several other countries in ITU Region 2 also identified this band for IMT through footnotes 5.295 and 5.308A. It is noted that resolves 2 of revised Resolution 224 (Rev.WRC-19) to encourage administrations to take into account results of the existing relevant ITU Radiocommunication Sector studies, when implementing IMT applications/systems in the frequency bands 694-862 MHz in Region 1, in the frequency band 470-806 MHz in Region 2, in the frequency band 790-862 MHz in Region 3, in the frequency band 470-698 MHz, or portions thereof, for those administrations mentioned in No. 5.296A, and in the frequency band 698-790 MHz, or portions thereof, for those administrations mentioned in No. 5.313A.

Spectrum below 1 GHz is expectedly well suited for mobile broadband applications.  In particular, the unique propagation characteristics of the bands below 1 GHz allow for wider area coverage, which in turn requires fewer infrastructures and facilitates service delivery to rural or sparsely populated areas. In this regard, the 700MHz ecosystem is growing swiftly: there are over 34 commercial network deployments.  The APT700 band plan coming out from Region 3 played a huge role in its success globally. Outside of APAC, countries in Region 2 have adopted or plan to adopt the APT700 band plan (3GPP band 28) for LTE system deployments. The lower duplexer of APT700 plan has also been adopted for Region 1 since the conclusion of WRC-15.

As the utilisation of the 700MHz spectrum increases over time, it is desirable to look at additional spectrum that could be considered as a companion besides 3GPP Band 28. Therefore, the use of parts of the 600MHz band for the mobile broadband service would provide a vital means of delivering high quality, wide area broadband services including in rural areas and deep inside buildings. The timely availability of frequency arrangements is essential for the development of IMT specifications and standards and the early consideration by Administrations in the footnotes referred to above of suitable frequency arrangements. 

The APT region is very diverse and consists of highly developed and developing countries, some with extremely large rural population bases. The sub-1 GHz bands are well suited for the latter.

During the last year or so, 3GPP RAN 4 has completed a study item on the feasibility of various duplex filter options for use in this band. The results of this study are documented in TR 38.860. This study was sent to the AWG in an LS RP-212629 in Sep 2021 with a request to provide guidance on a preferred band plan and information on regulatory aspects for the normative work to begin. The AWG 28 meeting has considered the request of the 3GPP and has provided a response to this LS. In this response the LS has indicated a preference for option B1 (full band) and has also requested for the work to begin immediately with a view to completion by Dec 2022. Additionally, the answers to the regulatory questions sought by the 3GPP have now been provided via a reply LS RP 221045.

The band plan for option B1, which has a single duplexer (full band), is shown in Table 1 below.

The Tx-Rx is "reverse-duplex"; in other words, the downlink frequency band is below the duplex gap while the uplink frequency band is above the duplex gap. This arrangement is opposite to conventional notation; however, for this band, it provides the benefit of aligning the uplink band adjacent to 3GPP band 28 thereby minimizing interference conditions at the 703 MHz boundary.

Accordingly, the companies listed here request 3GPP to start normative work on the following option. 

  • Option B1 with a single duplexer 

Anyone interested in studying this further might want to refer to 3GPP TR 38.860: Study on Extended 600 MHz NR band.

Related Posts

Tuesday 5 July 2022

5G and Cyber Security

Dr. Seppo Virtanen is an Associate Professor in Cyber Security Engineering and Vice Head of the Department of Computing at the University of Turku, Finland. At 5G Hack The Mall 2022, he presented a talk on Cybersecurity and 5G.

In the talk he covered the following topics:

  • Cybersecurity and Information Security
  • The CIA (Confidentiality, Integrity and Availability) Model
    • Achieving the goals of the CIA model
  • Intrusion and Detection
    • Intrusion detection, mitigation and aftercare
  • Smart Environments
    • Abstraction levels
    • Cybersecurity in smart environments
    • Cyber security concerns in smart environments
    • Security concerns in Smart Personal Spaces
    • Security concerns in Smart Rooms and Buildings
    • Security concerns of a participant in a smart environment
    • Cyber Security Concerns in Smart Environments
  • Cyber Security in the 5G context
  • Drivers for 5G security
  • Securing 5G

The video embedded below is a nice introduction to cybersecurity and how it overlaps with 5G:

Related Posts:

Tuesday 28 June 2022

3GPP Explains TSG CT Work on UAS Connectivity, Identification and Tracking

Drones, technically Unmanned Aerial Vehicles/Systems or UAVs/UASs, have been a subject of interest for a very long time due to the wide variety of use cases they can offer. In a recent issue of the 3GPP Highlights newsletter, Lena Chaponniere, 3GPP Working Group CT1 Vice-Chair, has written an article about the TSG CT work on UAS Connectivity, Identification and Tracking. Interestingly, 3GPP expands the UAS abbreviation slightly differently, as Uncrewed Aerial Systems.

Quoting from the newsletter: 

One of the defining drivers of 5G is the expansion beyond traditional mobile broadband to provide solutions meeting the needs of vertical industries.

A very good example of 3GPP rising up to this challenge is the work done in Release 17 to use cellular connectivity to support Uncrewed Aerial Systems (UAS), thereby enabling this vertical to benefit from the ubiquitous coverage, high reliability, QoS, robust security, and seamless mobility provided by the 3GPP system.

A key component of this work took place in CT Working Groups, which under the leadership of Sunghoon Kim (CT Work Item rapporteur) and Waqar Zia (rapporteur of new specifications TS 29.255 and TS 29.256) developed the necessary protocols and APIs to meet the service requirements specified in 3GPP SA1 and the architectural enhancements specified in 3GPP SA2, as part of the Release 17 Work Item on ‘ID_UAS’.

The key functions of the 3GPP architecture for ID_UAS are depicted in the following figure:

The work in CT Working Groups focused on specifying support for the following features:

UAV remote identification: The CAA (Civil Aviation Administration)-Level UAV ID was introduced in the 3GPP system. It is a globally unique, electronically and physically readable, and tamper resistant identification which allows the receiving entity to address the correct USS for retrieval of UAV information and can be assigned solely by the USS, via means outside the scope of 3GPP, or assigned by the USS with assistance from 3GPP system, whereby the USS delegates the role of “resolver” of the CAA-Level UAV ID to the UAS NF.

UAV USS authentication and authorization (UUAA): The first step for the owner of the UAV is to register the UAV with the USS, via a procedure outside the scope of 3GPP, which can take place offline or using internet connectivity. During this procedure, the CAA-level UAV ID is configured in the UAV and the aviation-level information (e.g. UAV serial number, pilot information, UAS operator, etc.) is provided to the USS.

The UE at the UAV then registers with the 3GPP system by using existing procedures for 3GPP primary authentication, with the MNO credentials stored in the USIM.

After successful authentication of the UE, the UUAA procedure is performed, to enable the 3GPP Core Network to verify that the UAV has successfully registered with the USS. In 5GS, this procedure can take place during the 3GPP registration, or during the establishment of a PDU session for UAS services.

For the former, CT1 extended the registration procedure in TS 24.501 to enable the UE to indicate its CAA-Level UAV ID into a new container (Service-level-AA container) included in the Registration Request message, which triggers the AMF to initiate UUAA with the USS by invoking the Nnef_Authentication service toward the UAS NF, as specified by CT4 in new specification TS 29.256, and the UAS NF to invoke the Naf_Authentication service toward the USS, as specified by CT3 in new specification TS 29.255.

For the latter, CT1 extended the PDU session establishment procedure in TS 24.501 to enable the UE to indicate its CAA-Level UAV ID via the Service-level-AA container included in the PDU Session Establishment Request message, which triggers the SMF to initiate UUAA with the USS via the UAS NF by invoking the services mentioned above. In order to enable exchanging the authentication messages between the UE and the USS, CT1 specified a new Session Management procedure in TS 24.501, in which the SMF sends a Service-level Authentication Command to the UE in a Downlink NAS Transport message. The UE replies to this command with a Service-level Authentication Complete carried in an Uplink NAS Transport message. In EPS, the UUAA procedure takes place during PDN connection establishment, and the information exchanged to that end between the UAV and the PGW is carried in the Service-level-AA container included in the ePCO.

C2 communication over cellular connectivity: C2 communication over cellular connectivity consists of the UAV establishing a user plane connection to receive C2 messages from a UAVC, or to report telemetry data to a UAVC. Authorization for C2 communication by the USS is required and includes authorization for pairing of the UAV with a UAVC, as well as flight authorization for the UAV.

C2 communication authorization may be performed:

  • during the UUAA procedure (if UUAA is carried out at PDU session/PDN connection establishment) when the UAV requests establishment of a PDU Session/PDN connection for both UAS services and C2 communication
  • during PDU session modification/UE requested bearer resource modification when the UAV requests to use an existing PDU session/PDN connection for C2 communication
  • during a new PDU session/PDN connection establishment, if the UAV requests to use a separate PDU Session/PDN connection for C2 communication

To support this, CT1 extended the PDU session establishment and modification procedures in TS 24.501 to enable inclusion of the CAA-level UAV ID and an application layer payload containing information for UAVC pairing and for UAV flight authorization in the Service-level-AA container carried in the PDU Session Establishment Request and PDU Session Modification Request messages. The ePCO Information Element in TS 24.008 was also extended to enable it to include the above-mentioned information.

UAV location reporting and tracking: UAV location reporting and tracking was specified by CT3 and CT4 by re-using the existing Nnef_EventExposure service specified in TS 29.522 with the UAS NF acting as NEF/SCEF and interacting with other network functions (e.g. GMLC and AMF/MME) to support UAV tracking. The following tracking modes were specified:

  • UAV location reporting mode: the USS subscribes to the UAS NF to be notified of the location of the UAV, and can indicate the required location accuracy and whether the request is for immediate reporting or deferred reporting (e.g. periodic reporting)
  • UAV presence monitoring mode: the USS subscribes for the event report of UAV moving in or out of a given geographic area
  • List of Aerial UEs in a geographic area: the USS requests the UAS NF to report a list of the UAVs in a given geographic area and served by the PLMN.

The PDF of newsletter is available here.
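
To picture the registration-time UUAA exchange described in the newsletter, here is a toy walk-through of the message flow. The entity names follow the article, but the messages are heavily simplified (the real encodings live in TS 24.501, TS 29.256 and TS 29.255) and the UAV ID is a made-up example:

```python
def uuaa_at_registration(caa_level_uav_id):
    """Toy walk-through of the registration-time UUAA flow described above.
    Entities and message names are simplified for illustration only."""
    steps = [
        ("UE -> AMF",     f"Registration Request (Service-level-AA container: {caa_level_uav_id})"),
        ("AMF -> UAS NF", "Nnef_Authentication request (trigger UUAA)"),
        ("UAS NF -> USS", "Naf_Authentication request"),
        ("USS -> UAS NF", "authentication result for the UAV"),
        ("UAS NF -> AMF", "Nnef_Authentication response"),
        ("AMF -> UE",     "Registration Accept (UUAA successful)"),
    ]
    for hop, message in steps:
        print(f"{hop:15s} {message}")

uuaa_at_registration("CAA-UAV-ID-EXAMPLE-123")   # hypothetical CAA-Level UAV ID
```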

Related Posts

Thursday 16 June 2022

What is a Multi-Band Cell?

Multi-band cells have become very popular in modern RAN environments and, beside their many benefits, they also come with some challenges for performance measurement and radio network optimization.

A multi-band cell consists of a default band that shall be used by UEs for initial cell selection and a set of additional frequency band carriers that typically become involved as soon as a dedicated radio bearer (DRB) for payload transmission is established in the radio connection.

The exact configuration of a multi-band cell, including all available frequency bands, is broadcast in SIB 1 as shown in the example below.

Different from legacy RAN deployments, where – to take the example of an LTE cell – a pair of PCI/eARFCN (Physical Cell Identity/eUTRAN Absolute Radio Frequency Number) always matches a particular ECGI (eUTRAN Cell Global Identity), a multi-band cell has many different PCI/eARFCN combinations belonging to a single ECGI, as you can see in the next figure.

Now, performance measurement (PM) counters, e.g. for call drops, are typically counted against the cell ID (ECGI) and thus, in the case of multi-band cells, do not reveal on which frequency a radio link failure occurred.

However, knowing the frequency is essential to optimize the radio network and minimize connectivity problems. More detailed information must be collected to find out which of the different frequency bands performs well and which need improvement.
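
As a small illustration of the problem (all identifiers and values below are hypothetical), counting drops per ECGI hides the affected band, whereas counting per PCI/eARFCN pair reveals it:

```python
from collections import Counter

# Hypothetical per-event drop records from one multi-band cell: every record carries
# the ECGI plus the PCI/eARFCN pair on which the radio link failure occurred.
drops = [
    {"ecgi": "26201-100-17", "pci": 101, "earfcn": 6300},   # Band #1
    {"ecgi": "26201-100-17", "pci": 102, "earfcn": 1444},   # Band #2
    {"ecgi": "26201-100-17", "pci": 103, "earfcn": 3050},   # Band #3
    {"ecgi": "26201-100-17", "pci": 103, "earfcn": 3050},
]

per_cell = Counter(d["ecgi"] for d in drops)
per_band = Counter((d["ecgi"], d["pci"], d["earfcn"]) for d in drops)

print("Per ECGI (classic PM counter):", dict(per_cell))    # hides which band failed
print("Per PCI/eARFCN (what is needed):", dict(per_band))  # reveals the problem band
```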

This becomes even more interesting if multi-band cells are used in MORAN RAN sharing scenarios.

In my next blog post I will have a closer look at this special deployment.

Related Posts: