Thursday, 21 August 2025

Understanding L1/L2 Triggered Mobility (LTM) Procedure in 3GPP Release 18

In an earlier post we looked at the 3GPP Release 18 Description and Summary of Work Items. One of the key areas was Further NR mobility enhancements, where a new feature called L1/L2-triggered mobility (LTM) has been introduced. This procedure aims to reduce mobility latency and improve handover performance in 5G-Advanced.

Mobility has always been one of the most important areas in cellular networks. The ability of a user equipment (UE) to move between cells without losing service is essential for reliability and performance. Traditional handover procedures in 4G and 5G rely on Layer 3 (L3) signalling, which is robust but can result in high signalling overhead and connection interruption times of 50 to 90 milliseconds. While most consumer services can tolerate this, advanced use cases with strict latency demands cannot.

3GPP Release 18 takes a significant step forward by introducing the L1/L2 Triggered Mobility (LTM) procedure. Instead of relying only on L3 signalling, LTM shifts much of the handover process down to Layer 1 (physical) and Layer 2 (MAC), making it both faster and more efficient. The goal is to reduce interruption to around 20 to 30 milliseconds, a level that can better support applications in ultra-reliable low latency communication, extended reality and mobility automation.

The principle behind LTM is straightforward. The UE is preconfigured with candidate target cells by the network. These configurations can be provided in two ways: either as a common reference with small delta updates for each candidate or as complete configurations. Keeping the configuration of multiple candidates allows the UE to switch more quickly without requiring another round of reconfiguration after each move.
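The "common reference plus delta" idea can be sketched in a few lines of Python. This is purely illustrative: the field names and dictionary structure are invented for this example and bear no relation to the actual RRC/ASN.1 encoding used in the specification.

```python
# Illustrative sketch (not 3GPP ASN.1): candidate LTM target-cell
# configurations expressed as a common reference plus small per-cell
# deltas, rather than complete configurations. All field names invented.

def apply_delta(reference: dict, delta: dict) -> dict:
    """Build a complete candidate configuration by overlaying a
    small delta on top of the common reference configuration."""
    config = dict(reference)   # copy the shared baseline
    config.update(delta)       # override only the fields that differ
    return config

reference_config = {
    "bwp": "bwp-1",
    "pdcch_config": "common-coreset",
    "pusch_power_control": "default",
}

# Per-candidate deltas are much smaller than full configurations.
candidate_deltas = {
    "cell-A": {"pci": 101, "ssb_frequency": 3650},
    "cell-B": {"pci": 102, "ssb_frequency": 3650,
               "pdcch_config": "coreset-b"},
}

candidates = {cell: apply_delta(reference_config, delta)
              for cell, delta in candidate_deltas.items()}
```

Because the UE stores the expanded configurations in advance, selecting a target later is a lookup rather than a fresh reconfiguration exchange.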

Measurements are then performed at lower layers. The UE reports reference signal measurements and time and phase information to the network. Medium Access Control (MAC) control elements are used to activate or deactivate target cell states, including transmission configuration indicator (TCI) states. This ensures the UE is already aware of beam directions and reference signals in the target cells before the actual switch.
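The lower-layer reporting step can be sketched as a simple selection of the strongest beam per candidate cell, which the network could then activate via MAC CE. The tuple format and values below are invented for illustration; real L1-RSRP reporting is far more constrained and quantised.

```python
# Hedged sketch: the UE finds the best beam per candidate cell from
# L1-RSRP measurements; the network would then activate the matching
# TCI states via MAC CE. Structures and values are illustrative only.

def best_beam_per_cell(l1_measurements):
    """l1_measurements: list of (cell_id, beam_id, rsrp_dbm) tuples.
    Returns {cell_id: (beam_id, rsrp_dbm)} for the strongest beam."""
    best = {}
    for cell, beam, rsrp in l1_measurements:
        if cell not in best or rsrp > best[cell][1]:
            best[cell] = (beam, rsrp)
    return best

measurements = [
    ("cell-A", 0, -92.5), ("cell-A", 1, -88.0),
    ("cell-B", 0, -95.0), ("cell-B", 3, -90.5),
]
report = best_beam_per_cell(measurements)
```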

A particularly important innovation in LTM is the concept of pre-synchronisation. Both downlink and uplink pre-synchronisation can take place while the UE is still connected to the serving cell. For downlink, the network instructs the UE to align with a candidate cell’s beams. For uplink, the UE can transmit a random-access preamble towards a target cell, and the network calculates a timing advance (TA) value. This TA is stored and delivered only at the moment of execution, allowing the UE to avoid a new random access procedure. In cases where the TA is already known or equal to that of the serving cell, the handover becomes RACH-less, eliminating a significant source of delay.
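The decision logic described above can be reduced to a small sketch: random access at the target is only needed when no usable TA is available. The function name and inputs are invented for illustration.

```python
# Illustrative sketch of the uplink pre-synchronisation outcome:
# a switch can be RACH-less when a TA was pre-acquired via an early
# preamble, or when the target's TA matches the serving cell's.

def switch_mode(stored_ta, ta_matches_serving):
    """Return 'rach-less' when a usable timing advance is already
    available at execution time, otherwise 'rach-based'."""
    if stored_ta is not None or ta_matches_serving:
        return "rach-less"
    return "rach-based"
```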

The final step is the LTM cell switch command. This MAC control element carries the chosen target configuration, TA value and TCI state indication. Since synchronisation has already been achieved, the UE can break the old connection and resume data transfer almost immediately in the new cell.
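The information bundled into the cell switch command can be sketched as a small record tying together the three pieces mentioned above. This is not the actual MAC CE bit layout; the field names are invented for illustration.

```python
from dataclasses import dataclass

# Hedged sketch of the information carried by the LTM cell switch
# command MAC CE, per the description above. Field names are
# illustrative, not the real octet-level encoding.

@dataclass
class LtmCellSwitchCommand:
    target_config_id: int   # index of a preconfigured candidate config
    timing_advance: int     # stored TA, delivered at execution time
    tci_state_id: int       # beam indication for the target cell

def execute_switch(cmd, candidates):
    """Look up the preconfigured target configuration and return it
    with the TA and TCI state to apply immediately at the switch."""
    config = candidates[cmd.target_config_id]
    return {"config": config,
            "ta": cmd.timing_advance,
            "tci": cmd.tci_state_id}
```

Since everything the UE needs is either preconfigured or carried in this single command, no further signalling round-trip is required before data resumes in the target cell.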

Compared to earlier attempts such as Dual Active Protocol Stack (DAPS) handover, which required maintaining two simultaneous connections and faced practical limitations, LTM offers a more scalable solution. It can be applied across frequency ranges, including higher bands above 7 GHz where beamforming is critical, and it works for both intra-DU and inter-DU mobility within a gNB.

The Release 18 specification restricts LTM to intra-gNB mobility, but work has already begun in Release 19 to expand it further. Future enhancements are expected to cover inter-gNB mobility and to refine measurement reporting for even greater efficiency.

Looking beyond 5G Advanced, new concepts are being explored for 6G. At the Brooklyn 6G Summit 2024, MediaTek introduced the idea of L1/L2 Triggered Predictive Mobility (LTPM), where predictive intelligence could play a role in mobility decisions. While this is still at an early research stage, it points to how mobility management will continue to evolve.

For now, the introduction of LTM marks a practical and important milestone. By reducing handover latency significantly, it brings the network closer to meeting the demanding requirements of next generation services while maintaining efficiency in signalling and resource use.

Related Posts

Friday, 8 August 2025

Is 6G Our Last Chance to Make Antennas Great Again?

At the CW TEC 2025 conference hosted by Cambridge Wireless, veteran wireless engineer Moray Rumney delivered a presentation that challenged the direction the mobile industry has taken. With decades of experience and a sharp eye for what matters, he highlighted a growing and largely ignored problem: the steady decline in the efficiency of antennas in mobile devices.

The evolution of mobile technology has delivered remarkable achievements. From the early days of GSM to the promises of 5G and the ambition of 6G, the industry has continually pushed for higher speeds, more features and greater spectral efficiency. Yet along the way, something essential has been lost. While much of the focus has been on network-side innovation and baseband complexity, the performance of the user device antenna has deteriorated to the point where it is now undermining the potential benefits of these advancements.

According to Moray, antenna performance in smartphones has declined by around 15 decibels since the transition from the external antennas of 2G to today’s smartphones. That level of loss has a profound impact. A poor antenna reduces both transmitted and received signal strength. On the uplink side, this means users need to push more power to the network, which drains battery life faster. On the downlink, it forces the network to compensate with stronger transmissions, increasing inter-cell interference and lowering cell-edge throughput. Ultimately, this undermines the overall efficiency and quality of mobile networks. Cell-edge performance and indoor coverage are both significantly degraded.
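To put the 15 dB figure in linear terms, a decibel change converts to a power ratio via 10^(dB/10), as the quick sketch below shows.

```python
# Arithmetic behind the ~15 dB figure quoted above: converting a
# decibel value to a linear power ratio.

def db_to_linear(db: float) -> float:
    """Convert a dB value to a linear power ratio: 10^(dB/10)."""
    return 10 ** (db / 10)

loss_factor = db_to_linear(15)   # roughly a 31.6x power penalty
```

In other words, a 15 dB antenna deficit means the link budget is worse by a factor of over thirty, which either the handset transmitter or the network must make up.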

The root of the problem lies in modern smartphone design priorities. Over the years, devices have become slimmer, more stylish and packed with more features. In this pursuit of sleekness, antennas have been compromised. External antennas gave way to internal ones, squeezed into tight spaces surrounded by metal and glass. The visual appeal of the phone has taken precedence over its radio performance. On a technical level, the explosion in the number of supported bands and the increased use of multi-antenna transceivers optimised for high performance in excellent conditions have reduced the available space for each antenna, reducing antenna gain accordingly.

This issue was particularly pronounced during the LTE era, when the standards bodies failed to define any radiated performance requirements. Handset performance is assessed on conducted power, which can appear satisfactory in laboratory conditions. However, once the signal passes through the device's real antenna, the result is often a significant loss: real-world radiated performance does not match conducted lab measurements.

One of Moray's more memorable illustrations compared the situation to a tube of toothpaste. The conducted performance, which all devices meet, is like a full tube. But years passed before radiated requirements were finally defined for a few bands in 5G, and in the meantime products with inferior radiated performance reached the market, putting downward pressure on the requirements that were eventually agreed – like squeezing out the toothpaste. What is left today is a small residue of what used to be, and once compromised, this trend is extremely difficult to reverse.

He also pointed out a structural problem in how mobile standards are developed. The focus is disproportionately placed on baseband processing and theoretical possibilities, rather than on end-user experience and what actually gets deployed. As new generations arrive, more complexity is added, yet basic aspects like antenna efficiency are overlooked. Testing practices further entrench the problem, as the use of a 50-ohm connector during lab testing limits the scope for real antenna improvements, preventing designers from achieving optimal matching and performance.

Despite all the talk of 6G and beyond, the reality on the ground is less impressive. The UK currently ranks 59th in global mobile speed tests. This is not because of a lack of advanced standards or spectrum, but because of poor deployment decisions and device-related issues like inefficient antennas. It is not a technology gap but a failure to focus on basics that truly matter to users.

Moray argued that significant progress could be made without waiting for 6G. Regulatory bodies could introduce minimum standards for antenna performance, as was once attempted in Denmark. Device certification could include antenna efficiency ratings, encouraging manufacturers to prioritise performance. Networks could enforce stricter indoor coverage targets, and pricing models could be rethought to reduce the strain caused by low-value, high-volume traffic.

He also called attention to battery life, another casualty of inefficient antennas and poor design decisions. Users now routinely carry power banks to get through the day. This is hardly a sign of progress, especially considering the environmental impact of producing and charging these extra devices.

In conclusion, while the industry continues to chase ambitious visions for future generations of mobile technology, there is an urgent need to fix the basics. Antennas are not an exciting topic, but they are fundamental. Without efficient antennas, all the investment in infrastructure, spectrum and software optimisation is wasted. It is time for the industry to refocus, reassess and revalue the importance of the one component every user relies on, but rarely sees.

It really is time to make antennas great again.

Moray’s presentation is embedded below and is available to download from here.
