Thursday, 24 July 2025

L4S and the Future of Real-Time Performance in 5G and Beyond

As mobile networks continue to evolve to support increasingly immersive and responsive services, the importance of consistent low latency has never been greater. Whether it is cloud gaming, extended reality, remote machine operation or real-time collaboration, all these applications rely on the ability to react instantly to user input. The slightest delay can affect the user experience, making the role of the network even more critical.

While 5G has introduced major improvements in radio latency and overall throughput, many time-critical applications are still affected by a factor that is often overlooked: queuing delay. This occurs when packets build up in buffers before they are forwarded, creating spikes in delay and jitter. Traditional methods for congestion control, such as those based on packet loss, are too slow to react, especially in mobile environments where network conditions can change rapidly.

Low Latency, Low Loss and Scalable Throughput (L4S) is a new network innovation designed to tackle this challenge. It is an Internet protocol mechanism developed through the Internet Engineering Task Force and has recently reached standardisation. L4S focuses on preventing queuing delays by marking packets early as congestion builds, instead of waiting until buffers overflow and packets are dropped. The key idea is to use explicit signals within the network to guide congestion control at the sender side.

Applications that support L4S can reduce their sending rate quickly as soon as congestion starts to appear. This relies on Explicit Congestion Notification (ECN): the network marks packets rather than dropping them, and the sender adjusts its rate according to the proportion of marked packets it observes. The result is a smooth and continuous flow of data, where latency remains low and throughput remains high, even in changing network conditions.
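To make this more concrete, here is a minimal sketch in Python of how an L4S-style sender might react to ECN marks: it backs off in proportion to the fraction of marked packets seen each round trip, rather than halving its rate after a loss. The class name and constants are illustrative only, not taken from any real implementation such as TCP Prague.

```python
# Illustrative sketch of a scalable, ECN-driven rate controller.
# All names and constants here are hypothetical.

class ScalableRateController:
    def __init__(self, rate_bps: float, gain: float = 1 / 16):
        self.rate_bps = rate_bps   # current sending rate
        self.gain = gain           # EWMA gain for smoothing the marking fraction
        self.alpha = 0.0           # smoothed estimate of the fraction of marked packets

    def on_round_trip(self, packets_sent: int, packets_marked: int) -> float:
        """Update the rate once per RTT based on ECN CE marks reported by the receiver."""
        if packets_sent == 0:
            return self.rate_bps
        frac = packets_marked / packets_sent
        # Smooth the marking fraction so one marked packet does not cause a large swing.
        self.alpha = (1 - self.gain) * self.alpha + self.gain * frac
        if self.alpha > 0:
            # Back off in proportion to the marking level (shallow, frequent signals),
            # rather than halving the rate as loss-based congestion control would.
            self.rate_bps *= (1 - self.alpha / 2)
        else:
            # Gently probe for more bandwidth when no congestion is signalled.
            self.rate_bps *= 1.01
        return self.rate_bps
```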

One of the significant benefits of L4S is its ability to support a wide range of real-time services at scale. Ericsson highlights how edge-based applications such as cloud gaming, virtual reality and drone control need stable low-latency connections alongside high bitrates. While over-the-top approaches to congestion control may work for general streaming, they struggle in mobile environments. This is due to variability in channel quality and radio access delays, which can cause sudden spikes in latency. L4S provides a faster and more direct way to detect congestion within the radio network, enabling better performance for these time-sensitive applications.

To make this possible, mobile networks need to support L4S in a way that keeps its traffic separate from traditional data flows. This involves using dedicated queues for L4S traffic to ensure it is not delayed behind bulk data transfers. In 5G, this is implemented through dedicated quality-of-service flows, allowing network elements to detect and handle L4S traffic differently. For example, if a mobile user is playing a cloud-based game, the network can identify this traffic and place it on an L4S-optimised flow. This avoids interference from other applications, such as file downloads or video streaming.
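Purely as a conceptual illustration, the snippet below shows the classification idea: packets carrying an L4S ECN codepoint are steered onto a separate low-latency queue. In a real 5G network this separation is configured through QoS flows and packet detection rules rather than code like this, so treat the queue objects as hypothetical placeholders.

```python
# Conceptual sketch only: separating L4S traffic (identified by the ECT(1) ECN codepoint)
# from classic traffic. The queue arguments are hypothetical placeholders.

ECN_MASK = 0b11   # ECN occupies the two least-significant bits of the TOS / Traffic Class byte
ECT_1    = 0b01   # the codepoint L4S senders set on their packets
CE       = 0b11   # "congestion experienced", set by the network when marking

def select_queue(tos_byte: int, l4s_queue, classic_queue):
    """Place a packet on the low-latency queue if it carries an L4S-related codepoint."""
    ecn = tos_byte & ECN_MASK
    if ecn in (ECT_1, CE):
        return l4s_queue      # shallow-buffered, ECN-marking queue
    return classic_queue      # conventional deep buffer for loss-based flows
```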

Nokia's approach further explains how L4S enables fair sharing of bandwidth between classic and L4S traffic without compromising performance. A dual-queue system allows both types of traffic to coexist while preserving the low-latency characteristics of L4S. This is especially important in scenarios where both legacy and L4S-capable applications are in use. In simulations and trials, the L4S mechanism has shown the ability to maintain very low delay even when the link experiences sudden reductions in capacity, which is common in mobile and Wi-Fi networks.
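The coupling between the two queues can be sketched roughly as follows, loosely following the structure described in RFC 9332: the classic queue derives a base congestion signal, classic flows experience its square as a drop probability, and L4S flows receive an ECN-marking probability coupled to it. This is a simplified illustration of the idea, not a faithful implementation of the algorithm specified in the RFC.

```python
# Rough sketch of the coupling idea in the Dual-Queue Coupled AQM (RFC 9332).
# Constants and structure are simplified for illustration.

K = 2.0  # typical default coupling factor

def coupled_probabilities(p_base: float, p_l4s_native: float) -> tuple[float, float]:
    """Return (classic drop probability, L4S ECN-marking probability)."""
    p_classic = min(p_base ** 2, 1.0)     # squaring keeps classic drops rare
    p_coupled = min(K * p_base, 1.0)      # L4S flows get earlier, gentler signals
    p_l4s = max(p_l4s_native, p_coupled)  # native shallow-queue marking can dominate
    return p_classic, p_l4s
```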

One of the important aspects of L4S is that it requires support both from the application side and within the network. On the application side, rate adaptation based on L4S can be implemented within the app itself, often using modern transport protocols such as QUIC or TCP extensions. Many companies, including device makers and platform providers, are already trialling support for this approach.
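As a hedged example of what the application side involves at the lowest level, the snippet below sends UDP datagrams carrying the ECT(1) codepoint that identifies L4S traffic. A production QUIC stack handles this internally and also needs to read congestion marks reported back by the receiver; the address and port used here are purely illustrative.

```python
import socket

# Sketch only: setting the ECN bits via IP_TOS works for datagram sockets on Linux;
# other platforms differ, and receiving CE marks requires further socket options.

ECT_1 = 0b01  # ECN codepoint used by L4S senders

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT_1)   # low two bits of TOS = ECN field
sock.sendto(b"media frame", ("192.0.2.10", 4433))          # hypothetical server address/port
```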

Within the network, L4S depends on the ability of routers and radio access equipment to read and mark ECN bits correctly. In mobile networks, the radio access network is typically the key bottleneck where marking should take place. This ensures that congestion is detected at the right point in the path, allowing for quicker response and improved performance.

Although L4S is distinct from ultra-reliable low-latency communication, it can complement it in use cases where guaranteed service is needed in controlled environments. What makes L4S more versatile is its scalability and suitability for open internet and large-scale public network use. It can work across both fixed and mobile access networks, providing a common framework for interactive services regardless of access technology.

With L4S in place, it becomes possible to offer new kinds of applications that were previously limited by latency constraints. This includes lighter and more wearable XR headsets that can offload processing to the cloud, or port automation systems that rely on remote control of heavy equipment. Even everyday experiences, such as video calls or online gaming, stand to benefit from a more responsive and stable network connection.

Ultimately, L4S offers a practical and forward-looking approach to delivering the consistent low latency needed for the next generation of digital experiences. By creating a tighter feedback loop between the network and the application, and by applying congestion signals in a more intelligent way, L4S helps unlock the full potential of 5G and future networks.

This introductory video by CableLabs is a good starting point for anyone wanting to dig deeper into the topic. This LinkedIn post by Dean Bubley and the comments are also worth a read.

PS: Just noticed that T-Mobile USA announced earlier this week that they are the first to unlock L4S in wireless. You can read their blog post here, and a promotional video is available in the Tweet below 👇

Tuesday, 1 July 2025

The Evolution of 3GPP 5G Network Slice and Service Types (SSTs)

The concept of network slicing has been one of the standout features in 5G (no pun intended). It allows operators to offer logically isolated networks over shared infrastructure, each tailored for specific applications or services. These slices are identified using a combination of the Slice/Service Type (SST) and an optional Slice Differentiator (SD), together forming what is called a Single Network Slice Selection Assistance Information (S-NSSAI).
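For readers who like to see the structure spelled out, here is a small sketch of how an S-NSSAI is composed from its two fields: an 8-bit SST and an optional 24-bit SD. The class itself is just for illustration, not how the fields are carried on the wire.

```python
# Minimal sketch of an S-NSSAI: SST (8 bits) plus optional SD (24 bits).

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SNSSAI:
    sst: int                  # Slice/Service Type, 0-255
    sd: Optional[int] = None  # Slice Differentiator, 24-bit value, optional

    def __post_init__(self):
        if not 0 <= self.sst <= 0xFF:
            raise ValueError("SST must fit in 8 bits")
        if self.sd is not None and not 0 <= self.sd <= 0xFFFFFF:
            raise ValueError("SD must fit in 24 bits")

# Example: an eMBB slice (SST 1) distinguished from another eMBB slice by its SD.
embb_default = SNSSAI(sst=1)
embb_enterprise = SNSSAI(sst=1, sd=0x00ABCD)
```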

To ensure global interoperability and support for roaming scenarios, 3GPP standardises a set of SST values. These are intended to provide common ground across public land mobile networks for the most prevalent slice types. Over the course of different 3GPP releases, the list of standardised SST values has grown to reflect emerging use cases and evolving requirements.

The foundation was laid in Release 15, where the first three SST values were introduced. SST 1 represents enhanced Mobile Broadband (eMBB), suitable for high throughput services like video streaming, large file downloads and augmented reality. SST 2 refers to Ultra-Reliable and Low-Latency Communications (URLLC), designed for time-sensitive applications such as factory automation, remote surgery and smart grids. SST 3 is for Massive Internet of Things (mIoT - earlier referred to as mMTC), tailored for large-scale deployments of low-power sensors in use cases such as smart metering and logistics.

The first major extension came with Release 16, which introduced SST 4 for Vehicle-to-Everything (V2X) services. This slice type addresses the requirements of connected vehicles, particularly in terms of ultra low latency, high reliability and localised communication. It was the first time a vertical-specific slice type was defined.

With Release 17, the slicing framework was extended further to include SST 5, defined for High-Performance Machine-Type Communications (HMTC). This slice is aimed at industrial automation and use cases that require highly deterministic and reliable communication patterns between machines. It enhances the original URLLC profile by refining it for industrial-grade requirements.

Recognising the growing importance of immersive services, Release 18 added SST 6, defined for High Data Rate and Low Latency Communications (HDLLC). This slice targets extended reality, cloud gaming and other applications that simultaneously demand low delay and high bandwidth. It goes beyond what enhanced Mobile Broadband or URLLC individually offer by addressing the combination of both extremes. The specification describes this slice as suitable for extended reality and media services, underlining the increasing focus on immersive technologies and their networking needs.

Finally, Release 19 introduced SST 7 for Guaranteed Bit Rate Streaming Services (GBRSS). This new slice supports services where continuous, guaranteed throughput is essential. It is particularly relevant for live broadcasting, high-definition streaming, or virtual presence applications where quality must not be allowed to degrade during a session.
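Putting the releases together, the standardised values discussed above can be summarised as a simple lookup; the dictionary below is purely a recap for convenience, not a normative data structure.

```python
# Recap of the standardised SST values and the releases that introduced them.
STANDARDISED_SSTS = {
    1: "eMBB  - enhanced Mobile Broadband (Rel-15)",
    2: "URLLC - Ultra-Reliable Low-Latency Communications (Rel-15)",
    3: "MIoT  - Massive IoT, earlier mMTC (Rel-15)",
    4: "V2X   - Vehicle-to-Everything services (Rel-16)",
    5: "HMTC  - High-Performance Machine-Type Communications (Rel-17)",
    6: "HDLLC - High Data Rate and Low Latency Communications (Rel-18)",
    7: "GBRSS - Guaranteed Bit Rate Streaming Services (Rel-19)",
}
```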

This gradual and deliberate expansion of standardised SSTs highlights how 5G is not a one-size-fits-all solution. Instead, it is a dynamic platform that adapts to the needs of different industries. As use cases grow more sophisticated and diverse, having standardised slice types helps ensure compatibility, simplify device and network configuration, and promote innovation.

It is also worth noting that these SST values are not mandatory for every operator to implement. A network can choose to support a subset based on its service strategy. For example, a public network may prioritise SSTs 1 and 3, while a private industrial deployment might focus on SST 5 or 7.

With slicing increasingly central to how 5G will be "monetised" and deployed, expect this list to keep growing in future releases. Each new SST tells a story about where the telecoms ecosystem is heading.
