Tuesday, 23 September 2025

5G+ and 5GA Icon (Pictogram) in New Smartphones

As 5G matures, new icons are appearing on smartphones to distinguish faster or more advanced connections. Some of the latest 5G smartphones around the world have started showing new icons such as 5G+ and 5GA. Interestingly, in Japan these are referred to as pictograms.

A long time ago, we looked at how Swisscom described its 5G rollout as 5G-wide and 5G-fast. Today, Swisscom uses the 5G+ icon to represent what it previously called 5G-fast. In its annual report, Swisscom explained:

5G (and 5G+) is the latest generation of mobile technology. Compared to 3G and 4G, it provides even more capacity, very short response times, and higher bandwidths. 5G technology plays a major role in supporting the digitalisation of the Swiss economy and industry. Swisscom differentiates between 5G-fast (narrower coverage up to 2 Gbit/s and more) and 5G-wide (Switzerland-wide 5G coverage with up to 1 Gbit/s). 5G-fast is also known as 5G+. Both variants are more efficient than their predecessor technologies with respect to energy consumption and use of electromagnetic fields.

Japan has only recently transitioned to using 5G+. A Google-translated page from NTT Docomo explains it as follows:

In areas where 5G communication is possible, the RAT display on standby will be "5G." On the other hand, during communication, the RAT display will be "5G+" for 5G communication using wideband 5G frequencies (3.7 GHz, 4.5 GHz, 28 GHz), "5G" for 5G communication using 4G frequencies, and "4G+" for LTE communication.

There are also footnotes clarifying that the display depends on the device, the bands supported, and the area of use.

From this, my understanding is that in newer devices the 5G+ icon is primarily used to indicate speed and capability, regardless of whether the connection is Standalone (SA) or Non-Standalone (NSA) 5G. KDDI is following the same approach, as explained on its own support pages.

Last year we looked at what the iPhone icons meant. In iOS 18, 5G+ indicated that the phone was connected to mmWave. In iOS 19 this hasn’t really changed, although I have been told that operators can choose whether to display 5G+ when the device is camped on higher-speed mid-band 5G.

Samsung Galaxy smartphones display two or three types of icons, as shown in the picture at the top. While the meanings are not entirely clear, Samsung’s user guide for Android 15 explains them as:

  • Filled square: “5G network connected”, which I interpret as being connected to a 5G Standalone network.
  • Transparent or outlined square: “LTE network connected in LTE network that includes the 5G network”, which I interpret as 5G NSA.
  • I did not find a reference to the unboxed 5G icon in this manual.

Finally, the OnePlus 13 in India has started displaying the 5GA icon. Since Jio only operates a 5G Standalone network, it is possible they have upgraded the network and device to use the Release 18 ASN.1 with some new features. This allows them to market it as 5G-Advanced, thereby justifying the 5GA icon.

If you have noticed something different in your country or region, or have another interpretation, I would love to hear more.


Thursday, 11 September 2025

Dummy Loads in RF Testing for Dummies

I have spent many years working in the Test and Measurement industry, both as a hands-on engineer testing solutions and as a field engineer testing various solutions pre- and post-deployment. Over the years I have used many attenuators and dummy loads, so it was nice to finally look at the different types of dummy loads and understand how they work in this R&S video.

So what exactly is a dummy load? At its core, it is a special kind of termination designed to absorb radio frequency energy safely. Instead of letting signals radiate into the air, a dummy load converts the RF power into heat. Think of it as an antenna that never actually transmits anything. This makes it invaluable when testing transmitters because you can run them at full power without interfering with anyone else’s spectrum.

Ordinary terminations are widely used in test setups but they are usually only good for low power. If you need to deal with more than about a watt of power, that is where dummy loads come in. Depending on their design, they can handle anything from a few watts to many kilowatts. To survive this, dummy loads use cooling methods. The most common are dry loads with large heatsinks that shed heat into the air. For higher powers, wet loads use liquids such as water or oil to absorb and move heat away more efficiently. Some combine both air and liquid cooling to push the limits even further.

Good dummy loads are not just about heat management. They also need to provide a stable impedance match, usually 50 ohms, across a wide frequency range. This minimises reflections and ensures accurate testing. Many dummy loads cover frequencies up to several gigahertz with low standing wave ratios. Ultra broadband designs, such as the Rohde & Schwarz UBL100, go up to 18 GHz and can safely absorb power levels in the kilowatt range.
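To put some numbers on what a good impedance match looks like, here is a minimal Python sketch (my own illustration, not from the video) that computes the reflection coefficient, return loss and VSWR of a load against a 50-ohm reference:

    # Basic RF matching arithmetic for a dummy load (illustrative only).
    # Z0 is the reference impedance (50 ohms); the load impedance may be
    # complex, i.e. resistance plus reactance.
    import math

    def match_figures(z_load: complex, z0: float = 50.0):
        gamma = (z_load - z0) / (z_load + z0)            # reflection coefficient
        mag = abs(gamma)
        vswr = (1 + mag) / (1 - mag) if mag < 1 else float("inf")
        return_loss_db = -20 * math.log10(mag) if mag > 0 else float("inf")
        return mag, vswr, return_loss_db

    # A slightly imperfect load of 52 + j3 ohms:
    mag, vswr, rl = match_figures(complex(52, 3))
    print(f"|Gamma| = {mag:.3f}, VSWR = {vswr:.2f}, return loss = {rl:.1f} dB")

A perfect 50-ohm load gives a VSWR of exactly 1.0; real dummy loads are typically specified by a maximum VSWR across their rated frequency range.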

Some dummy loads even add extra features. A sampling port allows you to monitor the input signal at a reduced level. Interlock protection can shut down a connected transmitter if the load gets too hot. These touches make dummy loads more versatile and safer in real-world use.

In day-to-day testing, dummy loads help not only to protect transmitters but also to get accurate measurements. By acting as a perfectly matched, non-radiating antenna, they give engineers confidence that they are measuring the true transmitter output. They can also be used to quickly check feedlines and connectors by substituting them in place of an antenna.

Rohde & Schwarz have put together a useful explainer video that covers all of this in a simple, visual way. You can watch it below to get a clear overview of dummy loads and why they matter so much in RF testing.


Monday, 1 September 2025

Software Efficiency Matters as Much as Hardware for Sustainability

When we talk about making computing greener, the conversation often turns to hardware. Data centres have become far more efficient over the years. Power supply units that once wasted 40% of energy now operate above 90% efficiency. Cooling systems that once consumed several times the power of the servers themselves have been dramatically improved. The hardware people have delivered.

But as Bert Hubert argues in his talk “Save the world, write more efficient code”, software has been quietly undoing many of those gains. Software bloat has outpaced hardware improvements. What once required careful optimisation is now often solved by throwing more cloud resources at the problem. That keeps systems running, but at a significant energy cost.

The hidden footprint of sluggish software

Sluggish systems are not just an annoyance. Every loading spinner, every second a user waits, often means CPUs are running flat out somewhere in the chain. At scale, those wasted cycles add up to megawatt-hours of electricity. Studies suggest that servers are responsible for around 4% of global CO₂ emissions, on par with the entire aviation industry. That is not a small share, and it makes efficient software a climate issue.

Hubert points out that the difference between badly written code, reasonable code, and highly optimised code can easily span a factor of 100 in computing requirements. He demonstrates this with a simple example: generating a histogram of Dutch house numbers from a dataset of 9.9 million addresses.

  • A naïve Python implementation took 12 seconds and consumed over 500 joules of energy per run.
  • A straightforward database query reduced this to around 20 joules.
  • Using DuckDB, a database optimised for analytics, the same task dropped to just 2.5 joules and completed in milliseconds.

The user experience also improved dramatically. What once required a long wait became effectively instantaneous.
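To give a feel for what the optimised version looks like, here is a minimal sketch of the DuckDB approach. This is my own reconstruction rather than Hubert's actual code, and the file and column names are assumptions:

    # Histogram of house numbers with DuckDB (illustrative sketch).
    # Assumes a file "addresses.parquet" with a "house_number" column;
    # the real Dutch address dataset and its column names will differ.
    import duckdb

    result = duckdb.sql("""
        SELECT house_number, COUNT(*) AS occurrences
        FROM 'addresses.parquet'
        GROUP BY house_number
        ORDER BY occurrences DESC
        LIMIT 20
    """).fetchall()

    for house_number, occurrences in result:
        print(house_number, occurrences)

The point is not the exact query but the approach: a columnar, vectorised engine performs the whole aggregation close to the data, instead of looping over 9.9 million rows in interpreted Python.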

From data centres to “data sheds”

The point is not just academic. If everyone aimed for higher software efficiency, Hubert suggests, many data centres could be shrunk to the size of a shed. Unlike hardware, where efficiency can be bought, software efficiency has to be designed and built. It requires time, effort and, crucially, management permission to prioritise performance over simply shipping features.

Netflix provides a striking example. Its custom Open Connect appliances deliver around 45,000 video streams at under 10 milliwatts per user. By investing heavily in efficiency, they proved that optimised software and hardware together can deliver enormous gains.

The cloud and client-side challenge

The shift to the cloud has created perverse incentives. In the past, if your code was inefficient, the servers would crash and force a rewrite. Now, organisations can simply spin up more cloud instances. That makes it too easy to ignore software waste and too tempting to pass the costs into ever-growing cloud bills. Those costs are not only financial, but also environmental.

On the client side, the problem is subtler but still real. While loading sluggish web apps may not burn as much power as a data centre, the sheer number of devices adds up. Hubert measured that opening LinkedIn on a desktop consumed around 45 joules. Scaled to hundreds of millions of users, even modest inefficiencies start to look like power plants.

Sometimes the situation is worse. Hubert found that simply leaving open.spotify.com running in a browser kept his machine burning an additional 45 watts continuously, due to a rogue worker thread. With hundreds of millions of users, that single design choice could represent hundreds of megawatts of wasted power globally.

Building greener software

The lesson is clear. Early sluggishness never goes away. If a system is slow with only a handful of users, it will be catastrophically wasteful at scale. The time to demand efficiency is at the start of a project.

There are also practical steps engineers and organisations can take:

  • Measure energy use during development, not just performance (see the sketch after this list).
  • Audit client-side behaviour for long-lived applications.
  • Incentivise teams to improve efficiency, not just to ship quickly.
  • Treat large cloud bills as a proxy for emissions as well as costs.
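On the first point, a crude but effective trick on Linux machines with Intel RAPL support is to read the package energy counter before and after a piece of work. A minimal sketch, assuming the powercap sysfs interface is present and readable:

    # Rough energy measurement via Intel RAPL on Linux (illustrative only).
    # Reads the package energy counter (in microjoules) before and after a task.
    # Assumes /sys/class/powercap/intel-rapl:0/energy_uj exists and is readable.
    import time

    RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_energy_uj() -> int:
        with open(RAPL_FILE) as f:
            return int(f.read().strip())

    def measure(task):
        start_e, start_t = read_energy_uj(), time.time()
        task()
        end_e, end_t = read_energy_uj(), time.time()
        # Note: the counter wraps around periodically; ignored here for brevity.
        print(f"{(end_e - start_e) / 1e6:.2f} J in {end_t - start_t:.2f} s")

    measure(lambda: sum(i * i for i in range(10_000_000)))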

As Hubert says, we may only be able to influence 4% of global energy use through software. But that is the same impact as the aviation industry. Hardware engineers have done their part. Now it is time for software engineers to step up.

You can watch Bert Hubert’s full talk below, where he shares both entertaining stories and sobering measurements that show why greener software is not only possible but urgently needed. The PDF of slides is here and his LinkedIn discussion here.


Thursday, 21 August 2025

Understanding L1/L2 Triggered Mobility (LTM) Procedure in 3GPP Release 18

In an earlier post we looked at the 3GPP Release 18 Description and Summary of Work Items. One of the key areas was Further NR mobility enhancements, where a new feature called L1/L2-triggered mobility (LTM) has been introduced. This procedure aims to reduce mobility latency and improve handover performance in 5G-Advanced.

Mobility has always been one of the most important areas in cellular networks. The ability of a user equipment (UE) to move between cells without losing service is essential for reliability and performance. Traditional handover procedures in 4G and 5G rely on Layer 3 (L3) signalling, which is robust but can result in high signalling overhead and connection interruption times of 50 to 90 milliseconds. While most consumer services can tolerate this, advanced use cases with strict latency demands cannot.

3GPP Release 18 takes a significant step forward by introducing the L1/L2 Triggered Mobility (LTM) procedure. Instead of relying only on L3 signalling, LTM shifts much of the handover process down to Layer 1 (physical) and Layer 2 (MAC), making it both faster and more efficient. The goal is to reduce interruption to around 20 to 30 milliseconds, a level that can better support applications in ultra-reliable low latency communication, extended reality and mobility automation.

The principle behind LTM is straightforward. The UE is preconfigured with candidate target cells by the network. These configurations can be provided in two ways: either as a common reference with small delta updates for each candidate or as complete configurations. Keeping the configuration of multiple candidates allows the UE to switch more quickly without requiring another round of reconfiguration after each move.

Measurements are then performed at lower layers. The UE reports reference signal measurements and time and phase information to the network. Medium Access Control (MAC) control elements are used to activate or deactivate target cell states, including transmission configuration indicator (TCI) states. This ensures the UE is already aware of beam directions and reference signals in the target cells before the actual switch.

A particularly important innovation in LTM is the concept of pre-synchronisation. Both downlink and uplink pre-synchronisation can take place while the UE is still connected to the serving cell. For downlink, the network instructs the UE to align with a candidate cell’s beams. For uplink, the UE can transmit a random-access preamble towards a target cell, and the network calculates a timing advance (TA) value. This TA is stored and delivered only at the moment of execution, allowing the UE to avoid a new random access procedure. In cases where TA is already known or equal to the serving cell, the handover becomes RACH-less, eliminating a significant source of delay.

The final step is the LTM cell switch command. This MAC control element carries the chosen target configuration, TA value and TCI state indication. Since synchronisation has already been achieved, the UE can break the old connection and resume data transfer almost immediately in the new cell.
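To make the sequence a little more concrete, here is a deliberately simplified Python sketch of the idea. It is my own illustration, not 3GPP pseudocode: the UE holds pre-configured candidate cells, and a single cell switch command selects one of them, applies the delivered TA and TCI state, and skips random access when a valid TA is already available.

    # Highly simplified model of L1/L2 Triggered Mobility (illustrative only;
    # the real behaviour is defined in the 3GPP RRC and MAC specifications).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CandidateCell:
        config_id: int                         # pre-stored target configuration
        tci_state: int                         # beam/TCI state activated via MAC CE
        timing_advance: Optional[int] = None   # TA from early UL synchronisation, if any

    @dataclass
    class LtmCellSwitchCommand:                # carried in a MAC control element
        target_config_id: int
        tci_state: int
        timing_advance: Optional[int]

    class UE:
        def __init__(self, candidates):
            self.candidates = {c.config_id: c for c in candidates}

        def execute_switch(self, cmd: LtmCellSwitchCommand):
            target = self.candidates[cmd.target_config_id]   # no fresh RRC reconfiguration needed
            target.tci_state = cmd.tci_state
            if cmd.timing_advance is not None:
                target.timing_advance = cmd.timing_advance
                print(f"RACH-less switch to config {target.config_id}, TA={target.timing_advance}")
            else:
                print(f"Switch to config {target.config_id}; random access needed for TA")

    ue = UE([CandidateCell(1, tci_state=3), CandidateCell(2, tci_state=5)])
    ue.execute_switch(LtmCellSwitchCommand(target_config_id=2, tci_state=7, timing_advance=12))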

Compared to earlier attempts such as Dual Active Protocol Stack (DAPS) handover, which required maintaining two simultaneous connections and faced practical limitations, LTM offers a more scalable solution. It can be applied across frequency ranges, including higher bands above 7 GHz where beamforming is critical, and it works for both intra-DU and inter-DU mobility within a gNB.

The Release 18 specification restricts LTM to intra-gNB mobility, but work has already begun in Release 19 to expand it further. Future enhancements are expected to cover inter-gNB mobility and to refine measurement reporting for even greater efficiency.

Looking beyond 5G Advanced, new concepts are being explored for 6G. At the Brooklyn 6G Summit 2024, MediaTek introduced the idea of L1/L2 Triggered Predictive Mobility (LTPM), where predictive intelligence could play a role in mobility decisions. While this is still at an early research stage, it points to how mobility management will continue to evolve.

For now, the introduction of LTM marks a practical and important milestone. By reducing handover latency significantly, it brings the network closer to meeting the demanding requirements of next generation services while maintaining efficiency in signalling and resource use.


Friday, 8 August 2025

Is 6G Our Last Chance to Make Antennas Great Again?

At the CW TEC 2025 conference hosted by Cambridge Wireless, veteran wireless engineer Moray Rumney delivered a presentation that challenged the direction the mobile industry has taken. With decades of experience and a sharp eye for what matters, he highlighted a growing and largely ignored problem: the steady decline in the efficiency of antennas in mobile devices.

The evolution of mobile technology has delivered remarkable achievements. From the early days of GSM to the promises of 5G and the ambition of 6G, the industry has continually pushed for higher speeds, more features and greater spectral efficiency. Yet along the way, something essential has been lost. While much of the focus has been on network-side innovation and baseband complexity, the performance of the user device antenna has deteriorated to the point where it is now undermining the potential benefits of these advancements.

According to Moray, antenna performance has declined by around 15 decibels since the transition from the external antennas of 2G handsets to today’s smartphones. That level of loss has a profound impact. A poor antenna reduces both transmitted and received signal strength. On the uplink side, this means users need to push more power to the network, which drains battery life faster. On the downlink, it forces the network to compensate with stronger transmissions, increasing inter-cell interference and lowering cell-edge throughput. Ultimately, this undermines the overall efficiency and quality of mobile networks. Cell-edge performance and indoor coverage are much degraded.
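It is worth pausing on what 15 decibels means in linear terms. A quick back-of-the-envelope calculation (mine, not Moray's):

    # 15 dB of antenna loss expressed as a linear power ratio.
    loss_db = 15
    linear_factor = 10 ** (loss_db / 10)
    print(f"{loss_db} dB is a factor of about {linear_factor:.0f}x in power")
    # Roughly 32x: the link budget shrinks by that factor, so either the
    # transmitter must radiate far more power or coverage and throughput suffer.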

The root of the problem lies in modern smartphone design priorities. Over the years, devices have become slimmer, more stylish and packed with more features. In this pursuit of sleekness, antennas have been compromised. External antennas gave way to internal ones, squeezed into tight spaces surrounded by metal and glass. The visual appeal of the phone has taken precedence over its radio performance. On a technical level, the explosion in the number of supported bands and the increased use of multi-antenna transceivers optimised for high performance in excellent conditions have reduced the space available for each antenna, reducing antenna gain accordingly.

This issue was particularly pronounced during the LTE era, when the standards bodies failed to define any radiated performance requirements. Handset performance is based on conducted power, which can appear satisfactory in laboratory conditions. However, once the signal passes through the device's real antenna, the result is often a significant loss. Real-world radiated performance does not match lab conducted measurements.

One of Moray's more memorable illustrations compared the situation to a tube of toothpaste. The conducted performance, which all devices meet, is like a full tube of toothpaste. Years passed before radiated requirements were finally defined for a few bands in 5G, and in the meantime products with inferior radiated performance were released to the market, putting downward pressure on the radiated requirements that were eventually agreed, like squeezing out all the toothpaste. What is left today is a small residue of what used to be. Once compromised, this trend is extremely difficult to reverse.

He also pointed out a structural problem in how mobile standards are developed. The focus is disproportionately placed on baseband processing and theoretical possibilities, rather than on end-user experience and what actually gets deployed. As new generations arrive, more complexity is added, yet basic aspects like antenna efficiency are overlooked. Testing practices further entrench the problem, as the use of a 50-ohm connector during lab testing limits the scope for real antenna improvements, preventing designers from achieving optimal matching and performance.

Despite all the talk of 6G and beyond, the reality on the ground is less impressive. The UK currently ranks 59th in global mobile speed tests. This is not because of a lack of advanced standards or spectrum, but because of poor deployment decisions and device-related issues like inefficient antennas. It is not a technology gap but a failure to focus on basics that truly matter to users.

Moray argued that significant progress could be made without waiting for 6G. Regulatory bodies could introduce minimum standards for antenna performance, as was once attempted in Denmark. Device certification could include antenna efficiency ratings, encouraging manufacturers to prioritise performance. Networks could enforce stricter indoor coverage targets, and pricing models could be rethought to reduce the strain caused by low-value, high-volume traffic.

He also called attention to battery life, another casualty of inefficient antennas and poor design decisions. Users now routinely carry power banks to get through the day. This is hardly a sign of progress, especially considering the environmental impact of producing and charging these extra devices.

In conclusion, while the industry continues to chase ambitious visions for future generations of mobile technology, there is an urgent need to fix the basics. Antennas are not an exciting topic, but they are fundamental. Without efficient antennas, all the investment in infrastructure, spectrum and software optimisation is wasted. It is time for the industry to refocus, reassess and revalue the importance of the one component every user relies on, but rarely sees.

It really is time to make antennas great again.

Moray’s presentation is embedded below and is available to download from here.


Thursday, 24 July 2025

L4S and the Future of Real-Time Performance in 5G and Beyond

As mobile networks continue to evolve to support increasingly immersive and responsive services, the importance of consistent low latency has never been greater. Whether it is cloud gaming, extended reality, remote machine operation or real-time collaboration, all these applications rely on the ability to react instantly to user input. The slightest delay can affect the user experience, making the role of the network even more critical.

While 5G has introduced major improvements in radio latency and overall throughput, many time-critical applications are still affected by a factor that is often overlooked: queuing delay. This occurs when packets build up in buffers before they are forwarded, creating spikes in delay and jitter. Traditional methods for congestion control, such as those based on packet loss, are too slow to react, especially in mobile environments where network conditions can change rapidly.

Low Latency, Low Loss and Scalable Throughput (L4S) is a new network innovation designed to tackle this challenge. It is an Internet protocol mechanism developed through the Internet Engineering Task Force, and has recently reached standardisation. L4S focuses on preventing queuing delays by marking packets early when congestion is building, instead of waiting until buffers overflow and packets are dropped. The key idea is to use explicit signals within the network to guide congestion control at the sender side.

Applications that support L4S are able to reduce their sending rate quickly when congestion starts to appear. This is done by using ECN, or Explicit Congestion Notification, which involves marking rather than dropping packets. The result is a smooth and continuous flow of data, where latency remains low and throughput remains high, even in changing network conditions.
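To illustrate the principle, rather than any specific standardised algorithm, a scalable sender can adjust its rate every round trip in proportion to the fraction of packets that came back ECN-marked, in the spirit of DCTCP-style congestion control. A minimal sketch, with all parameter values assumed:

    # Toy model of ECN-based rate adaptation in the spirit of scalable congestion
    # control (illustrative only; not the actual L4S/TCP Prague algorithm).
    class ScalableSender:
        def __init__(self, rate_mbps: float = 50.0, gain: float = 0.5):
            self.rate = rate_mbps
            self.gain = gain          # how strongly marks reduce the rate
            self.alpha = 0.0          # smoothed estimate of the marking fraction

        def on_round_trip(self, packets_sent: int, packets_marked: int) -> float:
            frac = packets_marked / max(packets_sent, 1)
            # Exponentially weighted moving average of the mark fraction
            self.alpha = 0.875 * self.alpha + 0.125 * frac
            if packets_marked > 0:
                # Back off in proportion to the marking level, not a fixed halving
                self.rate *= (1 - self.gain * self.alpha)
            else:
                self.rate += 1.0      # gentle additive increase
            return self.rate

    sender = ScalableSender()
    for marked in [0, 0, 5, 20, 10, 0, 0]:
        print(f"rate = {sender.on_round_trip(100, marked):.1f} Mbps")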

One of the significant benefits of L4S is its ability to support a wide range of real-time services at scale. Ericsson highlights how edge-based applications such as cloud gaming, virtual reality and drone control need stable low-latency connections alongside high bitrates. While over-the-top approaches to congestion control may work for general streaming, they struggle in mobile environments. This is due to variability in channel quality and radio access delays, which can cause sudden spikes in latency. L4S provides a faster and more direct way to detect congestion within the radio network, enabling better performance for these time-sensitive applications.

To make this possible, mobile networks need to support L4S in a way that keeps its traffic separate from traditional data flows. This involves using dedicated queues for L4S traffic to ensure it is not delayed behind bulk data transfers. In 5G, this is implemented through dedicated quality-of-service flows, allowing network elements to detect and handle L4S traffic differently. For example, if a mobile user is playing a cloud-based game, the network can identify this traffic and place it on an L4S-optimised flow. This avoids interference from other applications, such as file downloads or video streaming.

Nokia's approach further explains how L4S enables fair sharing of bandwidth between classic and L4S traffic without compromising performance. A dual-queue system allows both types of traffic to coexist while preserving the low-latency characteristics of L4S. This is especially important in scenarios where both legacy and L4S-capable applications are in use. In simulations and trials, the L4S mechanism has shown the ability to maintain very low delay even when the link experiences sudden reductions in capacity, which is common in mobile and Wi-Fi networks.
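A very rough sketch of the dual-queue idea is shown below. It only illustrates how packets might be classified by their ECN codepoint and steered into a shallow-threshold L4S queue or a classic queue; the coupled marking maths specified in RFC 9332 is deliberately omitted.

    # Toy dual-queue classifier (illustrative only; not the RFC 9332 DualQ AQM).
    # ECN field values in the IP header: 0b00 Not-ECT, 0b10 ECT(0),
    # 0b01 ECT(1), used by L4S senders, and 0b11 CE (congestion experienced).
    from collections import deque

    L4S_MARK_THRESHOLD = 5            # assumed shallow queue threshold, in packets
    l4s_queue, classic_queue = deque(), deque()

    def enqueue(packet_id: int, ecn_bits: int):
        if ecn_bits in (0b01, 0b11):                  # ECT(1) or CE -> L4S queue
            if len(l4s_queue) >= L4S_MARK_THRESHOLD:
                print(f"packet {packet_id}: CE-marked early, queue is building")
            l4s_queue.append(packet_id)
        else:                                          # Not-ECT or ECT(0) -> classic queue
            classic_queue.append(packet_id)

    for i in range(8):
        enqueue(i, 0b01)        # a burst of L4S packets
    enqueue(99, 0b00)           # a classic packet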

One of the important aspects of L4S is that it requires support both from the application side and within the network. On the application side, rate adaptation based on L4S can be implemented within the app itself, often using modern transport protocols such as QUIC or TCP extensions. Many companies, including device makers and platform providers, are already trialling support for this approach.

Within the network, L4S depends on the ability of routers and radio access equipment to read and mark ECN bits correctly. In mobile networks, the radio access network is typically the key bottleneck where marking should take place. This ensures that congestion is detected at the right point in the path, allowing for quicker response and improved performance.

Although L4S is distinct from ultra-reliable low-latency communication, it can complement those use cases where guaranteed service is needed in controlled environments. What makes L4S more versatile is its scalability and suitability for open internet and large-scale public network use. It can work across both fixed and mobile access networks, providing a common framework for interactive services regardless of access technology.

With L4S in place, it becomes possible to offer new kinds of applications that were previously limited by latency constraints. This includes lighter and more wearable XR headsets that can offload processing to the cloud, or port automation systems that rely on remote control of heavy equipment. Even everyday experiences, such as video calls or online gaming, stand to benefit from a more responsive and stable network connection.

Ultimately, L4S offers a practical and forward-looking approach to delivering the consistent low latency needed for the next generation of digital experiences. By creating a tighter feedback loop between the network and the application, and by applying congestion signals in a more intelligent way, L4S helps unlock the full potential of 5G and future networks.

This introductory video by CableLabs is a good starting point for anyone willing to dig deeper in the topic. This LinkedIn post by Dean Bubley and the comments are also worth a read.

PS: Just noticed that T-Mobile USA announced earlier this week that they are the first to unlock L4S in wireless. You can read their blog post here and a promotional video is available in the Tweet below 👇

Tuesday, 1 July 2025

The Evolution of 3GPP 5G Network Slice and Service Types (SSTs)

The concept of network slicing has been one of the standout features in 5G (no pun intended). It allows operators to offer logically isolated networks over shared infrastructure, each tailored for specific applications or services. These slices are identified using a combination of the Slice/Service Type (SST) and an optional Slice Differentiator (SD), together forming what is called a Single Network Slice Selection Assistance Information (S-NSSAI).

To ensure global interoperability and support for roaming scenarios, 3GPP standardises a set of SST values. These are intended to provide common ground across public land mobile networks for the most prevalent slice types. Over the course of different 3GPP releases, the list of standardised SST values has grown to reflect emerging use cases and evolving requirements.

The foundation was laid in Release 15, where the first three SST values were introduced. SST 1 represents enhanced Mobile Broadband (eMBB), suitable for high throughput services like video streaming, large file downloads and augmented reality. SST 2 refers to Ultra-Reliable and Low-Latency Communications (URLLC), designed for time-sensitive applications such as factory automation, remote surgery and smart grids. SST 3 is for Massive Internet of Things (mIoT - earlier referred to as mMTC), tailored for large-scale deployments of low-power sensors in use cases such as smart metering and logistics.

The first major extension came with Release 16, which introduced SST 4 for Vehicle-to-Everything (V2X) services. This slice type addresses the requirements of connected vehicles, particularly in terms of ultra low latency, high reliability and localised communication. It was the first time a vertical-specific slice type was defined.

With Release 17, the slicing framework was extended further to include SST 5, defined for High-Performance Machine-Type Communications (HMTC). This slice is aimed at industrial automation and use cases that require highly deterministic and reliable communication patterns between machines. It enhances the original URLLC profile by refining it for industrial-grade requirements.

Recognising the growing importance of immersive services, Release 18 added SST 6, defined for High Data Rate and Low Latency Communications (HDLLC). This slice targets extended reality, cloud gaming and other applications that simultaneously demand low delay and high bandwidth. It goes beyond what enhanced Mobile Broadband or URLLC individually offer by addressing the combination of both extremes. The documentation refers to this as being suitable for extended reality and media services, underlining the increasing focus on immersive technologies and their networking needs.

Finally, Release 19 introduced SST 7 for Guaranteed Bit Rate Streaming Services (GBRSS). This new slice supports services where continuous, guaranteed throughput is essential. It is particularly relevant for live broadcasting, high-definition streaming, or virtual presence applications where quality cannot degrade over time.
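For quick reference, the standardised values discussed above can be captured in a few lines of Python. The SST-to-name mapping comes from the releases described here; the byte layout (a one-octet SST optionally followed by a three-octet SD) is my own reading of the S-NSSAI encoding and should be checked against the specifications before use.

    # Standardised SST values introduced across 3GPP releases (per the text above).
    from typing import Optional

    STANDARDISED_SSTS = {
        1: "eMBB (Rel-15)",
        2: "URLLC (Rel-15)",
        3: "mIoT / mMTC (Rel-15)",
        4: "V2X (Rel-16)",
        5: "HMTC (Rel-17)",
        6: "HDLLC (Rel-18)",
        7: "GBRSS (Rel-19)",
    }

    def encode_s_nssai(sst: int, sd: Optional[int] = None) -> bytes:
        # Assumed layout: 1-octet SST, optionally followed by a 3-octet SD.
        if sd is None:
            return bytes([sst])
        return bytes([sst]) + sd.to_bytes(3, "big")

    # An eMBB slice with slice differentiator 0x0000AB:
    print(encode_s_nssai(1, 0x0000AB).hex())   # -> "010000ab"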

This gradual and deliberate expansion of standardised SSTs highlights how 5G is not a one-size-fits-all solution. Instead, it is a dynamic platform that adapts to the needs of different industries. As use cases grow more sophisticated and diverse, having standardised slice types helps ensure compatibility, simplify device and network configuration, and promote innovation.

It is also worth noting that these SST values are not mandatory for every operator to implement. A network can choose to support a subset based on its service strategy. For example, a public network may prioritise SSTs 1 and 3, while a private industrial deployment might focus on SST 5 or 7.

With slicing increasingly central to how 5G will be "monetised" and deployed, expect this list to keep growing in future releases. Each new SST tells a story about where the telecoms ecosystem is heading.


Tuesday, 10 June 2025

Cloud Native Telco Transformation Insights from T-Systems

The journey to becoming a cloud-native telco is not just a buzzword exercise. It requires a full-scale transformation of networks, business models, and operating cultures. At Mobile Europe’s Becoming a Cloud-Native Telco virtual event, Richard Simon, CTO at T-Systems International, outlined how telcos are grappling with this change, sharing insights from both successes and ongoing challenges.

Cloud-native is not just about adopting containers or orchestrating with Kubernetes. Richard Simon described it as a maturity model that demands strategic vision, architectural readiness, and cultural shift. The telco industry, long rooted in proprietary systems, is gradually moving towards software-defined infrastructure. Network Function Virtualisation (NFV) remains foundational, enabling operators to decouple traditional monolithic services and deliver them in a modular, digital-native manner—whether in private data centres or public clouds.

The industry is also seeing the rise of platform engineering. This evolution builds on DevOps and site reliability engineering (SRE) to create standardised internal developer platforms. These platforms reduce cognitive load for developers, increase consistency in toolchains and workflows, and enable a shift-left approach for operations and security. It is a critical step towards making innovation scalable and repeatable within telcos.

The cloud-native ecosystem has exploded in scope, with the CNCF landscape illustrating the diversity and maturity of open source components now in use. Telcos, once cautious about community-led projects, are now not only consuming but also contributing to open source. This openness is pivotal for achieving agility and interoperability in a multivendor environment.

With the increasing complexity of hybrid and multicloud strategies, avoiding vendor lock-in has become essential. Richard highlighted how telcos are optimising costs, improving resilience, and aligning workloads with the most suitable cloud environments. Multicloud is no longer a theoretical construct. It is operational reality. But with it comes the need for new thinking around cloud economics.

Cloud financial operations are no longer limited to cost tracking. They now include strategic frameworks (FinOps), real-time cost management, and a growing focus on application profiling. Profiling looks at how software consumes cloud resources, guiding developers to write more efficient code and enabling cost-effective deployments.

While generative AI dominates headlines, the pace of adoption in telco is deliberate. Richard pointed out that while investment in GenAI is widespread, only a small fraction of deployments have reached production. Most telcos are still in proof-of-concept or pilot phases, reflecting the technical and regulatory complexity involved.

Unlike some sectors, telcos face strict compliance requirements. GenAI inference stages—when customer data is processed—raise concerns around data sovereignty and privacy. As a result, telcos are exploring how to balance innovation with responsibility. Some are experimenting with fine-tuning foundational models internally, while others prefer to consume GenAI as a service, depending on use cases ranging from network automation to document processing.

GenAI is a high-performance computing (HPC) workload. Training large models requires significant infrastructure, making decisions around build versus buy critical. Richard outlined three tiers of AI adoption: infrastructure-as-a-service for DIY approaches, foundation-model-as-a-service for fine-tuning, and software-as-a-service for fully hosted solutions. Each tier comes with trade-offs in control, cost, and complexity.

Looking ahead, three themes stood out in Richard’s conclusions.

First, the era of AI agents is beginning. These autonomous systems, capable of reasoning and acting across complex tasks, will be the next experimentation frontier. Pilots in 2025 and 2026 will pave the way for a broader agentic AI economy.

Second, cloud economics continues to evolve. Operators must invest in cost visibility and governance rather than reactively scaling back cloud usage. The emergence of profiling and observability tooling is helping align cost with performance and business value.

Third, sovereignty is rising in importance. Telcos must ensure control over data and infrastructure, not only to comply with regional regulations but also to maintain intellectual property and operational resilience. Sovereign cloud models, abstracted control planes, and localised inference infrastructure are becoming strategic imperatives.

His complete talk is embedded below:

The cloud-native journey is not linear. It requires operators to architect for modularity, align with open ecosystems, and stay grounded in real-world economics. As Richard Simon’s keynote showed, the transformation is well underway, but its success will depend on how telcos integrate cloud, AI, and sovereignty into a coherent and adaptable strategy.


Tuesday, 20 May 2025

A Beginner’s Guide to the 5G Air Interface

Some of you may know that I manage quite a few LinkedIn groups, and I often come across advanced presentations and videos on LTE and 5G. From time to time, people reach out asking where they can access a free beginner-level course on the 5G Air Interface. With that in mind, I was pleased to see that Mpirical have shared a recorded webinar on YouTube titled Examining the 5G Air Interface, which seems ideal for anyone looking for a basic introduction.

Philip Nugent, Senior Technical Trainer at Mpirical, explains some of the key terms, concepts and capabilities of 5G New Radio. The webinar provides a high level overview of several important topics including 5G frequency bands and ranges, massive MIMO and beamforming, protocols and resources, and a look at the typical operation of a 5G device. It concludes with a short trainer Q&A session.

The video is embedded below:


Thursday, 8 May 2025

3GPP Release 18 Signal level Enhanced Network Selection (SENSE) for Smarter Network Selection in Stationary IoT

As 5G evolves and the number of deployed IoT devices increases globally, efficient and reliable network selection becomes ever more critical. Particularly for stationary devices deployed in remote, deep-indoor or roaming environments, traditional selection mechanisms have struggled to provide robust connectivity. This has led to operational challenges, especially for use cases involving low-power or hard-to-reach sensors. In response, 3GPP Release 18 introduces a new capability under the SA2 architecture work, Signal level Enhanced Network Selection (SENSE), designed to tackle this exact issue.

In today’s cellular systems, when a User Equipment (UE), including IoT modules, switches on or recovers from a loss of coverage, it performs automatic network selection. This typically prioritises networks based on preferences such as PLMN priority lists and broadcast cell selection criteria, while largely ignoring the actual signal strength at the device’s location. This approach works reasonably well for mobile consumer devices that can adapt through user movement or manual intervention. However, for stationary IoT UEs, which are often unmanned and deployed permanently in locations with limited or fluctuating radio conditions, this method can result in persistent suboptimal connectivity.

The issue becomes most evident when a device latches onto a visited PLMN (VPLMN) with higher priority despite poor signal quality. The UE might remain connected to this weak network, struggling to maintain bearer sessions or repeatedly failing data transfers. These failures often go undetected by the operator's monitoring systems and may require expensive manual intervention in the field. The cumulative impact of such maintenance activities adds significantly to operational expenditure, especially in mass-scale IoT deployments.

SENSE aims to fix this problem by making signal level an integral part of the automatic network selection and reselection process. Rather than simply following preconfigured priority rules, UEs enabled with SENSE will now assess the received signal quality during network selection. This allows them to favour networks that offer stronger and more stable radio conditions, even if they have lower priority, when such conditions are essential for reliable connectivity.

The capability is particularly targeted at stationary IoT UEs that support NB-IoT, EC-GSM-IoT, or LTE Cat-M1/M2. These devices are often used in applications such as water level monitoring, power grid sensors, and remote metering: installations where physical access after deployment may be difficult or even infeasible.

To implement SENSE, the Home PLMN (HPLMN) can configure the UE to apply Operator Controlled Signal Thresholds (OCST) for each supported access technology. These thresholds are stored within the USIM and define the minimum signal quality required for a network to be considered viable. The OCST settings can be provisioned before deployment or updated later via standard NAS signalling mechanisms, including the Steering of Roaming (SoR) feature.

When a SENSE-enabled UE attempts to select a network, it checks whether the signal level from any candidate network meets or exceeds the configured OCST for its supported radio access technologies. If it does, the UE proceeds to register with that PLMN. If no suitable network meets the signal thresholds, the UE falls back to the legacy selection process, which excludes signal strength as a factor. This dual-iteration method ensures backward compatibility while enabling more robust performance where SENSE is supported.
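The selection logic can be sketched in a few lines of Python. This is an illustration of the behaviour described above, not the normative procedure, and the structure, names and threshold values are my own:

    # Illustrative model of SENSE-style network selection (not the 3GPP procedure text).
    from typing import Optional

    # Operator Controlled Signal Thresholds per radio access technology (in dBm),
    # assumed to have been provisioned on the USIM by the HPLMN.
    OCST = {"NB-IoT": -110, "LTE-M": -105}

    def select_plmn(candidates, ocst=OCST) -> Optional[str]:
        """candidates: list of (plmn, rat, measured_dbm) in operator priority order."""
        # First iteration: honour priority order, but only accept networks whose
        # measured signal level meets the configured threshold for that RAT.
        for plmn, rat, level in candidates:
            if rat in ocst and level >= ocst[rat]:
                return plmn
        # Fallback: legacy selection, ignoring signal level (highest priority wins).
        return candidates[0][0] if candidates else None

    candidates = [
        ("VPLMN-A", "NB-IoT", -118),   # preferred but weak signal
        ("VPLMN-B", "NB-IoT", -102),   # lower priority, good signal
    ]
    print(select_plmn(candidates))     # -> VPLMN-B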

Additionally, SENSE influences periodic network reselection. If the average signal quality from a registered PLMN drops below the OCST threshold over time, the UE will proactively seek alternative PLMNs whose signals meet the configured criteria. This continuous evaluation helps avoid long-term connectivity issues that may otherwise remain unnoticed.

SENSE is not intended to disrupt roaming steering or PLMN preferences altogether. Instead, it introduces a smart, context-aware filter that empowers the UE to make better decisions when radio conditions are poor. By integrating signal level awareness early in the selection logic, operators gain a powerful new tool to reduce failure rates and minimise costly field maintenance.

As the IoT landscape expands across industries and geographies, features like SENSE will play a vital role in supporting dependable, scalable and autonomous deployments. In Release 18, 3GPP has taken a meaningful step towards improving network availability for devices that need to just work, no matter where they are.
