Showing posts with label Videos. Show all posts

Tuesday, 20 January 2026

Telecom Security Realities from 2025 and Lessons for 2026

Telecom security rarely stands still. Each year brings new technologies, new attack paths, and new operational realities. Yet 2025 was not defined by dramatic new exploits or spectacular network failures. Instead, it became a year that highlighted how persistent, patient and methodical modern telecom attackers have become.

The recent SecurityGen Year-End Telecom Security Webinar offered a detailed look back at what the industry experienced during 2025. The session pulled together research findings, real world incidents and practical lessons from across multiple domains, including legacy signalling, eSIM ecosystems, VoLTE vulnerabilities and the emerging world of satellite-based mobile connectivity.

For anyone working in mobile networks, the message was clear. The threats are evolving, but many of the core problems remain stubbornly familiar.

A Year of Stealth Rather Than Spectacle

One of the most important themes from the webinar was that 2025 did not bring a wave of highly visible disruptive telecom attacks. Instead, it was characterised by quiet, low-profile intrusions that often went undetected for long periods.

Operators around the world reported that attackers increasingly favoured living-off-the-land techniques. Rather than deploying noisy malware, intruders looked for ways to gain legitimate access to core systems and remain hidden. Lawful interception platforms, subscriber databases such as HLR and HSS, and internal management platforms were all targeted.

The primary objective in many cases was intelligence collection. Attackers were interested in call data, subscriber information and network topology rather than immediate disruption. This shift in motivation makes detection far more difficult, as there are often few obvious signs of compromise.

At the same time, automation has become a defining feature on both sides of the security battle. Operators are investing heavily in AI and machine learning to identify abnormal behaviour. Attackers are doing exactly the same, using automation to scale phishing campaigns and to accelerate exploit development.

Despite all this technology, basic security discipline continues to be a major challenge. A significant proportion of incidents still originate from human error, poor operational practices or simple failure to apply patches. The industry continues to invest billions in cybersecurity, but much of that effort is consumed by reporting and compliance activities rather than direct threat mitigation.

eSIM Security Comes into Sharp Focus

The transition from physical SIM cards to eSIM and remote provisioning is one of the most significant structural changes in the mobile industry. It offers clear benefits in terms of flexibility and user experience. However, the webinar highlighted that it also introduces entirely new security concerns.

Traditional SIM security models relied heavily on physical control. Fraudsters needed access to large numbers of real SIM cards to operate at scale. With eSIM, many of those physical constraints disappear. Remote provisioning expands the number of parties involved in the connectivity chain, including resellers and intermediaries who may not always operate under strict regulatory oversight.

During 2025 several major SIM farm operations were dismantled by law enforcement. These infrastructures contained tens of thousands of active SIM cards and were used for large scale fraud, smishing campaigns and automated account creation. While such operations existed long before eSIM, the technology has the potential to make them even easier to deploy and manage.

Research discussed in the session pointed to additional concerns. Analysis of travel eSIM services revealed issues such as cross-border routing of management traffic, excessive levels of control granted to resellers, and lifecycle management weaknesses that could potentially be abused by attackers. In some cases, resellers were found to have capabilities similar to full mobile operators, but without equivalent governance or transparency.

The conclusion was not that eSIM is inherently insecure. The technology itself uses strong encryption and robust mechanisms. The problem lies in the wider ecosystem of trust boundaries, partners and processes that surround it. Securing eSIM therefore requires cooperation between operators, vendors, regulators and service providers.

SS7 Remains a Persistent Weak Point

Few topics in telecom security generate as much ongoing concern as SS7. Despite being a technology from a previous era, it remains deeply embedded in global mobile infrastructure. The webinar dedicated significant attention to why SS7 continues to be exploited in 2025 and why it is likely to remain a problem for many years to come.

Throughout the year, media reports and research papers continued to demonstrate practical abuses of SS7 signalling. Attackers probed networks, attempted to bypass signalling firewalls and looked for new ways to manipulate protocol behaviour. Techniques such as parameter manipulation and protocol parsing tricks were highlighted as methods that can sometimes evade existing protections.

One particularly interesting demonstration showed how SS7 messages could be used as a covert channel for data exfiltration. By embedding information inside otherwise legitimate signalling transactions, attackers can potentially move data across networks without triggering traditional security alarms.

Perhaps the most striking point raised was how little progress has been made in eliminating SS7 dependencies. Analysis of global network deployments showed that only a handful of countries operate mobile networks entirely without SS7. Everywhere else, the protocol remains a foundational element of roaming and interconnect.

As a result, even operators that have invested heavily in 4G and 5G security can still be undermined by weaknesses in this legacy layer. The uncomfortable reality is that SS7 vulnerabilities will continue to be exploited well into 2026 and beyond.

VoLTE and Modern Core Network Risks

While legacy protocols remain a problem, modern technologies are not immune. VoLTE infrastructure in particular was identified as an increasingly attractive target.

VoLTE relies on complex interactions between signalling systems, IP multimedia subsystems and subscriber databases. Weaknesses in configuration or interconnection can open the door to call interception, fraud or denial of service. Several real world incidents during 2025 demonstrated that attackers are actively exploring these paths.

The move toward fully virtualised and cloud-native mobile cores also introduces new operational challenges. Telecom networks now resemble large IT environments, complete with the same risks around misconfiguration, insecure APIs and exposed management interfaces.

The Emerging Security Challenge of 5G Satellites

One of the most forward-looking parts of the webinar focused on non-terrestrial networks and direct-to-device satellite connectivity. What was once a concept for the distant future is rapidly becoming a commercial reality.

Satellite integration promises to extend 5G coverage to remote areas, oceans and disaster zones. However, it also changes the security model in fundamental ways. Satellites can act either as simple relay systems or as active components of the mobile radio access network. In both cases, new threat vectors emerge.

Potential issues discussed included the risk of denial of service against shared satellite resources, difficulties in applying traditional radio security controls in space-based equipment, and the possibility of more precise user tracking due to the way satellite systems handle location information.

Experts from the space cybersecurity community explained how vulnerabilities in mission control software and ground segment infrastructure could be exploited. Much of this software was originally designed for isolated environments and is only now being connected to wider networks and the internet.

As telecom networks expand beyond the boundaries of the Earth, security responsibilities extend with them. Operators will need to think not only about terrestrial threats but also about risks originating from space-based components.

The Human Factor and the Skills Gap

Technology was only part of the story. Another recurring theme was the global shortage of skilled telecom cybersecurity professionals.

Studies referenced in the session suggested that millions of additional specialists are needed worldwide, yet only a fraction of that demand can currently be filled. Many security teams are overwhelmed by the sheer volume of alerts and data they must process.

This shortage has real consequences. When teams are stretched thin, patching is delayed, anomalies are missed and complex investigations become difficult to sustain. The panel emphasised that throwing more tools at the problem is not enough. Organisations must focus on training, automation and smarter operational processes.

Automation and AI-driven analysis were presented as essential enablers. Given the scale of modern mobile networks, it is simply not feasible for human analysts to monitor every signalling protocol, every core interface and every emerging technology manually.

Preparing for 2026

Looking ahead, the experts agreed on several broad trends. Attacks on legacy systems such as SS7 will continue. Fraudsters will increasingly target eSIM provisioning processes. VoLTE and 5G core components will face growing scrutiny. Satellite-based connectivity will introduce new and unfamiliar security questions.

Perhaps most importantly, the line between traditional telecom security and general cybersecurity will continue to blur. Mobile networks are now large, distributed IT platforms, and they inherit all the complexities that come with that transformation.

Operators, regulators and vendors must therefore adopt a holistic view. Investment must go beyond compliance reporting and focus on practical defences, real time monitoring and collaborative intelligence sharing.

Final Reflections

The SecurityGen webinar provided a valuable snapshot of an industry at a crossroads. Telecom networks are becoming more advanced and more capable, but also more complex and interconnected than ever before.

2025 demonstrated that attackers do not always need new vulnerabilities. Often they succeed simply by exploiting old weaknesses in smarter ways. The challenge for 2026 is to close those gaps while also preparing for the technologies that are only just beginning to emerge.

For those involved in telecom security, the full discussion is well worth watching. The complete webinar recording can be viewed below:

Tuesday, 25 November 2025

IET Lecture by Prof. Andy Sutton: Point to Point Microwave Radio Systems

Point to point microwave radio systems have been with us for more than eighty years, yet they rarely attract much attention in an era where fibre dominates network planning and satellite systems continue to develop at pace. At a recent IET Anglian Coastal Local Network event, Prof. Andy Sutton delivered an excellent lecture that brought these fixed radio links back into the spotlight. His talk explored the history, engineering and future of microwave and millimetre wave links, reminding us why they remain essential for transmission networks in the UK and around the world.

The story begins with the national microwave radio network of the 1970s, with the BT Tower at its centre. These early deployments supported long links across the country and laid the foundation for many of the design principles still used today. While the landscape has changed significantly, the fundamentals of fixed radio communication continue to be shaped by spectrum availability, propagation characteristics and careful engineering.

Microwave links depend on a wide range of bands, from the lower 6 GHz region through to 80 GHz E-band. The choice of frequency affects everything from link length to susceptibility to atmospheric absorption. As Andy explained, a link designer must consider not just free space path loss, but also Fresnel zone clearance, rainfall intensity and antenna characteristics. The slides included a worked example that showed the impact of frequency and distance on the radius of the Fresnel zone and highlighted the need for adequate clearance to maintain availability over time.
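The frequency and distance dependence described above can be reproduced with the standard first Fresnel zone formula, r = sqrt(λ·d1·d2/(d1+d2)). A minimal sketch (the 20 km path length and the frequency choices are illustrative assumptions, not values from Andy's worked example):

```python
import math

def fresnel_radius_m(d1_km: float, d2_km: float, f_ghz: float) -> float:
    """First Fresnel zone radius in metres at a point d1 km along a
    (d1 + d2) km path, derived from r = sqrt(lambda * d1 * d2 / (d1 + d2))
    with the constants folded into 17.32."""
    d_km = d1_km + d2_km
    return 17.32 * math.sqrt(d1_km * d2_km / (f_ghz * d_km))

# Midpoint radius of a hypothetical 20 km link: lower frequencies
# need far more clearance than millimetre wave bands.
for f in (6.0, 18.0, 38.0, 80.0):
    print(f"{f:5.1f} GHz: {fresnel_radius_m(10.0, 10.0, f):5.2f} m")
```

Running this shows the midpoint radius shrinking from roughly 16 m at 6 GHz to around 4 m at 80 GHz, which is why clearance planning matters most on long, low-frequency paths.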

The talk moved on to modern access radio systems, where compact rooftop nodes and all-outdoor radios have become common. These systems rely on careful use of vertical and horizontal polarisations, often enabled through XPIC technology. XPIC allows separate data streams to coexist on the same frequency using orthogonal polarisations, effectively doubling link capacity when conditions allow. This is paired with adaptive coding and modulation, which enables the radio to shift modulation schemes according to link quality. The result is a more resilient and efficient link compared to older fixed-modulation systems.

Capacity planning is a balancing act that involves radio channel bandwidth, modulation choice and the number of aggregated carriers. Wider channels and higher order modulation support multi-gigabit throughput, although this introduces penalties in transmit power and receiver sensitivity. The trade-offs are central to radio design and determine the type of equipment used, whether through a separate indoor and outdoor unit or an integrated all-outdoor system.
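The multiplication of channel width, modulation order and carrier count can be sketched as a back-of-envelope calculation (the 0.9 roll-off efficiency and 0.92 code rate below are assumed typical values, not figures from the lecture):

```python
import math

def link_throughput_mbps(channel_mhz: float, mod_order: int,
                         code_rate: float = 0.92, carriers: int = 1,
                         rolloff_eff: float = 0.9) -> float:
    """Rough air-interface throughput: symbol rate scales with channel
    width, bits per symbol with modulation order, and XPIC or carrier
    aggregation multiplies the carrier count."""
    bits_per_symbol = math.log2(mod_order)
    return channel_mhz * rolloff_eff * bits_per_symbol * code_rate * carriers

# A 112 MHz channel with 4096-QAM on both polarisations (XPIC):
print(round(link_throughput_mbps(112, 4096, carriers=2)))  # ~2226 Mbps
```

Even this crude model makes the trade-off visible: doubling the channel width or adding a polarisation doubles throughput, while stepping up one modulation order adds only one bit per symbol at the cost of a tighter SNR requirement.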

Andy also covered the practical elements of radio link planning, such as antenna selection, path profiling, waveguide losses and typical link budget calculations. A link planning example using a 32 GHz radio demonstrated the relationship between transmit power, antenna gain, free space loss and fade margin for a target availability of 99.99 percent. The discussion tied together the theoretical foundations with real-world engineering and illustrated how access radios are designed for street-level backhaul scenarios.
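The structure of such a link budget is easy to reproduce. The sketch below uses the standard free space path loss formula; the transmit power, antenna gains, path length and receiver threshold are illustrative assumptions, not the figures from Andy's 32 GHz example:

```python
import math

def fspl_db(f_ghz: float, d_km: float) -> float:
    """Free space path loss in dB for frequency in GHz, distance in km."""
    return 92.45 + 20 * math.log10(f_ghz) + 20 * math.log10(d_km)

def fade_margin_db(ptx_dbm: float, g_ant_dbi: float, f_ghz: float,
                   d_km: float, rx_threshold_dbm: float,
                   misc_loss_db: float = 1.0) -> float:
    """Fade margin = received signal level minus receiver threshold,
    assuming identical antennas at both ends."""
    prx = ptx_dbm + 2 * g_ant_dbi - fspl_db(f_ghz, d_km) - misc_loss_db
    return prx - rx_threshold_dbm

# Hypothetical 32 GHz hop: 5 km, +18 dBm, 43 dBi dishes, -72 dBm threshold.
print(round(fspl_db(32, 5), 1))                            # 136.5 dB
print(round(fade_margin_db(18, 43, 32, 5, -72), 1))        # fade margin in dB
```

The fade margin is then compared against the rain attenuation statistics for the region to confirm whether the target availability (such as 99.99 percent) can actually be met.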

The lecture then moved to millimetre wave systems, particularly E-band radios that operate around 70 and 80 GHz. These links offer enormous capacity over shorter distances and are increasingly used for dense urban backhaul and enterprise connectivity. The slides included examples of network topologies showing how microwave and fibre can be combined to meet different deployment objectives.

A substantial part of the presentation focused on trunk or core microwave radio systems. These high-capacity, high-availability links support long distances and historically formed the backbone of national networks. Although demand for trunk links has reduced as fibre has spread, they still exist in challenging environments. In the UK, many trunk links remain operational in Scotland and island regions where terrain and geography limit fibre deployment. The lecture covered branching networks, duplexers, waveguide installations and space diversity techniques, all of which contribute to the reliability of long-haul links.

Looking ahead, research continues into new frequency bands, wider channels, higher modulation schemes and improved radio hardware. These advances will support even greater capacities, with millimetre wave links expected to reach 100 Gbps over short distances. Microwave radio may no longer be the headline technology it once was, but the field continues to push boundaries and remains an essential part of modern communication networks.

Andy’s lecture was a comprehensive tour of the past, present and future of point to point microwave systems. For anyone working in transmission, mobile networks or wireless engineering, it served as a valuable reminder of the depth of innovation in this area and its continued relevance in the broader ecosystem.

If you would like to explore the material in more detail, the slides from the event are available here and the video can be seen here. Both are well worth a look.

Tuesday, 4 November 2025

AIoT and A-IoT

Our industry loves acronyms. In fact, sometimes it feels as if half our job is simply keeping up with them, while the other half is explaining them to everyone else. A recent example I saw referenced D2D for satellites, but expanded it as Device to Device instead of Direct to Device. Today, two similar acronyms are gaining momentum and are likely to become far more mainstream: AIoT and A-IoT.

Artificial Intelligence (AI) and the Internet of Things (IoT) are two of the key technological pillars of the modern digital world. IoT connects billions of devices, from sensors and cameras to industrial machinery, all producing vast amounts of useful data. AI enables these devices and systems to learn from this data, recognise patterns, predict outcomes, and act autonomously.

When these technologies come together, we get the Artificial Intelligence of Things, or AIoT. In simple terms, AIoT allows connected devices to analyse the data they generate and make decisions without always relying on central systems.

The intelligence in AIoT can sit in different places. Cloud based AI offers extensive processing power and the ability to leverage wider datasets. Edge AI processes data closer to where it is generated, enabling faster and more context aware decision making while reducing bandwidth use and protecting data privacy. Increasingly, lightweight machine learning models allow intelligence directly on devices themselves, enabling instant reactions without constant network access. This evolution transforms IoT devices from passive data collectors into proactive decision makers.

The benefits are significant. AIoT increases automation, improves efficiency, enhances reliability, and enables predictive maintenance, energy optimisation, autonomous navigation, and smarter logistics. It also supports sustainability initiatives, for instance by improving energy and water use monitoring or enabling more intelligent control of municipal utilities. In short, AIoT forms a key part of the digital transformation strategies emerging across industries.

To get a better sense of how AIoT could shape our everyday lives, I have embedded a couple of older Ericsson videos below that imagine a future where intelligence is seamlessly built into everything.

For anyone interested in going deeper into this topic, Transforma Insights and Supermicro have good explainers. While 3GPP continues to work on AI, ML and IoT, AIoT as a concept is largely implementation driven rather than a standardised feature in itself.

In contrast, 3GPP is actively defining a different acronym: A-IoT, short for Ambient IoT.

Ambient IoT represents a major shift in connected device design. Instead of relying on batteries or frequent charging, Ambient IoT devices operate using energy harvested from their surroundings. This can include radio signals, light, heat, or motion. The technology supports both passive operation, where devices backscatter incoming RF signals, and active operation, where they harvest enough power to generate and transmit signals independently.

Unlike traditional IoT devices, Ambient IoT units are extremely low power, low cost, and very simple in design. They have a shorter range and lower data throughput than conventional wireless technologies, but they excel in scenarios where massive numbers of tiny, battery-free sensors can be deployed and left to operate with minimal maintenance.

This makes Ambient IoT well suited to applications such as environmental sensing, supply chain tracking, inventory monitoring, smart agriculture, and intelligent labelling. It also opens opportunities in consumer environments, from smart packaging to indoor positioning. With the right network support, these devices can operate indefinitely, enabling sustainable, large-scale sensing networks.

Ambient IoT is already included in 5G Advanced Release 19. For those interested in learning more, 3GPP has a detailed overview, Oppo has produced an excellent white paper, and LG Uplus has published a forward looking document exploring Ambient IoT in the context of 6G.

Both AIoT and Ambient IoT represent the next phase of connected intelligence. AIoT pushes computation and decision making closer to where data originates, while Ambient IoT removes power barriers and enables pervasive, maintenance-free connectivity. Together, they will support systems that are scalable, energy efficient and context aware.

As these technologies mature, we can expect a world where devices are not only always connected, but also constantly learning, adapting, and operating independently with minimal energy demands. The future of connectivity lies in this balance between intelligence and efficiency, and both AIoT and Ambient IoT will play a crucial role in shaping it.

Thursday, 11 September 2025

Dummy Loads in RF Testing for Dummies

I have spent many years in the Test and Measurement industry, both as a hands-on engineer testing solutions in the lab and as a field engineer testing various solutions pre- and post-deployment. Over the years I have used many attenuators and dummy loads, so it was nice to finally see the different types of dummy loads explained, and how they work, in this R&S video.

So what exactly is a dummy load? At its core, it is a special kind of termination designed to absorb radio frequency energy safely. Instead of letting signals radiate into the air, a dummy load converts the RF power into heat. Think of it as an antenna that never actually transmits anything. This makes it invaluable when testing transmitters because you can run them at full power without interfering with anyone else’s spectrum.

Ordinary terminations are widely used in test setups but they are usually only good for low power. If you need to deal with more than about a watt of power, that is where dummy loads come in. Depending on their design, they can handle anything from a few watts to many kilowatts. To survive this, dummy loads use cooling methods. The most common are dry loads with large heatsinks that shed heat into the air. For higher powers, wet loads use liquids such as water or oil to absorb and move heat away more efficiently. Some combine both air and liquid cooling to push the limits even further.

Good dummy loads are not just about heat management. They also need to provide a stable impedance match, usually 50 ohms, across a wide frequency range. This minimises reflections and ensures accurate testing. Many dummy loads cover frequencies up to several gigahertz with low standing wave ratios. Ultra broadband designs, such as the Rohde & Schwarz UBL100, go up to 18 GHz and can safely absorb power levels in the kilowatt range.
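The relationship between impedance match, reflections and standing wave ratio is worth making concrete. A small sketch of the textbook formulas (the 55 ohm example load is an assumption for illustration):

```python
import math

def reflection_coeff(z_load: complex, z0: float = 50.0) -> float:
    """Magnitude of the reflection coefficient at the load."""
    return abs((z_load - z0) / (z_load + z0))

def vswr(z_load: complex, z0: float = 50.0) -> float:
    """Voltage standing wave ratio for a given load impedance."""
    g = reflection_coeff(z_load, z0)
    return (1 + g) / (1 - g)

def return_loss_db(z_load: complex, z0: float = 50.0) -> float:
    """Return loss in dB: higher means less reflected power."""
    return -20 * math.log10(reflection_coeff(z_load, z0))

# A slightly mismatched 55-ohm load on a 50-ohm system:
print(round(vswr(55 + 0j), 2))           # 1.1
print(round(return_loss_db(55 + 0j), 1)) # 26.4 dB
```

A VSWR of 1.1 corresponds to over 26 dB of return loss, meaning well under one percent of the transmitter's power is reflected back, which is exactly the behaviour a well-matched dummy load is designed to provide.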

Some dummy loads even add extra features. A sampling port allows you to monitor the input signal at a reduced level. Interlock protection can shut down a connected transmitter if the load gets too hot. These touches make dummy loads more versatile and safer in real-world use.

In day-to-day testing, dummy loads help not only to protect transmitters but also to get accurate measurements. By acting as a perfectly matched, non-radiating antenna, they give engineers confidence that they are measuring the true transmitter output. They can also be used to quickly check feedlines and connectors by substituting them in place of an antenna.

Rohde & Schwarz have put together a useful explainer video that covers all of this in a simple, visual way. You can watch it below to get a clear overview of dummy loads and why they matter so much in RF testing.

Monday, 1 September 2025

Software Efficiency Matters as Much as Hardware for Sustainability

When we talk about making computing greener, the conversation often turns to hardware. Data centres have become far more efficient over the years. Power supply units that once wasted 40% of energy now operate above 90% efficiency. Cooling systems that once consumed several times the power of the servers themselves have been dramatically improved. The hardware people have delivered.

But as Bert Hubert argues in his talk “Save the world, write more efficient code”, software has been quietly undoing many of those gains. Software bloat has outpaced hardware improvements. What once required careful optimisation is now often solved by throwing more cloud resources at the problem. That keeps systems running, but at a significant energy cost.

The hidden footprint of sluggish software

Sluggish systems are not just an annoyance. Every loading spinner, every second a user waits, often means CPUs are running flat out somewhere in the chain. At scale, those wasted cycles add up to megawatt-hours of electricity. Studies suggest that servers are responsible for around 4% of global CO₂ emissions, on par with the entire aviation industry. That is not a small share, and it makes efficient software a climate issue.

Hubert points out that the difference between badly written code, reasonable code, and highly optimised code can easily span a factor of 100 in computing requirements. He demonstrates this with a simple example: generating a histogram of Dutch house numbers from a dataset of 9.9 million addresses.

  • A naïve Python implementation took 12 seconds and consumed over 500 joules of energy per run.
  • A straightforward database query reduced this to around 20 joules.
  • Using DuckDB, a database optimised for analytics, the same task dropped to just 2.5 joules and completed in milliseconds.

The user experience also improved dramatically. What once required a long wait became effectively instantaneous.
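The shape of that comparison is easy to reproduce. The sketch below is not Hubert's code: it uses a small synthetic dataset and the standard library's sqlite3 in place of DuckDB, but it illustrates the same principle, that a declarative query engine does the aggregation far more efficiently than an explicit interpreted loop:

```python
import random
import sqlite3

# Synthetic stand-in for the Dutch address dataset: 100k house numbers.
house_numbers = [random.randint(1, 200) for _ in range(100_000)]

# Naive approach: an explicit Python loop over every row.
hist_loop: dict[int, int] = {}
for n in house_numbers:
    hist_loop[n] = hist_loop.get(n, 0) + 1

# Database approach: one declarative GROUP BY, executed in compiled code.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE addresses (house_number INTEGER)")
con.executemany("INSERT INTO addresses VALUES (?)",
                [(n,) for n in house_numbers])
hist_sql = dict(con.execute(
    "SELECT house_number, COUNT(*) FROM addresses GROUP BY house_number"))

# Both approaches agree on the answer; they differ wildly in cost.
assert hist_loop == hist_sql
```

At 9.9 million rows, and with an analytics-optimised engine like DuckDB, that cost gap is what turned 500 joules into 2.5.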

From data centres to “data sheds”

The point is not just academic. If everyone aimed for higher software efficiency, Hubert suggests, many data centres could be shrunk to the size of a shed. Unlike hardware, where efficiency can be bought, software efficiency has to be designed and built. It requires time, effort and, crucially, management permission to prioritise performance over simply shipping features.

Netflix provides a striking example. Its custom Open Connect appliances deliver around 45,000 video streams at under 10 milliwatts per user. By investing heavily in efficiency, they proved that optimised software and hardware together can deliver enormous gains.

The cloud and client-side challenge

The shift to the cloud has created perverse incentives. In the past, if your code was inefficient, the servers would crash and force a rewrite. Now, organisations can simply spin up more cloud instances. That makes it too easy to ignore software waste and too tempting to pass the costs into ever-growing cloud bills. Those costs are not only financial, but also environmental.

On the client side, the problem is subtler but still real. While loading sluggish web apps may not burn as much power as a data centre, the sheer number of devices adds up. Hubert measured that opening LinkedIn on a desktop consumed around 45 joules. Scaled to hundreds of millions of users, even modest inefficiencies start to look like power plants.

Sometimes the situation is worse. Hubert found that simply leaving open.spotify.com running in a browser kept his machine burning an additional 45 watts continuously, due to a rogue worker thread. With hundreds of millions of users, that single design choice could represent hundreds of megawatts of wasted power globally.
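The back-of-envelope scaling behind that claim is simple to reproduce (the 10 million concurrent tabs figure below is an illustrative assumption, not one of Hubert's measurements):

```python
def fleet_power_mw(watts_per_user: float, concurrent_users: int) -> float:
    """Aggregate electrical draw, in megawatts, of a client-side
    inefficiency replicated across many simultaneous users."""
    return watts_per_user * concurrent_users / 1e6

# 45 W per open tab, a hypothetical 10 million concurrent tabs:
print(fleet_power_mw(45, 10_000_000))  # 450.0 MW
```

Four hundred and fifty megawatts is roughly the output of a mid-sized power station, all from a single rogue worker thread.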

Building greener software

The lesson is clear. Early sluggishness never goes away. If a system is slow with only a handful of users, it will be catastrophically wasteful at scale. The time to demand efficiency is at the start of a project.

There are also practical steps engineers and organisations can take:

  • Measure energy use during development, not just performance.
  • Audit client-side behaviour for long-lived applications.
  • Incentivise teams to improve efficiency, not just to ship quickly.
  • Treat large cloud bills as a proxy for emissions as well as costs.

As Hubert says, we may only be able to influence 4% of global energy use through software. But that is the same impact as the aviation industry. Hardware engineers have done their part. Now it is time for software engineers to step up.

You can watch Bert Hubert’s full talk below, where he shares both entertaining stories and sobering measurements that show why greener software is not only possible but urgently needed. The PDF of slides is here and his LinkedIn discussion here.

Tuesday, 10 June 2025

Cloud Native Telco Transformation Insights from T-Systems

The journey to becoming a cloud-native telco is not just a buzzword exercise. It requires a full-scale transformation of networks, business models, and operating cultures. At Mobile Europe’s Becoming a Cloud-Native Telco virtual event, Richard Simon, CTO at T-Systems International, outlined how telcos are grappling with this change, sharing insights from both successes and ongoing challenges.

Cloud-native is not just about adopting containers or orchestrating with Kubernetes. Richard Simon described it as a maturity model that demands strategic vision, architectural readiness, and cultural shift. The telco industry, long rooted in proprietary systems, is gradually moving towards software-defined infrastructure. Network Function Virtualisation (NFV) remains foundational, enabling operators to decouple traditional monolithic services and deliver them in a modular, digital-native manner, whether in private data centres or public clouds.

The industry is also seeing the rise of platform engineering. This evolution builds on DevOps and site reliability engineering (SRE) to create standardised internal developer platforms. These platforms reduce cognitive load for developers, increase consistency in toolchains and workflows, and enable a shift-left approach for operations and security. It is a critical step towards making innovation scalable and repeatable within telcos.

The cloud-native ecosystem has exploded in scope, with the CNCF landscape illustrating the diversity and maturity of open source components now in use. Telcos, once cautious about community-led projects, are now not only consuming but also contributing to open source. This openness is pivotal for achieving agility and interoperability in a multivendor environment.

With the increasing complexity of hybrid and multicloud strategies, avoiding vendor lock-in has become essential. Richard highlighted how telcos are optimising costs, improving resilience, and aligning workloads with the most suitable cloud environments. Multicloud is no longer a theoretical construct. It is operational reality. But with it comes the need for new thinking around cloud economics.

Cloud financial operations are no longer limited to cost tracking. They now include strategic frameworks (FinOps), real-time cost management, and a growing focus on application profiling. Profiling looks at how software consumes cloud resources, guiding developers to write more efficient code and enabling cost-effective deployments.

While generative AI dominates headlines, the pace of adoption in telco is deliberate. Richard pointed out that while investment in GenAI is widespread, only a small fraction of deployments have reached production. Most telcos are still in proof-of-concept or pilot phases, reflecting the technical and regulatory complexity involved.

Unlike some sectors, telcos face strict compliance requirements. GenAI inference stages, when customer data is processed, raise concerns around data sovereignty and privacy. As a result, telcos are exploring how to balance innovation with responsibility. Some are experimenting with fine-tuning foundational models internally, while others prefer to consume GenAI as a service, depending on use cases ranging from network automation to document processing.

GenAI is a high-performance computing (HPC) workload. Training large models requires significant infrastructure, making decisions around build versus buy critical. Richard outlined three tiers of AI adoption: infrastructure-as-a-service for DIY approaches, foundation-model-as-a-service for fine-tuning, and software-as-a-service for fully hosted solutions. Each tier comes with trade-offs in control, cost, and complexity.

Looking ahead, three themes stood out in Richard’s conclusions.

First, the era of AI agents is beginning. These autonomous systems, capable of reasoning and acting across complex tasks, will be the next experimentation frontier. Pilots in 2025 and 2026 will pave the way for a broader agentic AI economy.

Second, cloud economics continues to evolve. Operators must invest in cost visibility and governance rather than reactively scaling back cloud usage. The emergence of profiling and observability tooling is helping align cost with performance and business value.

Third, sovereignty is rising in importance. Telcos must ensure control over data and infrastructure, not only to comply with regional regulations but also to maintain intellectual property and operational resilience. Sovereign cloud models, abstracted control planes, and localised inference infrastructure are becoming strategic imperatives.

His complete talk is embedded below:

The cloud-native journey is not linear. It requires operators to architect for modularity, align with open ecosystems, and stay grounded in real-world economics. As Richard Simon’s keynote showed, the transformation is well underway, but its success will depend on how telcos integrate cloud, AI, and sovereignty into a coherent and adaptable strategy.

Related Posts

Tuesday, 20 May 2025

A Beginner’s Guide to the 5G Air Interface

Some of you may know that I manage quite a few LinkedIn groups, and I often come across advanced presentations and videos on LTE and 5G. From time to time, people reach out asking where they can access a free beginner-level course on the 5G Air Interface. With that in mind, I was pleased to see that Mpirical have shared a recorded webinar on YouTube titled Examining the 5G Air Interface, which seems ideal for anyone looking for a basic introduction.

Philip Nugent, Senior Technical Trainer at Mpirical, explains some of the key terms, concepts and capabilities of 5G New Radio. The webinar provides a high level overview of several important topics including 5G frequency bands and ranges, massive MIMO and beamforming, protocols and resources, and a look at the typical operation of a 5G device. It concludes with a short trainer Q&A session.

The video is embedded below:

Related Posts

Thursday, 24 April 2025

An Introduction to OSS/BSS in Mobile Networks

When discussing mobile networks, much of the focus tends to be on radio access technologies, spectrum, or core network evolution. However, two often overlooked yet critical components in the operational backbone of any telecom network are Operations Support Systems (OSS) and Business Support Systems (BSS).

While not typically defined in 3GPP specifications, OSS and BSS play a vital role in keeping the network functional and the business viable.

OSS is responsible for network-facing tasks such as monitoring, configuration, fault management, and performance optimisation, commonly summarised using the FCAPS model: Fault, Configuration, Accounting, Performance, and Security. It ensures the network operates smoothly and supports the delivery of services.
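The FCAPS breakdown can be made tangible with a small sketch. The five areas are standard; the example tasks mapped to each one below are hypothetical, chosen purely to illustrate the kind of work that falls under each heading:

```python
from enum import Enum

class Fcaps(Enum):
    """The five FCAPS management areas commonly used to describe OSS scope."""
    FAULT = "fault"
    CONFIGURATION = "configuration"
    ACCOUNTING = "accounting"
    PERFORMANCE = "performance"
    SECURITY = "security"

# Hypothetical everyday OSS tasks, one per FCAPS area, for illustration only
EXAMPLE_TASKS = {
    Fcaps.FAULT: "correlate alarms and open trouble tickets",
    Fcaps.CONFIGURATION: "push parameter changes to network elements",
    Fcaps.ACCOUNTING: "collect usage records for settlement",
    Fcaps.PERFORMANCE: "track KPIs such as drop rate and throughput",
    Fcaps.SECURITY: "manage access control to network elements",
}

for area in Fcaps:
    print(f"{area.name:>13}: {EXAMPLE_TASKS[area]}")
```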

BSS, on the other hand, deals with customer-facing functions like CRM, order handling, billing, and revenue management. It ensures that customers can purchase, use, and be billed accurately for services, forming the foundation for business growth and customer satisfaction.

To help introduce this important topic, we’ve created a short video explainer that outlines:

  • The basic architecture of OSS and BSS
  • Their roles within a multi-vendor network
  • Key interfaces such as EMS, NMS, and TMN
  • Why OSS/BSS are critical for digital transformation and operational efficiency

The video is embedded below and the slides are available here:

While OSS/BSS may not be headline features in 5G or 6G discussions, they remain the unsung heroes that ensure networks are operational, customers are happy, and services are profitable.

Let us know your thoughts in the comments or on social media. We're always keen to learn from those who work closely with these systems.

Related Posts

Thursday, 17 April 2025

Towers, Masts and Poles: The Backbone of Telecom Infrastructure

We often walk past them without a second glance—towers, masts, and poles that quietly support the vast web of our modern telecommunications networks. But behind these unassuming structures lies a fascinating history and a critical role in enabling everything from phone calls to television broadcasts.

In a brilliant lecture hosted by the IET, Professor Nigel Linge (with support from Professor Andy Sutton) takes us on a journey through the evolution of telecom infrastructure. Starting from ancient beacons and Napoleonic-era semaphores to the iconic BT Tower and long wave radio transmitters, the talk connects the dots across centuries of innovation.

The lecture touches on early telegraphy using bare copper wires strung on porcelain insulators, the dawn of voice telephony, Marconi’s pioneering wireless transmissions, and the growth of regional radio and TV broadcasting in the UK. It also highlights how microwave relays and horn-reflector antennas became vital to long-distance communication, with the BT Tower serving as a key hub in the national network.

Whether it’s the humble telegraph pole or the towering masts on hilltops, each structure plays a part in delivering connectivity. This presentation offers a timely reminder of the physical foundations of our digital world—often overlooked, yet essential to our everyday lives.

Watch the full lecture below:

You can also read an article by the speakers covering many of the topics from the lecture here.

Related Posts:

Friday, 11 April 2025

Understanding ETSI’s Industry Specification Groups (ISGs) and Why They Matter

The European Telecommunications Standards Institute (ETSI) is a leading standards development organisation (SDO) recognised for producing globally applicable standards for ICT, including fixed, mobile, radio, converged, broadcast, and internet technologies. Based in Europe but with worldwide influence, ETSI provides an open and inclusive environment for industry players to collaborate on the development of future technologies.

A recent overview presentation of ETSI by Jan Ellsberger, ETSI's Director General, is available on the 3GPP website here.

ETSI's Industry Specification Groups (ISGs) are collaborative groups formed within ETSI to address emerging and often pre-standardisation topics in a flexible, fast, and open manner. They provide a platform for industry players, including companies, research organisations, and other stakeholders, to work together on technical specifications outside the constraints of formal standardisation processes.

Key Features of ISGs include:

  • Focus on innovation: ISGs often tackle new or rapidly evolving technologies, such as Network Functions Virtualisation (NFV), Quantum Key Distribution (QKD), and AI.
  • Open participation: Participation is open to ETSI members and non-members, although non-members pay a fee.
  • Faster timelines: ISGs are designed to deliver results quickly, often within 12–24 months, making them well-suited for fast-moving domains.
  • Flexible structure: They are less formal than ETSI Technical Committees, which allows more agile collaboration.

ISGs produce documents such as:

  • Group Specifications (GS) – technical specifications that can later be taken up by formal standardisation bodies.
  • Group Reports (GR) – informative reports including use cases, frameworks, or recommendations.

ISGs help shape the direction of future standards and industry practices by offering an open, collaborative environment for technical consensus. They often bridge the gap between research and standardisation.

Dr Howard Benn, a mobile industry veteran with contributions spanning from GSM to 5G, has created a short video on ETSI’s ISGs, embedded below:

Related Posts:

Friday, 17 January 2025

Lessons from ANRW ’24: AI and Cloud in 5G/6G Systems

The ACM, IRTF & ISOC Applied Networking Research Workshops (ANRW) offer a vibrant forum for researchers, vendors, network operators, and the Internet standards community to exchange emerging results in applied networking research. To foster collaboration across these diverse groups, ANRW events are co-located with IETF standards meetings, typically held annually in July. These workshops prioritise interactive discussions and engagement, complementing traditional paper presentations.

ANRW '24, held on 23 July 2024 at the Hyatt Regency Vancouver, brought together industry leaders and academics to share insights on advancing networking technologies. Among the standout sessions was a keynote presentation by Sharad Agarwal, Senior Principal Researcher at Microsoft. His keynote, titled "Lessons I Learned in Leveraging AI+ML for 5G/6G Systems", highlighted pivotal themes influencing telecom and networking.

Sharad distilled his experiences into three key lessons, each underscored by examples of research and systems developed to address specific challenges in the telecom industry:

  1. Leverage Cloud Scale to Overcome Limitations of Deployed Protocols: He emphasised that the scale of cloud computing is critical to managing the massive demands of modern telecom networks. For instance, systems like TIPSY (Traffic Ingress Prediction SYstem) demonstrate how AI and ML can predict traffic ingress points across thousands of peering links, helping to avoid bottlenecks and ensure optimal traffic distribution.
  2. Custom Learning Algorithms vs. Off-the-Shelf Solutions: While bespoke algorithms offer higher precision for niche applications, their complexity and deployment challenges often outweigh their benefits. Sharad argued for balancing innovation with practicality, advocating for leveraging pre-built AI and ML models wherever possible to streamline integration.
  3. Mitigate Risks of AI Hallucinations through Careful System Design: Acknowledging the risks posed by unreliable AI outputs, he stressed the importance of robust system design. Using LLexus, an AI-driven incident management system, as an example, Sharad highlighted techniques like iterative plan generation, validation rules, and human auditing as essential safeguards against AI errors.

The talk also delved into broader trends shaping the telecom landscape:

  • Cloudification of Telecom Infrastructure: The shift from hardware-based to software-based network functions, underpinned by cloud-native principles, has revolutionised telco infrastructure. This transformation facilitates rapid upgrades, reduces costs, and introduces new opportunities for AI-driven analytics.
  • Challenges in Performance and Reliability: Ensuring high throughput, low latency, and carrier-grade reliability in cloudified networks remains a significant hurdle. Innovations like PAINTER and LLexus demonstrate how AI and ML are being applied to optimise these aspects.
  • Emerging Business Models and Private Deployments: The integration of new radio technologies and virtualised network functions is driving novel revenue streams, such as private 5G/6G networks for mission-critical applications like factory automation.

Finally, Sharad’s keynote underscored how AI, ML, and cloud computing are reshaping the telecom industry, particularly in the era of 5G and the forthcoming 6G. By leveraging the scale of cloud infrastructure, balancing algorithmic complexity, and designing systems with resilience against AI pitfalls, the industry is poised to meet its ambitious goals of high bandwidth, low latency, and unparalleled reliability.

The video of his talk is embedded below and the slides are available here:

Related Posts:

Thursday, 19 December 2024

Evolution and Impact of Cellular Location Services (LCS)

Location Services (LCS) have been standardized by 3GPP across all major generations of cellular technology, including 2G (GSM), 3G (UMTS), 4G (LTE), and 5G. These services enable applications to determine the geographical location of mobile devices, facilitating crucial functions such as emergency calls, navigation, and location-based advertising. The consistent adoption of standardized protocols ensures interoperability, scalability, and reliability, empowering mobile operators and device manufacturers to implement location services in a globally consistent manner.

The evolution of LCS technology has seen remarkable advancements with each generation of cellular networks. Early implementations in 2G and 3G relied on basic techniques such as Cell-ID, Timing Advance and triangulation, which offered limited accuracy and were suitable only for rudimentary use cases.
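To give a feel for why these early techniques were so coarse, here is a back-of-the-envelope sketch of GSM Timing Advance (a simplified illustration, not an implementation of any standardised positioning procedure). The network measures round-trip delay in units of one GSM bit period (48/13 µs), so each step only narrows the mobile's distance from the cell to a ring roughly 550 m wide:

```python
# One GSM timing advance step corresponds to a round-trip delay of one
# bit period (48/13 microseconds), i.e. c * T / 2 of one-way distance.
C = 299_792_458                # speed of light, m/s
BIT_PERIOD = 48 / 13 * 1e-6    # GSM bit period, seconds

def ta_to_distance_m(ta: int) -> float:
    """Approximate distance to the serving cell for a GSM timing advance value."""
    if not 0 <= ta <= 63:
        raise ValueError("GSM timing advance is a 6-bit value (0-63)")
    return ta * C * BIT_PERIOD / 2

print(f"One TA step  -> ~{ta_to_distance_m(1):.0f} m of range uncertainty")
print(f"TA = 63      -> ~{ta_to_distance_m(63) / 1000:.1f} km maximum cell range")
```

With a granularity of roughly 553 m per step, Cell-ID plus Timing Advance could at best place a subscriber in a broad ring around the cell site, which is why later generations needed entirely new positioning methods.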

The introduction of LTE in 3GPP Release 9 marked a significant improvement, integrating support for regulatory services like emergency call localization and commercial applications such as mapping. LTE networks commonly employ global navigation satellite systems (GNSS), like GPS, to determine locations. However, alternative methods using the LTE air interface are crucial in scenarios where GNSS signals are obstructed, such as indoors or in dense urban environments. An LTE network can support a horizontal positioning accuracy of 50m for 80% of mobiles, a vertical positioning accuracy of 5m, and an end-to-end latency of 30 seconds.

5G networks have further improved LCS with high-bandwidth, low-latency communication and architectural enhancements. These innovations enable critical applications like autonomous vehicles, smart cities, and industrial IoT. In Release 15, 5G devices support legacy LTE location protocols through the Gateway Mobile Location Centre (GMLC). From Release 16, the Network Exposure Function (NEF) streamlines location requests for modern applications. A 5G network is expected to deliver a horizontal positioning accuracy of 3m indoors and 10m outdoors, a vertical positioning accuracy of 3m in both environments, and an end-to-end latency of one second.
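As a toy illustration of how these quoted targets compare across generations, the sketch below simply treats the figures cited above as pass/fail thresholds; the profile names and the `meets_target` helper are invented for this example:

```python
# Positioning targets quoted in the text (metres / seconds),
# treated here as simple pass/fail thresholds for illustration only
TARGETS = {
    "LTE": {"horizontal_m": 50, "vertical_m": 5, "latency_s": 30},
    "5G indoor": {"horizontal_m": 3, "vertical_m": 3, "latency_s": 1},
    "5G outdoor": {"horizontal_m": 10, "vertical_m": 3, "latency_s": 1},
}

def meets_target(profile, horizontal_err_m, vertical_err_m, latency_s):
    """Check a hypothetical position fix against one of the target profiles."""
    t = TARGETS[profile]
    return (horizontal_err_m <= t["horizontal_m"]
            and vertical_err_m <= t["vertical_m"]
            and latency_s <= t["latency_s"])

# A fix that is good enough for LTE falls well short of the 5G indoor target
print(meets_target("LTE", 20, 4, 10))        # True
print(meets_target("5G indoor", 20, 4, 10))  # False
```

The jump from tens of metres and tens of seconds to metre-level accuracy with one-second latency is what opens up the autonomous-vehicle and industrial use cases mentioned above.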

The standardization efforts of 3GPP have ensured that location services meet stringent requirements for accuracy, privacy, and security. Emergency services, for instance, benefit from these standards through Enhanced 911 (E911) in the United States and similar mandates globally, which require precise location reporting for mobile callers. Furthermore, standardization fosters innovation by providing a common foundation on which developers can create new location-based services and applications. As cellular networks continue to evolve, 3GPP’s standardized LCS will remain a cornerstone in bridging connectivity with the physical world, enabling smarter, safer, and more connected societies.

Mpirical recently shared a video exploring the concepts and drivers of Location Services (LCS). It's embedded below:

If you want to learn more about LCS, check out Mpirical's training course on this topic, which provides an end-to-end exploration of the techniques and technologies involved, including the driving factors, standardization, requirements, architectural elements, protocols and protocol stacks, 2G-5G LCS operation and location finding techniques (overview and specific examples).

Mpirical is a leading provider of telecoms training, specializing in mobile and wireless technologies such as 5G, LTE, and IoT. They boast a course catalogue of wide-ranging topics and technologies for all levels, with each course thoughtfully broken down into intuitive learning modules.

Related Posts

Tuesday, 10 December 2024

Tutorial Session on Non-Terrestrial Networks (NTNs) and 3GPP Standards from 5G to 6G

Over five years ago, we introduced the concept of Non-Terrestrial Networks (NTN) in our NTN tutorial and wrote an IEEE ComSoc article, "The Role of Non-Terrestrial Networks (NTN) in Future 5G Networks." Since then, the landscape has seen remarkable transformations with advancements in standards, innovations in satellite connectivity, and progress in real-world applications.

The 2024 Global Forum on Connecting the World from the Skies, held on November 25–26, served as a pivotal platform for stakeholders across the spectrum: policymakers, industry leaders, and technical experts. Jointly organized by the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space & Technology Commission (CST), the event underscored NTNs' growing importance in advancing global connectivity.

A key highlight of the forum was Tutorial Session 2, delivered by Gino Masini, Principal Researcher, Standardization at Ericsson. The session, titled "Non-Terrestrial Networks and 3GPP Standards from 5G to 6G," provided an in-depth look at the evolution of NTNs and their integration into mobile networks.

Key Takeaways from the Session included:

  • 3GPP Standardization Milestones:
    • Release 17: NTN integration began, paving the way for seamless 5G coverage.
    • Release 18: Enhanced features and capabilities, focusing on improved satellite-terrestrial convergence.
    • Release 19 (Ongoing): Lays the foundation for natively integrated NTN frameworks in 6G.
  • Unified Networks in 6G: A focus on radio access network architecture demonstrated how NTN can evolve from a supporting role to becoming an intrinsic component of future 6G systems.
  • Industry Impact: The session highlighted how convergence between satellite and terrestrial networks is no longer aspirational but a tangible reality, fostering a truly unified global connectivity ecosystem.

With NTNs now integral to 3GPP's vision, the groundwork has been laid for scalable satellite connectivity that complements terrestrial networks. The insights shared at the forum emphasize the importance of collaboration across industry and standards organizations to unlock the full potential of NTNs in both 5G and 6G.

For those interested, the full tutorial slides and session video are embedded below.

Gino has kindly shared the slides that can be downloaded from here.

Related Posts

Friday, 15 November 2024

RAN, AI, AI-RAN and Open RAN

The Japanese MNO SoftBank is taking an active role in trying to bring AI to the RAN. In a research story published recently, they explain that AI-RAN integrates AI into mobile networks to enhance performance and enable low-latency, high-security services via distributed AI data centres. This innovative infrastructure supports applications like real-time urban safety monitoring and optimized network throughput. Through the AI-RAN Alliance, SoftBank collaborates with industry leaders to advance technology and create an ecosystem for AI-driven societal and industrial solutions.

This video provides a nice short explanation of what AI-RAN means:

SoftBank's recent developments in AI-RAN technology further its mission to integrate AI with mobile networks, highlighted by the introduction of "AITRAS." This converged solution leverages NVIDIA's Grace Hopper platform and advanced orchestrators to unify vRAN and AI applications, enabling efficient and scalable networks. By collaborating with partners like Red Hat and Fujitsu, SoftBank aims to commercialize AI-RAN globally, addressing the demands of next-generation connectivity. Together, these initiatives align with SoftBank's vision of transforming telecommunications infrastructure to power AI-driven societies. Details are available on SoftBank's page here.

Last month, theNetworkingChannel hosted a webinar looking at 'AI-RAN and Open RAN: Exploring Convergence of AI-Native Approaches in Future Telecommunication Technologies'. The slides have not been shared, but the details of the speakers are available here. The webinar is embedded below:

NVIDIA has a lot more technical details available on their blog post here.

Related Posts