
Monday, September 19, 2022

Is there a compelling Business Case for 5G Network Slicing in Public Networks?

Since the industry realised what the 5G Network Architecture would look like, Network Slicing has been touted as the killer business case that will allow mobile operators to generate revenue from new sources.

Last month ABI Research said in a press release:

According to global technology intelligence firm ABI Research, 5G slicing revenue is expected to grow from US$309 million in 2022 to approximately US$24 billion in 2028, at a Compound Annual Growth Rate (CAGR) of 106%. 

“5G slicing adoption falls into two main categories. One, there is no connectivity available. Two, there is connectivity, but there is not sufficient capacity, coverage, performance, or security. For the former, both private and public organizations are deploying private network slices on a permanent and ad hoc basis,” highlights Don Alusha, 5G Core and Edge Networks Senior Analyst at ABI Research. The second scenario is mostly catered by private networks today, a market that ABI Research expects to grow from US$3.6 billion to US$109 billion by 2023, at a CAGR of 45.8%. Alusha continues, “A sizable part of this market can be converted to 5G slicing. But first, the industry should address challenges associated with technology and commercial models. On the latter, consumers’ and enterprises’ appetite to pay premium connectivity prices for deterministic and tailored connectivity services remains to be determined. Furthermore, there are ongoing industry discussions on whether the value that comes from 5G slicing can exceed the cost required to put together the underlying slicing ecosystem.”
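As a quick sanity check of the headline numbers (my own back-of-the-envelope arithmetic, not part of the press release), growing from US$309 million in 2022 to roughly US$24 billion in 2028 does indeed work out to a CAGR of about 106%:

```python
# Back-of-the-envelope check of the quoted CAGR (my own arithmetic, not from the press release)
start, end, years = 0.309, 24.0, 6           # revenue in US$ billions, 2022 -> 2028
cagr = (end / start) ** (1 / years) - 1      # compound annual growth rate
print(f"Implied CAGR: {cagr:.1%}")           # ~106.6%, consistent with the quoted ~106%
```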

Earlier this year, Daryl Schoolar, Research Director at IDC, tackled this topic in his blog post:

5G network slicing, part of the 3GPP standards developed for 5G, allows for the creation of multiple virtual networks across a single network infrastructure, allowing enterprises to connect with guaranteed low latency. Using principles behind software-defined network and network virtualization, slicing allows the mobile operator to provide differentiated network experience for different sets of end users. For example, one network slice could be configured to support low latency, while another slice is configured for high download speeds. Both slices would run across the same underlying network infrastructure, including base stations, transport network, and core network.
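To make the slice concept in the paragraph above a little more concrete, here is a minimal, purely illustrative sketch of how a couple of slices might be described. The SST values 1, 2 and 3 (eMBB, URLLC, MIoT) are the 3GPP-standardised Slice/Service Types; everything else (field names, performance targets, helper code) is hypothetical.

```python
from dataclasses import dataclass

# Standardised Slice/Service Types (SST) from 3GPP; the rest is illustrative
SST_EMBB, SST_URLLC, SST_MIOT = 1, 2, 3

@dataclass
class NetworkSlice:
    name: str
    sst: int                  # Slice/Service Type
    sd: str                   # Slice Differentiator (operator-defined, hex string)
    target_latency_ms: float  # hypothetical per-slice performance targets
    target_dl_mbps: float

slices = [
    NetworkSlice("consumer-broadband", SST_EMBB,  "000001", target_latency_ms=50.0, target_dl_mbps=300.0),
    NetworkSlice("factory-control",    SST_URLLC, "0000A1", target_latency_ms=5.0,  target_dl_mbps=10.0),
]

for s in slices:
    print(f"S-NSSAI(sst={s.sst}, sd={s.sd}) -> {s.name}: "
          f"<= {s.target_latency_ms} ms latency, >= {s.target_dl_mbps} Mbps DL")
```

In a real network the S-NSSAI is carried in NAS/RRC signalling and slice selection is handled by the 5G core; the point of the sketch is only that a slice boils down to an identifier plus a set of performance expectations running over shared infrastructure.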

Network slicing differs from private mobile networks, in that network slicing runs on the public wide area network. Private mobile networks, even when offered by the mobile operator, use infrastructure and spectrum dedicated to the end user to isolate the customer’s traffic from other users.

5G network slicing is a perfect candidate for future business connectivity needs. Slicing provides a differentiated network experience that can better match the customer's performance requirements than traditional mobile broadband. Until now, there has been limited mobile network performance customization outside of speeds. 5G network slicing is a good example of telco service offerings that meet future of connectivity requirements. However, 5G network slicing also highlights the challenges mobile operators face with transformation in their pursuit of remaining relevant.

For 5G slicing to have broad commercial availability, and to provide a variety of performance options, several things need to happen first.

  • Operators need to deploy 5G Standalone (SA) using the new 5G mobile core network. Currently most operators use the 5G non-standalone (NSA) architecture that relies on the LTE mobile core. It might be the end of 2023 before the majority of commercial 5G networks are using the SA mode.
  • Spectrum is another hurdle that must be overcome. Operators still make most of their revenue from consumers, and do not want to compromise the consumer experience when they start offering network slicing. This means operators need more spectrum. In the U.S., among the three major mobile operators, only T-Mobile currently has a nationwide 5G mid-band spectrum deployment. AT&T and Verizon are currently deploying in mid-band, but that will not be completed until 2023.
  • 5G slicing also requires changes to the operator’s business and operational support systems (BSS/OSS). Current BSS/OSS solutions were not designed to handle the much larger set of parameters that slicing-based services introduce.
  • And finally, mobile operators still need to create the business propositions around commercial slicing services. Mobile operators need to educate businesses on the benefits of slicing and how slicing supports their different connectivity requirements. This could involve mobile operators developing industry specific partnerships to reach different business segments. All these things take time to be put into place.

Because of the enormity of the tasks needed to make 5G network slicing a commercial success, IDC currently has a very conservative outlook for this service through 2026. IDC believes it will be 2023 before there is general commercial availability of 5G network slicing. The exception is China, which is expected to have some commercial offerings in 2022 as it has the most mature 5G market. Even then, it will take until 2025 before global revenues from slicing exceed a billion U.S. dollars. In 2026 IDC forecasts slicing revenues will be approximately $3.2 billion. However, over 80% of those revenues will come out of China.

The 'Outspoken Industry Analyst' Dean Bubley believes that Network Slicing is one of the worst strategic errors made by the mobile industry, since the catastrophic choice of IMS for communications applications. In a LinkedIn post he explains:

At best, slicing is an internal toolset that might allow telco operations or product teams (or their vendors) to manage their network resources. For instance, it could be used to separate part of a cell's capacity for FWA, and dynamically adjust that according to demand. It might be used as an "ingredient" to create a higher class of service for enterprise customers, for instance for trucks on a highway, or as part of an "IoT service" sold by MNOs. Public safety users might have an expensive, artisanal "hand-carved" slice which is almost a separate network. Maybe next-gen MVNOs.

(I'm talking proper 3GPP slicing here - not rebranded QoS QCI classes, private APNs, or something that looks like a VLAN, which will probably get marketed as "slices")

But the idea that slicing is itself a *product*, or that application developers or enterprises will "buy a slice" is delusional.

Firstly, slices will be dependent on [good] coverage and network control. A URLLC slice likely won't work reliably indoors, underground, in remote areas, on a train, on a neutral-host network, or while roaming. This has been a basic failure of every differentiated-QoS monetisation concept for many years, and 5G's often-higher frequencies make it worse, not better.

Secondly, there is no mature machinery for buying, selling, testing, supporting, pricing and monitoring slices. No, the 5G Network Exposure Function won't do it all. I haven't met a Slice salesperson yet, or a Slice-procurement team.

Thirdly, a "local slice" of a national 5G network will run headlong into a battle with the desire for separate private/dedicated local 5G networks, which may well be cheaper and easier. It also won't work well with the enterprise's IT/OT/IP domains, out of the box.

Also, there are many challenges around multi-operator slices, device OS links to slice APIs, slice "boundary controllers" between operators, aligning RAN and core slices, regulatory question marks and much more.

There is a lot of discussion in the comments section that may be of interest to you, here.

My belief is that we will see lots of interesting use cases for slicing in public networks, but it will be difficult to monetise. The best operators will manage to use it to create some plans with guaranteed rates and low latency; it remains to be seen whether they can monetise those well enough.

For technical people and newbies, there are lots of Network Slicing resources on this blog. Here is another recent video from Mpirical:


Tuesday, February 1, 2022

Bug hunting in 5G Networks and Devices

Pentests, or penetration tests, are a form of ethical hacking: an authorized, simulated cyberattack on a computer system, performed to evaluate the security of the system. They are performed to identify weaknesses or vulnerabilities, including the potential for unauthorized parties to gain access to the system's features and data, as well as strengths, enabling a full risk assessment to be completed.

Sébastien Dudek, Founder and Security Engineer at PentHertz did a presentation at No Hat conference 2021. The outline of his talk says:

Expected to be released in 2021, we only see the early stage of 5G-NR connectivity in rare places around the world and we cannot talk yet about "real 5G" as current installations are put on the Non-Standalone mode (NSA) using 4G infrastructures. But in the meantime, it is important to get prepared for this upcoming technology and ways we can practically simulate real-world attacks in the future, with Standalone (SA) mode-capable devices and networks. In this presentation, we will see how to conduct practical security assignments on future 5G SA devices and networks, and how to investigate the protocol stack. To begin the presentation, we briefly present the differences with 2G-5G in terms of security applied to security assessment contexts, i.e. the limit we are left with, and how to circumvent them. Then we see how a 5G-NR security testbed looks like, and discuss what type of bugs are interesting to spot. Third, we make more sense about some attacks on devices by showing attacks that could be performed on the core side from the outside. Finally, we briefly introduce how we could move forward by looking at the 5G protocol stack and the state of the current mean.

Slides are available here and the video is embedded below:

A post on their website also looks at penetration testing of the standalone 5G core. The post contains a video as well, which can also be directly accessed here.

A new white paper from 5G Americas provides nearly annual updates around the topic of security in wireless cellular networks. The current edition addresses emerging challenges and opportunities, making recommendations for securing 5G networks in the context of the evolution to cloud-based and distributed networks. 

Additionally, the white paper provides insight into securing 5G in private, public, and hybrid cloud deployment models. Topics such as orchestration, automation, cloud-native security, and application programming interface (API) security are addressed. The transition from perimeter-based security to a zero-trust architecture to protect assets and data from external and internal threats is also discussed.


Tuesday, November 17, 2020

5G Non IP Data Delivery and Lightweight M2M (LwM2M) over NIDD

Earlier this year, MediaTek announced that its MT2625 NB-IoT chip had been validated for LwM2M over NIDD on SoftBank Corp.’s cellular network across Japan. This achievement marks the first global commercial readiness of LwM2M over NIDD; a secure, ultra-efficient IoT communications technique that is being adopted by operators worldwide. The benefits of LwM2M over NIDD include security improvements, cost-efficient scalability and reduced power consumption.

LwM2M over NIDD combines NIDD (Non-IP Data Delivery), a communication technique that carries data over NB-IoT without using an IP address, with the device management protocol LwM2M (Lightweight M2M) advocated by the Open Mobile Alliance. It's been a while since I wrote about the Open Mobile Alliance on this blog. OMA SpecWorks is the successor brand to the Open Mobile Alliance. You can read all about it here.


OMA SpecWorks’ LightweightM2M is a device management protocol designed for sensor networks and the demands of a machine-to-machine (M2M) environment. With LwM2M, OMA  SpecWorks has responded to demand in the market for a common standard for managing lightweight and low power devices on a variety of networks necessary to realize the potential of IoT. The LwM2M protocol, designed for remote management of M2M devices and related service enablement, features a modern architectural design based on REST, defines an extensible resource and data model and builds on an efficient secure data transfer standard called the Constrained Application Protocol (CoAP). LwM2M has been specified by a group of industry experts at the OMA SpecWorks Device Management Working Group and is based on protocol and security standards from the IETF.
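As a rough illustration of the LwM2M data model described above, objects, instances and resources are addressed with short numeric paths such as /3/0/0 (Object 3 'Device', instance 0, resource 0 'Manufacturer', per the OMA registry). The in-memory resource tree and read helper below are a hypothetical sketch, not a real LwM2M client.

```python
# Minimal sketch of LwM2M object/resource addressing (Object 3 = Device, per the OMA registry).
# The in-memory "device" dict and the read helper are purely illustrative.
DEVICE_OBJECT = 3
RES_MANUFACTURER, RES_MODEL, RES_SERIAL = 0, 1, 2

device = {
    (DEVICE_OBJECT, 0, RES_MANUFACTURER): "ExampleCorp",
    (DEVICE_OBJECT, 0, RES_MODEL): "nbiot-sensor-01",
    (DEVICE_OBJECT, 0, RES_SERIAL): "SN-0001",
}

def read(path: str):
    """Resolve an LwM2M-style path like '/3/0/0' against the local resource tree."""
    obj, inst, res = (int(p) for p in path.strip("/").split("/"))
    return device[(obj, inst, res)]

print(read("/3/0/0"))  # -> ExampleCorp
```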

You can get all the LwM2M resources here and the basic specs of 'Lightweight M2M 1.1: Managing Non-IP Devices in Cellular IoT Networks' here.

The 5G Americas whitepaper 'Wireless Technology Evolution Towards 5G: 3GPP Release 13 to Release 15 and Beyond' details the current architecture for 3GPP systems for IoT service provision and connectivity to external application servers. It also covers the Rel-13 Cellular IoT EPS Optimizations, which provide improved support for small data transfer over the control plane and user plane. Control Plane CIoT EPS Optimization transports user data (measurements, ID, status, etc.) via the MME by encapsulating it in NAS PDUs, reducing the total number of control plane messages needed for a short data transaction. Although designed for small, infrequent data packets, Control Plane CIoT EPS Optimization can also be used for larger data bursts depending on UE radio capability.

User data transported using the Control Plane CIoT EPS Optimization has special characteristics, such as different mobility anchor and termination nodes.

Therefore, the Preferred Network Behavior signaling must include information on:
  • Whether Control Plane CIoT EPS optimization is supported
  • Whether User Plane CIoT EPS optimization is supported
  • Whether Control Plane CIoT EPS optimization is preferred or whether User Plane CIoT EPS optimization is preferred
These optimizations have enabled:
  • Non-IP Data Delivery (NIDD) for both mobile-originated and mobile-terminated communications, using the SCEF (Service Capability Exposure Function) or SGi tunneling. However, it has to be taken into account that Non-IP PDUs may be lost and their sequence is not guaranteed
  • For IP data, the UE and MME may perform header compression based on Robust Header Compression (ROHC) framework
  • NB-IoT UE can attach but not activate any PDN connection
  • High latency communication handled by the buffering of downlink data (in the Serving GW or the MME)
  • SMS transfer
  • EPS Attach, TA Update and EPS Detach procedures for NB-IoT only UEs, with SMS service request
  • Procedures for connection suspend and resume are added
  • Support for transfer of user plane data without the need for using the Service Request procedure to establish Access Stratum context in the serving eNodeB and UE
When selecting an MME for a UE that is using the NB-IoT RAT, and/or for a UE that signals support for CIoT EPS Optimizations in RRC signaling, the eNodeB’s MME selection algorithm shall select an MME taking into account its Release 13 NAS signaling protocol.
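A small sketch may help picture the signalling described above. The three boolean flags mirror the Preferred Network Behavior items listed earlier; the NIDD helper is purely hypothetical and only illustrates the idea of small non-IP payloads being relayed via the SCEF (with no delivery or ordering guarantees) rather than over a normal IP PDN connection.

```python
from dataclasses import dataclass

@dataclass
class PreferredNetworkBehaviour:
    # Mirrors the three items listed above (illustrative field names, not 3GPP IE names)
    cp_ciot_supported: bool  # Control Plane CIoT EPS optimization supported
    up_ciot_supported: bool  # User Plane CIoT EPS optimization supported
    prefer_cp: bool          # which of the two is preferred

def deliver_non_ip_uplink(scef_queue: list, device_id: str, payload: bytes) -> None:
    """Hypothetical NIDD uplink: a small non-IP payload carried in NAS signalling and
    handed to the application via the SCEF; delivery and ordering are not guaranteed."""
    if len(payload) > 128:  # illustrative size limit for "small data"
        raise ValueError("NIDD is meant for small, infrequent payloads")
    scef_queue.append((device_id, payload))

pnb = PreferredNetworkBehaviour(cp_ciot_supported=True, up_ciot_supported=False, prefer_cp=True)
queue: list = []
deliver_non_ip_uplink(queue, "imsi-001010123456789", b"\x17\x2a")  # e.g. a 2-byte meter reading
print(pnb, queue)
```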

Mpirical has a nice short video explaining 5G Non IP Data Delivery. It is embedded below.

IoT has not taken off as expected, despite years of prophecies. While OMA SpecWorks is doing some fantastic work by defining a simplified approach to IoT deployment, its current member list doesn't have enough operators to drive the uptake required for its specs to be widely adopted. They would argue that it doesn't matter how many members there are, as the NIDD approach is completely optional and over-the-top. Let's wait and see how it progresses.


Friday, October 23, 2020

Positioning Techniques for 5G NR in 3GPP Release-16

I realised that I have not covered positioning techniques much on this blog, so this post should serve as a good summary of the latest positioning techniques in 5G.

Qualcomm has a nice short summary here: Release 16 supports multi-/single-cell and device-based positioning, defining a new positioning reference signal (PRS) used by various 5G positioning techniques such as roundtrip time (RTT), angle of arrival/departure (AoA/AoD), and time difference of arrival (TDOA). Roundtrip time (RTT) based positioning removes the requirement of tight network timing synchronization across nodes (as needed in legacy techniques such as TDOA) and offers additional flexibility in network deployment and maintenance. These techniques are designed to meet initial 5G requirements of 3 and 10 meters for indoor and outdoor use cases, respectively. In Release 17, precise indoor positioning functionality will bring sub-meter accuracy for industrial IoT use cases.
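To see why RTT relaxes the tight network-synchronisation requirement that TDOA needs, here is a toy multi-cell RTT example: each gNB–UE round trip implies a range, and the UE position is simply the point that best fits all the ranges, with no common clock across gNBs assumed. This is simplified illustrative geometry, not the 3GPP measurement or estimation procedure.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def rtt_to_range(rtt_seconds: float) -> float:
    """One-way distance implied by a measured round-trip time (toy model, ignoring Rx-Tx offsets)."""
    return C * rtt_seconds / 2.0

def locate(gnbs, ranges, step=0.5):
    """Tiny greedy grid-walk for the (x, y) point that best fits the measured ranges."""
    def cost(px, py):
        return sum((math.hypot(px - gx, py - gy) - r) ** 2
                   for (gx, gy), r in zip(gnbs, ranges))
    x, y = 0.0, 0.0
    while True:
        neighbours = [(x + dx * step, y + dy * step)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        nxt = min(neighbours, key=lambda p: cost(*p))
        if cost(*nxt) >= cost(x, y):
            return x, y  # no neighbouring move improves the fit
        x, y = nxt

gnbs = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]   # three gNB positions, metres
true_ue = (400.0, 250.0)
rtts = [2 * math.hypot(true_ue[0] - gx, true_ue[1] - gy) / C for gx, gy in gnbs]
print(locate(gnbs, [rtt_to_range(t) for t in rtts]))  # close to (400, 250)
```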

I wrote about the 5G Americas white paper titled, "The 5G Evolution: 3GPP Releases 16-17" highlighting new features in 5G that will define the next phase of 5G network deployments across the globe. The following is from that whitepaper:

Release-15 NR provides support for RAT-independent positioning techniques and Observed Time Difference Of Arrival (OTDOA) on LTE carriers. Release 16 extends NR to provide native positioning support by introducing RAT-dependent positioning schemes. These support regulatory and commercial use cases with more stringent requirements on latency and accuracy of positioning. NR enhanced capabilities provide valuable, enhanced location capabilities. Location accuracy and latency of positioning schemes improve by using wide signal bandwidth in FR1 and FR2. Furthermore, new schemes based on angular/spatial domain are developed to mitigate synchronization errors by exploiting massive antenna systems.

The positioning requirements for regulatory (e.g. E911) and commercial applications are described in 3GPP TR 38.855. For regulatory use cases, the following are the minimum performance requirements:

  • Horizontal positioning accuracy better than 50 meters for 80% of the UEs.
  • Vertical positioning accuracy better than 5 meters for 80% of the UEs.
  • End-to-end latency less than 30 seconds.

For commercial use cases, for which the positioning requirements are more stringent, the following are the starting-point performance targets:

  • Horizontal positioning accuracy better than 3 meters (indoors) and 10 meters (outdoors) for 80% of the UEs.
  • Vertical positioning accuracy better than 3 meters (indoors and outdoors) for 80% of the UEs.
  • End-to-end latency less than 1 second.

Figure 3.11 above shows the RAT-dependent NR positioning schemes being considered for standardization in Release 16:

  • Downlink time difference of arrival (DL-TDOA): A new reference signal known as the positioning reference signal (PRS) is introduced in Release 16 for the UE to perform downlink reference signal time difference (DL RSTD) measurements for each base station’s PRSs. These measurements are reported to the location server.
  • Uplink time difference of arrival (UL-TDOA): The Release-16 sounding reference signal (SRS) is enhanced to allow each base station to measure the uplink relative time of arrival (UL-RTOA) and report the measurements to the location server.
  • Downlink angle-of-departure (DL-AoD): The UE measures the downlink reference signal receive power (DL RSRP) per beam/gNB. Measurement reports are used to determine the AoD based on UE beam location for each gNB. The location server then uses the AoDs to estimate the UE position.
  • Uplink angle-of-arrival (UL-AOA): The gNB measures the angle-of-arrival based on the beam the UE is located in. Measurement reports are sent to the location server.
  • Multi-cell round trip time (RTT): The gNB and UE perform Rx-Tx time difference measurement for the signal of each cell. The measurement reports from the UE and gNBs are sent to the location server to determine the round trip time of each cell and derive the UE position.
  • Enhanced cell ID (E-CID): This is based on RRM measurements (e.g. DL RSRP) of each gNB at the UE. The measurement reports are sent to the location server.

UE-based measurement reports for positioning:

  • Downlink reference signal reference power (DL RSRP) per beam/gNB
  • Downlink reference signal time difference (DL RSTD)
  • UE RX-TX time difference

gNB-based measurement reports for positioning:

  • Uplink angle-of-arrival (UL-AoA)
  • Uplink reference-signal receive power (UL-RSRP)
  • UL relative time of arrival (UL-RTOA)
  • gNB RX-TX time difference

NR adopts a solution similar to that of LTE LPPa for Broadcast Assistance Data Delivery, which provides support for A-GNSS, RTK and OTDOA positioning methods. PPP-RTK positioning will extend the LPP A-GNSS assistance data messages based on compact “SSR messages” from the QZSS interface specifications. UE-based RAT-dependent DL-only positioning techniques are supported, where the positioning estimation will be done at the UE based on assistance data provided by the location server.


Rohde & Schwarz have a 5G overview presentation here. This picture from that presentation is a good summary of the 3GPP Release-16 5G NR positioning techniques. This nice short video on "Release 16 Location Based Services Requirements" complements it very well.



Saturday, April 4, 2020

5G eXtended Reality (5G-XR) in 5G System (5GS)


We have been meaning to make a tutorial on augmented reality (AR), virtual reality (VR), mixed reality (MR) and extended reality (XR) for a while, and we have finally managed to do it. Embedded below are the video and slides for the tutorial, along with a playlist of different XR use cases from around the world.

If you are not familiar with the 5G Service Based Architecture (SBA) and 5G Core (5GC), it is best to check this earlier tutorial before going further. A lot of comments are generally about Wi-Fi being used indoors instead of 5G, and we completely agree. The 3GPP 5G architecture is designed to cater for any access in addition to 5G access. We have explained it here and here. This guest post also nicely explains Network Convergence of Mobile, Broadband and Wi-Fi.





XR use cases playlist



A lot of the info on this topic comes from Qualcomm, GSMA, 3GPP and 5G Americas whitepapers, all linked in the slides.



Sunday, January 19, 2020

2-step RACH Enhancement for 5G New Radio (NR)

5G Americas recently published a white paper titled "The 5G Evolution: 3GPP Releases 16-17", highlighting new features in 5G that will define the next phase of 5G network deployments across the globe. It's available here. One of its sections details the 2-step RACH enhancement that has been discussed in 3GPP for a while. The 2-step procedure would supersede today's 4-step procedure, reducing latency and optimising the signalling.


Here are the details from the 5G Americas whitepaper:

RACH stands for Random Access Channel, which carries the first message from the UE to the base station when it is powered on. In terms of Radio Access Network implementation, RACH design can be one of the most important and critical areas.
The contention-based random-access procedure from Release 15 is a four-step procedure, as shown in Figure 3.12. The UE transmits a contention-based PRACH preamble, also known as Msg1. After detecting the preamble, the gNB responds with a random-access response (RAR), also known as Msg2. The RAR includes the detected preamble ID, a time-advance command, a temporary C-RNTI (TC-RNTI), and an uplink grant for scheduling a PUSCH transmission from the UE known as Msg3. The UE transmits Msg3 in response to the RAR including an ID for contention resolution. Upon receiving Msg3, the network transmits the contention resolution message, also known as Msg4, with the contention resolution ID. The UE receives Msg4, and if it finds its contention-resolution ID it sends an acknowledgement on a PUCCH, which completes the 4-step random access procedure.

The four-step random-access procedure requires two round-trip cycles between the UE and the base station, which not only increases the latency but also incurs additional control-signaling overhead. The motivation of two-step RACH is to reduce latency and control-signaling overhead by having a single round-trip cycle between the UE and the base station. This is achieved by combining the preamble (Msg1) and the scheduled PUSCH transmission (Msg3) into a single message from the UE, known as MsgA, and by combining the random-access response (Msg2) and the contention resolution message (Msg4) into a single message from the gNB to the UE, known as MsgB (see Figure 3.13). Furthermore, for unlicensed spectrum, reducing the number of messages transmitted from the UE and the gNB reduces the number of LBT (Listen Before Talk) attempts.

Design targets for two-step RACH:

  • A common design for the three main uses of 5G, i.e. eMBB, URLLC and mMTC in licensed and unlicensed spectrum.
  • Operation in any cell size supported in Release 15, and with or without a valid uplink time alignment (TA).
  • Applicable to different RRC states, i.e. RRC_INACTIVE, RRC_CONNECTED and RRC_IDLE states.
  • All triggers for four-step RACH apply to two-step RACH including, Msg3-based SI request and contention-based beam failure recovery (CB BFR).

As described earlier, MsgA consists of a PRACH preamble and a PUSCH transmission, known as MsgA PRACH and MsgA PUSCH respectively. The MsgA PRACH preambles are separate from the four-step RACH preambles, but can be transmitted in the same PRACH Occasions (ROs) as the preambles of four-step RACH, or in separate ROs. The PUSCH transmissions are organized into PUSCH Occasions (POs), which span multiple symbols and PRBs with optional guard periods and guard bands between consecutive POs. Each PO consists of multiple DMRS ports and DMRS sequences, with each DMRS port/DMRS sequence pair known as a PUSCH resource unit (PRU). Two-step RACH supports at least one-to-one and multiple-to-one mapping between the preambles and PRUs.
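A tiny sketch of the preamble-to-PRU mapping idea just described; the counts and the simple modulo rule are invented for illustration and are not the actual 3GPP mapping.

```python
# Illustrative mapping of MsgA preambles onto PUSCH resource units (PRUs).
# The numbers and the modulo rule are made up for clarity; 3GPP defines the real mapping.
NUM_PREAMBLES = 64
NUM_PRUS = 16   # e.g. 4 PUSCH occasions x 4 DMRS port/sequence pairs

def preamble_to_pru(preamble_id: int) -> int:
    """Many-to-one mapping: several preambles share one PRU when PRUs are scarce."""
    return preamble_id % NUM_PRUS

print(preamble_to_pru(5), preamble_to_pru(21))  # both map to PRU 5 -> potential MsgA PUSCH collision
```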

After the UE transmits MsgA, it waits for the MsgB response from the gNB. There are three possible outcomes:

  1. gNB doesn’t detect the MsgA PRACH ➡ No response is sent back to the UE ➡ The UE retransmits MsgA or falls back to four-step RACH starting with a Msg1 transmission.
  2. gNB detects MsgA preamble but fails to successfully decode MsgA PUSCH ➡ gNB sends back a fallbackRAR to the UE with the RAPID (random-access preamble ID) and an uplink grant for the MsgA PUSCH retransmission ➡ Upon receiving the fallbackRAR, the UE falls back to four-step RACH with a transmission of Msg3 (a retransmission of the MsgA PUSCH).
  3. gNB detects MsgA and successfully decodes MsgA PUSCH ➡ gNB sends back a successRAR to the UE with the contention resolution ID of MsgA ➡ The reception of the successRAR successfully completes the two-step RACH procedure.

As described earlier, MsgB consists of the random-access response and the contention-resolution message. The random-access response is sent when the gNB detects a preamble but cannot successfully decode the corresponding PUSCH transmission. The contention resolution message is sent after the gNB successfully decodes the PUSCH transmission. MsgB can contain a backoff indication, fallbackRAR and/or successRAR. A single MsgB can contain the successRAR of one or more UEs. The fallbackRAR consists of the RAPID, an uplink grant to retransmit the MsgA PUSCH payload, and a time-advance command. The successRAR consists of at least the contention resolution ID, the C-RNTI and the TA command.
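The three MsgA outcomes listed above reduce to a small decision tree at the UE. The sketch below is only a schematic of that logic, not an implementation of the 3GPP procedure.

```python
from enum import Enum, auto

class MsgBOutcome(Enum):
    NO_RESPONSE = auto()   # gNB did not detect the MsgA preamble
    FALLBACK_RAR = auto()  # preamble detected, MsgA PUSCH not decoded
    SUCCESS_RAR = auto()   # preamble detected and MsgA PUSCH decoded

def ue_two_step_rach(outcome: MsgBOutcome) -> str:
    """Schematic UE behaviour after transmitting MsgA (illustrative, not 3GPP pseudo-code)."""
    if outcome is MsgBOutcome.SUCCESS_RAR:
        return "two-step RACH complete (contention resolved)"
    if outcome is MsgBOutcome.FALLBACK_RAR:
        return "fall back to four-step RACH: retransmit the MsgA PUSCH payload as Msg3"
    return "retransmit MsgA, or fall back to four-step RACH starting with Msg1"

for o in MsgBOutcome:
    print(o.name, "->", ue_two_step_rach(o))
```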

For more details on this feature, see 3GPP RP-190711, “2-step RACH for NR” (Work-item description)

Monday, May 1, 2017

Variety of 3GPP IoT technologies and Market Status - May 2017



I have seen many people wondering whether so many different types of IoT technologies are needed, 3GPP or otherwise. The story behind that is that for many years 3GPP did not focus too much on creating an IoT variant of the standards. Their hope was that users would make use of LTE Cat 1 for IoT; later on they created LTE Cat 0 (see here and here).

The problem with this approach was that the market was ripe for different types of IoT connectivity that 3GPP could not yet satisfy. The table below is just an indication of the different types of technologies, but there are many others not listed here.


The most popular IoT (or M2M) technology to date is the humble 2G GSM/GPRS. A couple of weeks back Vodafone announced that it had reached a milestone of 50 million IoT connections worldwide. They are also adding roughly 1 million new connections every month. The majority of these are GSM/GPRS.

Different operators have been assessing their strategy for IoT devices. Some operators have either switched off or are planning to switch off their 2G networks. Others have a long-term plan for 2G networks and would rather switch off their 3G networks to refarm the spectrum to more efficient 4G. A small chunk of 2G, on the other hand, would be a good option for voice and existing IoT devices with a small amount of data transfer.

In fact this is one of the reasons that in Release-13 GSM is being enhanced for IoT. This new version is known as Extended Coverage – GSM – Internet of Things (EC-GSM-IoT). According to GSMA, "It is based on eGPRS and designed as a high capacity, long range, low energy and low complexity cellular system for IoT communications. The optimisations made in EC-GSM-IoT that need to be made to existing GSM networks can be made as a software upgrade, ensuring coverage and accelerated time to market. Battery life of up to 10 years can be supported for a wide range of use cases."

The most popular of the non-3GPP IoT technologies are Sigfox and LoRa. Both these technologies have gained significant ground and many backers in the market. This, along with the gap in the market and the need for low-power IoT technologies that transfer only small amounts of data and have a long battery life, motivated 3GPP to create new IoT technologies that were standardised as part of Rel-13 and are being further enhanced in Rel-14. A summary of these technologies can be seen below.


If you look at the first picture at the top (modified from Qualcomm's original here), you will see that these different IoT technologies, 3GPP or otherwise, address different needs. No wonder many operators are using the unlicensed LPWA IoT technologies as a starting point, hoping to complement them with 3GPP technologies when those are ready.

Finally, it looks like there is a difference in interpretation of the standards between Ericsson and Huawei, and as a result their implementations are incompatible. Hopefully this will be sorted out soon.


Market Status:

Telefonica has publicly said that Sigfox is the best way forward for the time being. No news about any 3GPP IoT technologies.

Orange has rolled out LoRa network but has said that when NB-IoT is ready, they will switch the customers on to that.

KPN deployed LoRa throughout the Netherlands, making it the first country in the world with complete coverage. They haven't ruled out NB-IoT when it becomes available.

SK Telecom completed nationwide LoRa IoT network deployment in South Korea last year. It sees LTE-M and LoRa as its 'two main IoT pillars'.

Deutsche Telekom has rolled out NarrowBand-IoT (NB-IoT) Network across eight countries in Europe (Germany, the Netherlands, Greece, Poland, Hungary, Austria, Slovakia, Croatia)

Vodafone is fully committed to NB-IoT. Their network is already operational in Spain and will be launching in Ireland and the Netherlands later this year.

Telecom Italia is in the process of launching NB-IoT. Water meters in Turin are already sending their readings using NB-IoT.

China Telecom, in conjunction with Shenzhen Water and Huawei launched 'World's First' Commercial NB-IoT-based Smart Water Project on World Water Day.

SoftBank is deploying LTE-M (Cat-M1) and NB-IoT networks nationwide, powered by Ericsson.

Orange Belgium plans to roll out nationwide NB-IoT and LTE-M networks in 2017.

China Mobile is committed to 3GPP based IoT technologies. It has conducted outdoor trials of NB-IoT with Huawei and ZTE and is also trialing LTE-M with Ericsson and Qualcomm.

Verizon has launched Industry’s first LTE-M Nationwide IoT Network.

AT&T will be launching an LTE-M network later this year in the US as well as Mexico.

Sprint said it plans to deploy LTE Cat 1 technology in support of the Internet of Things (IoT) across its network by the end of July.


Sunday, November 6, 2016

LTE, 5G and V2X

3GPP has recently completed the initial Cellular V2X standard. The following is from the news item:

The initial Cellular Vehicle-to-Everything (V2X) standard, for inclusion in the Release 14, was completed last week - during the 3GPP RAN meeting in New Orleans. It focuses on Vehicle-to-Vehicle (V2V) communications, with further enhancements to support additional V2X operational scenarios to follow, in Release 14, targeting completion during March 2017.
The 3GPP Work Item Description can be found in RP-161894.
V2V communications are based on D2D communications defined as part of ProSe services in Release 12 and Release 13 of the specification. As part of ProSe services, a new D2D interface (designated as PC5, also known as sidelink at the physical layer) was introduced and now as part of the V2V WI it has been enhanced for vehicular use cases, specifically addressing high speed (up to 250Kph) and high density (thousands of nodes).

...


For distributed scheduling (a.k.a. Mode 4) a sensing with semi-persistent transmission based mechanism was introduced. V2V traffic from a device is mostly periodic in nature. This was utilized to sense congestion on a resource and estimate future congestion on that resource. Based on estimation resources were booked. This technique optimizes the use of the channel by enhancing resource separation between transmitters that are using overlapping resources.
The design is scalable for different bandwidths including 10 MHz bandwidth.
Based on these fundamental link and system level changes there are two high level deployment configurations currently defined, and illustrated in Figure 3.
Both configurations use a dedicated carrier for V2V communications, meaning the target band is only used for PC5 based V2V communications. Also in both cases GNSS is used for time synchronization.
In “Configuration 1” scheduling and interference management of V2V traffic is supported based on distributed algorithms (Mode 4) implemented between the vehicles. As mentioned earlier the distributed algorithm is based on sensing with semi-persistent transmission. Additionally, a new mechanism where resource allocation is dependent on geographical information is introduced. Such a mechanism counters near far effect arising due to in-band emissions.
In “Configuration 2” scheduling and interference management of V2V traffic is assisted by eNBs (a.k.a. Mode 3) via control signaling over the Uu interface. The eNodeB will assign the resources being used for V2V signaling in a dynamic manner.
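A toy illustration of the Mode 4 idea described above: sense how busy each candidate resource has recently been, then book the least-loaded one semi-persistently for a number of periodic transmissions. The real sensing and resource-selection procedure in the 3GPP specifications is considerably more involved; this only captures the intuition.

```python
import random

NUM_RESOURCES = 20        # candidate sidelink resources per 100 ms period (illustrative)
RESELECTION_COUNTER = 10  # keep the booked resource for this many periods (illustrative)

def sense(history):
    """Average observed occupancy per resource over the sensing window (toy model)."""
    return [sum(col) / len(history) for col in zip(*history)]

def select_resource(history):
    """Pick the least-congested resource based on sensing, then book it semi-persistently."""
    load = sense(history)
    return min(range(NUM_RESOURCES), key=lambda r: load[r])

# Fake sensing window: 10 past periods of observed occupancy (True = busy, False = free)
history = [[random.random() < 0.3 for _ in range(NUM_RESOURCES)] for _ in range(10)]
booked = select_resource(history)
print(f"Booked resource {booked} for the next {RESELECTION_COUNTER} periodic transmissions")
```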

5G Americas has also published a whitepaper on V2X Cellular Solutions. From the press release:

Vehicle-to-Everything (V2X) communications and solutions enable the exchange of information between vehicles and much more - people (V2P), such as bicyclists and pedestrians for alerts, vehicles (V2V) for collision avoidance, infrastructure (V2I) such as roadside devices for timing and prioritization, and the network (V2N) for real time traffic routing and other cloud travel services. The goal of V2X is to improve road safety, increase the efficiency of traffic, reduce environmental impacts and provide additional traveler information services. 5G Americas, the industry trade association and voice of 5G and LTE for the Americas, today announced the publication of a technical whitepaper titled V2X Cellular Solutions that details new connected car opportunities for the cellular and automotive industries.




The whitepaper describes the benefits that Cellular V2X (C-V2X) can provide to support the U.S. Department of Transportation objectives of improving safety and reducing vehicular crashes. Cellular V2X can also be instrumental in transforming the transportation experience by enhancing traveler and traffic information for societal goals.

C-V2X is part of the 3GPP specifications in Release 14. 3GPP announced the completion of the initial C-V2X standard in September 2016. There is a robust evolutionary roadmap for C-V2X towards 5G with a strong ecosystem in place. C-V2X will be a key technology enabler for the safer, more autonomous vehicle of the future.

The whitepaper is embedded below:







Sunday, October 16, 2016

Inside 3GPP Release-13 - Whitepaper by 5G Americas


The following is from the 5G Americas press release:

The summary offers insight into the future of wireless broadband and how new requirements and technological goals will be achieved. The report updates Release 13 (Rel-13) features that are now completed at 3GPP and were not available at the time of the publication of a detailed 5G Americas report, Mobile Broadband Evolution Towards 5G: 3GPP Release 12 & Release 13 and Beyond, in June 2015.
The 3GPP standards have many innovations remaining for LTE to create a foundation for 5G.  Rel-12, which was finalized in December 2014, contains a vast array of features for both LTE and HSPA+ that bring greater efficiency for networks and devices, as well as enable new applications and services. Many of the Rel-12 features were extended into Rel-13.  Rel-13, functionally frozen in December 2015 and completed in March 2016, continues to build on these technical capabilities while adding many robust new features.
Jim Seymour, Principal Engineer, Mobility CTO Group, Cisco and co-leader of the 5G Americas report explained, “3GPP Release 13 is just a peek behind the curtain for the unveiling of future innovations for LTE that will parallel the technical work at 3GPP on 5G. Both LTE and 5G will work together to form our connected future.”
The numerous features in the Rel-13 standards include the following for LTE-Advanced:
  • Active Antenna Systems (AAS), including beamforming, Multi-Input Multi-Output (MIMO) and Self-Organizing Network (SON) aspects
  • Enhanced signaling to support inter-site Coordinated Multi-Point Transmission and Reception (CoMP)
  • Carrier Aggregation (CA) enhancements to support up to 32 component carriers
  • Dual Connectivity (DC) enhancements to better support multi-vendor deployments with improved traffic steering
  • Improvements in Radio Access Network (RAN) sharing
  • Enhancements to Machine Type Communication (MTC)
  • Enhanced Proximity Services (ProSe)
Some of the standards work in Rel-13 related to spectrum efficiency include:                                                                                                                       
  • Licensed Assisted Access for LTE (LAA) in which LTE can be deployed in unlicensed spectrum
  • LTE Wireless Local Area Network (WLAN) Aggregation (LWA) where Wi-Fi can now be supported by a radio bearer and aggregated with an LTE radio bearer
  • Narrowband IoT (NB-IoT) where lower power wider coverage LTE carriers have been designed to support IoT applications
  • Downlink (DL) Multi-User Superposition Transmission (MUST) which is a new concept for transmitting more than one data layer to multiple users without time, frequency or spatial separation
“The vision for 5G is being clarified in each step of the 3GPP standards. To understand those steps, 5G Americas provides reports on the developments in this succinct, understandable format,” said Vicki Livingston, Head of Communications for the association.

The whitepaper is embedded below:




Saturday, December 12, 2015

LTE-Advanced Pro (a.k.a. 4.5G)

3GPP announced back in October that the next evolution of the 3GPP LTE standards will be known as LTE-Advanced Pro. I am sure this will be shortened to LTE-AP in presentations and discussions but should not be confused with access points.

The 3GPP press release mentioned the following:

LTE-Advanced Pro will allow mobile standards users to associate various new features – from the Release’s freeze in March 2016 – with a distinctive marker that evolves the LTE and LTE-Advanced technology series.

The new term is intended to mark the point in time where the LTE platform has been dramatically enhanced to address new markets as well as adding functionality to improve efficiency.

The major advances achieved with the completion of Release 13 include: MTC enhancements, public safety features – such as D2D and ProSe - small cell dual-connectivity and architecture, carrier aggregation enhancements, interworking with Wi-Fi, licensed assisted access (at 5 GHz), 3D/FD-MIMO, indoor positioning, single cell-point to multi-point and work on latency reduction. Many of these features were started in previous Releases, but will become mature in Release 13.

As well as sign-posting the achievements to date, the introduction of this new marker confirms the need for LTE enhancements to continue along their distinctive development track, in parallel to the future proposals for the 5G era.


Some vendors have been exploring ways of differentiating the advanced features of Release-13 and have been using the term 4.5G. While 3GPP does not officially endorse 4.5G (or even 4G) terminology, the new LTE-Advanced Pro term has been welcomed by operators and vendors alike.

I blogged about Release-13 before, here, which includes a 3GPP presentation and 4G Americas whitepaper. Recently Nokia (Networks) released a short and sweet video and a whitepaper. Both are embedded below:



The Nokia whitepaper (table of contents below) can be downloaded from here.


Monday, September 14, 2015

3GPP Release-13 whitepapers and presentations

With 3GPP Release-13 due early to mid next year, there has been a flurry of presentations and whitepapers on this topic. This post provides some of them. I will try to maintain a list of whitepapers and presentations as part of this post as and when they are released.

1. June 2015: LTE Release 13 and road to 5G - Presented by Dino Flore, Chairman of 3GPP RAN, (Qualcomm Technologies Inc.)



2. Sep 2015: Executive Summary - Inside 3GPP Release 13 by 4G Americas



3. June 2015: Mobile Broadband Evolution Towards 5G: 3GPP Rel-12 & Rel-13 and Beyond by 4G Americas

4. April 2015: LTE release 13 – expanding the Networked Society by Ericsson


Sunday, August 25, 2013

Centralized SON


I was going through the presentation by SKT that I blogged about here and came across this slide above. SKT is clearly promoting the benefits of their C-SON (centralized SON) here.


The old 4G Americas whitepaper (here) explained the differences between the three approaches: Centralized (C-SON), Distributed (D-SON) and Hybrid (H-SON). An extract from that paper follows:

In a centralized architecture, SON algorithms for one or more use cases reside on the Element Management System (EMS) or a separate SON server that manages the eNB's. The output of the SON algorithms namely, the values of specific parameters, are then passed to the eNB's either on a periodic basis or when needed. A centralized approach allows for more manageable implementation of the SON algorithms. It allows for use case interactions between SON algorithms to be considered before modifying SON parameters. However, active updates to the use case parameters are delayed since KPIs and UE measurement information must be forwarded to a centralized location for processing. Filtered and condensed information are passed from the eNB to the centralized SON server to preserve the scalability of the solution in terms of the volume of information transported. Less information is available at the SON server compared to that which would be available at the eNB. Higher latency due to the time taken to collect UE information restricts the applicability of a purely centralized SON architecture to those algorithms that require slower response time. Furthermore, since the centralized SON server presents a single point of failure, an outage in the centralized server or backhaul could result in stale and outdated parameters being used at the eNB due to likely less frequent updates of SON parameters at the eNB compared to that is possible in a distributed solution.

In a distributed approach, SON algorithms reside within the eNB’s, thus allowing autonomous decision making at the eNB's based on UE measurements received on the eNB's and additional information from other eNB's being received via the X2 interface. A distributed architecture allows for ease of deployment in multi-vendor networks and optimization on faster time scales. Optimization could be done for different times of the day. However, due to the inability to ensure standard and identical implementation of algorithms in a multi-vendor network, careful monitoring of KPIs is needed to minimize potential network instabilities and ensure overall optimal operation.

In practical deployments, these architecture alternatives are not mutually exclusive and could coexist for different purposes, as is realized in a hybrid SON approach. In a hybrid approach, part of a given SON optimization algorithm are executed in the NMS while another part of the same SON algorithm could be executed in the eNB. For example, the values of the initial parameters could be done in a centralized server and updates and refinement to those parameters in response to the actual UE measurements could be done on the eNB's. Each implementation has its own advantages and disadvantages. The choice of centralized, distributed or hybrid architecture needs to be decided on a use-case by use case basis depending on the information availability, processing and speed of response requirements of that use case. In the case of a hybrid or centralized solution, a practical deployment would require specific partnership between the infrastructure vendor, the operator and possibly a third party tool company. Operators can choose the most suitable approach depending upon the current infrastructure deployment.
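As a very rough sketch of the centralised closed loop described in the extract above: collect KPIs from the cells, compute a parameter update, and apply it with a per-cycle limit so that a bad input cannot push the network far in one step. All KPI names, thresholds and step sizes below are invented for illustration.

```python
# Toy centralised-SON loop: adjust a handover-related parameter per cell based on a reported KPI.
# All names, thresholds and step sizes are illustrative, not from any vendor or standard.
MAX_STEP_DB = 1.0  # safety limit per optimisation cycle, to avoid runaway changes

def compute_update(kpis: dict) -> float:
    """Suggest a cell-individual-offset change (dB) from the handover failure rate."""
    hof = kpis["handover_failure_rate"]
    return 0.5 if hof > 0.05 else (-0.5 if hof < 0.01 else 0.0)

def apply_update(cell: dict, delta_db: float) -> None:
    """Clamp and apply the change, emulating the central server pushing config to the eNB."""
    delta_db = max(-MAX_STEP_DB, min(MAX_STEP_DB, delta_db))
    cell["cell_individual_offset_db"] += delta_db

cells = {
    "eNB-101/cell-1": {"cell_individual_offset_db": 0.0, "kpis": {"handover_failure_rate": 0.08}},
    "eNB-102/cell-2": {"cell_individual_offset_db": 0.0, "kpis": {"handover_failure_rate": 0.004}},
}
for name, cell in cells.items():
    apply_update(cell, compute_update(cell["kpis"]))
    print(name, cell["cell_individual_offset_db"])
```

The per-cycle clamp is the kind of safeguard that the "Runaway SON" concern mentioned later in this post is about.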

Finally, Celcite's CMO recently gave an interview on this topic on Thinksmallcell here. An extract is below:

SON software tunes and optimises mobile network performance by setting configuration parameters in cellsites (both large and small), such as the maximum RF power levels, neighbour lists and frequency allocation. In some cases, even the antenna tilt angles are updated to adjust the coverage of individual cells.

Centralised SON (C-SON) software co-ordinates all the small and macrocells, across multiple radio technologies and multiple vendors in a geographic region - autonomously updating parameters via closed loop algorithms. Changes can be as frequent as every 15 minutes– this is partly limited by the bottlenecks of how rapidly measurement data is reported by RAN equipment and also the capacity to handle large numbers of parameter changes. Different RAN vendor equipment is driven from the same SON software. A variety of data feeds from the live network are continuously monitored and used to update system performance, allowing it to adapt automatically to changes throughout the day including outages, population movement and changes in services being used.

Distributed SON (D-SON) software is autonomous within each small cell (or macrocell) determining for itself the RF power level, neighbour lists etc. based on signals it can detect itself (RF sniffing) or by communicating directly with other small cells.

LTE has many SON features already designed in from the outset, with the X2 interface specifically used to co-ordinate between small and macrocell layers, whereas 3G lacks SON standards and requires proprietary solutions.
C-SON software is available from a relatively small number of mostly independent software vendors, while D-SON is built-in to each small cell or macro node provided by the vendor. Both C-SON and D-SON will be needed if network operators are to roll out substantial numbers of small cells quickly and efficiently, especially when more tightly integrated into the network with residential femtocells.

Celcite is one of the handful of C-SON software solution vendors. Founded some 10 years ago, it has grown organically by 35% annually to 450 employees. With major customers in both North and South America, the company is expanding from 3G UMTS SON technology and is actively running trials with LTE C-SON.

Quite a few companies are claiming to be in the SON space, but Celcite would argue that there are perhaps only half a dozen with the capabilities for credible C-SON solutions today. Few companies can point to live deployments. As with most software systems, 90% of the issues arise when something goes wrong and it's those "corner cases" which take time to learn about and deal with from real-world deployment experience.

A major concern is termed "Runaway SON" where the system goes out of control and causes tremendous negative impact on the network. It's important to understand when to trigger SON command and when not to. This ability to orchestrate and issue configuration commands is critical for a safe, secure and effective solution.

Let me know your opinions via comments below.