
Wednesday 16 January 2019

5G Slicing Templates

We looked at slicing not long back in this post here, shared by ITU, from Huawei. The other day I read a discussion on how network slicing should be defined. Here is my definition:

Network slicing allows the physical network infrastructure resources to be shared as independent virtual networks, giving the illusion of multiple logically separate end-to-end networks, each bound by its own SLAs, service quality and performance guarantees to meet a desired set of requirements. While it is being officially defined for 5G, there is no reason a proprietary implementation could not be created for earlier generations (2G, 3G or 4G) or Wi-Fi.

The picture above, from a China Mobile presentation, explains the slice creation process nicely:

  1. Industry customers order network slices from operators and provide the network requirements, including network slice type, capacity, performance and related coverage. The operator captures these requirements as a General Service Template (GST).
  2. The GST is translated into a Network Slice Template (NST).
  3. The network slice instantiation process is triggered.
  4. The necessary resources are allocated and the slice is created.
  5. Slice management information is exposed: industry customers obtain management information about their ordered slices (such as the number of access users) through open interfaces.
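The five steps above can be sketched as a toy workflow. All names and fields here are illustrative (they are not from any 3GPP or GSMA API); it just shows the GST → NST → instance chain:

```python
# Illustrative sketch of the slice ordering flow described above.
# All names (GST fields, functions) are hypothetical, not a 3GPP API.

def build_gst(slice_type, capacity_users, latency_ms, coverage):
    """Step 1: customer requirements captured as a General Service Template."""
    return {"sliceType": slice_type, "maxUsers": capacity_users,
            "latencyMs": latency_ms, "coverage": coverage}

def gst_to_nst(gst):
    """Step 2: translate the GST into an operator Network Slice Template."""
    return {"nsd": f"nst-{gst['sliceType'].lower()}", "requirements": gst}

def instantiate_slice(nst, slice_id):
    """Steps 3-4: trigger instantiation and allocate resources."""
    return {"sliceId": slice_id, "template": nst["nsd"], "status": "ACTIVE",
            "accessUsers": 0}  # step 5: exposed via a management interface

gst = build_gst("URLLC", 500, 5, "city-grid")
slice_info = instantiate_slice(gst_to_nst(gst), "slice-001")
print(slice_info["status"])  # ACTIVE
```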

For each specific requirement, a slicing template is generated that is translated to an actual slice. Let's look at some examples:

Let's take an example of Power Grid. The picture below shows the scenario, requirement and the network slicing template.
As can be seen, the RAN requirements are accurate timing and low latency, while the core QoS requirement is 5 ms latency with a guaranteed 2 Mbps throughput. There are other requirements as well; the main transport requirement is hard isolation.

The network requirements for AR gaming are high reliability, low latency and a high density of devices. This translates into a main RAN requirement of low jitter and latency, a transport requirement of isolation between TICs (telecom integrated clouds), and a core QoS requirement of 80 ms latency with a 2 Mbps guaranteed bit rate.
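To make the comparison concrete, the two templates above can be captured as plain data (field names are my own shorthand, not actual GST attributes):

```python
# The two example templates from the text, captured as plain data.
# Field names are illustrative; real slice template attributes differ.
SLICE_TEMPLATES = {
    "power-grid": {
        "ran": ["accurate timing", "low latency"],
        "transport": "hard isolation",
        "core_qos": {"latency_ms": 5, "gbr_mbps": 2},
    },
    "ar-gaming": {
        "ran": ["low jitter", "low latency"],
        "transport": "isolation between TICs",
        "core_qos": {"latency_ms": 80, "gbr_mbps": 2},
    },
}

def tightest_latency(templates):
    """Return the slice with the most demanding core latency budget."""
    return min(templates, key=lambda n: templates[n]["core_qos"]["latency_ms"])

print(tightest_latency(SLICE_TEMPLATES))  # power-grid
```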


More resources on Network Slicing:


Thursday 3 January 2019

Nice short articles on 5G in 25th Anniversary Special NTT Docomo Technical Journal

5G has dominated the 3G4G blog for the last few years. The top 10 posts for 2018 featured 6 posts on 5G, while the top 10 for 2017 featured 7. It makes sense to start the 2019 postings with a 5G post.

A special 25th Anniversary edition of the NTT Docomo Technical Journal features some nice short articles on 5G covering RAN, Core, Devices & Use cases. Here are some more details for anyone interested.

Radio Access Network in 5G Era introduces NTT Docomo's view of the world regarding 5G, scenarios for the deployment of 5G and prospects for its further development. The article looks at the main 5G RAN features that will enable eMBB (Massive MIMO), URLLC (short TTI) and mMTC (eDRX).

Interested readers should also check out:

Core network for Social Infrastructure in 5G Era describes the principal 5G technologies required in the core network to realise new services and applications that will work through collaboration between various industries and businesses. It also introduces initiatives for more advanced operations, required for efficient operation of this increasingly complex network.

This article also goes into detail on the Service Based Architecture (SBA). In case you were wondering what UL CL and SSC above stand for: UpLink CLassifier (UL CL) is a technology that identifies packets sent by a terminal to a specific IP address and routes them differently (Local Breakout), as can be seen above. It is generally used to connect to a MEC server. Session and Service Continuity (SSC) is used to decide whether the IP address is retained when the UE moves from the old area to a new one.
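A toy sketch of the UL CL idea: uplink packets whose destination falls within a configured prefix are broken out locally (e.g. towards a MEC server), while everything else follows the default path to the anchor UPF. The prefix and names are made up for illustration:

```python
# Toy illustration of the UL CL decision: destination addresses matching
# a configured prefix are broken out locally; the rest go to the anchor.
# The MEC prefix here is hypothetical.
import ipaddress

MEC_PREFIX = ipaddress.ip_network("10.1.0.0/16")  # hypothetical MEC subnet

def classify_uplink(dst_ip):
    """Routing decision an UL CL UPF makes for one uplink packet."""
    if ipaddress.ip_address(dst_ip) in MEC_PREFIX:
        return "local-breakout"   # short path to the edge (MEC) server
    return "anchor-upf"           # default path to the central anchor

print(classify_uplink("10.1.2.3"))       # local-breakout
print(classify_uplink("93.184.216.34"))  # anchor-upf
```

In a real deployment the classification rules are installed by the SMF; this only illustrates the packet-level routing decision.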

Interested readers should also check out:
Evolution of devices for the 5G Era discusses prospects for the high-speed, high-capacity, low-latency and many-terminal connectivity features introduced with 5G, advances expected in the network in the future, technologies that will be required for various types of terminal devices and services, and a vision for devices in 2020 and beyond.

According to the article, the medium-term strategy of NTT Docomo's R&D division has three main themes: 5G, AI and devices. In simple terms, devices will collect a lot of data which becomes big data, 5G will transport this data, and AI will process all the collected big data.

NTT Docomo has also redefined devices as connecting through various technologies including cellular, Wi-Fi, Bluetooth & fixed communications.

Interested readers should also check out:

The final article on 5G, Views of the Future Pioneered by 5G: A World Converging the Strengths of Partners, looks at field trials, partnerships, etc. The embedded video playlist below shows some of the use cases described in the article.



There are other articles too, but in this post I have focused on 5G only.

The 25th Anniversary Special Edition of NTT Docomo Technical Journal is available here.

Saturday 24 November 2018

5G Top-10 Misconceptions


Here is a video we did a few weeks back to clear up the misconceptions about 5G. The list above summarises the topics covered.



The video is nearly 29 minutes long. If you prefer a shorter version or are bored of hearing me 😜 then a summary version (just over 3 minutes) is in the 3G4G tweet below.


The slides can be downloaded from our Slideshare channel as always.

As always, we love your feedback, even when you strongly disagree.

Other interesting recent posts on 5G:


Monday 19 November 2018

5G NR Radio Protocols Overview


3GPP held a workshop on 5G NR submission towards IMT-2020 last week. You can access all the agenda, documents, etc. on the 3GPP website here. You can also get a combined version of all presentations from the 3G4G website here. I also wrote a slightly detailed article on this workshop on 3G4G website here.

The following is a nice overview of the 5G radio interface protocols as defined by 3GPP in NR Rel-15, by Sudeep Palat, Intel. The document was submitted to the 3GPP workshop on the ITU submission in Brussels on 24 October 2018.



The presentation discusses NR radio interface architecture and protocols for control and user plane; covering RRC, SDAP, PDCP, RLC and MAC, focussing on differences and performance benefits compared to LTE. RRC states and state transitions with reduced transition delays are also discussed.

Related Posts:

Monday 29 October 2018

Overview 3GPP 5G NR Physical Layer

3GPP held a workshop on 5G NR submission towards IMT-2020 last week. You can access all the agenda, documents, etc. on the 3GPP website here. You can also get a combined version of all presentations from the 3G4G website here. I also wrote a slightly detailed article on this workshop on 3G4G website here.

One of the presentations on 'Physical layer structure, numerology and frame structure, NR spectrum utilization mechanism 3GPP 5G NR submission towards IMT-2020' by Havish Koorapaty, Ericsson is a good introductory material on 5G New Radio (NR) Physical Layer. It is embedded below (thanks to Eiko Seidel for sharing) and the PDF can be downloaded from slideshare or 3G4G website here.



Related Links:

Friday 19 October 2018

5G Network Architecture Options (Updated)


ICYMI, we created an updated video on 5G Network Architecture options. The videos and slides are embedded below.



This updated presentation/video looks at the 5G network architecture options proposed by 3GPP for the deployment of 5G. It covers the Standalone (SA) and Non-Standalone (NSA) architectures. For the NSA architecture, EN-DC (E-UTRA-NR Dual Connectivity), NGEN-DC (NG-RAN E-UTRA-NR Dual Connectivity) and NE-DC (NR-E-UTRA Dual Connectivity) are examined. Finally, migration strategies proposed by vendors and operators (MNOs / SPs) are discussed.


Nokia has also released a whitepaper on this topic that I only became aware of after my slides / video were done. More details in the tweet below.


Related Links:

Wednesday 10 October 2018

Automated 4G / 5G HetNet Design


I recently heard Iris Barcia, COO of Keima, speak at Cambridge Wireless CWTEC 2018, nearly 6 years after I last heard her as part of the CW Small Cells SIG, where I used to be a SIG (special interest group) champion. Over the last 6 years, network planning needs have changed from planning for coverage to planning for capacity from the beginning. This particular point started a little debate that I will cover in another post, but you can sneak a peek here 😉.

Embedded below is the video and presentation. The slides can be downloaded from SlideShare.





Related posts:

Tuesday 2 October 2018

Benefits and Challenges of Applying Device-Level AI to 5G networks


I was part of the Cambridge Wireless CWTEC 2018 organising committee, where our event 'The inevitable automation of Next Generation Networks' covered a variety of topics spanning AI, 5G, devices, network planning, etc. The presentations are available freely for a limited period here.

One of the thought-provoking presentations was by Yue Wang from Samsung R&D. The presentation is embedded below and can be downloaded from Slideshare.



This presentation also brought out some interesting thoughts and discussions:

  • While device-level AI and network-level AI would generally work cooperatively, there is a risk that a vendor may game the system to make their devices perform better than competitors'. This would be similar to the signalling storm generated by SCRI (see here).
  • If the device-level and network-level AI work constructively, an operator may be able to claim that their network provides a better battery life for a device. For example: iPhone XYZ has 25% better battery life on our network than on a competitor's network.
  • If the device-level and network-level AI work destructively for any reason, the network can become unstable and other users may experience issues.

I guess all these enhancements will start slowly and there will be lots of learning in the first few years before we have a stable, mutually beneficial solution.

Related Posts:

Monday 24 September 2018

5G New Radio Standards and other Presentations


A recent Cambridge Wireless event 'Radio technology for 5G – making it work' was an excellent event where all speakers delivered an interesting and insightful presentation. These presentations are all available to view and download for everyone for a limited time here.

I blogged about the base station antennas last week, but a couple of other presentations also stood out for me.


The first was an excellent presentation from Sylvia Lu of u-blox, also my fellow CW board member. Her talk covered a variety of topics including IoT, IIoT, LTE-V2X and cellular positioning, including 5G NR positioning trends. The presentation is embedded below and available to download from Slideshare.





The other presentation on 5G NR was from Yinan Qi of Samsung R&D. His presentation looked at a variety of topics, mainly Layer 1, including Massive MIMO, beamforming, beam management, bandwidth parts, reference signals, phase noise, etc. It is embedded below and can be downloaded from SlideShare.




Related Posts:

Friday 21 September 2018

Base Station Antenna Considerations for 5G

I first mentioned Quintel in this blog three years back for their innovations in 4T8R/8T8R antennas. Since then they have been going from strength to strength.


I heard David Barker, CTO of Quintel, speak at the Cambridge Wireless event titled "Radio technology for 5G – making it work" about antenna considerations for 5G. There are quite a few important areas for consideration in this presentation. The presentation is embedded below:



Related Posts:

Friday 14 September 2018

End-to-end Network Slicing in 5G

I recently realised that I have never written a post just on network slicing, so here is one on the topic. The first question asked is: why do we even need network slicing? Alan Carlton from InterDigital wrote a good article on this topic. Below is what I think is interesting:

Network slicing is a specific form of virtualization that allows multiple logical networks to run on top of a shared physical network infrastructure. The key benefit of the network slicing concept is that it provides an end-to-end virtual network encompassing not just networking but compute and storage functions too. The objective is to allow a physical mobile network operator to partition its network resources to allow for very different users, so-called tenants, to multiplex over a single physical infrastructure. The most commonly cited example in 5G discussions is sharing of a given physical network to simultaneously run Internet of Things (IoT), Mobile Broadband (MBB), and very low-latency (e.g. vehicular communications) applications. These applications obviously have very different transmission characteristics. For example, IoT will typically have a very large number of devices, but each device may have very low throughput. MBB has nearly the opposite properties since it will have a much smaller number of devices, but each one will be transmitting or receiving very high bandwidth content. The intent of network slicing is to be able to partition the physical network at an end-to-end level to allow optimum grouping of traffic, isolation from other tenants, and configuring of resources at a macro level.

Source: ITU presentation, see below

The key differentiator of the network slicing approach is that it provides a holistic end-to-end virtual network for a given tenant. No existing QoS-based solution can offer anything like this. For example, DiffServ, which is the most widely deployed QoS solution, can discriminate VoIP traffic from other types of traffic such as HD video and web browsing. However, DiffServ cannot discriminate and differentially treat the same type of traffic (e.g. VoIP traffic) coming from different tenants.

Also, DiffServ does not have the ability to perform traffic isolation at all. For example, IoT traffic from a health monitoring network (e.g. connecting hospitals and outpatients) typically have strict privacy and security requirements including where the data can be stored and who can access it. This cannot be accomplished by DiffServ as it does not have any features dealing with the compute and storage aspects of the network. All these identified shortfalls of DiffServ will be handled by the features being developed for network slicing.
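The tenant-blindness of DiffServ is easy to see in miniature: per-hop treatment is keyed only on the DSCP code point, so identical traffic classes from different tenants collapse into the same behaviour. The packet representation below is illustrative; the DSCP values are the standard ones:

```python
# Why DiffServ cannot separate tenants: treatment depends only on the
# DSCP code point, not on who sent the traffic.
DSCP_TO_PHB = {46: "EF (voice)", 26: "AF31 (video)", 0: "best-effort"}

def per_hop_behaviour(packet):
    """Forwarding treatment chosen by a DiffServ router for one packet."""
    return DSCP_TO_PHB.get(packet["dscp"], "best-effort")

hospital_voip = {"tenant": "health-net", "dscp": 46}
gaming_voip   = {"tenant": "game-co",   "dscp": 46}

# Same DSCP, same treatment -- tenant identity is invisible to DiffServ.
print(per_hop_behaviour(hospital_voip) == per_hop_behaviour(gaming_voip))  # True
```

A network slice, by contrast, would keep the two tenants' flows on separately provisioned (and separately isolated) virtual networks.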

I came across this presentation by Peter Ashwood-Smith from Huawei Technologies, who presented '5G End-to-end Network Slicing Demo' at the ITU-T Focus Group IMT-2020 Workshop and Demo Day on 7 December 2016. It's a great presentation; I wish a video of it were available as well. Anyway, the presentation is embedded below and the PPT can be downloaded from here.



The European Telecommunications Standards Institute (ETSI) has established a new Industry Specification Group (ISG) on Zero touch network and Service Management (ZSM) that is working to produce a set of technical specifications on fully automated network and service management with, ideally, zero human intervention. ZSM is targeted for 5G, particularly in network slice deployment. NTT Technical review article on this is available here.

Finally, here is a presentation by Sridhar Bhaskaran of Cellular Insights blog on this topic. Unfortunately, not available for download.


Related Posts:

Tuesday 11 September 2018

Introduction to Fixed Wireless Access (FWA)


We have just produced a new tutorial on Fixed Wireless Access (FWA). The high-level introductory tutorial looks at what is meant by Fixed Wireless Access, which is being touted as one of the initial 5G use cases. This presentation introduces FWA and looks at a practical deployment example.

According to GSA report, "Global Progress to 5G – Trials, Deployments and Launches", July 2018:

One use-case that has gained prominence is the use of 5G to deliver fixed wireless broadband services. We have identified 20 tests so far that have specifically focused on the fixed wireless access (FWA) use-case, which is five more than three months ago.

Embedded below is the video and presentation of the FWA tutorial.



If you found this useful, you may also be interested in the other tutorials on the 3G4G website here.

Related Posts:

Wednesday 5 September 2018

LiFi can be a valuable tool for densification

LiFi has been popping up in the news recently. I blogged about it (as LED-Fi) 10 years back. While the concept has remained the same, many of the limitations associated with the technology have been overcome. One of the companies driving LiFi is a Scottish startup called pureLiFi.


I heard Professor Harald Haas speak at the IEEE Glasgow Summit about how many of the limitations of LiFi have been overcome in the last few years (see videos below). This is welcome news, as there is a tremendous amount of visible light spectrum available for exploitation.


While many discussions on LiFi revolve around its use as an access technology, I think the real potential lies in its use as backhaul for densification.

For 5G, when we are looking at small cells every few hundred metres, probably on streetlights and lamp posts, there is a requirement for alternative backhaul to fibre. It's difficult to run fibre to each and every lamp post. Traditionally this was solved by microwave solutions, but another option available in 5G is Integrated Access and Backhaul (IAB), or self-backhauling.


A better alternative could be to use LiFi for this backhauling between lamp posts or streetlights. This can help avoid complications with IAB when multiple nodes are close by, as well as any complications with the technology until it matures. This approach is of course being trialled, but as the picture above shows, rural backhaul is just one option.
LiFi is also being studied in the IEEE 802.11bb group, and its potential is being considered for 5G.

Here is a video playlist explaining LiFi technology in detail.




Further reading:

Sunday 5 August 2018

ITU 'Network 2030': Initiative to support Emerging Technologies and Innovation looking beyond 5G advances

Source: ITU

As per this recent ITU Press Release:

The International Telecommunication Union, the United Nations specialized agency for information and communication technology (ICT), has launched a new research initiative to identify emerging and future ICT sector network demands, beyond 2030 and the advances expected of IMT-2020 (5G) systems. This work will be carried out by the newly established ITU Focus Group on Technologies for Network 2030, which is open to all interested parties.

The ITU focus group aims to guide the global ICT community in developing a "Network 2030" vision for future ICTs. This will include new concepts, new architecture, new protocols – and new solutions – that are fully backward compatible, so as to support both existing and new applications.

"The work of the ITU Focus Group on Technologies for 'Network 2030' will provide network system experts around the globe with a very valuable international reference point from which to guide the innovation required to support ICT use cases through 2030 and beyond," said ITU Secretary-General Houlin Zhao.

These ICT use cases will span new media such as hologrammes, a new generation of augmented and virtual reality applications, and high-precision communications for 'tactile' and 'haptic' applications in need of processing a very high volume of data in near real-time – extremely high throughput and low latency.   

Emphasizing this need, the focus group's chairman, Huawei's Richard Li, said, "This Focus Group will look at new media, new services and new architectures. Holographic type communications will have a big part to play in industry, agriculture, education, entertainment – and in many other fields. Supporting such capabilities will call for very high throughput in the range of hundreds of gigabits per second or even higher."

The ITU Focus Group on Technologies for 'Network 2030' is co-chaired by Verizon's Mehmet Toy, Rostelecom's Alexey Borodin, China Telecom's Yuan Zhang, Yutaka Miyake from KDDI Research, and is coordinated through ITU's Telecommunication Standardization Sector – which works with ITU's 193 Member States and more than 800 industry and academic members to establish international standards for emerging ICT innovations.

The ITU focus group reports to and will inform a new phase of work of the ITU standardization expert group for 'Future Networks' – Study Group 13. It will also strengthen and leverage collaborative relationships with and among other standards development organizations including: The European Telecommunications Standards Institute (ETSI), the Association for Computing Machinery's Special Interest Group on Data Communications (ACM SIGCOMM), and the Institute of Electrical and Electronics Engineers' Communications Society (IEEE ComSoc).
Source: ITU

According to the Focus Group page:

The FG NET-2030, as a platform to study and advance international networking technologies, will investigate the future network architecture, requirements, use cases, and capabilities of the networks for the year 2030 and beyond. 

The objectives include: 

• To study, review and survey existing technologies, platforms, and standards for identifying the gaps and challenges towards Network 2030, which are not supported by the existing and near future networks like 5G/IMT-2020.
• To formulate all aspects of Network 2030, including vision, requirements, architecture, novel use cases, evaluation methodology, and so forth.
• To provide guidelines for standardization roadmap.
• To establish liaisons and relationships with other SDOs.

An ITU interview with Dr. Richard Li, Huawei, Chairman of the ITU-T FG on Network 2030 is available on YouTube here.

A recent presentation by Dr. Richard Li on this topic is embedded below:



First Workshop on Network 2030 will be held in New York City, United States on 2 October 2018. Details here.

Related News:

Sunday 29 July 2018

Automating the 5G Core using Machine Learning and Data Analytics

One of the new entities introduced by 3GPP in the 5G Core SBA (see tutorial here) is the Network Data Analytics Function, NWDAF.
3GPP TR 23.791: Study of Enablers for Network Automation for 5G (Release 16) describes the following 5G network architecture assumptions:

1. The NWDAF (Network Data Analytics Function) as defined in TS 23.503 is used for data collection and data analytics in a centralized manner. An NWDAF may be used for analytics for one or more network slices.
2. For instances where certain analytics can be performed by a 5GS NF independently, an NWDAF instance specific to that analytic may be collocated with the 5GS NF. The data utilized by the 5GS NF as input to analytics in this case should also be made available to allow for the centralized NWDAF deployment option.
3. 5GS Network Functions and OAM decide how to use the data analytics provided by NWDAF to improve the network performance.
4. NWDAF utilizes the existing service based interfaces to communicate with other 5GC Network Functions and OAM.
5. A 5GC NF may expose the result of the data analytics to any consumer NF utilizing a service based interface.
6. The interactions between NF(s) and the NWDAF take place in the local PLMN (the reporting NF and the NWDAF belong to the same PLMN).
7. Solutions shall not assume NWDAF knowledge about NF application logic. The NWDAF may use subscription data, but only for statistical purposes.
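The subscribe/notify pattern implied by these assumptions can be sketched in a few lines. The class and method names below are purely illustrative, not the real Nnwdaf service operations:

```python
# Minimal sketch of NWDAF-style slice analytics: consumer NFs subscribe
# over the service based interface and get notified of slice load.
# Names are illustrative, not actual Nnwdaf operations.
class NWDAF:
    def __init__(self):
        self.subscribers = {}   # slice_id -> list of notification callbacks
        self.load = {}          # slice_id -> load level (0-100)

    def subscribe(self, slice_id, callback):
        """A consumer NF (e.g. PCF or NSSF) subscribes to slice analytics."""
        self.subscribers.setdefault(slice_id, []).append(callback)

    def report_load(self, slice_id, load_level):
        """Centralised data collection plus notification of subscribers."""
        self.load[slice_id] = load_level
        for cb in self.subscribers.get(slice_id, []):
            cb(slice_id, load_level)

received = []
nwdaf = NWDAF()
# In Release 15, PCF and NSSF are the defined consumers of this data.
nwdaf.subscribe("slice-A", lambda s, l: received.append((s, l)))
nwdaf.report_load("slice-A", 72)
print(received)  # [('slice-A', 72)]
```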

Picture Source: Application of Data Mining in the 5G Network Architecture by Alexandros Kaloxylos

Continuing from 3GPP TR 23.791:

The NWDAF may serve use cases belonging to one or several domains, e.g. QoS, traffic steering, dimensioning, security.
The input data of the NWDAF may come from multiple sources, and the resulting actions undertaken by the consuming NF or AF may concern several domains (e.g. Mobility management, Session Management, QoS management, Application layer, Security management, NF life cycle management).
Use case descriptions should include the following aspects:
1. General characteristics (domain: performance, QoS, resilience, security; time scale).
2. Nature of input data (e.g. logs, KPI, events).
3. Types of NF consuming the NWDAF output data, how data is conveyed and nature of consumed analytics.
4. Output data.
5. Possible examples of actions undertaken by the consuming NF or AF, resulting from these analytics.
6. Benefits, e.g. revenue, resource saving, QoE, service assurance, reputation.

Picture Source: Application of Data Mining in the 5G Network Architecture by Alexandros Kaloxylos

3GPP TS 23.501 V15.2.0 (2018-06) Section 6.2.18 says:

NWDAF represents operator managed network analytics logical function. NWDAF provides slice specific network data analytics to a NF. NWDAF provides network analytics information (i.e., load level information) to a NF on a network slice instance level and the NWDAF is not required to be aware of the current subscribers using the slice. NWDAF notifies slice specific network status analytic information to the NFs that are subscribed to it. NF may collect directly slice specific network status analytic information from NWDAF. This information is not subscriber specific.

In this Release of the specification, both PCF and NSSF are consumers of network analytics. The PCF may use that data in its policy decisions. NSSF may use the load level information provided by NWDAF for slice selection.

NOTE 1: NWDAF functionality beyond its support for Nnwdaf is out of scope of 3GPP.
NOTE 2: NWDAF functionality for non-slice-specific analytics information is not supported in this Release of the specification.

3GPP Release-16 is focusing on 5G expansion and 5G efficiency; SON and Big Data are part of 5G efficiency.
Light Reading's Artificial Intelligence and Machine Learning section has a news item on this topic from Layer123's Zero Touch & Carrier Automation Congress:

The 3GPP standards group is developing a machine learning function that could allow 5G operators to monitor the status of a network slice or third-party application performance.

The network data analytics function (NWDAF) forms a part of the 3GPP's 5G standardization efforts and could become a central point for analytics in the 5G core network, said Serge Manning, a senior technology strategist at Sprint Corp.

Speaking here in Madrid, Manning said the NWDAF was still in the "early stages" of standardization but could become "an interesting place for innovation."

The 3rd Generation Partnership Project (3GPP) froze the specifications for a 5G new radio standard at the end of 2017 and is due to freeze another set of 5G specifications, covering some of the core network and non-radio features, in June this year as part of its "Release 15" update.

Manning says that Release 15 considers the network slice selection function (NSSF) and the policy control function (PCF) as potential "consumers" of the NWDAF. "Anything else is open to being a consumer," he says. "We have things like monitoring the status of the load of a network slice, or looking at the behavior of mobile devices if you wanted to make adjustments. You could also look at application performance."

In principle, the NWDAF would be able to make use of any data in the core network. The 3GPP does not plan on standardizing the algorithms that will be used but rather the types of raw information the NWDAF will examine. The format of the analytics information that it produces might also be standardized, says Manning.

Such technical developments might help operators to provide network slices more dynamically on their future 5G networks.

Generally seen as one of the most game-changing aspects of 5G, the technique of network slicing would essentially allow an operator to provide a number of virtual network services over the same physical infrastructure.

For example, an operator could provide very high-speed connectivity for mobile gaming over one slice and a low-latency service for factory automation on another -- both reliant on the same underlying hardware.

However, there is concern that without greater automation operators will have less freedom to innovate through network slicing. "If operators don't automate they will be providing capacity-based slices that are relatively large and static and undifferentiated and certainly not on a per-customer basis," says Caroline Chappell, an analyst with Analysys Mason.

In a Madrid presentation, Chappell said that more granular slicing would require "highly agile end-to-end automation" that takes advantage of progress on software-defined networking and network functions virtualization.

"Slices could be very dynamic and perhaps last for only five minutes," she says. "In the very long term, applications could create their own slices."

Despite the talk of standardization, and signs of good progress within the 3GPP, concern emerged this week in Madrid that standards bodies are not moving quickly enough to address operators' needs.

Caroline Chappell's talk is available here whereas Serge Manning's talk is embedded below:



I am helping CW organise the annual CW TEC conference on the topic 'The inevitable automation of Next Generation Networks'.
Communications networks are perhaps the most complex machines on the planet. They use vast amounts of hardware, rely on complex software, and are physically distributed over land, underwater, and in orbit. They increasingly provide essential services that underpin almost every aspect of life. Managing networks and optimising their performance is a vast challenge, and will become many times harder with the advent of 5G. The 4th Annual CW Technology Conference will explore this challenge and how Machine Learning and AI may be applied to build more reliable, secure and better performing networks.

Is the AI community aware of the challenges facing network providers? Are the network operators and providers aware of how the very latest developments in AI may provide solutions? The conference will aim to bridge the gap between AI/ML and communications network communities, making each more aware of the nature and scale of the problems and the potential solutions.

I am hoping to see some of this blog's readers at the conference, and I am looking forward to learning more about this and other network automation topics.

Related Post:

Thursday 19 July 2018

5G Synchronisation Requirements


5G will probably introduce tighter synchronization requirements than LTE. A recent presentation from Ericsson provides more details.

In frequencies below 6 GHz (referred to as frequency range 1, or FR1, in the standards), there is a possibility of using both FDD and TDD bands, especially in the case of re-farming of existing bands. In frequencies above 6 GHz (referred to as frequency range 2, or FR2, even though FR2 actually starts from 24.25 GHz), it is expected that all bands will be TDD.

It is interesting to see that the cell phase synchronisation accuracy, measured at the BS antenna connectors, is specified to be better than 3 μs in 3GPP TS 38.133. This translates into a network-wide requirement of ±1.5 μs relative to a common reference, and it is applicable to both FR1 and FR2, regardless of cell size.

Frequency Error for NR specified in 3GPP TS 38.104 states that the base station (BS) shall be accurate to within the following accuracy range observed over 1 ms:
Wide Area BS → ±0.05 ppm
Medium Range BS → ±0.1 ppm
Local Area BS → ±0.1 ppm
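These ppm budgets are easy to turn into absolute frequency error: multiply the carrier frequency by the ppm value. A quick sketch (the carrier frequencies are chosen for illustration):

```python
# Convert a ppm accuracy class into an absolute frequency error budget.
def max_freq_error_hz(carrier_hz, ppm):
    return carrier_hz * ppm * 1e-6

# Wide Area BS (±0.05 ppm) at a 3.5 GHz carrier: about 175 Hz
print(max_freq_error_hz(3.5e9, 0.05))
# Local Area BS (±0.1 ppm) at a 28 GHz carrier: about 2.8 kHz
print(max_freq_error_hz(28e9, 0.1))
```

The same ±0.05 ppm is clearly a much tighter absolute budget at low-band carriers than at mmWave, which is one reason the requirement is expressed in ppm.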

The presentation notes that, at the request of some operators, ITU-T is studying the feasibility of solutions targeting end-to-end time synchronisation requirements on the order of ±100 ns to ±300 ns.

There is also a challenge of how the sync information is transported within the network. The conclusion is that while the current LTE sync requirements would work in the short term, new solutions would be required in the longer term.

If this is an area of interest, you will also enjoy the CW Heritage SIG talk by Prof. Andy Sutton, "The history of synchronisation in digital cellular networks". It's available here.

Thursday 12 July 2018

Minimum Bandwidth Requirement for 5G Non-Standalone (NSA) Deployment

I was watching the IEEE 5G World Forum live-stream, courtesy of IEEE.tv, and happened to hear Egil Gronstad, Senior Director of Technology Development and Strategy at T-Mobile USA. He said that they will be building a nationwide 5G network that will initially be based on the 600 MHz band.


During the Q&A, Egil mentioned that because the USA is divided into different markets, on average they have 31 MHz of 600 MHz spectrum (Band 71). The minimum is 20 MHz and the maximum is 50 MHz.

So I started wondering how they would launch 4G & 5G in the same band for nationwide coverage. They have a good video on their 5G vision, but that is of course probably going to come a few years down the line.

In simple terms, they will first deploy what is known as Option 3 or EN-DC. If you want a quick refresher on different options, you may want to jump to my tutorial on this topic at 3G4G here.

The Master Node (recall dual connectivity from LTE Release-12, see here) is an eNodeB. As with any LTE node, it can use channel bandwidths from 1.4 MHz to 20 MHz, so the minimum bandwidth for the LTE node is 1.4 MHz.

The Secondary Node is a gNodeB. Looking at 3GPP TS 38.101-1, Table 5.3.5-1 (Channel bandwidths for each NR band), I can see the following for band n71:


NR band | SCS (kHz) | 5 MHz | 10 MHz | 15 MHz | 20 MHz
n71     | 15        | Yes   | Yes    | Yes    | Yes
n71     | 30        | –     | Yes    | Yes    | Yes
n71     | 60        | –     | –      | –      | –

(Channel bandwidths of 25 MHz and above are not supported on band n71.)

The minimum NR bandwidth is 5 MHz. Of course this is paired spectrum for an FDD band, but the point I am making here is that you need just 6.4 MHz minimum to be able to support the Non-Standalone 5G option.
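The 6.4 MHz arithmetic is simply the smallest LTE anchor carrier plus the smallest NR carrier supported on n71:

```python
# The 6.4 MHz figure: smallest LTE anchor (1.4 MHz) + smallest NR
# carrier listed for band n71 (5 MHz). Working in kHz keeps it exact.
MIN_LTE_ANCHOR_KHZ = 1400
MIN_NR_N71_KHZ = 5000

min_nsa_bw_mhz = (MIN_LTE_ANCHOR_KHZ + MIN_NR_N71_KHZ) / 1000
print(min_nsa_bw_mhz)  # 6.4
```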

I am sure you can guess that the speeds will not really be "5G speeds" with this amount of bandwidth, but I am looking forward to all these kinds of complaints in the initial phase of 5G network rollouts.

I don't know what bandwidths T-Mobile will use, but we will likely see at least 10 MHz of NR where the total spectrum is 20 MHz, and 20 MHz of NR where the total spectrum is 50 MHz.

If you look at the earlier requirements lists, the number being thrown about for bandwidth was 100 MHz below 6 GHz and up to 1 GHz for spectrum above 6 GHz. I don't think there was a hard and fast requirement though.

Happy to hear your thoughts.