Monday 6 March 2017

IMT-2020 (5G) Requirements


ITU has just agreed on key 5G performance requirements for IMT-2020. A new draft report, ITU-R M.[IMT-2020.TECH PERF REQ], is expected to be finally approved by ITU-R Study Group 5 at its next meeting in November 2017. The press release says "5G mobile systems to provide lightning speed, ultra-reliable communications for broadband and IoT".


The following is from the ITU draft report:

The key minimum technical performance requirements defined in this document are for the purpose of consistent definition, specification, and evaluation of the candidate IMT-2020 radio interface technologies (RITs)/Set of radio interface technologies (SRIT) in conjunction with the development of ITU-R Recommendations and Reports, such as the detailed specifications of IMT-2020. The intent of these requirements is to ensure that IMT-2020 technologies are able to fulfil the objectives of IMT-2020 and to set a specific level of performance that each proposed RIT/SRIT needs to achieve in order to be considered by ITU-R for IMT-2020.


Peak data rate: Peak data rate is the maximum achievable data rate under ideal conditions (in bit/s), which is the received data bits assuming error-free conditions assignable to a single mobile station, when all assignable radio resources for the corresponding link direction are utilized (i.e., excluding radio resources that are used for physical layer synchronization, reference signals or pilots, guard bands and guard times). 

This requirement is defined for the purpose of evaluation in the eMBB usage scenario. 
The minimum requirements for peak data rate are as follows:
Downlink peak data rate is 20 Gbit/s.
Uplink peak data rate is 10 Gbit/s.


Peak spectral efficiency: Peak spectral efficiency is the maximum data rate under ideal conditions normalised by channel bandwidth (in bit/s/Hz), where the maximum data rate is the received data bits assuming error-free conditions assignable to a single mobile station, when all assignable radio resources for the corresponding link direction are utilized (i.e. excluding radio resources that are used for physical layer synchronization, reference signals or pilots, guard bands and guard times).

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
The minimum requirements for peak spectral efficiencies are as follows: 
Downlink peak spectral efficiency is 30 bit/s/Hz.
Uplink peak spectral efficiency is 15 bit/s/Hz.
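
Taken together, the peak data rate and peak spectral efficiency figures imply how much spectrum is needed, since peak data rate is roughly peak spectral efficiency multiplied by bandwidth. Below is a minimal back-of-the-envelope sketch of that relationship (my own illustration, not part of the ITU text):

# Rough relation: peak data rate ≈ peak spectral efficiency × bandwidth
def implied_bandwidth_hz(peak_rate_bps, peak_se_bps_per_hz):
    """Bandwidth needed to hit a peak data rate at a given peak spectral efficiency."""
    return peak_rate_bps / peak_se_bps_per_hz

dl_bw = implied_bandwidth_hz(20e9, 30)   # downlink: 20 Gbit/s at 30 bit/s/Hz
ul_bw = implied_bandwidth_hz(10e9, 15)   # uplink:   10 Gbit/s at 15 bit/s/Hz
print(f"DL needs ~{dl_bw/1e6:.0f} MHz, UL needs ~{ul_bw/1e6:.0f} MHz")
# Both work out to roughly 667 MHz of aggregated spectrum under ideal conditions,
# which is why such peak rates are associated with very wide carriers.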


User experienced data rate: User experienced data rate is the 5% point of the cumulative distribution function (CDF) of the user throughput. User throughput (during active time) is defined as the number of correctly received bits, i.e. the number of bits contained in the service data units (SDUs) delivered to Layer 3, over a certain period of time.

This requirement is defined for the purpose of evaluation in the related eMBB test environment.
The target values for the user experienced data rate are as follows in the Dense Urban – eMBB test environment: 
Downlink user experienced data rate is 100 Mbit/s
Uplink user experienced data rate is 50 Mbit/s
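
Since both the user experienced data rate and the 5th percentile user spectral efficiency (next item) are defined as the 5% point of a CDF, a simple way to picture them is to take per-user throughput samples from a system-level simulation and read off the 5th percentile. A quick sketch (my own illustration; the throughput samples and the 100 MHz bandwidth are made-up values):

import numpy as np

# Hypothetical per-user throughput samples (bit/s) from an eMBB simulation
user_throughput = np.random.lognormal(mean=18.5, sigma=1.0, size=10_000)

# "User experienced data rate" = 5% point of the CDF of user throughput
experienced_rate = np.percentile(user_throughput, 5)

# Normalising by the channel bandwidth gives the CDF whose 5% point is the
# "5th percentile user spectral efficiency"
channel_bw_hz = 100e6
fifth_percentile_se = np.percentile(user_throughput / channel_bw_hz, 5)

print(f"User experienced data rate ~ {experienced_rate/1e6:.1f} Mbit/s")
print(f"5th percentile user spectral efficiency ~ {fifth_percentile_se:.3f} bit/s/Hz")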


5th percentile user spectral efficiency: The 5th percentile user spectral efficiency is the 5% point of the CDF of the normalized user throughput. The normalized user throughput is defined as the number of correctly received bits, i.e., the number of bits contained in the SDUs delivered to Layer 3, over a certain period of time, divided by the channel bandwidth and is measured in bit/s/Hz. 

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
  • Indoor Hotspot – eMBB: Downlink 0.3 bit/s/Hz, Uplink 0.21 bit/s/Hz
  • Dense Urban – eMBB: Downlink 0.225 bit/s/Hz, Uplink 0.15 bit/s/Hz
  • Rural – eMBB: Downlink 0.12 bit/s/Hz, Uplink 0.045 bit/s/Hz


Average spectral efficiency: Average spectral efficiency is the aggregate throughput of all users (the number of correctly received bits, i.e. the number of bits contained in the SDUs delivered to Layer 3, over a certain period of time) divided by the channel bandwidth of a specific band, divided by the number of TRxPs, and is measured in bit/s/Hz/TRxP.

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
  • Indoor Hotspot – eMBB: Downlink 9 bit/s/Hz/TRxP, Uplink 6.75 bit/s/Hz/TRxP
  • Dense Urban – eMBB: Downlink 7.8 bit/s/Hz/TRxP, Uplink 5.4 bit/s/Hz/TRxP
  • Rural – eMBB: Downlink 3.3 bit/s/Hz/TRxP, Uplink 1.6 bit/s/Hz/TRxP


Area traffic capacity: Area traffic capacity is the total traffic throughput served per geographic area (in Mbit/s/m2). The throughput is the number of correctly received bits, i.e. the number of bits contained in the SDUs delivered to Layer 3, over a certain period of time.

This requirement is defined for the purpose of evaluation in the related eMBB test environment.
The target value for Area traffic capacity in downlink is 10 Mbit/s/m2 in the Indoor Hotspot – eMBB test environment.
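
The area traffic capacity target ties the other requirements together: it can be estimated as TRxP density × bandwidth × average spectral efficiency. A minimal sketch using the Indoor Hotspot downlink figure quoted above (the TRxP density and bandwidth are my own assumptions for illustration):

# Area traffic capacity ≈ TRxP density × bandwidth × average spectral efficiency
trxp_per_m2  = 1 / 1000.0   # assumption: one TRxP per 1000 m2 of indoor floor space
bandwidth_hz = 400e6        # assumption: 400 MHz of aggregated spectrum
avg_se       = 9            # bit/s/Hz/TRxP, the Indoor Hotspot downlink figure above

area_capacity = trxp_per_m2 * bandwidth_hz * avg_se
print(f"~{area_capacity/1e6:.1f} Mbit/s per m2")
# With these assumptions the result is 3.6 Mbit/s/m2; reaching the 10 Mbit/s/m2
# target needs denser TRxPs, more spectrum or higher spectral efficiency.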


User plane latency: User plane latency is the contribution of the radio network to the time from when the source sends a packet to when the destination receives it (in ms). It is defined as the one-way time it takes to successfully deliver an application layer packet/message from the radio protocol layer 2/3 SDU ingress point to the radio protocol layer 2/3 SDU egress point of the radio interface in either uplink or downlink in the network for a given service in unloaded conditions, assuming the mobile station is in the active state. 
This requirement is defined for the purpose of evaluation in the eMBB and URLLC usage scenarios.
The minimum requirements for user plane latency are
4 ms for eMBB
1 ms for URLLC 
assuming unloaded conditions (i.e., a single user) for small IP packets (e.g., 0 byte payload + IP header), for both downlink and uplink.


Control plane latency: Control plane latency refers to the transition time from a most “battery efficient” state (e.g. Idle state) to the start of continuous data transfer (e.g. Active state).
This requirement is defined for the purpose of evaluation in the eMBB and URLLC usage scenarios.
The minimum requirement for control plane latency is 20 ms. Proponents are encouraged to consider lower control plane latency, e.g. 10 ms.


Connection density: Connection density is the total number of devices fulfilling a specific quality of service (QoS) per unit area (per km2).

This requirement is defined for the purpose of evaluation in the mMTC usage scenario.
The minimum requirement for connection density is 1 000 000 devices per km2.
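
To get a feel for what 1,000,000 devices per km2 means at cell level, multiply the density by the area served by one site. A minimal sketch (the hexagonal-grid geometry is standard, but the inter-site distance is my own assumption for illustration):

import math

density_per_km2 = 1_000_000
isd_km = 0.5   # assumed inter-site distance of 500 m

# Area of one site in a hexagonal grid with inter-site distance d: A = (sqrt(3)/2) * d^2
site_area_km2 = (math.sqrt(3) / 2) * isd_km ** 2
devices_per_site = density_per_km2 * site_area_km2

print(f"~{devices_per_site:,.0f} devices per site, "
      f"or ~{devices_per_site/3:,.0f} per sector with 3 sectors")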


Energy efficiency: Network energy efficiency is the capability of a RIT/SRIT to minimize the radio access network energy consumption in relation to the traffic capacity provided. Device energy efficiency is the capability of the RIT/SRIT to minimize the power consumed by the device modem in relation to the traffic characteristics. 
Energy efficiency of the network and the device can relate to the support for the following two aspects:
a) Efficient data transmission in a loaded case;
b) Low energy consumption when there is no data.
Efficient data transmission in a loaded case is demonstrated by the average spectral efficiency.

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
The RIT/SRIT shall have the capability to support a high sleep ratio and long sleep duration. Proponents are encouraged to describe other mechanisms of the RIT/SRIT that improve the support of energy efficient operation for both network and device.


Reliability: Reliability relates to the capability of transmitting a given amount of traffic within a predetermined time duration with high success probability

This requirement is defined for the purpose of evaluation in the URLLC usage scenario. 
The minimum requirement for reliability is a 1-10^-5 success probability (i.e. at most one failure per 100,000 attempts) of transmitting a layer 2 PDU (protocol data unit) of 32 bytes within 1 ms, at the coverage-edge channel quality of the Urban Macro-URLLC test environment, assuming small application data (e.g. 20 bytes application data + protocol overhead).
Proponents are encouraged to consider larger packet sizes, e.g. layer 2 PDU size of up to 100 bytes.
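
One way to get an intuition for a 1-10^-5 target within a 1 ms budget is to think of it as a small number of fast, independent transmission attempts: if a single attempt succeeds with probability p, then k attempts succeed with overall probability 1-(1-p)^k. A minimal sketch (the independence assumption and the example values of p are mine, purely for illustration):

import math

def attempts_needed(p_single, max_error=1e-5):
    """Smallest k such that (1 - p_single)**k <= max_error, assuming independent attempts."""
    return math.ceil(math.log(max_error) / math.log(1 - p_single))

for p in (0.95, 0.99, 0.999):
    print(f"per-attempt success {p}: {attempts_needed(p)} attempts for a 1-10^-5 target")
# Each extra attempt eats into the 1 ms budget, which is why the latency bound,
# not the reliability figure on its own, is the hard part of this requirement.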


Mobility: Mobility is the maximum mobile station speed at which a defined QoS can be achieved (in km/h).

The following classes of mobility are defined:
Stationary: 0 km/h
Pedestrian: 0 km/h to 10 km/h
Vehicular: 10 km/h to 120 km/h
High speed vehicular: 120 km/h to 500 km/h

Mobility classes supported:
Indoor Hotspot – eMBB: Stationary, Pedestrian
Dense Urban – eMBB: Stationary, Pedestrian, Vehicular (up to 30 km/h)
Rural – eMBB: Pedestrian, Vehicular, High speed vehicular 


Mobility interruption time: Mobility interruption time is the shortest time duration supported by the system during which a user terminal cannot exchange user plane packets with any base station during transitions.

This requirement is defined for the purpose of evaluation in the eMBB and URLLC usage scenarios.
The minimum requirement for mobility interruption time is 0 ms.


Bandwidth: Bandwidth is the maximum aggregated system bandwidth. The bandwidth may be supported by single or multiple radio frequency (RF) carriers. The bandwidth capability of the RIT/SRIT is defined for the purpose of IMT-2020 evaluation.

The requirement for bandwidth is at least 100 MHz.
The RIT/SRIT shall support bandwidths up to 1 GHz for operation in higher frequency bands (e.g. above 6 GHz). 

In case you missed it, a 5G logo has also been released by 3GPP.


Related posts:



Friday 24 February 2017

Connecting Rural Scotland using Airmasts and Droneways


This week EE finally issued a press release on what they term Airmasts (see my blog post here). Back in November last year, Mansoor Hanif, Director of Converged Networks and Innovation at BT/EE, gave an excellent presentation on connecting rural Scottish islands using Airmasts and Droneways at the Facebook TIP Summit. Embedded below are the slides and video from that talk.





In other related news, AT&T is showing flying COWs (Cell On Wheels) that can transmit LTE signals.


Their innovation blog says:

It is designed to beam LTE coverage from the sky to customers on the ground during disasters or big events.
...
Here’s how it works. The drone we tested carries a small cell and antennas. It’s connected to the ground by a thin tether. The tether between the drone and the ground provides a highly secure data connection via fiber and supplies power to the Flying COW, which allows for unlimited flight time.  The Flying COW then uses satellite to transport texts, calls, and data. The Flying COW can operate in extremely remote areas and where wired or wireless infrastructure is not immediately available. Like any drone that we deploy, pilots will monitor and operate the device during use.

Once airborne, the Flying COW provides LTE coverage from the sky to a designated area on the ground.  

Compared to a traditional COW, in certain circumstances, a Flying COW can be easier to deploy due to its small size. We expect it to provide coverage to a larger footprint because it can potentially fly at altitudes over 300 feet— about 500% higher than a traditional COW mast.  

Once operational, the Flying COW could eventually provide coverage to an area up to 40 square miles—about the size of 100 football fields. We may also deploy multiple Flying COWs to expand the coverage footprint.

Nokia, on the other hand, has also been showcasing drones and LTE connectivity for public safety at the D4G Award event in Dubai.


Nokia's Ultra Compact Network provides a standalone LTE network to quickly re-establish connectivity for various mission-critical applications, including video-equipped drones. Drones can stream video and other sensor data in real time from the disaster site to a control centre, providing inputs such as the exact locations where people are stranded and the nature of the difficulty in reaching them.

Related Posts:



Friday 17 February 2017

What's '5G' in one word for you?


Last month, at the IET 'Towards 5G Mobile Technology – Vision to Reality' seminar, Dr. Mike Short threw out a challenge to all the speakers: come up with one word to describe 5G technology. The speakers came up with the following 'one words':
  • Professor Mischa Dohler, Centre for Telecommunications Research, King's College London, UK - Skills
  • Professor Maziar Nekovee, University of Sussex, UK - Transformative or Magic
  • Professor Andy Sutton, Principal Network Architect, BT, UK - Opportunity
  • Professor Mark Beach, University of Bristol, UK - Networked-Society
  • Mark Barrett, CMO, Bluwireless, UK - Gigabit
  • Dr Nishanth Sastry, Centre for Telecommunications Research, King’s College London, UK - Flexibility or Efficiency
  • Dr Reiner Hoppe, Developer Electromagnetic Solutions, Altair - Radio
  • Professor Klaus Moessner, 5G Innovation Centre, University of Surrey, UK - Capacity
  • Joe Butler, Director of Technology, Ofcom, UK - Ubiquity
  • Dr Deeph Chana, Deputy Director, Institute for Security Science and Technology, Imperial College London, UK - Accessibility
What is your one word to describe 5G? Please add it in the comments. I welcome critical suggestions too :-)

Anyway, for anyone interested, the following story summarises the event:

This section contained a story from Storify. Storify was shut down on May 16, 2018, and as a result the story was lost. An archived version of the story can be seen on the Wayback Machine here.

Related links:

Sunday 5 February 2017

An Introduction to IoT: Connectivity & Case Studies


I did an introductory presentation on IoT yesterday at the University of Northampton's Internet of Things event. Below is my presentation in full. It can be downloaded from SlideShare.



xoxoxoxoxoxo Added 18/02/2017 oxoxoxoxoxoxox

Below is video of the presentation above and post presentation interview:

Wednesday 1 February 2017

5G Network Architecture and Design Update - Jan 2017

Andy Sutton, Principal Network Architect at BT, recently talked about the architecture update from the December 2016 3GPP meeting. The slides and the video are embedded below.





You can see all the presentations from the IET event 'Towards 5G Mobile Technology – Vision to Reality' here.

Eiko Seidel also recently wrote an update from the 3GPP 5G Ad Hoc meeting regarding the RAN internal functional split. You can read that report here.

Related posts:

Thursday 26 January 2017

3GPP Rel-14 IoT Enhancements


A presentation (embedded below) by 3GPP RAN3 Chairman Philippe Reininger at the IoT Business & Technologies Congress (30 November, Singapore). The main topics are eMTC, NB-IoT and EC-GSM-IoT, as completed in 3GPP Release 13 and enhanced in Release 14. Thanks to Eiko Seidel for sharing the presentation.


Sunday 22 January 2017

Augmented / Virtual Reality Requirements for 5G


Ever wondered whether 5G will be good enough for Augmented and Virtual Reality, or whether we will need to wait for 6G? Some researchers are trying to identify the AR/VR requirements and challenges from a mobile network point of view, and possible options for solving these challenges. They have recently published a research paper on this topic.

Here is a summary of some of the interesting things I found in this paper:

  • Humans process nearly 5.2 gigabits per second of sound and light.
  • Without moving the head, our eyes can mechanically shift across a field of view of at least 150 degrees horizontally (i.e., 30,000 pixels) and 120 degrees vertically (i.e., 24,000 pixels).
  • The human eye can perceive much faster motion (150 frames per second). For sports, games, science and other high-speed immersive experiences, video rates of 60 or even 120 frames per second are needed to avoid motion blur and disorientation.
  • 5.2 gigabits per second of network throughput (if not more) is needed.
  • Today's 4K resolution, at 30 frames per second and 24 bits per pixel and using a 300:1 compression ratio, yields 300 megabits per second of imagery. That is more than 10x the typical requirement for a high-quality 4K movie experience.
  • 5G network architectures are being designed to move the post-processing to the network edge, so that processors at the edge and the client display devices (VR goggles, smart TVs, tablets and phones) carry out advanced image processing to stitch camera feeds into dramatic effects.
  • In order to tackle these grand challenges, the 5G network architecture (radio access network (RAN), edge and core) will need to be much smarter than ever before, adaptively and dynamically making use of concepts such as software defined networking (SDN), network function virtualization (NFV) and network slicing, to mention a few, facilitating a more flexible allocation of resources (resource blocks (RBs), access points, storage, memory, computing, etc.) to meet these demands.
  • Immersive technology will require massive improvements in terms of bandwidth, latency and reliability. A current remote-reality prototype requires 100 to 200 Mbps for a one-way immersive experience. While MirrorSys uses a single 8K display, estimates suggest that photo-realistic VR will require two 16K x 16K screens (one for each eye); a rough raw-bit-rate calculation for this is sketched after this list.
  • Latency is the other big issue in addition to reliability. With an augmented reality headset, for example, real-life visual and auditory information has to be taken in through the camera and sent to the fog/cloud for processing, with digital information sent back to be precisely overlaid onto the real-world environment, and all this has to happen in less time than it takes for humans to start noticing lag (no more than 13 ms). Factoring in the much-needed high-reliability criteria on top of these bandwidth and delay requirements clearly indicates the need for interactions between several research disciplines.
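
To see where multi-gigabit figures like these come from, it helps to compute the raw (uncompressed) video bit rate and then apply a compression ratio. A back-of-the-envelope sketch (the frame rates, bit depth and compression ratio here are my own assumptions for illustration, not numbers from the paper):

def raw_video_bitrate_bps(width, height, bits_per_pixel, fps, screens=1):
    """Uncompressed bit rate: pixels x bit depth x frame rate x number of screens."""
    return width * height * bits_per_pixel * fps * screens

# Today's 4K video: 3840x2160, 24 bits per pixel, 30 frames per second
uhd = raw_video_bitrate_bps(3840, 2160, 24, 30)

# The photo-realistic VR case above: two 16K x 16K screens, assuming 120 fps
vr = raw_video_bitrate_bps(16384, 16384, 24, 120, screens=2)

print(f"4K raw: ~{uhd/1e9:.1f} Gbit/s; dual 16K VR raw: ~{vr/1e12:.2f} Tbit/s")
print(f"Even at an aggressive 300:1 compression, the VR case needs ~{vr/300/1e9:.1f} Gbit/s")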


These key research directions and scientific challenges are summarized in Fig. 3 (above) and discussed in the paper. I advise you to read it here.

Related posts:

Monday 16 January 2017

Gigabit LTE?


Last year Qualcomm announced the X16 LTE modem, capable of up to 1 Gbps (Category 16) in the downlink and 150 Mbps (Category 13) in the uplink. See my last post on UE categories here.


In early January it announced the Snapdragon 835 at CES, which looks impressive. Android Central says "On the connectivity side of things, there's the Snapdragon X16 LTE modem, which enables Category 16 LTE download speeds that go up to one gigabit per second. For uploads, there's a Category 13 modem that lets you upload at 150MB/sec. For Wi-Fi, Qualcomm is offering an integrated 2x2 802.11ac Wave-2 solution along with an 802.11ad multi-gigabit Wi-Fi module that tops out at 4.6Gb/sec. The 835 will consume up to 60% less power while on Wi-Fi."

Technology purists would know that LTE, which is widely referred to as 4G, was in fact pre-4G or, as some preferred to call it, 3.9G. New UE categories were introduced in Rel-10 to turn LTE into LTE-Advanced, with top speeds of 3 Gbps. This way, the ITU requirements for a technology to be considered 4G (IMT-Advanced) were satisfied.


LTE-A was already gigabit-capable in theory, but in practice we had been seeing peak speeds of up to 600 Mbps until recently. With this off my chest, let's look at what announcements are being made. Before that, you may want to revisit what 4.5G or LTE-Advanced Pro is here.

  • Qualcomm, Telstra, Ericsson and NETGEAR Announce World’s First Gigabit Class LTE Mobile Device and Gigabit-Ready Network. Gigabit Class LTE download speeds are achieved through a combination of 3x carrier aggregation, 4x4 MIMO on two aggregated carriers plus 2x2 MIMO on the third carrier, and 256-QAM higher order modulation (a rough calculation of how this adds up to a gigabit is sketched after this list).
  • TIM in Italy is the first in Europe to launch 4.5G up to 500 Mbps in Rome, Palermo and Sanremo
  • Telenet, in partnership with ZTE, has achieved a download speed of 1.3 Gbps during a demonstration of ZTE's new 4.5G technology. That's four times faster than 4G's maximum download speed. Telenet is the first in Europe to reach this speed in real-life circumstances. The ZTE 4.5G technology uses 4x4 MIMO beamforming, 3-carrier aggregation and 256-QAM modulation.
  • AT&T said, "The continued deployment of our 4G LTE-Advanced network remains essential to laying the foundation for our evolution to 5G. In fact, we expect to begin reaching peak theoretical speeds of up to 1 Gbps at some cell sites in 2017. We will continue to densify our wireless network this year through the deployment of small cells and the use of technologies like carrier aggregation, which increases peak data speeds. We’re currently deploying three-way carrier aggregation in select areas, and plan to introduce four-way carrier aggregation as well as LTE-License Assisted Access (LAA) this year."
  • T-Mobile USA nearly reached a Gigabit and here is what they say, "we reached nearly 1 Gbps (979 Mbps) on our LTE network in our lab thanks to a combination of three carrier aggregation, 4x4 MIMO and 256 QAM (and an un-released handset)."
  • The other US operator, Sprint, expects to unveil some of its work with 256-QAM and massive MIMO on its licensed spectrum that pushes the 1 Gbps speed boundary. It's unclear whether this will include an actual deployment of the technology.
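
For the Telstra/Ericsson/Qualcomm/NETGEAR combination in the first bullet, a rough peak-rate calculation shows how 3x carrier aggregation, 4x4 MIMO and 256-QAM add up to a gigabit. A minimal sketch (the ~25% overhead for control channels and reference signals is my own rounded assumption):

def lte_dl_peak_bps(carriers, overhead=0.25):
    """Very rough LTE downlink peak rate.

    Each 20 MHz carrier has 100 PRBs x 12 subcarriers x 14 symbols ≈ 16,800
    resource elements per ms per layer; 'carriers' is a list of
    (mimo_layers, bits_per_symbol) tuples.
    """
    res_per_ms = 100 * 12 * 14
    raw_bits_per_ms = sum(layers * bits * res_per_ms for layers, bits in carriers)
    return raw_bits_per_ms * 1000 * (1 - overhead)   # per ms -> per second, minus overhead

# 3x CA: 4x4 MIMO on two carriers, 2x2 on the third, 256-QAM (8 bits per symbol)
print(f"~{lte_dl_peak_bps([(4, 8), (4, 8), (2, 8)])/1e9:.2f} Gbit/s")
# Comes out at roughly 1 Gbit/s, in line with Category 16 peak rates.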

So we are going to see a lot of higher-speed LTE this year, and yes, we can call it Gigabit LTE, but let's not forget that the criterion for a technology to be real '4G' was that it should be able to do 1 Gbps in both DL and UL. Sadly, the UL part is still not going gigabit anytime soon.

Saturday 7 January 2017

New LTE UE Categories (Downlink & Uplink) in Release-13

Just noticed that the LTE UE categories have been updated since I last posted about them here. From Release-12 onwards, we now have the possibility of separate Downlink (ue-CategoryDL) and Uplink (ue-CategoryUL) categories.

From the latest RRC specifications, we can see that there are now two new fields that can be present: ue-CategoryDL and ue-CategoryUL.

An example defined here is as follows:

Example of RRC signalling for the highest combination
UE-EUTRA-Capability
   ue-Category = 4                        -- legacy category, still reported for older networks
      ue-Category-v1020 = 7               -- Rel-10 extension of the category
         ue-Category-v1170 = 10           -- Rel-11 extension
            ue-Category-v11a0 = 12        -- further Rel-11 extension
               ue-CategoryDL-r12 = 12     -- separate downlink category from Rel-12
               ue-CategoryUL-r12 = 13     -- separate uplink category from Rel-12
                  ue-CategoryDL-v1260 = 16   -- later DL extension: effectively DL Cat 16 / UL Cat 13

From the RRC Specs:

  • The field ue-CategoryDL is set to values m1, 0, 6, 7, 9 to 19 in this version of the specification.
  • The field ue-CategoryUL is set to values m1, 0, 3, 5, 7, 8, 13 or 14 in this version of the specification.
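
In practice, a network or test tool has to work out the effective DL/UL categories from whichever of these fields the UE actually includes. A small sketch of that logic (my own simplified reading of how the later extension fields take precedence, not a quote from the spec):

def effective_categories(capability):
    """Pick the most specific DL and UL category fields reported by the UE."""
    dl_fields = ["ue-CategoryDL-v1260", "ue-CategoryDL-r12", "ue-Category-v11a0",
                 "ue-Category-v1170", "ue-Category-v1020", "ue-Category"]
    ul_fields = ["ue-CategoryUL-r12", "ue-Category-v11a0",
                 "ue-Category-v1170", "ue-Category-v1020", "ue-Category"]
    dl = next(capability[f] for f in dl_fields if f in capability)
    ul = next(capability[f] for f in ul_fields if f in capability)
    return dl, ul

caps = {"ue-Category": 4, "ue-Category-v1020": 7, "ue-Category-v1170": 10,
        "ue-Category-v11a0": 12, "ue-CategoryDL-r12": 12, "ue-CategoryUL-r12": 13,
        "ue-CategoryDL-v1260": 16}
print(effective_categories(caps))   # (16, 13) for the signalling example above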

3GPP TS 36.306 section 4 provides much more detail on these UE categories and their values. I am adding these pictures from the LG space website.



More info:



Saturday 10 December 2016

Free Apps for Field Testing

People who follow me on Twitter may have noticed that I often post photos when I am doing surveys, field testing, debugging, etc. In the good old days we often had to carry many different kinds of specialised test equipment to do basic measurements. Nowadays a lot of this can be done with the help of free apps on Android phones. The best tool that can provide a great amount of info is Qualcomm's QXDM, but it's really expensive.

Here are a few tools that I use. If you have one that I haven't listed below, please add it in the comments.


The screenshot shows the main tools along with my favourite, Speedtest. While I agree that Speedtest is not the most reliable way to measure the speed of your connection, I think it's the most widely used.


WiFi Analyzer is another great app that can be used at home and at other locations where people complain about not getting good WiFi speeds. I have been at locations where the 2.4 GHz band is absolutely packed with APs. 5 GHz is also getting busier, though there are still a lot of free channels.


G-NetTrack Lite is a great tool to keep track of the cells you have been visiting. If you are driving, this can collect a lot of valuable info. The paid version, G-NetTrack Pro, can collect the info in the form of a map that can be used for offline viewing with the help of Google Earth.


I use LTE Discovery mainly for finding the band I am currently camped on. It would be great if a tool could give the exact frequency and EARFCN, but the band is good enough too. I was once in a situation where I could see two different cells but they had the same PCI. Only after using this app did I figure out that they were on different bands.


Finally, Network Cell Info Lite gives neighbour cells, which can often be useful. I am not sure if these are the neighbours from System Information or from Measurement Control messages sent by the network, or just something like detected cells that the phone sees around it.

Ping and IPConfig are other tools that can come in handy sometimes.

Are there any other tools that you like? Please share using comments.

Free Apps for Field Testing - Part 2

Sunday 4 December 2016

5G, Hacking & Security


It looks like devices that are not manufactured with security and privacy in mind are going to be the weakest link in future network security problems. I am sure you have probably read about how hacked cameras and routers enabled a Mirai botnet to take out major websites in October. Since then, there has been no shortage of stories about how IoT devices could be hacked. In fact, the one I really liked was 'Researchers hack Philips Hue lights via a drone; IoT worm could cause city blackout' 😏.


Enter 5G and the problem could be made much worse. With high-speed data transfer and signalling, these devices can mount an instantaneous attack on a very large scale, generating a signalling storm that can take a network down in no time.

Giuseppe Targia of Nokia presented an excellent summary of some of these issues at the IDATE DigiWorld Summit 2016. His talk is embedded below:



You can check out many interesting presentations from the IDATE DigiWorld Summit 2016 on YouTube and SlideShare.

Related posts:


Wednesday 23 November 2016

Facebook's Attempt to Connect the Unconnected

I am sure that by now everyone is aware of Facebook's attempts to connect people in rural and remote areas. Back in March they published the State of Connectivity report, highlighting that there are still over 4 billion people who are unconnected.


The chart above is very interesting and shows that there are still people who use 2G to access Facebook. Personally, I am not sure if these charts take Wi-Fi into account or not.

In an earlier post on the Small Cells blog, I made a case for using small cells as the best solution for rural and remote coverage. There are a variety of options for power, including wind turbines, solar power and even old-fashioned diesel/petrol generators. The main challenge is sometimes the backhaul. To solve this issue Facebook has been working on its drones as a means of providing the backhaul connectivity.


Recently Facebook held its first Telecom Infra Project (TIP) Summit in California. The intention was to bring the diverse set of members (over 300 as I write this post) into a room to discuss ideas and ongoing projects.


There were quite a few interesting talks (videos available here). I have embedded the slides and the talk by SK Telecom below, but before that I want to highlight the important point made by AMN.


As can be seen in the picture above, technology is just one of the challenges in providing rural and remote connectivity. There are other challenges that have to be considered too.

Embedded below is the talk given by Dr. Alex Jinsung Choi, CTO of SK Telecom and TIP Chairman, with the slides following it.



For more info, see:
Download the TIP slides from here.

Thursday 17 November 2016

5G, Debates, Predictions and Stories

This post contains a summary of three interesting events that took place recently.


CW (Cambridge Wireless) organised a couple of debates on 5G, as can be seen from the topics above. Below are the summary video and the Twitter discussion story.





The second story is from 'The Great Telco Debate 2016', organised by TM Forum.


I am not embedding the story, but anyone interested can read the Twitter summary here: https://storify.com/zahidtg/the-great-telco-debate-2016



Finally, there was 'Predictions: 2017 and Beyond', organised by CCS Insight. The whole Twitter discussion is embedded below.


Saturday 12 November 2016

Verizon's 5G Standard

Earlier this year I wrote a LinkedIn post on how operators are setting a timetable for 5G (5G: Mine is bigger than yours), and recently Dean Bubley of Disruptive Analysis wrote a similar kind of post, also on LinkedIn, with a bit more detail (5G: Industry Politics, Use-Cases & a Realistic Timeline).


Some of you may be unaware that the US operator Verizon has formed the 'Verizon 5G Technology Forum' (V5GTF) with the intention of developing the first set of standards, which can also influence the direction of 3GPP standardization and provide an early-mover advantage to itself and its partners.

The following from Light Reading news summarizes the situation well:

Verizon has posted its second round of work with its partners on a 5G specification. The first round was around the 5G radio specification; this time the work has been on the mechanics of connecting to the network. The operator has been working on the specification with Cisco Systems Inc., Ericsson AB, Intel Corp., LG Electronics Inc., Nokia Corp., Qualcomm Inc. and Samsung Corp. via the 5G Technology Forum (V5GTF) it formed late in 2015.

Sanyogita Shamsunder, director of strategy at Verizon, says that the specification is "75% to 80% there" at least for a "fixed wireless use case." Verizon is aiming for a "friendly, pre-commercial launch" of a fixed wireless pilot in 2017, Koeppe notes.

Before we go further, let's see this excellent video by R&S in which Andreas Roessler explains what Verizon is up to:



Verizon and SKT are both trying to be 5G leaders and to roll out pre-standard 5G as soon as they can. In fact, Qualcomm recently announced a 28 GHz modem that will be used in separate pre-standard 5G cellular trials by Verizon and Korea Telecom.

Quoting from the EE Times article:

The Snapdragon X50 delivers 5 Gbits/second downlinks and multiple gigabit uplinks for mobile and fixed-wireless networks. It uses a separate LTE connection as an anchor for control signals while the 28 GHz link delivers the higher data rates over distances of tens to hundreds of meters.

The X50 uses eight 100 MHz channels, a 2x2 MIMO antenna array, adaptive beamforming techniques and 64 QAM to achieve a 90 dB link budget. It works in conjunction with Qualcomm’s SDR05x mmWave transceiver and PMX50 power management chip. So far, Qualcomm is not revealing more details of modem that will sample next year and be in production before June 2018.

Verizon and Korea Telecom will use the chips in separate trials starting late next year, anticipating commercial services in 2018. The new chips mark a departure from prototypes not intended as products that Qualcomm Research announced in June.

Korea Telecom plans a mobile 5G offering at the February 2018 Winter Olympics. Verizon plans to launch in 2018 a less ambitious fixed-wireless service in the U.S. based on a specification it released in July. KT and Verizon are among a quartet of carriers that formed a group in February to share results of early 5G trials.

For its part, the 3GPP standards group is also stepping up the pace of the 5G standards efforts it officially started earlier this year. It endorsed last month a proposal to consider moving the date for finishing Phase I, an initial version of 5G anchored to LTE, from June 2018 to as early as December 2017, according to a recent Qualcomm blog.

Coming back to Verizon's 5G standard, is it good enough and compatible with 3GPP standards? The answer right now seems to be NO.


The following is from Rethink Wireless:

The issue is that Verizon’s specs include a subcarrier spacing value of 75 kHz, whereas the 3GPP has laid out guidelines that subcarrier spacing must increase by 30 kHz at a time, according to research from Signals Research Group. This means that different networks can work in synergy if required without interfering with each other.

Verizon’s 5G specs do stick to 3GPP requirements in that it includes MIMO and millimeter wave (mmWave). MmWave is a technology that both AT&T and Verizon are leading the way in – which could succeed in establishing spectrum which is licensed fairly traditionally as the core of the US’s high frequency build outs.

A Verizon-fronted group recently rejected a proposal from AT&T to push the 3GPP into finalizing an initial 5G standard for late 2017, thus returning to the original proposed time of June 2018. Verizon was supported by Samsung, ZTE, Deutsche Telecom, France Telecom, TIM and others, which were concerned the split would defocus SA and New Radio efforts and even delay those standards being finalized.

Verizon has been openly criticized in the industry, mostly by AT&T (unsurprisingly), as its hastiness may lead to fragmentation – yet it still looks likely to beat AT&T to be the first operator to deploy 5G, if only for fixed access.

Verizon probably wants the industry to believe that it was prepared for eventualities such as this – prior to the study from Signal Research Group, the operator said its pre-standard implementation will be close enough to the standard that it could easily achieve full compatibility with simple alterations. However, Signals Research Group’s president Michael Thelander has been working with the 3GPP since the 5G standard was birthed, and he begs to differ.

Thelander told FierceWireless, “I believe what Verizon is doing is not hardware-upgradeable to the real specification. It’s great to be trialing, even if you define your own spec, just to kind of get out there and play around with things. That’s great and wonderful and hats off to them. But when you oversell it and call it 5G and talk about commercial services, it’s not 5G. It’s really its own spec that has nothing to do with Release 16, which is still three years away. Just because you have something that operates in millimeter wave spectrum and uses Massive MIMO and OFDM, that doesn’t make it a 5G solution.”

Back in the 3G days, NTT Docomo was the leader in standards and it didn't have enough patience to wait for the 3GPP standards to be completed. As a result it launched its first 3G network, called FOMA (Freedom of Mobile Multimedia Access), based on a pre-standard version of the specs. This resulted in handset manufacturers having to tweak their software to cope with this version, and it suffered from poor economies of scale. Early versions of 3G phones were also not able to roam on the Docomo network. In a way, Verizon is going down the same path.

While there can be some good learning as a result of this pre-5G standard, it may be a good idea not to get too tied to it. A standard that is not compliant will not achieve the required economies of scale, whether with handsets or with dongles and other hotspot devices.


Related posts:



Sunday 6 November 2016

LTE, 5G and V2X

3GPP has recently completed the initial Cellular V2X standard. The following is from the news item:

The initial Cellular Vehicle-to-Everything (V2X) standard, for inclusion in the Release 14, was completed last week - during the 3GPP RAN meeting in New Orleans. It focuses on Vehicle-to-Vehicle (V2V) communications, with further enhancements to support additional V2X operational scenarios to follow, in Release 14, targeting completion during March 2017.
The 3GPP Work Item Description can be found in RP-161894.
V2V communications are based on D2D communications defined as part of ProSe services in Release 12 and Release 13 of the specification. As part of ProSe services, a new D2D interface (designated as PC5, also known as sidelink at the physical layer) was introduced and now as part of the V2V WI it has been enhanced for vehicular use cases, specifically addressing high speed (up to 250Kph) and high density (thousands of nodes).

...


For distributed scheduling (a.k.a. Mode 4) a sensing with semi-persistent transmission based mechanism was introduced. V2V traffic from a device is mostly periodic in nature. This was utilized to sense congestion on a resource and estimate future congestion on that resource. Based on estimation resources were booked. This technique optimizes the use of the channel by enhancing resource separation between transmitters that are using overlapping resources.
The design is scalable for different bandwidths including 10 MHz bandwidth.
Based on these fundamental link and system level changes there are two high level deployment configurations currently defined, and illustrated in Figure 3.
Both configurations use a dedicated carrier for V2V communications, meaning the target band is only used for PC5 based V2V communications. Also in both cases GNSS is used for time synchronization.
In “Configuration 1” scheduling and interference management of V2V traffic is supported based on distributed algorithms (Mode 4) implemented between the vehicles. As mentioned earlier the distributed algorithm is based on sensing with semi-persistent transmission. Additionally, a new mechanism where resource allocation is dependent on geographical information is introduced. Such a mechanism counters near far effect arising due to in-band emissions.
In “Configuration 2” scheduling and interference management of V2V traffic is assisted by eNBs (a.k.a. Mode 3) via control signaling over the Uu interface. The eNodeB will assign the resources being used for V2V signaling in a dynamic manner.
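
A highly simplified sketch of the Mode 4 idea described above: sense how busy each candidate resource has been over recent periods, rely on the periodic nature of V2V traffic to predict future congestion, and then keep transmitting on a lightly used resource semi-persistently (my own illustration, not the exact 3GPP procedure):

import random

def select_sps_resource(sensed_energy_dbm, keep_fraction=0.2):
    """Pick a resource from the least-congested candidates.

    sensed_energy_dbm maps each candidate resource to the average energy sensed
    on it over past periods; periodic V2V traffic makes the past a good
    predictor of future congestion on the same resource.
    """
    ranked = sorted(sensed_energy_dbm, key=sensed_energy_dbm.get)
    candidates = ranked[:max(1, int(len(ranked) * keep_fraction))]
    return random.choice(candidates)   # randomise so nearby vehicles don't all pick the same one

# Example: 10 candidate resources with made-up sensed energy values (dBm)
sensing = {r: random.uniform(-110, -80) for r in range(10)}
resource = select_sps_resource(sensing)
print(f"Transmit semi-persistently on resource {resource} for the next few periods")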

5G Americas has also published a whitepaper on V2X Cellular Solutions. From the press release:

Vehicle-to-Everything (V2X) communications and solutions enable the exchange of information between vehicles and much more - people (V2P), such as bicyclists and pedestrians for alerts, vehicles (V2V) for collision avoidance, infrastructure (V2I) such as roadside devices for timing and prioritization, and the network (V2N) for real time traffic routing and other cloud travel services. The goal of V2X is to improve road safety, increase the efficiency of traffic, reduce environmental impacts and provide additional traveler information services. 5G Americas, the industry trade association and voice of 5G and LTE for the Americas, today announced the publication of a technical whitepaper titled V2X Cellular Solutions that details new connected car opportunities for the cellular and automotive industries.




The whitepaper describes the benefits that Cellular V2X (C-V2X) can provide to support the U.S. Department of Transportation objectives of improving safety and reducing vehicular crashes. Cellular V2X can also be instrumental in transforming the transportation experience by enhancing traveler and traffic information for societal goals.

C-V2X is part of the 3GPP specifications in Release 14. 3GPP announced the completion of the initial C-V2X standard in September 2016. There is a robust evolutionary roadmap for C-V2X towards 5G with a strong ecosystem in place. C-V2X will be a key technology enabler for the safer, more autonomous vehicle of the future.

The whitepaper is embedded below:




Related posts:
Further Reading:



Saturday 29 October 2016

M2M vs IoT

This post is mainly for my engineering colleagues. Over the years I have had many discussions explaining the difference between Machine-to-Machine (M2M), or Machine Type Communication (MTC) as 3GPP refers to it, and the Internet of Things (IoT). Even after explaining the differences, I am often told that this is not correct. Hence I am putting this out here. Please feel free to express your views in the comments section.


Let's take the example of an office with 3 floors. Let's assume that each floor has a coffee machine like the one in this picture, or something similar. Let's consider different scenarios:

Scenario 1: No connectivity
In this case a facilities person has to manually go to each floor and check if there are enough coffee beans, chocolate powder, milk powder, etc. He/she may have to do this, say, 3-4 times a day.

Scenario 2: Basic connectivity (M2M)
Let's say the coffee machine has basic sensors so it can send some kind of notification (phone, email, message, etc.) whenever the coffee beans, chocolate powder, milk powder, etc. fall below a certain level. In some cases you may also be able to check the levels using some kind of app on your phone or computer. This is an example of M2M.

Scenario 3: Advanced connectivity (IoT)
Let's say that the coffee machine is connected to the office system and database. It knows when employees come in and what their coffee/drink consumption patterns are. This way the machine can optimize when it needs to be topped up. If there is a large meeting/event going on, the coffee machine can even check before the breaks and indicate in advance that it needs topping up with beans/chocolate/milk/etc.

Scenario 4: Intelligent Devices (Advanced IoT)
If we take the coffee machine from scenario 3 and add intelligence to it, it can even keep track of the inventory: how much coffee, chocolate powder, milk powder, etc. is in stock and when they need ordering again. It can have a UI (user interface) that employees can use to give feedback on which coffee beans are more or less popular, or which drinks are popular. This info can be used by the machines to order supplies, taking into account price, availability, etc.

In many cases, APIs would be available for people to build services on top of the basic ones to make life easier. For example, someone could build a service where, if a cup is already at the dispenser and has been there for at least 2 minutes (so you know it's not being used by someone else), a person can choose/order their favourite drink from their seat so they don't have to wait 30 seconds at the machine.
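
As a toy example of the kind of service someone could build on top of such APIs (the coffee-machine client and all of its method names are hypothetical, purely for illustration):

def order_from_desk(machine, user, drink, min_idle_seconds=120):
    """Order a drink remotely, but only if a cup has been sitting unused at the
    dispenser for a while, so we know nobody else is in the middle of an order.

    'machine' is assumed to be a hypothetical API client exposing cup_present(),
    seconds_since_cup_placed() and dispense().
    """
    if machine.cup_present() and machine.seconds_since_cup_placed() >= min_idle_seconds:
        machine.dispense(drink, charge_to=user)
        return f"{drink} will be ready by the time {user} reaches the machine"
    return "No idle cup at the dispenser - walk over and place one first"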

If you think about this further, you will notice that in this scenario the only tasks left for humans are cleaning the coffee machine, topping it up, etc. In future these can be automated too, with robots carrying out these kinds of jobs, so there would be no need for humans to do these menial tasks.


I really like this slide from InterDigital as it captures the difference between M2M and IoT very well, especially in the light of the discussion above.

With the current M2M, we have:

  • Connectivity: connection for machines;
  • Content: massive raw data from things;

IoT is communication to/from things that offers new services via cloud / context / collaboration / cognition technologies.

With evolution to IoT, we have:
  • Cloud: cloud service and XaaS (Everything as a Service) for IoT;
  • Context: context-aware design;
  • Collaboration: collaborative services;
  • Cognition: semantics and autonomous system adjustment
Let me know if you agree. 

Sunday 23 October 2016

VoLTE Operator Case Study from LTE Voice Summit


Phil Sheppard, Director of Network Strategy & Architecture at Three UK, was the keynote speaker at the LTE Voice Summit held in London this month. It's been over a year since Three launched its VoLTE service in the 800 MHz band. In fact, it has recently started showing adverts with Maisie Williams (Arya Stark from Game of Thrones) fighting black spots (not-spots) with 4G Super-Voice.



As I highlighted in the LTEVoice 2015 summary, where China Mobile group vice-president Mr. Liu Aili admitted that "VoLTE network deployment is the one of the most difficult project ever, the implementation complexity and workload is unparalleled in history", Three UK's experience wasn't very different. Quoting from the ThinkSmallCell summary of the event:
It was a huge project, the scope far exceeding original expectations and affecting almost every part of their operations.  They spent 22,245 man days (excluding vendor staff time) – more than 100 man years of effort – mostly involved with running huge numbers of test cases on the network and devices.

There are some other interesting bits in the different summaries provided in the references below, but here are a few things I found of interest with regard to Three UK's VoLTE deployment:
  • 170 million voice call minutes have been carried over VoLTE since the launch in September 2015
  • Only devices that support both VoLTE and 800 MHz are allowed to camp on the 800 MHz band. This is to avoid disappointment with CS fallback
  • There are plans to roll out VoLTE in other bands too, once all the niggles are ironed out in the 800 MHz band.

Here is the presentation from 3 UK:



Blog posts summarizing LTEVoice 2016:

Related posts: