
Thursday 20 July 2017

Second thoughts about LTE-U / LAA

It's been a while since I wrote about LTE-U / LAA on this blog. I have written a few posts on the Small Cells blog, but they are getting dated as well. For anyone needing a quick refresher on LTE-U / LAA, please head over to IoTforAll or ShareTechNote. This post is not about the technology per se, but about the overall ecosystem, with LTE-U / LAA (and even MulteFire) being part of it.

Let's recap the market status quickly. T-Mobile US already has LTE-U active, and LAA was tested recently. SK Telecom achieved 1Gbps in LAA trials with Ericsson. AT&T has decided to skip the non-standard LTE-U and go straight to standards-based LAA. MTN & Huawei have trialled LAA for in-building coverage in South Africa. All this sounds good and inspires confidence in the technology; however, some observations are worrying me.


A couple of years back, when the LTE-U idea was conceived, followed by LAA, the 5GHz channels were relatively empty. Recently I have noticed that they are all filling up.

In any mall, hotel, service station or even big building I go to, the channels all seem to be occupied. While supplemental downlink channels are 20MHz each, the Wi-Fi channels can be 20MHz, 40MHz, 80MHz or even 160MHz wide.

On many occasions I have had to switch off my Wi-Fi, as the speeds were so poor (due to the high number of active users), and go back to using 4G. How will this impact the supplemental downlink in LTE-U / LAA? How will it impact the Wi-Fi users?

On my smartphone, most days I get 30-40Mbps download speeds, which works perfectly fine for all my needs. The only reason we would need higher speeds is to tether and use laptops for work, listen to music, play games or watch videos. Most people I know or work with don't require gigabit speeds at the moment.

Once a user receiving high-speed data on their device over LTE-U / LAA creates a Wi-Fi hotspot, that hotspot may use the same 5GHz channels as the ones the network is using for supplemental downlink. How do you manage this interference? I am looking forward to discussions on technical fora where users will be asking why their download speeds fall as soon as they switch their Wi-Fi hotspot on.

The fact is that in non-dense areas (rural, suburban or even general built-up areas), operators do not have to worry about the network being overloaded and can use their licensed spectrum; nobody is planning to deploy LTE-U / LAA there. In dense and ultra-dense areas, there are many users, many Wi-Fi access points, ad-hoc Wi-Fi networks and many other sources of interference. In theory LTE-U / LAA can help significantly, but with so many sources of interference it's uncertain whether it would be a win-win for everyone or just more interference for everyone to deal with.

Further reading:

Monday 19 June 2017

Network Sharing is becoming more relevant with 5G

5G is becoming a case of 'damned if you do, damned if you don't'. Behind the headlines of new achievements and faster speeds lies the reality that many operators are struggling to keep afloat. Indian and Nigerian operators are struggling with heavy debt, and it won't be a surprise if some of them fold in due course.

With increasing costs and decreasing revenues, it's no surprise that operators are looking at ways of keeping costs down. Some operators are postponing their 5G plans in favour of Gigabit LTE. Other, die-hard operators are pushing ahead with 5G but looking at ways to keep the costs down. In Japan, for example, NTT DOCOMO has suggested sharing 5G base stations with its two rivals to trim costs, particularly focusing efforts on urban areas.


In this post, I am looking to summarise an old but brilliant post by Dr. Kim Larsen here. While it is a very well written and in-depth post, I have a feeling that many readers may not have the patience to go through all of it. All pictures in this post are from the original post by Dr. Kim Larsen.


Before embarking on any network sharing mission, it's worthwhile asking the 5 W's (Who, Why, What, Where, When) and 2 H's (How, How much).

  • Why do you want to share?
  • Who to share with? (your equal, your better or your worse).
  • What to share? (sites, passives, actives, frequencies, new sites, old sites, towers, rooftops, organisation, …).
  • Where to share? (rural, suburban, urban, regional, all, etc.).
  • When is a good time to start sharing? During the rollout phase, steady phase or modernisation phase; see the picture below. For 5G, it would make much more sense for network sharing to be done from the beginning, i.e., the rollout phase.


  • How to do sharing? This may sound like a simple question, but it should take account of the regulatory complexity in a country. The picture below explains this well:



  • How much will it cost and how much saving can be attained in the long term? This is in fact a very important question, because after a lot of hard work and laying off many people, the end result may be an insignificant amount of cost savings. Dr. Kim provides detailed insight on this topic that I find difficult to summarise; the best option is to read it on his blog.


An alternative approach to network sharing is national roaming. Many European operators are dead against national roaming, as it means the network loses its differentiation compared to rival operators. Having said that, it's always worthwhile working out the savings and seeing if it can actually help.

National roaming can be attractive in relatively low traffic scenarios, or in cases where the product of traffic units and national roaming unit cost remains manageable and lower than the shared network cost.
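As a toy illustration of that break-even logic (all numbers below are placeholders of my own, not figures from Dr. Kim's post):

# Toy model: national roaming stays attractive while
# traffic x per-unit roaming cost is below the shared-network cost.
def roaming_is_attractive(traffic_units: float,
                          roaming_cost_per_unit: float,
                          shared_network_cost: float) -> bool:
    return traffic_units * roaming_cost_per_unit < shared_network_cost

print(roaming_is_attractive(1_000_000, 0.002, 5_000))   # True: low traffic area
print(roaming_is_attractive(10_000_000, 0.002, 5_000))  # False: sharing wins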

The termination or restructuring cost, including the write-off of existing telecom assets (i.e., radio nodes, passive site solutions, transmission, aggregation nodes, etc.), is likely to be a substantial financial burden on the national roaming business case in an area with existing telecom infrastructure; certainly above and beyond that of a network sharing scenario, where assets are re-used and restructuring costs might be partially shared between the sharing partners.

Obviously, if national roaming is established in an area that has no network coverage, restructuring and termination costs are not an issue and the network TCO will clearly be avoided, albeit the above economic logic and P&L trade-offs on cost still apply.

If this has been useful to understand some of the basics of network sharing, I encourage you to read the original blog post as that contains many more details.

Further reading:



Sunday 11 June 2017

Theoretical calculation of EE's announcement for 429Mbps throughput


The CEO of UK mobile network operator EE recently announced on Twitter that they have achieved 429Mbps on a live network. The following is from their press release:

EE, the UK’s largest mobile network operator and part of the BT Group, has switched on the next generation of its 4G+ network and demonstrated live download speeds of 429Mbps in Cardiff city centre using Sony’s Xperia XZ Premium, which launched on Friday 2 June. 
The state of the art network capability has been switched on in Cardiff and the Tech City area of London today. Birmingham, Manchester and Edinburgh city centres will have sites upgraded during 2017, and the capability will be built across central London. Peak speeds can be above 400Mbps with the right device, and customers connected to these sites should be able to consistently experience speeds above 50Mbps. 
Sony’s Xperia XZ Premium is the UK’s first ‘Cat 16’ smartphone optimised for the EE network, and EE is the only mobile network upgrading its sites to be able to support the new device’s unique upload and download capabilities. All devices on the EE network will benefit from the additional capacity and technology that EE is building into its network. 
... 
The sites that are capable of delivering these maximum speeds are equipped with 30MHz of 1800MHz spectrum, and 35MHz of 2.6GHz spectrum. The 1800MHz carriers are delivered using 4x4 MIMO, which sends and receives four signals instead of just two, making the spectrum up to twice as efficient. The sites also broadcast 4G using 256QAM, or Quadrature Amplitude Modulation, which increases the efficiency of the spectrum.

Before proceeding further you may want to check out my posts 'Gigabit LTE?' and 'New LTE UE Categories (Downlink & Uplink) in Release-13'

If you read the press release carefully, EE are now using 65MHz of spectrum for 4G. I wanted to provide a calculation of what's possible in theory with this much bandwidth.

Going back to basics (a detailed calculation is in the slideshare below), in LTE/LTE-A the maximum bandwidth of a single carrier is 20MHz. Any more bandwidth has to be used via Carrier Aggregation. So, as per the EE announcement, it's 20 + 10 MHz in the 1800MHz band and 20 + 15 MHz in the 2600MHz band.

So for the 1800MHz band:

50 resource blocks (RBs) per 10MHz, so 150 RBs for 30MHz.
Each RB carries 12 x 7 x 2 = 168 symbols per millisecond in the case of normal cyclic prefix (CP).
For 150 RBs, 150 x 168 = 25,200 symbols per ms, or 25,200,000 symbols per second. This can also be written as 25.2 Msps (mega-symbols per second).
256QAM means 8 bits per symbol, so the calculation becomes 25.2 x 8 = 201.6 Mbps. Using 4x4 MIMO, 201.6 x 4 = 806.4 Mbps.
Removing the 25% overhead used for signalling gives 604.8 Mbps.


Repeating the same exercise for 35MHz of the 2600MHz band, with 2x2 MIMO and 256QAM:

175 x 168 = 29,400 symbols per ms, or 29,400,000 symbols per second. This can be written as 29.4 Msps.
29.4 x 8 = 235.2 Mbps.
Using 2x2 MIMO, 235.2 x 2 = 470.4 Mbps.
Removing the 25% overhead used for signalling gives 352.8 Mbps.

The combined theoretical throughput for the above is 957.6 Mbps.
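If you want to play with these numbers, below is a minimal sketch of the same arithmetic in Python. The 5-RBs-per-MHz rule, the 168 symbols per RB per ms (normal CP) and the flat 25% signalling overhead are the assumptions from the calculation above:

def lte_peak_mbps(bandwidth_mhz, mimo_layers, bits_per_symbol, overhead=0.25):
    rbs = 5 * bandwidth_mhz                       # e.g. 150 RBs for 30MHz
    msps = rbs * 168 / 1000.0                     # mega-symbols per second
    raw_mbps = msps * bits_per_symbol * mimo_layers
    return raw_mbps * (1 - overhead)              # remove signalling overhead

band_1800 = lte_peak_mbps(30, mimo_layers=4, bits_per_symbol=8)  # 604.8 Mbps
band_2600 = lte_peak_mbps(35, mimo_layers=2, bits_per_symbol=8)  # 352.8 Mbps
print(band_1800 + band_2600)                      # ~957.6 Mbps combined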

For those interested in revisiting the basic LTE calculations, here is an interesting document:




Further reading:

Sunday 7 May 2017

10 years battery life calculation for Cellular IoT

I made an attempt to place the different cellular and non-cellular LPWA technologies together in a picture in my last post here. Someone pointed out that the pictures above, from the LoRa Alliance whitepaper, are even better, and I agree.

Most IoT technologies list their battery life as 10 years. There is an article on Medium rightly pointing out that on Verizon's LTE-M network, IoT device batteries may not last very long.

The problem is that the 10-year battery life is a headline figure, and in the real world it's sometimes not that critical. It all depends on the application. For example, this Iota Pet Tracker uses Bluetooth but only claims a battery life of "weeks". I guess ztrack, based on LoRa, would give similar results. I have to admit that non-cellular technologies should have longer battery life, but it all depends on the applications and use cases. An IoT device in a car may not have to worry too much about power consumption; similarly for a fleet tracker that has solar power, or one that is expected to outlast the fleet itself, etc.


So, coming back to power consumption: Martin Sauter, in his excellent Wireless Moves blog post, provided the calculation that I am copying below with some additions:

The calculation can be found in 3GPP TR 45.820, for NB-IoT in Chapter 7.3.6.4 on ‘Energy consumption evaluation’.

The battery capacity used for the evaluation was 5 Wh. That's about half, or even only a third, of the battery capacity in a smartphone today. So yes, that is quite a small battery indeed. The chapter also contains assumptions on how much power the device draws in different states. In the 'idle' state, which the device is in most often, power consumption is assumed to be 0.015 mW.

How long would the battery be able to power the device if it were always in the idle state? The calculation is easy and you end up with 38 years. That doesn't include battery self-discharge, and I wondered how much that would be over 10 years. According to the Varta handbook of primary lithium cells, self-discharge of a non-rechargeable lithium battery is less than 1% per year. So subtract roughly 4 years from that number.

Obviously, the device is not always idle, and when transmitting the device is assumed to use 500 mW of power. Yes, with this power consumption, the battery would not last 34 years but less than 10 hours. But we are talking about NB-IoT, so the device doesn't transmit most of the time. The study looked at different transmission patterns. If 200 bytes are sent once every 2 hours, the device would run on that 5 Wh battery for 1.7 years. If the device only transmits 50 bytes once a day, the battery would last 18.1 years.

So yes, the 10 years are quite feasible for devices that collect very little data and only transmit it once or twice a day.
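A rough sketch of this arithmetic, for anyone who wants to reproduce it. The battery capacity and the idle/transmit power figures are the ones quoted above; the transmit time per report is my own illustrative placeholder, not a value from TR 45.820, which models the radio protocol in far more detail:

BATTERY_WH = 5.0   # battery capacity assumed in TR 45.820
IDLE_MW = 0.015    # idle power draw from the report
TX_MW = 500.0      # transmit power draw from the report

def to_years(hours):
    return hours / (24 * 365)

print(to_years(BATTERY_WH * 1000 / IDLE_MW))   # always idle: ~38 years
print(BATTERY_WH * 1000 / TX_MW)               # always transmitting: 10 hours

# Duty-cycled operation: one report every interval_h hours, transmitting
# for tx_s seconds each time (tx_s = 5 is a made-up placeholder).
def duty_cycled_years(interval_h, tx_s=5.0):
    period_s = interval_h * 3600
    avg_mw = (TX_MW * tx_s + IDLE_MW * (period_s - tx_s)) / period_s
    return to_years(BATTERY_WH * 1000 / avg_mw)

print(duty_cycled_years(2))   # ~1.6 years, in the region of the report's 1.7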

The conclusions from the report clearly state:

The achievable battery life for a MS using the NB-CIoT solution for Cellular IoT has been estimated as a function of reporting frequency and coupling loss. 

It is important to note that these battery life estimates are achieved with a system design that has been intentionally constrained in two key respects:

  • The NB-CIoT solution has a frequency re-use assumption that is compatible with a stand-alone deployment in a minimum system bandwidth for the entire IoT network of just 200 kHz (FDD), plus guard bands if needed.
  • The NB-CIoT solution uses a MS transmit power of only +23 dBm (200 mW), resulting in a peak current requirement that is compatible with a wider range of battery technologies, whilst still achieving the 20 dB coverage extension objective.  

The key conclusions are as follows:

  • For all coupling losses (so up to 20 dB coverage extension compared with legacy GPRS), a 10 year battery life is achievable with a reporting interval of one day for both 50 bytes and 200 bytes application payloads.
  • For a coupling loss of 144 dB (so equal to the MCL for legacy GPRS), a 10 year battery life is achievable with a two hour reporting interval for both 50 bytes and 200 bytes application payloads. 
  • For a coupling loss of 154 dB, a 10 year battery life is achievable with a 2 hour reporting interval for a 50 byte application payload. 
  • For a coupling loss of 154 dB with 200 byte application payload, or a coupling loss of 164 dB with 50 or 200 byte application payload, a 10 year battery life is not achievable for a 2 hour reporting interval. This is a consequence of the transmit energy per data bit (integrated over the number of repetitions) that is required to overcome the coupling loss and so provide an adequate SNR at the receiver. 
  • Use of an integrated PA only has a small negative impact on battery life, based on the assumption of a 5% reduction in PA efficiency compared with an external PA.

Further improvements in battery life, especially for the case of high coupling loss, could be obtained if the common assumption that the downlink PSD will not exceed that of legacy GPRS was either relaxed to allow PSD boosting, or defined more precisely to allow adaptive power allocation with frequency hopping.

In a future post I will look at the technology aspects of how 3GPP made enhancements in Rel-13 to reduce power consumption in CIoT.

Also have a look at this GSMA whitepaper on 3GPP LPWA, which lists the application requirements; it is quite handy.

Saturday 7 January 2017

New LTE UE Categories (Downlink & Uplink) in Release-13

Just noticed that the LTE UE categories have been updated since I last posted about them here. From Release-12 onwards, we now have the possibility of separate downlink (ue-CategoryDL) and uplink (ue-CategoryUL) categories.

From the latest RRC specifications, we can see that there are now two new fields that may be present: ue-CategoryDL and ue-CategoryUL.

An example defined here is as follows:

Example of RRC signalling for the highest combination
UE-EUTRA-Capability
   ue-Category = 4
      ue-Category-v1020 = 7
         ue-Category-v1170 = 10
            ue-Category-v11a0 = 12
               ue-CategoryDL-r12 = 12
               ue-CategoryUL-r12 = 13
                  ue-CategoryDL-v1260 = 16
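
The pattern in the example is that each later-release extension, when present, refines the value signalled by the earlier release, and from Release-12 the separate DL/UL fields take precedence. Below is a hypothetical sketch of that resolution logic in Python; it is plain pseudo-logic, not 3GPP ASN.1 or any real protocol stack's API:

capability = {
    "ue-Category": 4,
    "ue-Category-v1020": 7,
    "ue-Category-v1170": 10,
    "ue-Category-v11a0": 12,
    "ue-CategoryDL-r12": 12,
    "ue-CategoryUL-r12": 13,
    "ue-CategoryDL-v1260": 16,
}

def effective_categories(cap):
    combined = None
    # The latest combined-category extension present wins.
    for field in ("ue-Category", "ue-Category-v1020",
                  "ue-Category-v1170", "ue-Category-v11a0"):
        combined = cap.get(field, combined)
    # Separate DL/UL categories (Release-12 onwards) take precedence
    # over the combined value when signalled.
    dl = cap.get("ue-CategoryDL-v1260", cap.get("ue-CategoryDL-r12", combined))
    ul = cap.get("ue-CategoryUL-r12", combined)
    return dl, ul

print(effective_categories(capability))  # (16, 13)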

From the RRC Specs:

  • The field ue-CategoryDL is set to values m1, 0, 6, 7, 9 to 19 in this version of the specification.
  • The field ue-CategoryUL is set to values m1, 0, 3, 5, 7, 8, 13 or 14 in this version of the specification.

3GPP TS 36.306 section 4 provides much more detail on these UE categories and their values. I am adding these pictures from the LG space website.



More info:



Sunday 16 October 2016

Inside 3GPP Release-13 - Whitepaper by 5G Americas


The following is from the 5G Americas press release:

The summary offers insight to the future of wireless broadband and how new requirements and technological goals will be achieved. The report updates Release 13 (Rel-13) features that are now completed at 3GPP and were not available at the time of the publication of a detailed 5G Americas report, Mobile Broadband Evolution Towards 5G: 3GPP Release 12 & Release 13 and Beyond in June 2015.
The 3GPP standards have many innovations remaining for LTE to create a foundation for 5G.  Rel-12, which was finalized in December 2014, contains a vast array of features for both LTE and HSPA+ that bring greater efficiency for networks and devices, as well as enable new applications and services. Many of the Rel-12 features were extended into Rel-13.  Rel-13, functionally frozen in December 2015 and completed in March 2016, continues to build on these technical capabilities while adding many robust new features.
Jim Seymour, Principal Engineer, Mobility CTO Group, Cisco and co-leader of the 5G Americas report explained, “3GPP Release 13 is just a peek behind the curtain for the unveiling of future innovations for LTE that will parallel the technical work at 3GPP on 5G. Both LTE and 5G will work together to form our connected future.”
The numerous features in the Rel-13 standards include the following for LTE-Advanced:
  • Active Antenna Systems (AAS), including beamforming, Multi-Input Multi-Output (MIMO) and Self-Organizing Network (SON) aspects
  • Enhanced signaling to support inter-site Coordinated Multi-Point Transmission and Reception (CoMP)
  • Carrier Aggregation (CA) enhancements to support up to 32 component carriers
  • Dual Connectivity (DC) enhancements to better support multi-vendor deployments with improved traffic steering
  • Improvements in Radio Access Network (RAN) sharing
  • Enhancements to Machine Type Communication (MTC)
  • Enhanced Proximity Services (ProSe)
Some of the standards work in Rel-13 related to spectrum efficiency includes:
  • Licensed Assisted Access for LTE (LAA) in which LTE can be deployed in unlicensed spectrum
  • LTE Wireless Local Area Network (WLAN) Aggregation (LWA) where Wi-Fi can now be supported by a radio bearer and aggregated with an LTE radio bearer
  • Narrowband IoT (NB-IoT) where lower power wider coverage LTE carriers have been designed to support IoT applications
  • Downlink (DL) Multi-User Superposition Transmission (MUST) which is a new concept for transmitting more than one data layer to multiple users without time, frequency or spatial separation
“The vision for 5G is being clarified in each step of the 3GPP standards. To understand those steps, 5G Americas provides reports on the developments in this succinct, understandable format,” said Vicki Livingston, Head of Communications for the association.

The whitepaper is as follows:



Related posts:

Friday 7 October 2016

What's up with VoLTE Roaming?

I have been covering the LTE Voice Summit for the last couple of years (see here: 2015 & 2014), but this year I won't be around, unfortunately. Anyway, I am sure there will be many interesting discussions. From my point of view, the two topics that have been widely discussed are roaming and VoWiFi.

One of the criticisms of VoWiFi is that the QoS aspect, which makes VoLTE special, is missing. In a recent post, I looked at the QoS in VoWiFi issue. If you haven't seen it, see here.

Coming back to VoLTE roaming, I came across this recent presentation by Orange.
It suggests that S8HR is a bad idea and that the focus should be on LBO. For anyone who is not aware of the details of S8HR & LBO, please see my earlier blog post here. What this presentation suggests is to use LBO with no MTR (Mobile Termination Rates) and instead use TAP (Transferred Account Procedures). The presentation is embedded below:



Another approach that is not discussed much, but seems to be the norm at the moment, is the use of IP eXchange (IPX). I also came across this panel discussion on the topic.


IPX is already in use for data roaming today and acts as a hub between different operators, helping to solve interoperability issues and mediating between roaming models. Based on the calling and called parties, it can work out what kind of quality and approach to use.

Here is the summary of the panel discussion:



Hopefully the LTE Voice Summit next week will provide some more insights. I look forward to hearing them.

Blog posts on related topics:

Monday 26 September 2016

QoS in VoWiFi

Came across this presentation by Eir from last year's LTE Voice Summit.



As the summary of the above presentation says:
  • Turning on WMM (or WME) at access point provides significant protection for voice traffic against competing wireless data traffic
  • Turning on WMM at the client makes only a small difference where there are a small number of clients on the wireless LAN. This plus the “TCP Unfairness” problem means that it can be omitted.
  • All Home gateways support WMM but their firmware may need to be altered to prioritise on DSCP rather than layer two

As this Wikipedia entry explains:

Wireless Multimedia Extensions (WME), also known as Wi-Fi Multimedia (WMM), is a Wi-Fi Alliance interoperability certification, based on the IEEE 802.11e standard. It provides basic Quality of service (QoS) features to IEEE 802.11 networks. WMM prioritizes traffic according to four Access Categories (AC): voice (AC_VO), video (AC_VI), best effort (AC_BE), and background (AC_BK). However, it does not provide guaranteed throughput. It is suitable for well-defined applications that require QoS, such as Voice over IP (VoIP) on Wi-Fi phones (VoWLAN).

WMM replaces the Wi-Fi DCF distributed coordination function for CSMA/CA wireless frame transmission with Enhanced Distributed Coordination Function (EDCF). EDCF, according to version 1.1 of the WMM specifications by the Wi-Fi Alliance, defines Access Categories labels AC_VO, AC_VI, AC_BE, and AC_BK for the Enhanced Distributed Channel Access (EDCA) parameters that are used by a WMM-enabled station to control how long it sets its Transmission Opportunity (TXOP), according to the information transmitted by the access point to the station. It is implemented for wireless QoS between RF media.
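
To make the DSCP-based prioritisation mentioned in the summary above concrete, here is an illustrative sketch of the kind of DSCP-to-access-category mapping a home gateway might apply. The thresholds below are my own assumptions; real firmware mappings vary:

AC_VO, AC_VI, AC_BE, AC_BK = "AC_VO", "AC_VI", "AC_BE", "AC_BK"

def wmm_access_category(dscp):
    if dscp >= 46:     # e.g. EF (46): voice media such as VoWiFi
        return AC_VO
    if dscp >= 32:     # e.g. CS4/AF4x: video
        return AC_VI
    if dscp == 8:      # e.g. CS1: background traffic
        return AC_BK
    return AC_BE       # everything else: best effort

for dscp in (46, 34, 8, 0):
    print(dscp, "->", wmm_access_category(dscp))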

This blog post describes how QoS works in the case of WMM.



Finally, this slide from Cisco shows how it will all fit together.

Further reading:

Friday 23 September 2016

5G New Radio (NR), Architecture options and migration from LTE


You have probably read about the demanding requirements for 5G in many of my blog posts. To meet them, a 'next-generation radio' or 'new radio' (NR) will be introduced in time for 5G. We don't know as yet what air interface, modulation technology, number of antennas, etc. this NR will use, but the slide above from Qualcomm gives an idea of the technologies that will be required.
The slide above gives a list of design innovations that will be required across diverse services as envisioned by 5G proponents.

It should be mentioned that the Rel-10/11/12 versions of LTE are referred to as LTE-Advanced, and Rel-13/14 is being referred to as LTE-A Pro. Rel-15 will probably have a new name, but in various discussions it's being referred to as eLTE.

When the first phase of 5G arrives in Rel-15, eLTE will be used for the access network and the EPC will still be used for the core network. 5G will use NR and eventually get a new core network, probably in time for phase 2. This is often referred to as the next generation core network (NGCN).

The slides below from Deutsche Telekom show their vision of how operators should migrate from eLTE to 5G.



The slides below from AT&T show their vision of LTE to 5G migration.



Eiko Seidel posted the following in the 3GPP 5G standards group (I recommend you join if you want to follow technical discussions):


Summary RAN1#86 on New Radio (5G) Gothenburg, Sweden

At this meeting RAN1 delegates presented and discussed numerous evaluation results mainly in the areas of waveforms and channel coding.

Nonetheless RAN1 was not yet prepared to take many technical decisions. Most agreements are still rather general. 

First NR terminology has been defined. For describing time structures mini-slots have been introduced: a mini-slot is the smallest possible scheduling unit and smaller than a slot or a subframe.

Discussions on waveforms favored filtered and windowed OFDM. Channel coding discussions were in favor of LDPC and Turbo codes. But no decisions have been made yet.

Not having taken many decisions at this meeting, RAN1 now is behind its schedule for New Radio.
Hopefully the lag can be made up at two additional NR specific ad hoc meetings that have been scheduled for January and June 2017.

(thanks to my colleague and friend Dr. Frank Kowalewski for writing this short summary!)

Yet another post from Eiko, on 3GPP RAN3, on a related topic:

The RAN3 schedule is that in February 2017 recommendations can be made for a protocol architecture.  In the meeting arguments came up by some parties that the work plan is mainly addressing U-Plane architecture and that split of C- and U-plane is not considered sufficiently. The background is that the first step will be dual connectivity with LTE using LTE RRC as control plane and some companies would like to concentrate on this initially. It looks like that a prioritization of features might happen in November timeframe. Beside UP and CP split, also the functional split between the central RAN node and the distributed RAN node is taking place for the cloud RAN fronthaul interface. Besides this, also discussion on the fronthaul interface takes place and it will be interesting to see if RAN3 will take the initiative to standardize a CPRI like interface for 5G. Basically on each of the three interfaces controversial discussion is ongoing.

Yet another basic question is: what is actually considered a “New 5G RAN”? Is this term limited to a 5G eNB connected to the NG core? Or can it also be an eLTE eNB with Dual Connectivity to 5G? Must this eLTE eNB be connected to the 5G core, or is it already a 5G RAN when connected to the EPC?

Finally, a slide from Qualcomm on 5G NR standardization & launch.


Sunday 22 May 2016

QCI Enhancements For Mission Critical Communications

It's been quite a while since I posted about QCI and end-to-end bearer QoS in the EPC. In LTE Release-12, some new QCI values were added to handle mission critical communications.


This picture is taken from a new blog called Public Safety LTE. I discussed the default and dedicated bearers in an earlier post here (see the comments on that post too). You will notice in the picture above that new QCI values 65, 66, 69 & 70 have been added. For mission critical group communications, a default bearer with QCI 69 would be used for signalling and a dedicated bearer with QCI 65 for the voice media. Mission critical data would also benefit by using QCI 70.


LTE for Public Safety, published last year, provides a good insight into this topic:

The EPS provides IP connectivity between a UE and a packet data network external to the PLMN. This is referred to as PDN connectivity service. An EPS bearer uniquely identifies traffic flows that receive a common QoS treatment. It is the level of granularity for bearer level QoS control in the EPC/E-UTRAN. All traffic mapped to the same EPS bearer receives the same bearer level packet forwarding treatment. Providing different bearer level packet forwarding treatment requires separate EPS bearers.

An EPS bearer is referred to as a GBR bearer, if dedicated network resources related to a Guaranteed Bit Rate (GBR) are permanently allocated once the bearer is established or modified. Otherwise, an EPS bearer is referred to as a non-GBR bearer.

Each EPS bearer is associated with a QoS profile including the following data:
• QoS Class Identifier (QCI): A scalar pointing in the P-GW and eNodeB to node-specific parameters that control the bearer level packet forwarding treatment in this node.
• Allocation and Retention Priority (ARP): Contains information about the priority level, the pre-emption capability, and the pre-emption vulnerability. The primary purpose of the ARP is to decide whether a bearer establishment or modification request can be accepted or needs to be rejected due to resource limitations.
• GBR: The bit rate that can be expected to be provided by a GBR bearer.
• Maximum Bit Rate (MBR): Limits the bit rate that can be expected to be provided by a GBR bearer.

Following QoS parameters are applied to an aggregated set of EPS bearers and are part of user’s subscription data:
• APN Aggregate Maximum Bit Rate (APN-AMBR): Limits the aggregate bit rate that can be expected to be provided across all non-GBR bearers and across all PDN connections associated with the APN.
• UE Aggregate Maximum Bit Rate (UE-AMBR): Limits the aggregate bit rate that can be expected to be provided across all non-GBR bearers of a UE. The UE routes uplink packets to the different EPS bearers based on uplink packet filters assigned to the bearers while the P-GW routes downlink packets to the different EPS bearers based on downlink packet filters assigned to the bearers in the PDN connection.
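
To tie the above together, here is a hedged sketch of the per-bearer QoS profile as a plain data structure. The field names follow the book's text rather than any real 3GPP or vendor API, and the ARP and bit-rate values are purely illustrative:

from dataclasses import dataclass
from typing import Optional

@dataclass
class EpsBearerQos:
    qci: int                            # QoS Class Identifier
    arp_priority: int                   # 1 (highest) to 15
    arp_preemption_capability: bool     # may pre-empt other bearers
    arp_preemption_vulnerability: bool  # may be pre-empted
    gbr_kbps: Optional[int] = None      # set only for GBR bearers
    mbr_kbps: Optional[int] = None      # set only for GBR bearers

    @property
    def is_gbr(self) -> bool:
        return self.gbr_kbps is not None

# Mission critical group communications as described earlier: QCI 69
# default bearer (signalling, non-GBR) and QCI 65 dedicated bearer
# (voice media, GBR); bit rates here are placeholders.
mc_signalling = EpsBearerQos(qci=69, arp_priority=1,
                             arp_preemption_capability=True,
                             arp_preemption_vulnerability=False)
mc_voice = EpsBearerQos(qci=65, arp_priority=1,
                        arp_preemption_capability=True,
                        arp_preemption_vulnerability=False,
                        gbr_kbps=64, mbr_kbps=64)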

Figure 1.5 above shows the nodes where QoS parameters are enforced in the EPS system.

Related links:



Saturday 2 January 2016

End to end and top to bottom network design…


A good way to start 2016 is with a lecture delivered by Andy Sutton of EE at the IET conference 'Towards 5G Mobile Technology – Vision to Reality'. The slides and the video are both embedded below. The video also contains a Q&A at the end, which people may find useful.




Videos of all other presentations from the conference are available here for anyone interested.

Wednesday 18 November 2015

Cellular IoT (CIoT) or LoRa?

Back in September, 3GPP reached a decision to standardise NarrowBand IoT (NB-IoT). Now, people familiar with the evolution of LTE-A UE categories may be a bit surprised by this. Up to Release-11, the lowest data rate device was UE Cat-1, which could do 10Mbps in DL and 5Mbps in UL. This was power hungry and not really that useful for low data rate sensor devices. Then we got Cat-0 as part of Release-12, which simplified the design and supports 1Mbps in DL & UL.

Things start to become a bit more complex in Release-13. The above picture from Qualcomm explains the evolution and use cases very well. However, to add more detail to the picture, here are some details from the 4G Americas whitepaper (embedded below).


In support of IoT, 3GPP has been working on several related solutions and generating an abundance of LTE-based and GSM-based proposals. As a consequence, 3GPP has been developing three different cellular IoT standard solutions in Release-13:
  • LTE-M, based on LTE evolution
  • EC-GSM, a narrowband solution based on GSM evolution, and
  • NB-LTE, a narrowband cellular IoT solution, also known as Clean Slate technologies
However, in October 2015, the 3GPP RAN body mutually agreed to study the combination of the two different narrowband IoT technical solutions, EC-GSM and NB-LTE, for standardization as a single NB-IoT technology until the December 2015 timeframe. This is in consideration of the need to support different operation modes and avoid divided industry support for two different technical solutions. It has been agreed that NB-IoT would support three modes of operation as follows:
  • ‘Stand-alone operation’ utilizing, for example, the spectrum currently being used by GERAN systems as a replacement of one or more GSM carriers,
  • ‘Guard band operation’ utilizing the unused resource blocks within a LTE carrier’s guard-band, and
  • ‘In-band operation’ utilizing resource blocks within a normal LTE carrier.

Following is a brief description of the various standard solutions being developed at 3GPP by October 2015:

LTE-M: 3GPP RAN is developing LTE-Machine-to-Machine (LTE-M) specifications for supporting LTE-based low cost CIoT in Rel-12 (Low-Cost MTC) with further enhancements planned for Rel-13 (LTE eMTC). LTE-M supports data rates of up to 1 Mbps with lower device cost and power consumption and enhanced coverage and capacity on the existing LTE carrier.

EC-GSM: In the 3GPP GERAN #62 study item “Cellular System Support for Ultra Low Complexity and Low Throughput Internet of Things”, narrowband (200 kHz) CIoT solutions for migration of existing GSM carriers sought to enhance coverage by 20 dB compared to legacy GPRS, and achieve a ten year battery life for devices that were also cost efficient. Performance objectives included improved indoor coverage, support for massive numbers of low-throughput devices, reduced device complexity, improved power efficiency and latency. Extended Coverage GSM (EC-GSM) was fully compliant with all five performance objectives according to the August 2015 TSG GERAN #67 meeting report. GERAN will continue with EC-GSM as a work item within GERAN with the expectation that standards will be frozen by March 2016. This solution necessarily requires a GSM network.

NB-LTE: In August 2015, work began in 3GPP RAN Rel-13 on a new narrowband radio access solution also termed as Clean Slate CIoT. The Clean Slate approach covers the Narrowband Cellular IoT (NB-CIoT), which was the only one of six proposed Clean Slate technologies compliant against a set of performance objectives (as noted previously) in the TSG GERAN #67 meeting report and will be part of Rel-13 to be frozen in March 2016. Also contending in the standards is Narrowband LTE Evolution (NB-LTE) which has the advantage of easy deployment across existing LTE networks.

Rel-12 introduces important improvements for M2M like lower device cost and longer battery life. Further improvements for M2M are envisioned in Rel-13 such as enhanced coverage, lower device cost and longer battery life. The narrowband CIoT solutions also aim to provide lower cost and device power consumption and better coverage; however, they will also have reduced data rates. NB CleanSlate CIoT is expected to support data rates of 160bps with extended coverage.

Table 7.1 provides some comparison of the three options to be standardized, as well as the 5G option, and shows when each release is expected to be finalized.

Another IoT technology that has been giving the cellular IoT industry a run for its money is LoRa, backed by the LoRa Alliance. I blogged about LoRa in May and it has been a very popular post. An extract from a recent article from Rethink Research follows:

In the past few weeks, the announcements have been ramping up. Semtech (the creator of the LoRa protocol itself, and the key IP owner) has been most active, announcing that The Lace Company, a wireless operator, has deployed LoRa network architecture in over a dozen Russian cities, claiming to cover 30m people over 9,000km2. Lace is currently aiming at building out Russian coverage, but will be able to communicate to other LoRa devices over the LoRa cloud, as the messages are managed on cloud servers once they have been transmitted from end-device to base unit via LoRaWAN.

“Our network allows the user to connect to an unlimited number of smart sensors,” said Igor Shirokov, CEO of Lace Ltd. “We are providing connectivity to any device that supports the open LoRaWAN standard. Any third party company can create new businesses and services in IoT and M2M market based on our network and the LoRaWAN protocol.”

Elsewhere, UAE telco Du has launched a test LoRa network in Dubai, as part of a smart city test project. “This is a defining moment in the UAE’s smart city transformation,” said Carlos Domingo, senior executive officer at Du. “We need a new breed of sensor-friendly network to establish the smart city ecosystem. Thanks to Du, this capability now exists in the UAE. Today we’ve shown how our network capabilities and digital know-how can deliver the smart city ecosystem Dubai needs. We will not stop in Dubai; our deployment will continue country-wide throughout the UAE.”

But the biggest recent LoRa news is that Orange has committed itself to a national French network rollout, following an investment in key LoRa player Actility. Orange has previously trialed a LoRa network in Grenoble, and has said that it opted for LoRa over Sigfox thanks to its more open ecosystem – although it’s worth clarifying here that Semtech still gets a royalty on every LoRa chip that’s made, and will continue to do so until it chooses not to or instead donates the IP to the non-profit LoRa Alliance itself.

It will be interesting to see whether LoRa vs CIoT ends up the same way as WiMAX vs LTE or not.

Embedded below is the 4G Americas whitepaper as well as a LoRa presentation from Semtech:






Further reading:


Monday 9 November 2015

5G and Evolution of the Inter-connected Network


While there are many parameters to consider when designing the next generation network, speed is the simplest one to understand and sell to the end user.

Last week, I did a keynote at the International Telecom Sync Forum (ITSF) 2015. As an analyst keynote, I looked at how networks are evolving and getting more complex, full of interesting options and features, leaving operators to decide which ones to select.

There won't just be multiple generations of technology existing at the same time; there will also be small cell based networks, macro networks, drone and balloon based networks, and satellite based networks.

My presentation is embedded below. If for any reason you want to download it, please fill in the form at the bottom of this page.



Just after my keynote, I came across this news in the Guardian about 'Alphabet and Facebook develop rival secret drone plans'; it's an interesting read. As you may be aware, Google is actively working with Sri Lanka and Indonesia to provide seamless internet access nationally.


It was nice to hear EE deliver the second keynote, which focused on 5G. I especially liked the slide that summarised their key 5G research areas. Their presentation is embedded below and available to download from slideshare.




The panel discussion was interesting as well. As the conference focused on timing and synchronisation, the questions were on those topics too. I have listed some of them below and am interested to hear your thoughts:

  • Who cares about syncing the core? - Everything has moved to packets; the only reason for sync is to coordinate access points in wireless for higher-level services. We have multiple options to sync the edge, so why bother to sync the core at all?
  • We need synchronisation to improve the user's experience, right? - Given the ever-improving quality of the time-bases embedded within equipment, what exactly would happen to the user experience if synchronisation collapsed… or is good sync all about the operator's experience?
  • IoT… and the impact on synchronisation - can we afford it? - M2M divisions of network operators make up a very small fraction of the operator's revenue; is that going to change, and will it allow the required investment in sync technology?

Sunday 1 November 2015

Quick Summary of LTE Voice Summit 2015 (#LTEVoice)

Last year's summary of the LTE Voice Summit was very much appreciated, so I have created one for this year too.

The status of VoLTE is summarised very well in the image above.
‘VoLTE network deployment is the one of the most difficult project ever, the implementation complexity and workload is unparalleled in history’ - China Mobile group vice-president Mr.Liu Aili
Surprisingly, not many presentations were shared, so I have gone back to the tweets and the pictures I took to compile this report. You may want to download the PDF from slideshare to be able to see the links. I hope you find it useful.



Related links:

Sunday 26 July 2015

LTE vs TETRA for Critical Communications

Some time back I was reading this interview between Martin Geddes and Peter Clemons on 'The Crisis in UK Critical Communications'. If you haven't read it, I urge you to read it here. One thing that stuck out was as follows:

LTE was not designed for critical communications.

Commercial mobile operators have moved from GSM to UMTS to WCDMA networks to reflect the strong growth in demand for mobile data services. Smartphones are now used for social media and streaming video. LTE technology fulfils a need to supply cheap mass market data communications.

So LTE is a data service at heart, and reflects the consumer and enterprise market shift from being predominantly voice-centric to data-centric. In this wireless data world you can still control quality to a degree. So with OFDM-A modulation we have reduced latency. We have improved how we allocate different resource blocks to different uses.

The marketing story is that we should be able to allocate dedicated resources to emergency services, so we can assure voice communications and group calling even when the network is stressed. Unfortunately, this is not the case. Even the 3GPP standards bodies and mobile operators have recognised that there are serious technology limitations.
This means they face a reputational risk in delivering a like-for-like mission-critical voice service.

Won’t this be fixed by updated standards?
The TETRA Critical Communications association (TCCA) began to engage with the 3GPP standards process in 2012. 3GPP then reached out to peers in the USA and elsewhere: the ESMCP project here in the UK, the US FirstNet programme, and the various European associations.

These lobbied 3GPP for capabilities specifically aimed at critical communications requirements. At the Edinburgh meeting in September 2014, 3GPP set up the SA6 specification group, the first new group in a decade.

The hope is that by taking the critical communications requirement into a separate stream, it will no longer hold up the mass market release 12 LTE standard. Even with six meetings a year, this SA6 process will be a long one. By the end of the second meeting it had (as might be expected) only got as far as electing the chairman.

It will take time to scope out what can be achieved, and develop the critical communications functionality. For many players in the 3GPP process this is not a priority, since they are focusing solely on mass market commercial applications.

Similar point was made in another Critical communications blog here:

LTE has emerged as a possible long term replacement for TETRA in this age of mobile broadband and data. LTE offers unrivalled broadband capabilities for such applications as body worn video streaming, digital imaging, automatic vehicle location, computer-assisted dispatch, mobile and command centre apps, web access, enriched e-mail, mobile video surveillance apps such as facial recognition, enhanced telemetry/remote diagnostics, GIS and many more. However, Phil Kidner, CEO of the TCCA, pointed out recently that it will take many LTE releases to get us to the point where LTE can match TETRA on key features such as group working, pre-emptive services, network resilience, call set-up times and direct mode.
The result is that we are at a point where we have two technologies: one offering what end users want, and the other offering what end users need. This has altered the discussion; now, instead of looking at LTE as a replacement, we can look at LTE as a complementary technology, used alongside TETRA to give end users the best of both worlds. The challenge now appears to be how we can integrate TETRA and LTE to meet the needs and wants of our emergency services, and it seems that if we want guidance and lessons on the possible harmony of TETRA and LTE, we should look at the Middle East.
While I was researching, I came across this interesting presentation (embedded below) from the LTE World Summit 2015.





The above is an interesting SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis for TETRA and LTE. While I can understand that LTE is as yet unproven, I agree about the lack of spectrum and appropriate bands.

I have been told in the past that it's not just the technology that is an issue; TETRA has many functionalities that would need to be duplicated in LTE.



As you can see from the timeline above, while Rel-13 and Rel-14 will have some of these features, there are still other features that need to be included. Without them, the safety of critical communications workers and the public could be compromised.

The complete presentation follows. Feel free to voice your opinions via the comments.