Thursday, 20 July 2017

Second thoughts about LTE-U / LAA

It's been a while since I wrote about LTE-U / LAA on this blog. I have written a few posts on the Small Cells blog, but they seem dated as well. For anyone needing a quick refresher on LTE-U / LAA, please head over to IoTforAll or ShareTechNote. This post is not about the technology per se but about the overall ecosystem, with LTE-U / LAA (and even MulteFire) being part of it.

Let's recap the market status quickly. T-Mobile US already has LTE-U active and recently tested LAA. SK Telecom achieved 1 Gbps in LAA trials with Ericsson. AT&T has decided to skip the non-standard LTE-U and go straight to standards-based LAA. MTN and Huawei have trialled LAA for in-building coverage in South Africa. All of this sounds good and inspires confidence in the technology; however, some observations worry me.


A couple of years back, when the LTE-U idea was conceived, followed by LAA, the 5 GHz channels were relatively empty. Recently I have started to see them all filling up.

In any malls, hotels, service stations or even big buildings I go to, the channels all seem to be occupied. While supplemental downlink channels are 20 MHz each, the Wi-Fi channels can be 20 MHz, 40 MHz, 80 MHz or even 160 MHz wide.

On many occasions I have had to switch off my Wi-Fi and go back to using 4G because the speeds were so poor (due to the high number of active users). How will this impact the supplemental downlink in LTE-U / LAA? How will it impact the Wi-Fi users?

On my smartphone, most days I get 30-40 Mbps download speeds, and that works perfectly fine for all my needs. The only reason we would need higher speeds is tethering: using laptops for work, listening to music, playing games or watching videos. Most people I know or work with don't require gigabit speeds at the moment.

Once a user receiving high-speed data on their device via LTE-U / LAA creates a Wi-Fi hotspot, it may use the same 5 GHz channels that the network is using for supplemental downlink. How do you manage this interference? I am looking forward to discussions on technical fora where users ask why their download speeds fall as soon as they switch their Wi-Fi hotspot on.

The fact is that in non-dense areas (rural, suburban or even general built-up areas), operators do not have to worry about the network being overloaded and can use their licensed spectrum. Nobody is planning to deploy LTE-U / LAA in these areas. In dense and ultra-dense areas there are many users, many Wi-Fi access points, ad-hoc Wi-Fi networks and many other sources of interference. In theory LTE-U / LAA can help significantly, but with so many sources of interference it's uncertain whether it would be a win-win for everyone or just more interference for everyone to deal with.

Further reading:

Thursday, 13 July 2017

Different types of Mobile Masts



Today's post is inspired by two things. One of them is my most popular answer on Quora. As you can see, it has gathered over 19K upvotes.


The other is the #EEGoldenSIM competition started by Marc Allera, CEO of UK mobile operator EE. Users were required to find a mast, take a picture and share it. This led to a lot of people asking what masts look like, but it also generated lots of interesting pictures. You can search #EEGoldenSIM on Twitter to see them.

Below is a presentation prepared by my 3G4G colleagues on what different types of antennas and mobile masts look like. Hope you like it.



Friday, 7 July 2017

Wireless Smart Ubiquitous Network (Wi-SUN) - Another IoT Standard


While we have been discussing IoT over the last few weeks, here is another technology that I came across. The picture above, from recent Rethink research, shows that Wi-SUN is going to enjoy more growth than LoRaWAN or Sigfox. Another recent report, by Mobile Experts, also mentions this IoT technology.

I am sure most readers have not heard of Wi-SUN, so what exactly is Wi-SUN technology?


From Rethink Research: The Wi-SUN Alliance was formed in 2011 as an organization to push adoption of the IEEE 802.15.4g standard, which aimed to improve utility networks using a narrowband wireless technology. The peer-to-peer self-healing mesh has moved from its initial grid focus to encompass smart city applications (especially street lighting), and we spoke to its Chairman, Phil Beecher, to learn more.

Beecher explained that the non-profit Alliance set about defining subsets of the open standards, testing for interoperability, and certifying compatible products, and soon developed both a Field Area Network (FAN) and a Home Area Network (HAN), which allowed it to move into Home Energy Management Systems (HEMS) in Japan – a country that is leading the curve in HEMS deployments and developments.


As can be seen in the picture above:

  • Develops technical specifications for the Physical (PHY) and Medium Access Control (MAC) layers, with the Network layer as required
  • Develops interoperability test programs to ensure implementations are interoperable
  • The physical layer specification is based on IEEE 802.15.4g/4u/4v
  • The MAC layer may use different options depending on the application
  • Profile specifications are categorized based on application types

Picture source for the last three pictures: the Wi-SUN presentation here.


A new whitepaper from the Wi-SUN Alliance provides a comparison of Wi-SUN, LoRaWAN and NB-IoT.

A recent presentation by Dr. Simon Dunkley at Cambridge Wireless is embedded below:



Further reading:



Tuesday, 27 June 2017

Mission Critical Services update from 3GPP - June 2017


3GPP has published an overview of what has been achieved so far in Mission Critical services, along with an outlook of what can be expected in the near future. A more detailed paper summarising the use cases and functional aspects of Rel-13, Rel-14 and the upcoming Rel-15 will be published later this year.

Mission Critical Services – Detailed List of Rel-13, Rel-14 and Rel-15 Functionalities

Rel-13 MCPTT (completed 2016)
  • User authentication and service authorization
  • Configuration
  • Affiliation and de-affiliation
  • Group calls on-network and off-network (within one system or multiple systems, pre-arranged or chat model, late entry, broadcast group calls, emergency group calls, imminent peril group calls, emergency alerts)
  • Private calls on-network and off-network (automatic or manual commencement modes, emergency private calls)
  • MCPTT security
  • Encryption (media and control signalling)
  • Simultaneous sessions for call
  • Dynamic group management (group regrouping)
  • Floor control in on-network (within one system or across systems) and in off-network
  • Pre-established sessions
  • Resource management (unicast, multicast, modification, shared priority)
  • Multicast/Unicast bearer control, MBMS (Multimedia Broadcast/Multicast Service) bearers
  • Location configuration, reporting and triggering
  • Use of UE-to-network relays
Rel-14 MC Services (completed 2017)
MC Services Common Functionalities:
  • User authentication and service authorization
  • Service configuration
  • Affiliation and de-affiliation
  • Extended Location Features
  • (Dynamic) Group Management
  • Identity management
  • MC Security framework
  • Encryption (media and control signalling)
MCPTT Enhancements:
  • First-to-answer call setup (with and without floor control)
  • Floor control for audio cut-in enabled group
  • Updating the selected MC Service user profile for an MC Service
  • Ambient listening call
  • MCPTT private call-back request
  • Remote change of selected group
MCVideo, Common Functions plus:
  • Group Call (including emergency group calls, imminent peril group calls, emergency alerts)
  • Private Call (off-network)
  • Transmission Control
MCData, Common Functions plus:
  • Short Data Service (SDS)
  • File Distribution (FD) (on-network)
  • Transmission and Reception Control
  • Handling of Disposition Notifications
  • Communication Release
Rel-15 MC Services (in progress)

MC Services Common Functionalities Enhancements:
  • Enhanced MCPTT group call setup procedure with MBMS bearer
  • Enhanced Location management, information and triggers
  • Interconnection between 3GPP defined MC systems
  • Interworking with legacy systems

MCPTT Enhancements:
  • Remotely initiated MCPTT call
  • Enhanced handling of MCPTT Emergency Alerts
  • Enhanced Broadcast group call
  • Updating pre-selected MC Service user profile
  • Temporary group call - user regroup
  • Functional alias identity for user and equipment
  • Multiple simultaneous users
MCVideo Additions:
  • Video push
  • Video pull
  • Private call (on-network)
  • Broadcast Group Call
  • Ambient Viewing Call
  • Capability information sharing
  • Simultaneous Sessions
  • Use of MBMS transmission
  • Emergency and imminent peril private communications
  • Primary and Partner MC system interactions for MCVideo communications
  • Remote video parameters control capabilities

MCData Additions:
  • MCData specific Location
  • Enhanced Status
  • Accessing list of deferred communications
  • Usage of MBMS
  • Emergency Alert
  • Data streaming
  • File Distribution (FD) (off-network)
  • IP connectivity

Release-14 features will be available by the end of September 2017, and many Release-15 features, which are being hurried along due to 5G, will be available by June 2018.

For more details, follow the links below:



Monday, 19 June 2017

Network Sharing is becoming more relevant with 5G

5G is becoming a case of 'damned if you do, damned if you don't'. Behind the headlines of new achievements and faster speeds lies the reality that many operators are struggling to stay afloat. Indian and Nigerian operators are struggling with heavy debt, and it won't be a surprise if some of them fold in due course.

With increasing costs and decreasing revenues, it's no surprise that operators are looking at ways of keeping costs down. Some operators are postponing their 5G plans in favour of Gigabit LTE. Other die-hard operators are pushing ahead with 5G but looking at ways to keep the costs down. In Japan, for example, NTT DOCOMO has suggested sharing 5G base stations with its two rivals to trim costs, particularly focusing efforts on urban areas.


In this post I am looking to summarise an old but brilliant post by Dr. Kim Larsen here. While it is a very well-written and in-depth post, I have a feeling that many readers may not have the patience to go through all of it. All pictures in this post are from the original post by Dr. Kim Larsen.


Before embarking on any network sharing mission, it's worthwhile asking the 5 W's (Who, Why, What, Where, When) and 2 H's (How, How much).

  • Why do you want to share?
  • Who to share with? (your equal, your better or your worse)
  • What to share? (sites, passives, actives, frequencies, new sites, old sites, towers, rooftops, organization, …)
  • Where to share? (rural, suburban, urban, regional, all, etc.)
  • When is a good time to start sharing? During the rollout phase, steady phase or modernisation phase (see picture below). For 5G, it would make much more sense for network sharing to be done from the beginning, i.e. the rollout phase.


  • How to do sharing? This may sound like a simple question, but it should take account of the regulatory complexity in a country. The picture below explains this well:



  • How much will it cost and how much saving can be attained in the long term? This is in fact a very important question, because the end result after a lot of hard work and laying off many people may be an insignificant amount of cost savings. Dr. Kim provides detailed insight on this topic that I find difficult to summarise. The best option is to read it on his blog.


An alternative approach to network sharing is national roaming. Many European operators are dead against national roaming, as it means the network loses its differentiation compared with rival operators. Having said that, it's always worthwhile working out the savings and seeing whether it can actually help.

National roaming can be attractive in relatively low-traffic scenarios, or where the product of traffic units and national roaming unit cost remains manageable and lower than the shared network cost.
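The trade-off described above boils down to a simple break-even comparison. A minimal sketch, with purely illustrative numbers (none of these figures come from the post or from Dr. Kim's analysis):

```python
# Illustrative break-even check for national roaming vs network sharing,
# following the rule of thumb above: national roaming is attractive while
# (traffic volume x roaming unit cost) stays below the shared network cost.
# All numbers below are made up for illustration only.

def roaming_is_attractive(traffic_units, roaming_cost_per_unit,
                          shared_network_cost):
    """True if paying per-unit roaming charges beats the shared-network cost."""
    return traffic_units * roaming_cost_per_unit < shared_network_cost

# Low-traffic rural area: roaming charges stay below the sharing cost share.
print(roaming_is_attractive(1_000_000, 0.002, 5_000))    # True
# Dense urban area: traffic volume makes roaming charges dominate.
print(roaming_is_attractive(100_000_000, 0.002, 5_000))  # False
```

The break-even point moves with traffic growth, which is why the post notes national roaming suits low-traffic scenarios.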

The termination or restructuring cost, including the write-off of existing telecom assets (i.e., radio nodes, passive site solutions, transmission, aggregation nodes, etc.), is likely to be a substantial financial burden on a national roaming business case in an area with existing telecom infrastructure; certainly above and beyond that of a network sharing scenario, where assets are re-used and restructuring costs might be partially shared between the sharing partners.

Obviously, if national roaming is established in an area that has no network coverage, restructuring and termination costs are not an issue and the network TCO will clearly be avoided, albeit the above economic logic and P&L trade-offs on cost still apply.

If this has been useful to understand some of the basics of network sharing, I encourage you to read the original blog post as that contains many more details.

Further reading:



Sunday, 11 June 2017

Theoretical calculation of EE's announcement for 429Mbps throughput


The CEO of UK mobile network operator EE recently announced on Twitter that they had achieved 429 Mbps on a live network. The following is from their press release:

EE, the UK’s largest mobile network operator and part of the BT Group, has switched on the next generation of its 4G+ network and demonstrated live download speeds of 429Mbps in Cardiff city centre using Sony’s Xperia XZ Premium, which launched on Friday 2 June. 
The state of the art network capability has been switched on in Cardiff and the Tech City area of London today. Birmingham, Manchester and Edinburgh city centres will have sites upgraded during 2017, and the capability will be built across central London. Peak speeds can be above 400Mbps with the right device, and customers connected to these sites should be able to consistently experience speeds above 50Mbps. 
Sony’s Xperia XZ Premium is the UK’s first ‘Cat 16’ smartphone optimised for the EE network, and EE is the only mobile network upgrading its sites to be able to support the new device’s unique upload and download capabilities. All devices on the EE network will benefit from the additional capacity and technology that EE is building into its network. 
... 
The sites that are capable of delivering these maximum speeds are equipped with 30MHz of 1800MHz spectrum, and 35MHz of 2.6GHz spectrum. The 1800MHz carriers are delivered using 4x4 MIMO, which sends and receives four signals instead of just two, making the spectrum up to twice as efficient. The sites also broadcast 4G using 256QAM, or Quadrature Amplitude Modulation, which increases the efficiency of the spectrum.

Before proceeding further you may want to check out my posts 'Gigabit LTE?' and 'New LTE UE Categories (Downlink & Uplink) in Release-13'

If you read the press release carefully, EE is now using 65 MHz of spectrum for 4G. I wanted to provide a calculation of what's possible in theory with this much bandwidth.

Going back to basics (detailed calculation in the SlideShare below): in LTE/LTE-A, the maximum carrier bandwidth is 20 MHz. Any more bandwidth has to be used via carrier aggregation. So, as per the EE announcement, it's 20 + 10 MHz in the 1800 MHz band and 20 + 15 MHz in the 2600 MHz band.

So for 1800 MHz band:

50 resource blocks (RBs) per 10 MHz, so 150 RBs for 30 MHz.
Each RB carries 12 x 7 x 2 = 168 symbols per millisecond with normal cyclic prefix (CP).
For 150 RBs, 150 x 168 = 25,200 symbols per ms, or 25,200,000 symbols per second. This can also be written as 25.2 Msps (mega symbols per second).
256QAM means 8 bits per symbol, so the calculation becomes 25.2 x 8 = 201.6 Mbps. Using 4x4 MIMO, 201.6 x 4 = 806.4 Mbps.
Removing the 25% overhead used for signalling gives 604.8 Mbps.


Repeating the same exercise for 35 MHz of the 2600 MHz band, with 2x2 MIMO and 256QAM:

175 x 168 = 29,400 symbols per ms, or 29,400,000 symbols per second. This can be written as 29.4 Msps.
29.4 x 8 = 235.2 Mbps.
Using 2x2 MIMO, 235.2 x 2 = 470.4 Mbps.
Removing the 25% overhead used for signalling gives 352.8 Mbps.

The combined theoretical throughput for the above is 957.6 Mbps.
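The arithmetic above can be reproduced with a short script. A minimal sketch using the same assumptions as this post (50 RBs per 10 MHz, 168 symbols per RB per ms with normal CP, 8 bits per symbol for 256QAM, 25% signalling overhead), not a full link-budget model:

```python
# Back-of-envelope LTE peak throughput, following the steps in the post.

def peak_throughput_mbps(bandwidth_mhz, mimo_layers, bits_per_symbol,
                         overhead=0.25):
    rbs = int(bandwidth_mhz * 5)          # 50 resource blocks per 10 MHz
    symbols_per_ms = rbs * 12 * 7 * 2     # 12 subcarriers x 7 symbols x 2 slots
    msps = symbols_per_ms / 1000          # mega symbols per second
    raw_mbps = msps * bits_per_symbol * mimo_layers
    return raw_mbps * (1 - overhead)      # strip signalling overhead

band_1800 = peak_throughput_mbps(30, mimo_layers=4, bits_per_symbol=8)
band_2600 = peak_throughput_mbps(35, mimo_layers=2, bits_per_symbol=8)
print(band_1800)               # ≈ 604.8 Mbps
print(band_2600)               # ≈ 352.8 Mbps
print(band_1800 + band_2600)   # ≈ 957.6 Mbps combined
```

Real-world figures are lower, of course, which is why EE quotes 429 Mbps measured rather than the theoretical peak.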

For those interested in revisiting the basic LTE calculations, here is an interesting document:




Further reading:

Thursday, 1 June 2017

Smartphones, Internet Trends, etc

Every few years I add Mary Meeker's Internet Trends slides to the blog. Interested readers can refer to the 2011 and 2014 slide packs to see how the world has changed.


One of the initial slides highlights that the number of smartphones reached nearly 3 billion by the end of 2016. If we look at this excellent recent post by Tomi Ahonen, there were 3.2 billion smartphones at the end of Q1 2017. Here is a short extract from it.

SMARTPHONE INSTALLED BASE AT END OF MARCH 2017 BY OPERATING SYSTEM

Rank . OS Platform . . . . Units . . . . Market share  Was Q4 2016
1 . . . . All Android . . . . . . . . . . . . 2,584 M . . . 81 % . . . . . . ( 79 %)  
a . . . . . . Pure Android/Play . . . . 1,757 M . . . 55%
b . . . . . . Forked Android/AOSP . . . 827 M . . . 26%
2 . . . . iOS  . . . . . . . . . . . . . . . . . . 603 M . . . 19 % . . . . . . ( 19 %) 
Others . . . . . . . . . . . . . . . . . . . . . . 24 M  . . . . 1 % . . . . . . (   1 %)
TOTAL Installed Base . 3,211 M smartphones (ie 3.2 Billion) in use at end of Q1, 2017

Source: TomiAhonen Consulting Analysis 25 May 2017, based on manufacturer and industry data


BIGGEST SMARTPHONE MANUFACTURERS BY UNIT SALES IN Q1 2017

Rank . . . Manufacturer . Units . . . Market Share . Was Q4 2016 
1 (2) . . . Samsung . . . .  79.4 M . . 22.7% . . . . . . . ( 17.9% ) 
2 (1) . . . Apple  . . . . . . . 50.8 M . . 14.5% . . . . . . . ( 18.0% ) 
3 (3) . . . Huawei  . . . . . . 34.6 M . . . 9.9% . . . . . . . (10.4% ) 
4 (4) . . . Oppo . . . . . . . . 28.0 M . . . 8.0% . . . . . . . (   7.1% ) 
5 (5) . . . Vivo . . . . . . . . . 22.0 M . . . 6.3% . . . . . . . (   5.6% ) 
6 (9) . . . LG  . . . . . . . .  . 14.8 M . . . 4.2% . . . . . . . (   3.3% ) 
7 (7) . . . Lenovo .  . . . . . 13.2 M . . . 3.8% . . . . . . . (   3.8% )
8 (8) . . . Gionee . . . . . . . .9.6 M . . . 2.7% . . . . . . .  (   3.5% )
9 (6) . . . ZTE  . . . . . . . . . 9.2 M . . . 2.6% . . . . . . . (   5.2% ) 
10 (10) . TCL/Alcatel . . .  8.7 M . . . 2.5% . . . . . . . (  2.4% ) 
Others . . . . . . . . . . . . . . 80.2 M
TOTAL . . . . . . . . . . . . . 350.4 M

Source: TomiAhonen Consulting Analysis 25 May 2017, based on manufacturer and industry data


This year the number of slides has gone up to 355, and there are some interesting sections such as China Internet, India Internet, Healthcare, Interactive Games, etc. The presentation is embedded below and can be downloaded from SlideShare.



Sunday, 21 May 2017

Research on Unvoiced Speech Communications using Smartphones and Mobiles

A startup on Kickstarter is touting the world's first voice mask for smartphones. Having said that, Hushme has been compared to Bane from Batman and Dr. Hannibal Lecter. There is good detail on Hushme at Engadget here.

This is an interesting concept that has come back into the news after a long gap. We are well past the point of 'peak telephony', because we now use text messages and OTT apps for non-urgent communications. Voice will always be around, though, not only for urgent communications but also for things like audio/video conference calls.


Back in 2003, NTT Docomo generated a lot of news on this topic. Their research paper "Unvoiced speech recognition using EMG - mime speech recognition" was the first step in trying to find a way to speak silently while the other party hears voice. It is probably the most quoted paper on this topic (picture source).


NASA was working in this area around the same time, referring to the approach as 'subvocal speech'. While it was originally intended for astronauts' suits, the idea was that it could also be made available for other commercial uses. NASA was effectively working with a limited number of words using this approach (picture source).

For both of the approaches above, there isn't much recent information. While it has been easy to recognise certain characters, it takes a lot of effort to handle whole speech. It's also a challenge to play your own voice, rather than a robotic voice, to the other party.

To get a sense of how big a challenge this is, look at the YouTube videos with automatic caption generation. Even when you can understand what the person is saying, it is always a challenge for the machine. You can read more about the challenge here.

A lot of research in similar areas has been done in France and is available here.


Motorola has gone a step further and patented an e-Tattoo that can be emblazoned over your vocal cords to intercept subtle voice commands — perhaps even subvocal commands, or even the fully internal whisperings that fail to pluck the vocal cords when not given full cerebral approval. One might even conclude that they are not just patenting device communications from a patch of smartskin, but communications from your soul. Read more here.


Another term used in this research has been 'lip reading'. While the initial approaches to lip reading were the same as the others, attaching sensors to facial muscles (see here), newer approaches are looking at exploiting the smartphone camera.

Many researchers have achieved reasonable success using cameras for lip reading (see here and here), but researchers from Google's AI division DeepMind and the University of Oxford have used artificial intelligence to create the most accurate lip-reading software ever.

The challenge with using a smartphone camera for speech recognition will be high-speed data connectivity and the ability to see lip movement clearly. While indoors this can be solved with Wi-Fi connectivity and looking at the camera, it may be trickier outdoors, or when not looking at the camera while driving. Who knows, this may be a killer use case for 5G.

By the way, this is not a complete survey of research in this area. If you have additional info, please help others by adding it in the comments section.

Related links:



Friday, 12 May 2017

5G – Beyond the Hype

Dan Warren, former GSMA Technology Director who created VoLTE and coined the term 'phablet', has been busy in his new role as Head of 5G Research at Samsung R&D UK. In a presentation delivered a couple of days ago at the Wi-Fi Global Congress, he set out a realistic vision of what 5G really means.

A brief summary of the presentation in his own words below, followed by the actual presentation:
"I started with a comment I have made before – I really hate the term 5G.  It doesn't allow us to have a proper discussion about the multiplicity of technologies that have been thrown under the common umbrella of the term, and hence blurs the rationale for why each technology is important in its own right.  What I have tried to do in these slides is talk more about the technology, then look at the 5G requirements, and consider how each technology helps or hinders the drive to meet those requirements, and then to consider what that enables in practical terms.

The session was titled '5G – beyond the hype', so in the first three slides I cut straight to the technology that is being brought in to 5G: building from the Air Interface enhancements, then the changes in topology in the RAN, and then looking at the 'softwarisation' of the Core Network.  This last group of technologies sets up the friction in the network between the desire to change the CapEx model of network build by placing functions in a Cloud (both C-RAN and an NFV-based Core, as well as the virtualisation of transport network functions) and the need to push functions to the network edge by employing MEC to reduce latency.  You end up with every function existing everywhere, data breaking out of the network at many different points and some really hard management issues.

On slide 5 I then look at how these technologies line up to meet the 5G requirements.  It becomes clear that the RAN innovations are all about performance enhancement, but the core changes are about enabling new business models from flexibility in topology and network slicing.  There is also a hidden part of the equation that I call out, which is that while these technologies enable the central five requirements to be met, they also require massive investment by the Operator.  For example, you won't reach 100% coverage if you don't build a network that has total coverage, so you need to put base stations in all the places where they don't exist today.

On the next slide I look at how network slicing will be sold.  There are three ways in which a network might be sliced – by SLA or topology, by enterprise customer and by MVNO.  The SLA or topology option is key to allowing the co-existence of MEC and a Cloud-based CN.  The enterprise or sector-based option is important for operators to address large vertical industry players, but each enterprise may want a range of SLAs for different applications and devices, so you end up with an enterprise slice being made up of sub-slices of differing SLA and topology.  Then, an MVNO may take a slice of the network, but will have its own enterprise customers that will take a sub-slice of the MVNO slice, which may in turn be made of sub-sub-slices of differing SLAs.  Somewhere all of this has to be stitched back together, so my suggestion is that 'Network Splicing' will be as important as network slicing.

The following slide illustrates all of this again and notes that there will also be other networks that have been sliced as well, be that 2G, 3G, 4G, WiFi, fixed, LPWA or anything else.  There is also going to be an overarching orchestration requirement, both within a network and in the Enterprise customer (or, more likely, in System Integrator networks who take on the 'Splicing' role).  The red flags show that Orchestration is both really difficult and expensive, but the challenge for the MNO will also exist in the RAN.  The RRC will be a pinch point that has to sort out all of these devices sitting in disparate network topologies with varying demands on the sliced RAN.

Then, in the next four slides I look at the business model around this.  Operators will need to deal with the realities of B2B or B2B2C business models, where they are the first B. The first 'B's price is the second 'B's cost, so the operator should expect considerable pressure on what it charges, and to be held contractually accountable for the performance of the network.  If 5G is going to claim 100% coverage, five-nines reliability and 50 Mbps everywhere, and be sold to enterprise customers on that basis, it is going to have to deliver, or else there will be penalties to pay.  On the flip side, if all operators do meet the 5G targets, then they will become very much the same, so the only true differentiation option will be on price.  With the focus on large-scale B2B contracts, this has all the hallmarks of a race downwards and commoditisation of connectivity, which will also lead to disintermediation of operators from the value chain on applications.

So to conclude, I pondered what the real 5G justification is.  Maybe operators shouldn't be promising everything, since there will be healthy competition on speed, coverage and reliability while those remain differentiators.  Equally, it could just be that operators will fight out consumer market share on 5G, but that doesn't offer any real uplift in market size, certainly not in mature developed-world markets.  The one thing that is sure is that there is a lot of money to be spent getting there."



Let me know what you think.

Sunday, 7 May 2017

10 years battery life calculation for Cellular IoT

I made an attempt to place the different cellular and non-cellular LPWA technologies together in a picture in my last post here. Someone pointed out that the pictures above, from the LoRa Alliance whitepaper, are even better, and I agree.

Most IoT technologies list their battery life as 10 years. There is an article on Medium rightly pointing out that on Verizon's LTE-M network, IoT device batteries may not last very long.

The problem is that the 10-year battery life is a headline figure, and in the real world it's sometimes not that critical. It all depends on the application. For example, this Iota pet tracker uses Bluetooth but only claims a battery life of "weeks". I guess Ztrack, based on LoRa, would give similar results. I have to admit that non-cellular technologies should have longer battery life, but it all depends on the applications and use cases. An IoT device in a car may not have to worry too much about power consumption; similarly, a fleet tracker may have solar power, or may only be expected to outlast the fleet itself, etc.


So, coming back to power consumption. Martin Sauter, in his excellent Wireless Moves blog post, provided the calculation that I am copying below with some additions:

The calculation can be found in 3GPP TR 45.820, for NB-IoT in Chapter 7.3.6.4 on ‘Energy consumption evaluation’.

The battery capacity used for the evaluation was 5 Wh. That's about half, or even only a third, of the battery capacity of a smartphone today. So yes, it is quite a small battery indeed. The chapter also contains assumptions about how much power the device draws in different states. In the 'idle' state, which the device is in most of the time, power consumption is assumed to be 0.015 mW.

How long would the battery be able to power the device if it were always in the idle state? The calculation is easy, and you end up with 38 years. That doesn't include battery self-discharge, and I wondered how much that would be over 10 years. According to the Varta handbook of primary lithium cells, self-discharge of a non-rechargeable lithium battery is less than 1% per year, so subtract roughly 4 years from that number.

Obviously, the device is not always idle, and when transmitting it is assumed to use 500 mW. With this power consumption, the battery would not last 34 years but less than 10 hours. But this is NB-IoT, so the device is not transmitting most of the time. The study looked at different transmission patterns: if 200 bytes are sent once every 2 hours, the device would run on that 5 Wh battery for 1.7 years; if the device only transmits 50 bytes once a day, the battery would last 18.1 years.

So yes, 10 years is quite feasible for devices that collect very little data and transmit it only once or twice a day.
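The arithmetic quoted above can be reproduced with a short script. A minimal sketch using the figures from the TR (5 Wh battery, 0.015 mW idle draw, 500 mW transmit draw); note that the transmit time per report in the duty-cycle helper is an illustrative assumption of mine, since the TR derives it from repetitions and coverage class, so only the idle-only and transmit-only figures match the numbers in the post:

```python
# Rough reproduction of the TR 45.820 battery-life arithmetic quoted above.

BATTERY_WH = 5.0          # evaluation battery capacity
IDLE_MW = 0.015           # assumed idle power draw
TX_MW = 500.0             # assumed transmit power draw
HOURS_PER_YEAR = 24 * 365

def idle_only_years():
    """Battery life if the device never transmits."""
    return BATTERY_WH * 1000 / IDLE_MW / HOURS_PER_YEAR

def tx_only_hours():
    """Battery life if the device transmits continuously."""
    return BATTERY_WH * 1000 / TX_MW

def duty_cycled_years(tx_seconds_per_report, reports_per_day):
    """Approximate life for a given reporting pattern (assumed tx time)."""
    tx_hours_per_day = tx_seconds_per_report * reports_per_day / 3600
    avg_mw = IDLE_MW + TX_MW * tx_hours_per_day / 24
    return BATTERY_WH * 1000 / avg_mw / HOURS_PER_YEAR

print(idle_only_years())   # ≈ 38 years, matching the post
print(tx_only_hours())     # 10 hours, matching the post
# e.g. an assumed one second of transmission per daily report:
print(duty_cycled_years(tx_seconds_per_report=1, reports_per_day=1))
```

The gap between 38 years and 10 hours shows why the duty cycle, not the radio itself, dominates the battery-life claim.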

The conclusions from the report clearly state:

The achievable battery life for a MS using the NB-CIoT solution for Cellular IoT has been estimated as a function of reporting frequency and coupling loss. 

It is important to note that these battery life estimates are achieved with a system design that has been intentionally constrained in two key respects:

  • The NB-CIoT solution has a frequency re-use assumption that is compatible with a stand-alone deployment in a minimum system bandwidth for the entire IoT network of just 200 kHz (FDD), plus guard bands if needed.
  • The NB-CIoT solution uses a MS transmit power of only +23 dBm (200 mW), resulting in a peak current requirement that is compatible with a wider range of battery technologies, whilst still achieving the 20 dB coverage extension objective.  

The key conclusions are as follows:

  • For all coupling losses (so up to 20 dB coverage extension compared with legacy GPRS), a 10 year battery life is achievable with a reporting interval of one day for both 50 bytes and 200 bytes application payloads.
  • For a coupling loss of 144 dB (so equal to the MCL for legacy GPRS), a 10 year battery life is achievable with a two hour reporting interval for both 50 bytes and 200 bytes application payloads. 
  • For a coupling loss of 154 dB, a 10 year battery life is achievable with a 2 hour reporting interval for a 50 byte application payload. 
  • For a coupling loss of 154 dB with 200 byte application payload, or a coupling loss of 164 dB with 50 or 200 byte application payload, a 10 year battery life is not achievable for a 2 hour reporting interval. This is a consequence of the transmit energy per data bit (integrated over the number of repetitions) that is required to overcome the coupling loss and so provide an adequate SNR at the receiver. 
  • Use of an integrated PA only has a small negative impact on battery life, based on the assumption of a 5% reduction in PA efficiency compared with an external PA.

Further improvements in battery life, especially for the case of high coupling loss, could be obtained if the common assumption that the downlink PSD will not exceed that of legacy GPRS was either relaxed to allow PSD boosting, or defined more precisely to allow adaptive power allocation with frequency hopping.

I will look at the technology aspects in a future post: how 3GPP made enhancements in Rel-13 to reduce power consumption in CIoT.

Also have a look at this GSMA whitepaper on 3GPP LPWA, which lists application requirements that are quite handy.