
Sunday, 21 May 2017

Research on Unvoiced Speech Communications using Smartphones and Mobiles

A startup on Kickstarter is touting the world's first voice mask for smartphones. Unsurprisingly, the Hushme has been compared to Bane from Batman and Dr. Hannibal Lecter. There is good detail on Hushme at Engadget here.

This is an interesting concept that has come back into the news after a long gap. We are well past the point of 'Peak Telephony', now that we use text messages and OTT apps for non-urgent communications, but voice will always be around, not only for urgent communications but also for things like audio/video conference calls.


Back in 2003, NTT Docomo generated a lot of news on this topic. Their research paper "Unvoiced speech recognition using EMG - mime speech recognition" was the first step in trying to find a way to speak silently while the other party hears your voice. It is probably the most quoted paper on this topic (picture source).


NASA was working in this area around the same time, referring to the approach as 'Subvocal Speech'. While the original intention was for astronauts' suits, the idea was that it could also be made available for other commercial uses. NASA was effectively working with a limited number of words using this approach (picture source).

For both of the approaches above, there isn't a lot of recent information. While it has been easy to recognize certain words, it takes a lot of effort to handle whole speech. It's also a challenge to play your own voice, rather than a robotic voice, to the other party.

To get a sense of how big a challenge this is, look at YouTube's automatic caption generation. Even when you can easily understand what a person is saying, it is still a challenge for the machine. You can read more about that challenge here.

A lot of research in similar areas has been done in France and is available here.


Motorola has gone a step further and patented an e-Tattoo that can be emblazoned over your vocal cords to intercept subtle voice commands — perhaps even subvocal commands, or even the fully internal whisperings that fail to pluck the vocal cords when not given full cerebral approval. One might even conclude that they are not just patenting device communications from a patch of smartskin, but communications from your soul. Read more here.


Another term used in this research has been 'lip reading'. While the initial approaches to lip reading were the same as the other approaches, attaching sensors to facial muscles (see here), the newer approaches look at exploiting the smartphone camera instead.

Many researchers have achieved reasonable success using cameras for lip reading (see here and here), but researchers from Google's AI division DeepMind and the University of Oxford have used artificial intelligence to create the most accurate lip-reading software ever.
The challenges in using a smartphone camera for speech recognition will be high-speed data connectivity and the ability to see lip movement clearly. Indoors, these can be solved with Wi-Fi connectivity and looking at the camera; it may be trickier outdoors, or when you cannot look at the camera while driving. Who knows, this may even be a killer use case for 5G.

By the way, this is not a complete survey of research in this area. If you have additional information, please help others by adding it in the comments section.

Related links:



Friday, 12 May 2017

5G – Beyond the Hype

Dan Warren, the former GSMA Technology Director who created VoLTE and coined the term 'Phablet', has been busy in his new role as Head of 5G Research at Samsung R&D in the UK. In a presentation delivered a couple of days back at the Wi-Fi Global Congress, he set out a realistic vision of what 5G really means.

A brief summary of the presentation in his own words below, followed by the actual presentation:
"I started with a comment I have made before – I really hate the term 5G.  It doesn't allow us to have a proper discussion about the multiplicity of technologies that have been thrown under the common umbrella of the term, and hence blurs the rationale for why each technology is important in its own right.  What I have tried to do in these slides is talk more about the technology, then look at the 5G requirements, and consider how each technology helps or hinders the drive to meet those requirements, and then to consider what that enables in practical terms.

The session was titled ‘5G – beyond the hype’ so in the first three slides I cut straight to the technology that is being brought in to 5G.  Building from the Air Interface enhancements, then the changes in topology in the RAN and then looking at the ‘softwarisation’ on the Core Network.  This last group of technologies sets up the friction in the network between the desire to change the CapEx model of network build by placing functions in a Cloud (both C-RAN and an NFV-based Core, as well as the virtualisation of transport network functions) and the need to push functions to the network edge by employing MEC to reduce latency.  You end up with every function existing everywhere, data breaking out of the network at many different points and some really hard management issues.

On slide 5 I then look at how these technologies line up to meeting 5G requirements.  It becomes clear that the RAN innovations are all about performance enhancement, but the core changes are about enabling new business models from flexibility in topology and network slicing.  There is also a hidden part of the equation that I call out, which is that while technology enables the central five requirements to be met, they also require massive investment by the Operator.  For example you won’t reach 100% coverage if you don’t build a network that has total coverage, so you need to put base stations in all the places that they don’t exist today.

On the next slide I look at how network slicing will be sold.  There are three ways in which a network might be sliced – by SLA or topology, by enterprise customer and by MVNO.  The SLA or topology option is key to allowing the co-existence of MEC and a Cloud-based CN.  The enterprise or sector-based option is important for operators to address large vertical industry players, but each enterprise may want a range of SLAs for different applications and devices, so you end up with an enterprise slice being made up of sub-slices of differing SLA and topology.  Then, an MVNO may take a slice of the network, but will have its own enterprise customers that will take a sub-slice of the MVNO slice, which may in turn be made of sub-sub-slices of differing SLAs.  Somewhere, all of this has to be stitched back together, so my suggestion is that 'Network Splicing' will be as important as network slicing.
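The nesting described above is essentially a tree that someone has to flatten back onto one physical network. Here is a toy sketch of that 'splicing' step in Python; all the slice names and SLAs are made up purely for illustration:

```python
# Toy model of nested network slices: each slice may contain sub-slices,
# and 'splicing' flattens the tree into the set of leaf SLAs the physical
# network must actually deliver. Names and SLAs are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Slice:
    name: str
    sla: str                      # e.g. "low-latency", "best-effort"
    subslices: list = field(default_factory=list)

def splice(s):
    """Collect the leaf (name, sla) pairs the operator must stitch together."""
    if not s.subslices:
        return [(s.name, s.sla)]
    leaves = []
    for sub in s.subslices:
        leaves.extend(splice(sub))
    return leaves

# An MVNO slice containing an enterprise sub-slice with two SLA sub-sub-slices:
mvno = Slice("mvno-a", "aggregate", [
    Slice("enterprise-x", "aggregate", [
        Slice("enterprise-x-sensors", "low-latency"),
        Slice("enterprise-x-office", "best-effort"),
    ]),
])
print(splice(mvno))
# [('enterprise-x-sensors', 'low-latency'), ('enterprise-x-office', 'best-effort')]
```

The point of the toy is simply that the leaves of the tree, not the top-level slices, are what the network ultimately has to schedule.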

The next slide illustrates all of this again and notes that there will also be other networks that have been sliced as well, be that 2G, 3G, 4G, WiFi, fixed, LPWA or anything else.  There is also going to be an overarching orchestration requirement, both within a network and in the Enterprise customer (or more likely in System Integrator networks who take on the 'Splicing' role).  The red flags show that Orchestration is both really difficult and expensive, but the challenge for the MNO will also exist in the RAN.  The RRC will be a pinch point that has to sort out all of these devices sitting in disparate network topologies with varying demands on the sliced RAN.

Then, in the next four slides I look at the business model around this.  Operators will need to deal with the realities of B2B or B2B2C business models, where they are the first 'B'. The first 'B's price is the second 'B's cost, so the operator should expect considerable pressure on what it charges, and to be held contractually accountable for the performance of the network.  If 5G is going to claim 100% coverage, five-nines reliability and 50 Mbps everywhere, and be sold to enterprise customers on that basis, it is going to have to deliver, or else there will be penalties to pay.  On the flip side, if all operators do meet the 5G targets, then they will become very much the same, so the only true differentiation option will be on price.  With the focus on large-scale B2B contracts, this has all the hallmarks of a race downwards and commoditisation of connectivity, which will also lead to disintermediation of operators from the value chain on applications.

So to conclude, I pondered what the real 5G justification is.  Maybe operators shouldn't be promising everything, since there will be healthy competition on speed, coverage and reliability while those remain differentiators.  Equally, it could just be that operators will fight out consumer market share on 5G, but that doesn't offer any real uplift in market size, certainly not in mature developed-world markets.  The one thing that is sure is that there is a lot of money to be spent getting there."



Let me know what you think.

Sunday, 7 May 2017

10 years battery life calculation for Cellular IoT

I made an attempt to place the different cellular and non-cellular LPWA technologies together in a picture in my last post here. Someone pointed out that the pictures above, from a LoRa Alliance whitepaper, are even better, and I agree.

Most IoT technologies list their battery life as 10 years. There is an article on Medium rightly pointing out that on Verizon's LTE-M network, IoT device batteries may not last very long.

The problem is that the 10-year battery life is a headline figure, and in the real world it is sometimes not that critical. It all depends on the application. For example, this Iota Pet Tracker uses Bluetooth but only claims a battery life of "weeks"; I guess the LoRa-based Ztrack would give similar results. I have to admit that non-cellular technologies should have longer battery life, but again it depends on the application and use case: an IoT device in a car may not have to worry too much about power consumption, and similarly a fleet tracker may have solar power, or may only need to outlast the fleet itself.


So, coming back to power consumption: Martin Sauter, in his excellent Wireless Moves blog post, provided the calculation that I am copying below with some additions:

The calculation can be found in 3GPP TR 45.820, for NB-IoT in Chapter 7.3.6.4 on ‘Energy consumption evaluation’.

The battery capacity used for the evaluation was 5 Wh. That’s about half or even only a third of the battery capacity that is in a smartphone today. So yes, that is quite a small battery indeed. The chapter also contains an assumption on how much power the device draws in different states. In the ‘idle’ state the device is in most often, power consumption is assumed to be 0.015 mW.

How long would the battery be able to power the device if it were always in the idle state? The calculation is easy and you end up with 38 years. That doesn't include battery self-discharge, and I wondered how much that would be over 10 years. According to the Varta handbook of primary lithium cells, the self-discharge of a non-rechargeable lithium battery is less than 1% per year. So subtract roughly 4 years from that number.

Obviously, the device is not always in idle and when transmitting the device is assumed to use 500 mW of power. Yes, with this power consumption, the battery would not last 34 years but less than 10 hours. But we are talking about NB-IoT so the device doesn’t transmit for most of the time. The study looked at different transmission patterns. If 200 bytes are sent once every 2 hours, the device would run on that 5 Wh battery for 1.7 years. If the device only transmits 50 bytes once a day the battery would last 18.1 years.

So yes, the 10 years are quite feasible for devices that collect very little data and only transmit them once or twice a day.
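For anyone who wants to play with the numbers, the arithmetic above can be reproduced with a few lines of Python. The battery capacity, idle power and transmit power are straight from the text; the transmit-time-per-report figure is my own back-solved assumption, not a number taken from the TR:

```python
# Battery life sketch for the NB-IoT energy evaluation in 3GPP TR 45.820.
# Known inputs from the text: 5 Wh battery, 0.015 mW idle, 500 mW transmit.

BATTERY_WH = 5.0          # battery capacity in watt-hours
P_IDLE_W = 0.015e-3       # idle power draw in watts
P_TX_W = 0.5              # transmit power draw in watts
HOURS_PER_YEAR = 24 * 365

def idle_only_years():
    """Battery life if the device never leaves idle."""
    return BATTERY_WH / P_IDLE_W / HOURS_PER_YEAR

def tx_only_hours():
    """Battery life if the device transmitted continuously."""
    return BATTERY_WH / P_TX_W

def duty_cycled_years(tx_seconds_per_report, report_interval_hours):
    """Battery life when the device idles except for short reports.
    tx_seconds_per_report is an assumed figure, not taken from the TR."""
    tx_hours = tx_seconds_per_report / 3600.0
    avg_power = (P_TX_W * tx_hours
                 + P_IDLE_W * (report_interval_hours - tx_hours)) / report_interval_hours
    return BATTERY_WH / avg_power / HOURS_PER_YEAR

print(f"Idle only: {idle_only_years():.0f} years")          # ~38 years
print(f"Continuous transmit: {tx_only_hours():.0f} hours")  # 10 hours
# Assuming ~4.6 s of transmission per 200-byte report every 2 hours
# reproduces a figure close to the TR's 1.7 years:
print(f"200 B every 2 h: {duty_cycled_years(4.6, 2):.1f} years")
```

The duty-cycled function makes it easy to see how sensitive the headline battery life is to how often, and for how long, the device actually transmits.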

The conclusions from the report clearly state:

The achievable battery life for a MS using the NB-CIoT solution for Cellular IoT has been estimated as a function of reporting frequency and coupling loss. 

It is important to note that these battery life estimates are achieved with a system design that has been intentionally constrained in two key respects:

  • The NB-CIoT solution has a frequency re-use assumption that is compatible with a stand-alone deployment in a minimum system bandwidth for the entire IoT network of just 200 kHz (FDD), plus guard bands if needed.
  • The NB-CIoT solution uses a MS transmit power of only +23 dBm (200 mW), resulting in a peak current requirement that is compatible with a wider range of battery technologies, whilst still achieving the 20 dB coverage extension objective.  

The key conclusions are as follows:

  • For all coupling losses (so up to 20 dB coverage extension compared with legacy GPRS), a 10 year battery life is achievable with a reporting interval of one day for both 50 bytes and 200 bytes application payloads.
  • For a coupling loss of 144 dB (so equal to the MCL for legacy GPRS), a 10 year battery life is achievable with a two hour reporting interval for both 50 bytes and 200 bytes application payloads. 
  • For a coupling loss of 154 dB, a 10 year battery life is achievable with a 2 hour reporting interval for a 50 byte application payload. 
  • For a coupling loss of 154 dB with 200 byte application payload, or a coupling loss of 164 dB with 50 or 200 byte application payload, a 10 year battery life is not achievable for a 2 hour reporting interval. This is a consequence of the transmit energy per data bit (integrated over the number of repetitions) that is required to overcome the coupling loss and so provide an adequate SNR at the receiver. 
  • Use of an integrated PA only has a small negative impact on battery life, based on the assumption of a 5% reduction in PA efficiency compared with an external PA.

Further improvements in battery life, especially for the case of high coupling loss, could be obtained if the common assumption that the downlink PSD will not exceed that of legacy GPRS was either relaxed to allow PSD boosting, or defined more precisely to allow adaptive power allocation with frequency hopping.
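To see why the high coupling loss cases fail, here is a deliberately simplified sketch: every dB of coupling loss beyond the 144 dB GPRS MCL has to be recovered with repetitions, which multiply the transmit energy per report roughly as 10^(extra dB / 10). This is my own back-of-the-envelope illustration, not the TR 45.820 methodology:

```python
# Rough illustration of why high coupling loss breaks the 10-year target:
# transmit energy per report scales roughly with the repetitions needed
# to close the link, ~10^((coupling_loss - MCL_GPRS)/10). This is a
# simplification, not the detailed model used in TR 45.820.

MCL_GPRS_DB = 144.0   # maximum coupling loss of legacy GPRS

def energy_scale(coupling_loss_db):
    """Multiplier on transmit energy per report relative to the 144 dB MCL."""
    extra_db = max(0.0, coupling_loss_db - MCL_GPRS_DB)
    return 10 ** (extra_db / 10.0)

for cl in (144, 154, 164):
    print(f"{cl} dB coupling loss -> ~{energy_scale(cl):.0f}x transmit energy")
```

So at 164 dB the device burns roughly a hundred times the transmit energy per report compared with the GPRS-equivalent case, which is why the report finds the 10-year target unachievable there at a 2-hour reporting interval.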

In a future post I will look at the technology aspects of how 3GPP made enhancements in Rel-13 to reduce power consumption in CIoT.

Also have a look at this GSMA whitepaper on 3GPP LPWA, which lists application requirements that are quite handy.

Monday, 1 May 2017

Variety of 3GPP IoT technologies and Market Status - May 2017



I have seen many people wondering whether so many different types of IoT technologies, 3GPP or otherwise, are needed. The story behind that is that for many years 3GPP did not focus much on creating an IoT variant of its standards. The hope was that users would make use of LTE Cat 1 for IoT, and later on 3GPP created LTE Cat 0 (see here and here).

The problem with this approach was that the market was ripe for types of IoT technologies that 3GPP could not satisfy. The table below gives an indication of the different types of technologies, but there are many others not listed here.


The most popular IoT (or M2M) technology to date is the humble 2G GSM/GPRS. A couple of weeks back, Vodafone announced that it has reached a milestone of 50 million IoT connections worldwide, and that it is adding roughly 1 million new connections every month. The majority of these are GSM/GPRS.

Different operators have been assessing their strategy for IoT devices. Some have either switched off or are planning to switch off their 2G networks. Others have a long-term plan for 2G and would rather switch off their 3G networks to refarm the spectrum to more efficient 4G. A small chunk of 2G, on the other hand, would be a good option for voice and for existing IoT devices with small amounts of data transfer.

In fact, this is one of the reasons that GSM is being enhanced for IoT in Release-13. The new version is known as Extended Coverage GSM Internet of Things (EC-GSM-IoT). According to the GSMA, "It is based on eGPRS and designed as a high capacity, long range, low energy and low complexity cellular system for IoT communications. The optimisations made in EC-GSM-IoT that need to be made to existing GSM networks can be made as a software upgrade, ensuring coverage and accelerated time to market. Battery life of up to 10 years can be supported for a wide range of use cases."

The most popular of the non-3GPP IoT technologies are Sigfox and LoRa. Both have gained significant ground and many backers in the market. This, along with the gap in the market for low-power IoT technologies that transfer only small amounts of data and have a long battery life, motivated 3GPP to create new IoT technologies, standardised as part of Rel-13 and being further enhanced in Rel-14. A summary of these technologies can be seen below.


If you look at the first picture at the top (modified from Qualcomm's original here), you will see that these different IoT technologies, 3GPP or otherwise, address different needs. No wonder many operators are using the unlicensed LPWA IoT technologies as a starting point, hoping to complement them with 3GPP technologies when ready.

Finally, it looks like there is a difference in interpretation of the standards between Ericsson and Huawei, and as a result their implementations are incompatible. Hopefully this will be sorted out soon.


Market Status:

Telefonica has publicly said that Sigfox is the best way forward for the time being. No news about any 3GPP IoT technologies.

Orange has rolled out a LoRa network but has said that when NB-IoT is ready, it will switch customers over to it.

KPN has deployed LoRa throughout the Netherlands, making it the first country in the world with complete coverage. It hasn't ruled out NB-IoT when available.

SK Telecom completed a nationwide LoRa IoT network deployment in South Korea last year. It sees LTE-M and LoRa as its 'Two Main IoT Pillars'.

Deutsche Telekom has rolled out its NarrowBand-IoT (NB-IoT) network across eight countries in Europe (Germany, the Netherlands, Greece, Poland, Hungary, Austria, Slovakia and Croatia).

Vodafone is fully committed to NB-IoT. Its network is already operational in Spain, with launches in Ireland and the Netherlands later this year.

Telecom Italia is in the process of launching NB-IoT; water meters in Turin are already sending their readings over it.

China Telecom, in conjunction with Shenzhen Water and Huawei launched 'World's First' Commercial NB-IoT-based Smart Water Project on World Water Day.

SoftBank is deploying LTE-M (Cat-M1) and NB-IoT networks nationwide, powered by Ericsson.

Orange Belgium plans to roll out nationwide NB-IoT and LTE-M networks in 2017.

China Mobile is committed to 3GPP based IoT technologies. It has conducted outdoor trials of NB-IoT with Huawei and ZTE and is also trialing LTE-M with Ericsson and Qualcomm.

Verizon has launched Industry’s first LTE-M Nationwide IoT Network.

AT&T will be launching its LTE-M network later this year in the US as well as Mexico.

Sprint said it plans to deploy LTE Cat 1 technology in support of the Internet of Things (IoT) across its network by the end of July.

Further reading:

Thursday, 20 April 2017

5G: Architecture, QoS, gNB, Specifications - April 2017 Update


The 5G NR (New Radio) plan was finalised in March (3GPP press release); as a result, Non-StandAlone (NSA) 5G NR will be completed by March 2018. The final 3GPP Release-15 will nevertheless include the NR StandAlone (SA) mode as well.

NSA is based on Option 3 (proposed by DT). If you don't know much about this, I suggest listening to Andy Sutton's lecture here.


3GPP TR 38.804: Technical Specification Group Radio Access Network; Study on New Radio Access Technology; Radio Interface Protocol Aspects provides the overall architecture as shown above

Compared to LTE, the big differences are:

  • The core network control plane is split into AMF and SMF nodes (Access and Mobility Management Function and Session Management Function). A given device is assigned a single AMF to handle mobility and AAA roles, but can then have multiple SMFs, each dedicated to a given network slice.
  • The core network user plane is handled by a single node type, the UPF (User Plane Function), with support for multiple UPFs serving the same device, which avoids the need for the common SGW used in LTE. UPF nodes may be daisy-chained to offer local breakout and may have parallel nodes serving the same APN to assist seamless mobility.
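A minimal sketch of that split, with one AMF per device and per-slice SMFs and UPFs; the node names are illustrative and this is just a toy data model, not a 3GPP-defined API:

```python
# Sketch of the 5G core control-plane split described above: one AMF per
# device for mobility/AAA, plus one SMF (and its chain of UPFs) per
# network slice. Illustrative data model only, not a 3GPP interface.

from dataclasses import dataclass, field

@dataclass
class Session:
    smf: str        # Session Management Function for this slice
    upfs: list      # daisy-chained User Plane Functions

@dataclass
class Device:
    imsi: str
    amf: str = ""   # exactly one AMF handles mobility and AAA
    sessions: dict = field(default_factory=dict)  # slice name -> Session

    def register(self, amf):
        self.amf = amf

    def add_slice(self, slice_name, smf, upfs):
        self.sessions[slice_name] = Session(smf, upfs)

ue = Device("001010123456789")
ue.register("amf-1")
ue.add_slice("embb", smf="smf-embb", upfs=["upf-edge", "upf-central"])
ue.add_slice("iot", smf="smf-iot", upfs=["upf-central"])
print(ue.amf, sorted(ue.sessions))   # amf-1 ['embb', 'iot']
```

The 'embb' slice above shows the daisy-chaining point: an edge UPF for local breakout in front of a central one, while the 'iot' slice goes straight to the central UPF.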

Hat tip Alistair Urie.
Notice that, like the eNodeB (eNB) in LTE, the new radio access network node is called gNodeB (gNB). Martin Sauter points out in his excellent blog that the 'g' stands for next generation.

3GPP TS 23.501: Technical Specification Group Services and System Aspects; System Architecture for the 5G System; Stage 2 provides the architecture model and concepts, including the roaming and non-roaming architectures. I will probably have to revisit it, as it contains so much information. The QoS table is shown above; you will notice the terms QFI (QoS Flow Identity) and 5QI (5G QoS Indicator). I have a feeling there will be a lot of new additions, especially due to URLLC.
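To make QFI and 5QI a bit more concrete, here is a small lookup in the spirit of the standardised 5QI table in TS 23.501. Treat the exact numbers as illustrative examples rather than a copy of the specification, since the table was still being finalised at the time of writing:

```python
# Illustrative 5QI -> QoS characteristics lookup, in the spirit of the
# standardised table in TS 23.501. The values below are examples only,
# not a copy of the (then still draft) specification.

QOS_TABLE = {
    # 5QI: (resource type, priority, packet delay budget in ms, example service)
    1: ("GBR", 20, 100, "conversational voice"),
    9: ("Non-GBR", 90, 300, "buffered video, web"),
}

def characteristics(five_qi):
    """Resolve a 5QI (carried alongside the QFI) into its QoS parameters."""
    return QOS_TABLE[five_qi]

print(characteristics(1))   # ('GBR', 20, 100, 'conversational voice')
```

The useful mental model is that the QFI identifies a flow within a PDU session, while the 5QI it points at is just an index into a standardised table like this one.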

Finally, here are the specifications (hat tip Eiko Seidel for his excellent Linkedin posts - references below):
5G NR will use the 38 series of specifications (like the 25 series for 3G and the 36 series for 4G).

RAN3 TR 38.801 v2.0.0 on Study on New Radio Access Technology; Radio Access Architecture and Interfaces

RAN1 TR 38.802 v2.0.0 on Study on New Radio (NR) Access Technology; Physical Layer Aspects

RAN4 TR 38.803 v2.0.0 on Study on New Radio Access Technology: RF and co-existence aspects

RAN2 TR 38.804 v1.0.0 on Study on New Radio Access Technology; Radio Interface Protocol Aspects

38.201 TS Physical layer; General description
38.211 TS Physical channels and modulation
38.212 TS Multiplexing and channel coding
38.213 TS Physical layer procedures
38.214 TS Physical layer measurements
38.21X TS Physical layer services provided to upper layer
38.300 TS Overall description; Stage-2
38.304 TS User Equipment (UE) procedures in idle mode
38.306 TS User Equipment (UE) radio access capabilities
38.321 TS Medium Access Control (MAC) protocol specification
38.322 TS Radio Link Control (RLC) protocol specification
38.323 TS Packet Data Convergence Protocol (PDCP) specification
38.331 TS Radio Resource Control (RRC); Protocol specification
37.3XX TS [TBD for new QoS]
37.3XX TS Multi-Connectivity; Overall description; Stage-2
38.401 TS Architecture description
38.410 TS NG general aspects and principles
38.411 TS NG layer 1
38.412 TS NG signalling transport
38.413 TS NG Application Protocol (NGAP)
38.414 TS NG data transport
38.420 TS Xn general aspects and principles
38.421 TS Xn layer 1
38.422 TS Xn signalling transport
38.423 TS Xn Application Protocol (XnAP)
38.424 TS Xn data transport
38.425 TS Xn interface user plane protocol
38.101 TS User Equipment (UE) radio transmission and reception
38.133 TS Requirements for support of radio resource management
38.104 TS Base Station (BS) radio transmission and reception
38.307 TS Requirements on User Equipments (UEs) supporting a release-independent frequency band
38.113 TS Base Station (BS) and repeater ElectroMagnetic Compatibility (EMC)
38.124 TS Electromagnetic compatibility (EMC) requirements for mobile terminals and ancillary equipment
38.141 TS Base Station (BS) conformance testing

Note that not all specifications are in place yet. Use this link to navigate the 3GPP 38-series specs: http://www.3gpp.org/ftp/Specs/archive/38_series/

Further reading:



Saturday, 15 April 2017

Self-backhauling: Integrated access and backhaul links for 5G


One of the items proposed during the 3GPP RAN Plenary #75, held in Dubrovnik, Croatia, was a Study on Integrated Access and Backhaul for NR (NR = New Radio). RP-17148 provides more details as follows:

One of the potential technologies targeted to enable future cellular network deployment scenarios and applications is the support for wireless backhaul and relay links enabling flexible and very dense deployment of NR cells without the need for densifying the transport network proportionately. 

The expected larger bandwidth available for NR compared to LTE (e.g. mmWave spectrum), along with the native deployment of massive MIMO or multi-beam systems in NR, creates an opportunity to develop and deploy integrated access and backhaul links. This may allow easier deployment of a dense network of self-backhauled NR cells in a more integrated manner by building upon many of the control and data channels/procedures defined for providing access to UEs. An example illustration of a network with such integrated access and backhaul links is shown in Figure 1, where relay nodes (rTRPs) can multiplex access and backhaul links in time, frequency, or space (e.g. beam-based operation).

The operation of the different links may be on the same or different frequencies (also termed ‘in-band’ and ‘out-band’ relays). While efficient support of out-band relays is important for some NR deployment scenarios, it is critically important to understand the requirements of in-band operation which imply tighter interworking with the access links operating on the same frequency to accommodate duplex constraints and avoid/mitigate interference. 

In addition, operating NR systems in mmWave spectrum presents some unique challenges including experiencing severe short-term blocking that cannot be readily mitigated by present RRC-based handover mechanisms due to the larger time-scales required for completion of the procedures compared to short-term blocking. Overcoming short-term blocking in mmWave systems may require fast L2-based switching between rTRPs, much like dynamic point selection, or modified L3-based solutions. The above described need to mitigate short-term blocking for NR operation in mmWave spectrum along with the desire for easier deployment of self-backhauled NR cells creates a need for the development of an integrated framework that allows fast switching of access and backhaul links. Over-the-air (OTA) coordination between rTRPs can also be considered to mitigate interference and support end-to-end route selection and optimization.

The benefits of integrated access and backhaul (IAB) are crucial during network rollout and the initial network growth phase. To leverage these benefits, IAB needs to be available when NR rollout occurs. Consequently, postponing IAB-related work to a later stage may have adverse impact on the timely deployment of NR access.


There is also an interesting presentation on this topic from Interdigital on the 5G Crosshaul group here. I found the following points worth noting:

  • This will create a new type of interference (access-backhaul interference) to mitigate and will require sophisticated (complex) scheduling of the channel resources (across two domains, access and backhaul).
  • One of the main drivers is small cell densification, calling for cost-effective and low-latency backhauling
  • The goal would be to maximize efficiency through joint optimization/integration of access and backhaul resources
  • The existing approach of fronthaul using CPRI will not scale for 5G; self-backhaul may be an alternative in the shape of wireless fronthaul
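The in-band case described above can be caricatured as a time-division split of a frame between access and backhaul links at a relay node. This is a toy scheduler purely for illustration, nothing like the eventual 3GPP design, and the slot counts and fractions are made-up inputs:

```python
# Toy time-division multiplexing of access and backhaul at an in-band
# relay node (rTRP): both link types share one carrier, so each slot in
# a frame is assigned to exactly one of them. Illustrative only.

def schedule_frame(n_slots, backhaul_fraction):
    """Assign each slot to 'backhaul' or 'access' in proportion to demand."""
    n_backhaul = round(n_slots * backhaul_fraction)
    return ["backhaul"] * n_backhaul + ["access"] * (n_slots - n_backhaul)

frame = schedule_frame(10, 0.3)
print(frame.count("backhaul"), frame.count("access"))   # 3 7
```

Even this trivial split shows the tension the slides point at: every slot given to backhaul is a slot taken from access on the same carrier, which is exactly what makes the joint scheduling problem hard.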

Let me know what you think.

Related Links:



Saturday, 8 April 2017

The Iconic British Red Phone Boxes

Source: BBC

Brits love their red phone boxes. Even with mobiles prevalent today, we don't want to get rid of them. The BBC estimates that there are 46,000 phone boxes in use today, including 8,000 red ones.

Some of these phone boxes are being put to other interesting uses too. One of them has become the 'world's smallest museum', another has been converted into a coffee shop, yet another is a salad bar, and one in Cumbria hosts life-saving medical equipment. This is all thanks to BT, which has encouraged the adoption of some of these much-loved icons for as little as £1.



Two British phone box enthusiasts, Prof. Nigel Linge and Prof. Andy Sutton, have written a well-researched and comprehensive book on this topic, looking at the history and evolution of the humble phone box through all of its major models, including those introduced by organisations such as the emergency services. The British Phonebox is available to purchase from Amazon and other popular bookshops.


In addition to the book, they have also written an article in 'The Journal' that gives a taster of what's in the book. It's available to download here.

Five interesting facts from the little reading that I did on this topic:

  • The model K1 (K stands for Kiosk) was very unpopular, so a competition was held to find the best possible design. The winning design by Sir Giles Gilbert Scott became the K2, rolled out in 1926.
  • Sir Giles had suggested a silver colour with a blue and green interior. This was changed to red to make the boxes easy to spot.
  • The latest model is called the KX100+.
  • The most popular and loved model is the K6, designed to celebrate King George V's Silver Jubilee, though he died before any of them were actually installed.
  • Before Queen Elizabeth came along, a vague representation of the Tudor crown was used on the telephone boxes. Wanting to put her stamp on things after she ascended to the throne in 1952, QEII had all of the crowns changed to St. Edward's Crown, the crown actually used in coronations. Scotland opted to keep the Crown of Scotland on theirs, and so all K6 boxes manufactured after 1955 had to be made with a slot in the top to insert the plate with the correct crown depending on the location of the booth.

Related Links:

Saturday, 1 April 2017

Some interesting April Fools' Day 2017 Technology Jokes

Here are some interesting April Fools' Day 2017 technology jokes. If I have missed any, please add them in the comments. For those who don't know what April Fools' Day is, see here.

Google Windmill from Google Nederland: Interesting use of Wind and Cloud to keep Rain away.




Amazon Petlexa (Alexa for Pets): It allows dogs, cats, and other animals to communicate with Alexa just like you do. The Petlexa feature gives pets the freedom to place orders from Amazon, and to activate smart home enabled toys.




Google Play for Pets: A new category of games, apps and training tools to keep your pet stimulated. Honestly, I can't see why this couldn't be real.



Honda Horn Emojis: Horn Emojis offer a range of horn sounds for a variety of scenarios, from seeing your kids off to school to commiserating with other drivers in rush-hour traffic.


See video here. Honda has also launched In-car dating app to help lonely drivers find love at the wheel. More details here.


T-Mobile ONEsie: T-Mobile CEO John Legere has designed this onesie, and CTO Neville Ray has participated in testing it. It's got a lot of amazing properties, including creating Human HotSpots.


See videos here and here.


Virgin Atlantic Dreambird 1417: The world's first aircraft with flappable wings, built using new patented technology – flapology.




Virgin Trains (UK) Tickink: An innovative new contactless ticket system offering customers the opportunity to have their train ticket permanently tattooed on their body, preventing frequent passengers from ever losing their tickets again. Details here.


Virgin Mobile Australia PhoYo:


Prysm Avatar: I quite like this concept. A work drone that enables you to reap the benefits of working from home, without sacrificing the interpersonal advantages of being in the office. Each drone is equipped with sophisticated sensors and a holographic projector that displays your likeness as a realistic, life-sized avatar.




Telenor prohibits the word ‘Digital’ in all communication: There is a small chance this may be true ;-). See here.


Finally, there were also Google Gnome (like Google Home), Google Ms. Pac-Man, Lexus LC: Lane Valet, Shake Me by Trade Me, and the Huawei Mate 9.2 with 2 headphone jacks.


There is also this picture from 1992 circulating, showing how the standards body ETSI celebrated April Fools' Day before Twitter/YouTube :-)


Related Posts:

Sunday, 19 March 2017

Latest on 5G Spectrum - March 2017

In an earlier post I mentioned that three different types of spectrum would be needed for 5G: a coverage layer, a capacity layer and a high-throughput layer. There is now a consensus within the industry for this approach.


At a 5G seminar back in January, a few speakers felt that there is an informal agreement about the frequencies that will be used. One such slide, from Ofcom, can be seen in the picture above. Ofcom has also recently released a report expanding on this further.


Analysys Mason has nicely summarized the bands suggested by Ofcom and possibly available in the UK for 5G in the picture above.

The Global mobile Suppliers Association (GSA) has also nicely summarised the bands under investigation and trial as follows:

Coverage Layer: 600 MHz, 700 MHz, 800 MHz, 900 MHz, 1.5 GHz, 2.1 GHz, 2.3 GHz and 2.6 GHz

Capacity Layer:

Europe                     3400 – 3800 MHz (awarding trial licenses)

China                       3300 – 3600 MHz (ongoing trial), 4400 – 4500 MHz, 4800 – 4990 MHz

Japan                       3600 – 4200 MHz and 4400 – 4900 MHz

Korea                       3400 – 3700 MHz

USA                          3100 – 3550 MHz (and 3700 – 4200 MHz)

High Throughput Layer:

USA:      27.5 – 28.35 GHz and 37 – 40 GHz pre-commercial deployments in 2018

Korea:   26.5 – 29.5 GHz trials in 2018 and commercial deployments in 2019

Japan:   27.5 – 28.28 GHz trials planned from 2017 and potentially commercial deployments in 2020

China:    Focusing on 24.25 – 27.5 GHz and 37 – 43.5 GHz studies

Sweden: 26.5 – 27.5 GHz awarding trial licenses for use in 2018 and onwards

EU:        24.25 – 27.5 GHz for commercial deployments from 2020
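The regional capacity-layer ranges above lend themselves to a quick lookup table; here is a hypothetical Python sketch (the dictionary and helper names are my own) for comparing how much candidate mid-band spectrum each region has:

```python
# Hypothetical lookup table of the GSA capacity-layer survey above.
# Each entry is a list of (lower, upper) band edges in MHz, as quoted.
CAPACITY_LAYER = {
    "Europe": [(3400, 3800)],
    "China":  [(3300, 3600), (4400, 4500), (4800, 4990)],
    "Japan":  [(3600, 4200), (4400, 4900)],
    "Korea":  [(3400, 3700)],
    "USA":    [(3100, 3550), (3700, 4200)],
}

def total_capacity_spectrum_mhz(region):
    """Total candidate capacity-layer spectrum (MHz) for a region."""
    return sum(hi - lo for lo, hi in CAPACITY_LAYER[region])

print(total_capacity_spectrum_mhz("Europe"))  # 400 (the 3.4 - 3.8 GHz range)
print(total_capacity_spectrum_mhz("China"))   # 590 across three ranges
```

A table like this makes it easy to see, for example, that China's three candidate ranges together exceed Europe's single 400 MHz block.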

Finally, as a reminder, the list of bands originally approved for IMT-2020 (5G) is as follows:


Another potential band, not mentioned above, is the 66 – 76 GHz spectrum. This band is adjacent to the 60 GHz Wi-Fi band (57 – 66 GHz), so lessons learned there can be applied to this 5G band too.

Related links:



Thursday, 16 March 2017

Satellite Industry is Gearing up for The Next Revolution in Communications

Intelsat graphic
Source: Intelsat
I have been talking about the role of satellites in future communications on my blog and various industry fora. While most of the telecom industry is focused on 5G, it’s good to see that the satellite industry is getting ready for the next revolution.

Source: New York Times
Masayoshi Son, chief executive of SoftBank, has made it his mission to merge satellite operators Intelsat and OneWeb. While on the surface they may seem like competitors, in reality they complement each other. Intelsat operates geostationary (GEO) satellites, while OneWeb is building low earth orbit (LEO) satellites. They serve overlapping but different purposes, and it makes sense for them to work together. LEO satellites, orbiting at roughly 1,200 km, have far lower latency than GEO satellites, which sit about 36,000 km away. On the other hand, LEO satellites do not appear stationary, unlike GEO satellites.
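The latency gap between the two orbits follows directly from propagation distance; a back-of-envelope sketch (idealised straight-line path at the speed of light, ignoring processing and routing delays):

```python
# One-way propagation delay for LEO (~1,200 km) vs GEO (~36,000 km) satellites.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_ms(altitude_km):
    """Idealised straight-up one-way delay in ms; real paths are longer."""
    return altitude_km / C_KM_PER_S * 1000

print(round(one_way_delay_ms(1200), 1))    # ~4.0 ms for LEO
print(round(one_way_delay_ms(36000), 1))   # ~120.1 ms for GEO
```

Even in this best case, a GEO round trip (up and down, twice for a reply) approaches half a second, which is why LEO constellations are so attractive for interactive services.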

We in CW are already aware of Masayoshi Son's ambition and vision. Last year SoftBank acquired ARM for approximately £24 billion. In a recent keynote delivered at Mobile World Congress 2017 (#MWC17), Son explained his vision and the reasoning behind this purchase. In fact, he mentioned that he has a 30-year vision, which is why he thinks 'cell towers from space' are the next step in evolution. While he refers to them as fiber from space, I wouldn't go that far in comparison, but I do admit they have the potential to deliver high-speed connectivity anywhere on earth.

The most obvious application of high-speed connectivity ubiquitously available anywhere on earth is connected cars. Wi-Fi provides connectivity and software updates when the car is parked at home, complemented by mobile connectivity within cities and on major roads. What is missing is the anywhere-and-everywhere connectivity that satellites can bring.

The big barrier to satellite connectivity in cars has been the need for a satellite dish mounted on the car roof. Kymeta, an innovative company based in Washington state, USA, has been trying for years to solve this problem. In May, they will start selling their “lightweight flat-panel antennas, meant to bring fast satellite-transmitted internet connections to cars, trains and boats”.

Source: Seattle Times
Kymeta is partnering with Toyota and Intelsat to bring a complete solution for future in-car connectivity. They are not the only ones; there are other similar interesting projects ongoing in many parts of the world.


The telecom industry cannot ignore satellite communications forever. Satellites have already proved themselves beyond doubt in broadcasting, navigation, earth observation, etc. It's just a matter of time before they prove their worth in communications as well.

Originally Posted on CW Blog here.

Sunday, 12 March 2017

High Power / Performance User Equipment (#HPUE)

3GPP refers to HPUE as High Power UE while the US operator Sprint prefers to use the term High Performance UE.

HPUE was initially defined for US Public Safety Band 14 (700 MHz). The intention was that these high-power UEs could increase the coverage range from 4 km to 8 km. This would mean larger coverage areas and fewer cells.

While commercial UEs (class 3) transmit at +23 dBm (max 200 mW), the Public Safety community intends to use class 1 UEs transmitting at +31 dBm (max 1.25 W). It was felt that this feature could also benefit some TDD bands that do not have to worry about backward compatibility. One such band, pushed by Sprint, is TDD Band 41 (2500 MHz). As this band is for commercial UEs, class 2 power at +26 dBm (max 400 mW) was proposed instead of class 1.
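The dBm figures above are logarithmic, so every +3 dB roughly doubles the transmit power; a quick sketch of the conversion (the rounded milliwatt values line up with the nominal figures quoted):

```python
# Convert dBm to milliwatts for the UE power classes mentioned above.
def dbm_to_mw(dbm):
    """Power in mW for a given level in dBm: P(mW) = 10^(dBm/10)."""
    return 10 ** (dbm / 10)

for ue_class, dbm in [("Class 3", 23), ("Class 2", 26), ("Class 1", 31)]:
    # 23 dBm ~ 200 mW, 26 dBm ~ 398 mW (quoted as 400 mW), 31 dBm ~ 1.26 W
    print(f"{ue_class}: +{dbm} dBm = {dbm_to_mw(dbm):.0f} mW")
```

So the Band 41 class 2 proposal is effectively a doubling of uplink transmit power over the standard class 3 UE, which is where the coverage gain comes from.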

3GPP TS 36.886 provides the following justification:

Currently, 3GPP has defined only Power Class UE 3 as the type of UE supported for TDD LTE band 41 operations. This definition was based on aligning TDD LTE Band 41 UE power classes with prior work in 3GPP related to other bands. However, it should be mentioned that 3GPP UE Power Class 3 definition (i.e. 23dBm) was mainly driven to ensure backward compatibility with prior technologies (i.e. GSM/UMTS) [2] so that network deployment topologies remain similar. Furthermore, maintaining the same power class UE definition (i.e. Class 3) as previous technologies would maintaining compliance with various national regulatory rulings, particularly in terms of SAR, for FDD LTE duplexing mode. 

However, TDD LTE band 41 does not have any 3GPP legacy technologies associated with it, hence the backward compatibility consideration is not applicable in its case. Also, since band 41 is defined as a TDD LTE band, it is less susceptible to SAR levels that FDD LTE bands due to SAR definition. Therefore, defining a new UE power class with higher than 23dBm Tx power for TDD LTE Band 41 operations would not compromise any of 3GPP foundational work, while improving UE and network performance. It should also be mentioned that 3GPP has done similar work on other bands (i.e. band 14) when defining a higher power class UE, hence the concept presented in this document is a continuation of that process.

The present document carries out a feasibility analysis for defining a UE Power class 2 (i.e. 26dBm) for operation on TDD LTE band 41. The document analyses current and future technological advancements in the area of UE RF front-end components and architectures that enable such definition while maintaining 3GPP specification and other regulatory bodies' requirements. It should be emphasized that this proposal only relates to single carrier UL operations on TDD band 41 (i.e. TM-1/2 modes) without affecting current 3GPP definition for UL carrier aggregation on band 41.

As you can see from the tweet above, Sprint CEO is quite pleased with the HPUE. 

Source: Diana Goovaerts

Iain Gillott of iGR points out that HPUE applies to Sprint's 2.5 GHz TDD network and associated spectrum, and the company claims up to a 30 percent increase in cell coverage from the new technology. It should be noted that HPUE is a 3GPP standard that applies to the 2.5 GHz TDD band (Band 41) and is also to be used by China Mobile and SoftBank. HPUE was developed as part of the Global TDD LTE Initiative (GTI), which includes Qualcomm Technologies, Samsung, ZTE, Broadcom, MediaTek, Skyworks Solutions, Alcatel, Motorola, LG and Qorvo... The cool part: the improvement in coverage comes simply from improving the device uplink power. So Sprint, China Mobile and SoftBank will not have to visit their cell sites to make changes; they just need 2.5 GHz TDD devices with HPUE to get the benefit.


Milan Milanović recently wrote about Sprint's Gigabit Class LTE network going live in New Orleans. One of the questions I had was why the uplink is so poor compared to the downlink. He kindly pointed out to me that this is TDD config 2.
If you are wondering what TDD Config 2 is, see the picture below.
Source: ShareTechNote
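Concretely, TDD config 2 allocates its ten 1 ms subframes per frame as D S U D D D S U D D, so most of the frame is downlink; a minimal sketch of the resulting split:

```python
# LTE TDD UL/DL configuration 2 over one 10 ms radio frame:
# D = downlink, U = uplink, S = special subframe (mostly downlink-usable).
CONFIG_2 = "DSUDDDSUDD"

dl = CONFIG_2.count("D")
ul = CONFIG_2.count("U")
sp = CONFIG_2.count("S")
print(f"DL subframes: {dl}, UL subframes: {ul}, special: {sp}")
# DL subframes: 6, UL subframes: 2, special: 2
```

With only 2 of 10 subframes available for uplink (versus 6 for downlink, plus the largely downlink-oriented special subframes), it is no surprise that measured uplink throughput looks so weak next to the downlink.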

Sprint expects HPUE to appear in postpaid devices starting in 2017, including new devices from Samsung, LG, HTC, and Moto. It’s expected that all of Sprint’s new devices will have HPUE support within the next two years.

I think it would be interesting to see how this performs when there are many more users and devices. I am quite sure there will be more requests for HPUE in other TDD bands.

Related Links:

Monday, 6 March 2017

IMT-2020 (5G) Requirements


The ITU has just agreed on key 5G performance requirements for IMT-2020. A new draft report, ITU-R M.[IMT-2020.TECH PERF REQ], is expected to be finally approved by ITU-R Study Group 5 at its next meeting in November 2017. The press release says "5G mobile systems to provide lightning speed, ultra-reliable communications for broadband and IoT".


The following is from the ITU draft report:

The key minimum technical performance requirements defined in this document are for the purpose of consistent definition, specification, and evaluation of the candidate IMT-2020 radio interface technologies (RITs)/Set of radio interface technologies (SRIT) in conjunction with the development of ITU-R Recommendations and Reports, such as the detailed specifications of IMT-2020. The intent of these requirements is to ensure that IMT-2020 technologies are able to fulfil the objectives of IMT-2020 and to set a specific level of performance that each proposed RIT/SRIT needs to achieve in order to be considered by ITU-R for IMT-2020.


Peak data rate: Peak data rate is the maximum achievable data rate under ideal conditions (in bit/s), which is the received data bits assuming error-free conditions assignable to a single mobile station, when all assignable radio resources for the corresponding link direction are utilized (i.e., excluding radio resources that are used for physical layer synchronization, reference signals or pilots, guard bands and guard times). 

This requirement is defined for the purpose of evaluation in the eMBB usage scenario. 
The minimum requirements for peak data rate are as follows:
Downlink peak data rate is 20 Gbit/s.
Uplink peak data rate is 10 Gbit/s.


Peak spectral efficiency: Peak spectral efficiency is the maximum data rate under ideal conditions normalised by channel bandwidth (in bit/s/Hz), where the maximum data rate is the received data bits assuming error-free conditions assignable to a single mobile station, when all assignable radio resources for the corresponding link direction are utilized (i.e. excluding radio resources that are used for physical layer synchronization, reference signals or pilots, guard bands and guard times).

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
The minimum requirements for peak spectral efficiencies are as follows: 
Downlink peak spectral efficiency is 30 bit/s/Hz.
Uplink peak spectral efficiency is 15 bit/s/Hz.
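Dividing the peak data rates by the peak spectral efficiencies gives a feel for the bandwidth these targets implicitly assume; a back-of-envelope sketch:

```python
# Under ideal conditions: rate (bit/s) = spectral efficiency (bit/s/Hz) x bandwidth (Hz).
def implied_bw_mhz(rate_gbps, eff_bps_per_hz):
    """Bandwidth in MHz implied by a peak rate and peak spectral efficiency."""
    return rate_gbps * 1e9 / eff_bps_per_hz / 1e6

print(round(implied_bw_mhz(20, 30)))  # 667 -> ~667 MHz behind the DL targets
print(round(implied_bw_mhz(10, 15)))  # 667 -> same ~667 MHz for the UL targets
```

In other words, both the 20 Gbit/s downlink and 10 Gbit/s uplink peaks assume roughly the same aggregated bandwidth (several hundred MHz), with the uplink target set at half the downlink spectral efficiency.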


User experienced data rate: User experienced data rate is the 5% point of the cumulative distribution function (CDF) of the user throughput. User throughput (during active time) is defined as the number of correctly received bits, i.e. the number of bits contained in the service data units (SDUs) delivered to Layer 3, over a certain period of time.

This requirement is defined for the purpose of evaluation in the related eMBB test environment.
The target values for the user experienced data rate are as follows in the Dense Urban – eMBB test environment: 
Downlink user experienced data rate is 100 Mbit/s
Uplink user experienced data rate is 50 Mbit/s


5th percentile user spectral efficiency: The 5th percentile user spectral efficiency is the 5% point of the CDF of the normalized user throughput. The normalized user throughput is defined as the number of correctly received bits, i.e., the number of bits contained in the SDUs delivered to Layer 3, over a certain period of time, divided by the channel bandwidth and is measured in bit/s/Hz. 

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
Indoor Hotspot – eMBB - Downlink: 0.3 bit/s/Hz Uplink: 0.21 bit/s/Hz
Dense Urban – eMBB - Downlink: 0.225 bit/s/Hz Uplink: 0.15 bit/s/Hz
Rural – eMBB - Downlink: 0.12 bit/s/Hz Uplink: 0.045 bit/s/Hz
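The "5% point of the CDF" used in both of the requirements above is simply the 5th percentile of per-user throughput samples, i.e. the rate that 95% of users exceed; a sketch using synthetic (lognormal) throughput data, purely for illustration:

```python
import numpy as np

# Synthetic per-user throughput samples (Mbit/s); the lognormal shape and
# parameters are arbitrary, just to illustrate the percentile calculation.
rng = np.random.default_rng(0)
throughput_mbps = rng.lognormal(mean=5, sigma=1, size=10_000)

# The 5% point of the CDF == the 5th percentile of the samples.
p5 = np.percentile(throughput_mbps, 5)
print(f"5th percentile user throughput: {p5:.1f} Mbit/s")
```

Defining the requirement at the 5th percentile rather than the mean is deliberate: it constrains the experience of cell-edge users, not just the average.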


Average spectral efficiency: Average spectral efficiency  is the aggregate throughput of all users (the number of correctly received bits, i.e. the number of bits contained in the SDUs delivered to Layer 3, over a certain period of time) divided by the channel bandwidth of a specific band divided by the number of TRxPs and is measured in bit/s/Hz/TRxP.

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
Indoor Hotspot – eMBB - Downlink: 9 bit/s/Hz/TRxP Uplink: 6.75 bit/s/Hz/TRxP
Dense Urban – eMBB - Downlink: 7.8 bit/s/Hz/TRxP Uplink: 5.4 bit/s/Hz/TRxP
Rural – eMBB - Downlink: 3.3 bit/s/Hz/TRxP Uplink: 1.6 bit/s/Hz/TRxP


Area traffic capacity: Area traffic capacity is the total traffic throughput served per geographic area (in Mbit/s/m²). The throughput is the number of correctly received bits, i.e. the number of bits contained in the SDUs delivered to Layer 3, over a certain period of time.

This requirement is defined for the purpose of evaluation in the related eMBB test environment.
The target value for area traffic capacity in the downlink is 10 Mbit/s/m² in the Indoor Hotspot – eMBB test environment.
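Area traffic capacity can be decomposed as average spectral efficiency × bandwidth × TRxP density. As a hypothetical back-of-envelope check of what the Indoor Hotspot target implies (assuming the 100 MHz minimum bandwidth from later in the report and the 9 bit/s/Hz/TRxP Indoor Hotspot downlink figure):

```python
# Area capacity (bit/s/m2) = avg spectral efficiency x bandwidth x TRxP density.
target_bps_per_m2 = 10e6   # 10 Mbit/s/m2 Indoor Hotspot downlink target
bandwidth_hz = 100e6       # assumed: the 100 MHz minimum bandwidth requirement
avg_se = 9                 # bit/s/Hz/TRxP, Indoor Hotspot downlink requirement

trxp_per_m2 = target_bps_per_m2 / (avg_se * bandwidth_hz)
print(f"One TRxP per {1 / trxp_per_m2:.0f} m2")  # ~ one TRxP per 90 m2
```

That is roughly one transmission/reception point per 90 m², i.e. an extremely dense small-cell deployment, which is why this target is defined only for the indoor hotspot environment.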


User plane latency: User plane latency is the contribution of the radio network to the time from when the source sends a packet to when the destination receives it (in ms). It is defined as the one-way time it takes to successfully deliver an application layer packet/message from the radio protocol layer 2/3 SDU ingress point to the radio protocol layer 2/3 SDU egress point of the radio interface in either uplink or downlink in the network for a given service in unloaded conditions, assuming the mobile station is in the active state. 
This requirement is defined for the purpose of evaluation in the eMBB and URLLC usage scenarios.
The minimum requirements for user plane latency are
4 ms for eMBB
1 ms for URLLC 
assuming unloaded conditions (i.e., a single user) for small IP packets (e.g., 0 byte payload + IP header), for both downlink and uplink.


Control plane latency: Control plane latency refers to the transition time from a most “battery efficient” state (e.g. Idle state) to the start of continuous data transfer (e.g. Active state).
This requirement is defined for the purpose of evaluation in the eMBB and URLLC usage scenarios.
The minimum requirement for control plane latency is 20 ms. Proponents are encouraged to consider lower control plane latency, e.g. 10 ms.


Connection density: Connection density is the total number of devices fulfilling a specific quality of service (QoS) per unit area (per km²).

This requirement is defined for the purpose of evaluation in the mMTC usage scenario.
The minimum requirement for connection density is 1 000 000 devices per km².


Energy efficiency: Network energy efficiency is the capability of a RIT/SRIT to minimize the radio access network energy consumption in relation to the traffic capacity provided. Device energy efficiency is the capability of the RIT/SRIT to minimize the power consumed by the device modem in relation to the traffic characteristics. 
Energy efficiency of the network and the device can relate to the support for the following two aspects:
a) Efficient data transmission in a loaded case;
b) Low energy consumption when there is no data.
Efficient data transmission in a loaded case is demonstrated by the average spectral efficiency.

This requirement is defined for the purpose of evaluation in the eMBB usage scenario.
The RIT/SRIT shall have the capability to support a high sleep ratio and long sleep duration. Proponents are encouraged to describe other mechanisms of the RIT/SRIT that improve the support of energy efficient operation for both network and device.


Reliability: Reliability relates to the capability of transmitting a given amount of traffic within a predetermined time duration with high success probability.

This requirement is defined for the purpose of evaluation in the URLLC usage scenario. 
The minimum requirement for reliability is a 1−10⁻⁵ success probability of transmitting a layer 2 PDU (protocol data unit) of 32 bytes within 1 ms at the channel quality of the coverage edge for the Urban Macro-URLLC test environment, assuming small application data (e.g. 20 bytes application data + protocol overhead).
Proponents are encouraged to consider larger packet sizes, e.g. layer 2 PDU size of up to 100 bytes.


Mobility: Mobility is the maximum mobile station speed at which a defined QoS can be achieved (in km/h).

The following classes of mobility are defined:
Stationary: 0 km/h
Pedestrian: 0 km/h to 10 km/h
Vehicular: 10 km/h to 120 km/h
High speed vehicular: 120 km/h to 500 km/h

Mobility classes supported:
Indoor Hotspot – eMBB: Stationary, Pedestrian
Dense Urban – eMBB: Stationary, Pedestrian, Vehicular (up to 30 km/h)
Rural – eMBB: Pedestrian, Vehicular, High speed vehicular 


Mobility interruption time: Mobility interruption time is the shortest time duration supported by the system during which a user terminal cannot exchange user plane packets with any base station during transitions.

This requirement is defined for the purpose of evaluation in the eMBB and URLLC usage scenarios.
The minimum requirement for mobility interruption time is 0 ms.


Bandwidth: Bandwidth is the maximum aggregated system bandwidth. The bandwidth may be supported by single or multiple radio frequency (RF) carriers. The bandwidth capability of the RIT/SRIT is defined for the purpose of IMT-2020 evaluation.

The requirement for bandwidth is at least 100 MHz
The RIT/SRIT shall support bandwidths up to 1 GHz for operation in higher frequency bands (e.g. above 6 GHz). 

In case you missed it, a 5G logo has also been released by 3GPP.


Related posts: