Sunday 9 August 2015

Diameter Security is worse than SS7 Security?


Back in December last year, there was a flurry of news about an SS7 security flaw that allowed hackers to snoop on unsuspecting users' calls and SMS. Blog readers will also be aware that SS7 is being replaced by the Diameter protocol, mainly to simplify roaming while at the same time managing the signalling storm in the networks.


The bad news is that while in the case of SS7 the security issues are due to network implementation and configuration (above pic), the security issues in Diameter seem to be due to the protocol and architecture themselves (below pic).


Diameter is very important for the LTE network architecture and will probably continue to be used in future networks too. It is very important to identify all such issues and iron them out before hackers start exploiting the network vulnerabilities, causing issues for everyone.

The presentation by Cédric Bonnet, Roaming Technical Domain Manager, Orange at Signalling Focus Day of LTE World Summit 2015 is embedded below:


From SS7 to Diameter Security from Zahid Ghadialy

Some important information from this post has been removed due to a valid complaint.

Tuesday 4 August 2015

The Importance of License Exempt Frequency Bands


Some of you may be aware that I am also a Technical Programme Manager with the UK Spectrum Policy Forum. We recently published a whitepaper that we had commissioned from Plum Consulting on the "Future use of Licence Exempt Radio Spectrum". It is an interesting read, not only for spectrum experts but also for people trying to understand the complex world of spectrum.

The report is very well written. Here are a few extracts in purple:

Licence exempt frequency bands are those that can be used by certain applications without the need for prior authorisation or an individual right of use. This does not mean that they are not subject to regulation – use must still comply with pre-defined technical rules to minimise the risk of interference. Most licence exempt bands are harmonised throughout Europe and are shared with other services or applications, such as radars or industrial, scientific and medical (ISM) equipment. Wi-Fi and Bluetooth are probably the most familiar examples of mass-market licence exempt wireless applications, but the bands support many other consumer devices, such as cordless phones, doorbells, car key fobs, central heating controllers, baby monitors and intruder alarms. Looking to the future, licence exempt bands are likely to be a key enabler of wireless machine to machine (M2M) communication applications.

Key benefits of licence exempt bands include:
  • For end-users:
    • Greater convenience and flexibility by avoiding the need for lengthy runs of cable in home and work environments
    • Ability to connect mobile devices to a fixed broadband network, reducing dependence on the mobile network and potentially saving costs both for the service provider and the end-user
    • Enhanced convenience, safety and security, e.g. through installation of low cost wireless alarm systems or ability to unlock vehicles remotely rather than fumbling with keys
  • For equipment vendors and operators:
    • Facilitating market entry – there is no need to acquire a licence to deploy a service
    • Enabling niche applications or services to be addressed quickly and cheaply using existing technology and spectrum – this has been particularly effective in serving new machine to machine (M2M) applications in areas such as health, transport and home automation.
    • Providing certainty about spectrum access – there is no need to compete or pay for spectrum access (though the collective nature of spectrum use means quality of service cannot be guaranteed)
    • The ability to extend the reach of fixed communication networks, by providing wireless local area connectivity in homes, businesses and at public traffic hotspots.
The two most notable drawbacks are the inability to guarantee quality of service and the more limited geographic range that is typically available (reflecting the lower power limits that apply to these bands). Licence exempt wireless applications cannot claim protection from interference arising from other users or radio services. They operate in shared frequency bands and must not themselves cause harmful interference to other radio services.

From a regulator’s perspective, licence exempt bands can be more problematic than licensed bands in terms of refarming spectrum, since it is difficult to prevent the continued deployment of legacy equipment in the bands or to monitor effectively their utilisation. There is also generally no control over numbers and / or location of devices, which can make sharing difficult and limits the amount of spectrum that can be used in this way.

In Europe, regulation of licence exempt bands is primarily dealt with at an international level by European institutions. Most bands are fully harmonised, whereby free circulation of devices that comply with the relevant standards is effectively mandated throughout the EU. However some bands are subject to “soft” harmonisation, where the frequency limits and technical characteristics are harmonised but adoption of the band is left to national administrations to decide.

A key recommendation, which I think would be very interesting and useful, is: Promote further international harmonisation of licence exempt bands, in particular the recently identified 870 – 876 MHz and 915 – 921 MHz bands that are likely to be critical for supporting future M2M demand growth in Europe.

Note that a similar sub-1GHz band has been recommended for 5G for M2M/IoT. The advantage of low frequencies is that the coverage area is very large, which suits devices with low data rates. Depending on how the final 5G is positioned, it may well use the licence exempt bands, perhaps with an LAA/LTE-U kind of approach.

The whitepaper is embedded below and is available to download from here:




Sunday 26 July 2015

LTE vs TETRA for Critical Communications

Some time back I was reading this interview between Martin Geddes and Peter Clemons on 'The Crisis in UK Critical Communications'. If you haven't read it, I urge you to do so here. One thing that stood out was the following:

LTE was not designed for critical communications.

Commercial mobile operators have moved from GSM to UMTS to WCDMA networks to reflect the strong growth in demand for mobile data services. Smartphones are now used for social media and streaming video. LTE technology fulfils a need to supply cheap mass market data communications.

So LTE is a data service at heart, and reflects the consumer and enterprise market shift from being predominantly voice-centric to data-centric. In this wireless data world you can still control quality to a degree. So with OFDM-A modulation we have reduced latency. We have improved how we allocate different resource blocks to different uses.

The marketing story is that we should be able to allocate dedicated resources to emergency services, so we can assure voice communications and group calling even when the network is stressed. Unfortunately, this is not the case. Even the 3GPP standards bodies and mobile operators have recognised that there are serious technology limitations.
This means they face a reputational risk in delivering a like-for-like mission-critical voice service.

Won’t this be fixed by updated standards?
The TETRA Critical Communications association (TCCA) began to engage with the 3GPP standards process in 2012. 3GPP then reached out to peers in the USA and elsewhere: the ESMCP project here in the UK, the US FirstNet programme, and the various European associations.

These lobbied 3GPP for capabilities specifically aimed at critical communications requirements. At the Edinburgh meeting in September 2014, 3GPP set up the SA6 specification group, the first new group in a decade.

The hope is that by taking the critical communications requirement into a separate stream, it will no longer hold up the mass market release 12 LTE standard. Even with six meetings a year, this SA6 process will be a long one. By the end of the second meeting it had (as might be expected) only got as far as electing the chairman.

It will take time to scope out what can be achieved, and develop the critical communications functionality. For many players in the 3GPP process this is not a priority, since they are focusing solely on mass market commercial applications.

A similar point was made in another critical communications blog here:

LTE has emerged as a possible long term replacement for TETRA in this age of mobile broadband and data. LTE offers unrivalled broadband capabilities for such applications as body worn video streaming, digital imaging, automatic vehicle location, computer-assisted dispatch, mobile and command centre apps, web access, enriched e-mail, mobile video surveillance apps such as facial recognition, enhanced telemetry/remote diagnostics, GIS and many more. However, Phil Kidner, CEO of the TCCA, pointed out recently that it will take many LTE releases to get us to the point where LTE can match TETRA on key features such as group working, pre-emptive services, network resilience, call set-up times and direct mode.
The result is that we are at a point where we have two technologies, one offering what end users want and the other offering what end users need. This has altered the discussion: instead of looking at LTE as a replacement, we can look at LTE as a complementary technology, used alongside TETRA to give end users the best of both worlds. Now the challenge appears to be how we can integrate TETRA and LTE to meet the needs and wants of our emergency services, and it seems that if we want to look for guidance and lessons on the possible harmony of TETRA and LTE we should look at the Middle East.
While I was researching, I came across this interesting presentation (embedded below) from the LTE World Summit 2015.





The above is an interesting SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis for TETRA and LTE. While I can understand the point that LTE is as yet unproven, I agree about the lack of spectrum and appropriate bands.

I have been told in the past that it's not just the technology which is an issue; TETRA has many functionalities that would need to be duplicated in LTE.



As you can see from the timeline above, while Rel-13 and Rel-14 will have some of these features, there are still other features that need to be included. Without them, the safety of critical communications workers and the public could be compromised.

The complete presentation is below. Feel free to voice your opinions via the comments.


Tuesday 21 July 2015

TDD-FDD Joint Carrier Aggregation deployed


As per Analysys Mason, of the 413 commercial LTE networks that had been launched worldwide by the end of 2Q 2015, FD-LTE accounts for 348 (or 84%) of them, while TD-LTE accounts for only 55 (or 13%). Having said that, TD-LTE will grow in market share, thanks to the unpaired spectrum that many operators secured during the auctions. This, combined with LTE-A small cells (as recently demoed by Nokia Networks), can help offload traffic from hotspots.

Light Reading had an interesting summary of TD-LTE rollouts and status that is further summarised below:
  • China Mobile has managed to sign up more than 200 million subscribers in just 19 months, making it the fastest-growing operator in the world today. It has now deployed 900,000 basestations in more than 300 cities. From next year, it is also planning to upgrade to TDD+, which combines carrier aggregation and MIMO to deliver download speeds of up to 5 Gbit/s and a fivefold improvement in spectrum efficiency. TDD+ will be commercially available next year, and while it is not an industry standard, executives say several elements have been accepted by 3GPP.
  • SoftBank Japan has revealed plans to trial LTE-TDD Massive MIMO, a likely 5G technology as well as an important 4G enhancement, from the end of the year. Even though it was one of the world's first operators to go live with LTE-TDD, it has until now focused mainly on its LTE-FDD network. It has rolled out 70,000 FDD basestations, compared with 50,000 TDD units. But TDD is playing a sharply increasing role. The operator expects to add another 10,000 TDD basestations this year to deliver additional capacity to Japan's data-hungry consumers. By 2019, at least half of SoftBank's traffic is expected to run over the TDD network.

According to the Analysys Mason article, operators consider TD-LTE to be an attractive BWA (broadband wireless access) replacement for WiMAX because:

  • most WiMAX deployments use unpaired, TD spectrum in the 2.5GHz and 3.5GHz bands, and these bands have since been designated by the 3GPP as being suitable for TD-LTE
  • TD-LTE is 'future-proof' – it has a reasonably long evolution roadmap and should remain a relevant and supported technology throughout the next decade
  • TD-LTE enables operators to reserve paired FD spectrum for mobile services, which mitigates against congestion in the spectrum from fixed–mobile substitution usage profiles.

People who are interested in looking further into migrating from WiMAX to TD-LTE may want to read this case study here.


I have looked at joint FDD-TDD CA earlier here. The following is from the 4G Americas whitepaper on carrier aggregation, embedded here.

Previously, CA has been possible only between FDD and FDD spectrum or between TDD and TDD spectrum. 3GPP has finalized the work on TDD-FDD CA, which offers the possibility to aggregate FDD and TDD carriers jointly. The main target with introducing the support for TDD-FDD CA is to allow the network to boost the user throughput by aggregating both TDD and FDD toward the same UE. This will allow the network to boost the UE throughput independently from where the UE is in the cell (at least for DL CA).

TDD and FDD CA would also allow dividing the load more quickly between the TDD and FDD frequencies. In short, TDD-FDD CA extends CA to be applicable also in cases where an operator has spectrum allocation in both TDD and FDD bands. The typical benefits of CA – more flexible and efficient utilization of spectrum resources – are also made available for a combination of TDD and FDD spectrum resources. The Rel-12 TDD-FDD CA design supports either a TDD or FDD cell as the primary cell.

There are several different target scenarios in 3GPP for TDD-FDD CA, but there are two main scenarios that 3GPP aims to support. The first scenario assumes that the TDD-FDD CA is done from the same physical site, typically a macro eNB. In the second scenario, the macro eNB provides either the TDD or the FDD frequency, and the other frequency is provided from a Remote Radio Head (RRH) deployed at another physical location. The typical use case for the second scenario is that the macro eNB provides the FDD frequency and the RRH provides the TDD frequency.

Nokia Networks were the first in the world with a TDD-FDD CA demo, back in February 2014. In fact they also have a nice video here. Surprisingly there wasn't much news since then. Recently Ericsson announced the first commercial implementation of FDD/TDD carrier aggregation (CA) on Vodafone’s network in Portugal. Vodafone’s current trial in its Portuguese network uses 15 MHz of band 3 (FDD 1800) and 20 MHz of band 38 (TDD 2600). Qualcomm’s Snapdragon 810 SoC was used for measurement and testing.
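As a rough illustration of what aggregating those two carriers could mean for peak downlink throughput, here is some back-of-the-envelope arithmetic in Python. The peak spectral efficiency and the TDD downlink share are my own assumptions for the sketch, not figures reported from the trial:

```python
# Illustrative peak-rate arithmetic for the trial carriers mentioned above:
# 15 MHz of band 3 (FDD 1800) aggregated with 20 MHz of band 38 (TDD 2600).
# Assumptions (mine, not Vodafone's or Ericsson's): 2x2 MIMO with 64QAM giving
# a peak downlink spectral efficiency of ~7.5 bit/s/Hz, and ~75% of the TDD
# carrier's subframes usable for downlink.

PEAK_SE_BPS_PER_HZ = 7.5
TDD_DL_SHARE = 0.75

fdd_dl = 15e6 * PEAK_SE_BPS_PER_HZ                  # FDD: whole carrier available for DL
tdd_dl = 20e6 * PEAK_SE_BPS_PER_HZ * TDD_DL_SHARE   # TDD: only the DL share of subframes

print(f"FDD 15 MHz alone : {fdd_dl / 1e6:5.0f} Mbit/s")
print(f"TDD 20 MHz alone : {tdd_dl / 1e6:5.0f} Mbit/s")
print(f"Aggregated       : {(fdd_dl + tdd_dl) / 1e6:5.0f} Mbit/s")
```

The point is simply that the TDD carrier, even though it shares its subframes between uplink and downlink, still adds a large slice of downlink capacity on top of the FDD carrier when the two are aggregated towards the same UE.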

3 Hong Kong is another operator that has revealed its plans to launch FDD-TDD LTE-Advanced in early 2016 after demonstrating the technology on its live network.

The operator used equipment supplied by Huawei to aggregate an FDD carrier in either of the 1800 MHz or 2.6 GHz bands with a TDD carrier in the 2.3 GHz band. 3 Hong Kong also used terminals equipped with Qualcomm's Snapdragon X12 LTE processor.

3 Hong Kong already offers FDD LTE-A using its 1800-MHz and 2.6-GHz spectrum, and is in the midst of deploying TD-LTE with a view to launching later this year.

The company said it expects devices that can support hybrid FDD-TDD LTE-A to be available early next year "and 3 Hong Kong is expected to launch the respective network around that time."

3 Hong Kong also revealed it plans to commercially launch tri-carrier LTE-A in the second half of 2016, and is working to aggregate no fewer than five carriers by refarming its 900-MHz and 2.1-GHz spectrum.

TDD-FDD CA is another tool in the network operator's toolbox to help plan the network and make it better. Let's hope more operators take the opportunity to deploy it.

Sunday 12 July 2015

S8HR: Standardization of New VoLTE Roaming Architecture

VoLTE is a very popular topic on this blog. A basic VoLTE document from Anritsu has over 40K views and my summary from last year's LTE Voice Summit has over 30K views. I assume this is not just due to the complexity of this feature.

When I attended the LTE Voice summit last year, of the many solutions being proposed for roaming, 'Roaming Architecture for Voice over LTE with Local Breakout (RAVEL)' was being touted as the preferred solution, even though many vendors had reservations.

Since then, the GSMA has endorsed a new VoLTE roaming architecture, S8HR, as a candidate for VoLTE roaming. Unlike previous architectures, S8HR does not require the deployment of an IMS platform in the VPLMN. This is advantageous because it shortens time-to-market and provides services universally, without having to depend on the capability of the VPLMN.



Telecom Italia has a nice quick summary, reproduced below:

S8HR's simplicity, however, is not only its strength but also its weakness, as it is the source of some serious technical issues that will have to be solved. The analysis of these issues is on the Rel-13 3GPP agenda for the coming months, but may overflow into Rel-14. Let's see what these issues are in more detail:


  • Regulatory requirements - S8HR roaming architecture needs to meet all the current regulatory requirements applicable to voice roaming, specifically:
    • Support of emergency calls - The issues in this context are several. For example, authenticated emergency calls rely on the existence of an IMS NNI between VPLMN and HPLMN (which S8HR does not provide); conversely, unauthenticated emergency calls, although technically feasible in S8HR, are allowed only in some countries, subject to the local regulation of the VPLMN. Also, for a non-UE-detectable IMS emergency call, the P-CSCF in the HPLMN needs to be capable of deciding the subsequent action (e.g. translate the dialed number and progress the call, or reject it with the indication to set up an emergency call instead), taking the VPLMN ID into account. A configuration of local emergency numbers per Mobile Country Code on the P-CSCF may thus be needed.
    • Support of Lawful Interception (LI) & data retention for inbound roamers in the VPLMN - S8HR offers no solution to the case where interception is required in the VPLMN for inbound roamers. 3GPP is required to define a solution that fulfils such a vital regulatory requirement, as done today in circuit switched networks. Of course the VPLMN and HPLMN can agree in their bilateral roaming agreement to disable confidentiality protection to support inbound roamer LI, but is this practice really viable from a regulatory point of view?
  • Voice call continuity – The issue is that when inbound roamers lose LTE coverage and enter a 2G/3G CS area, Single Radio Voice Call Continuity (SRVCC) has to be performed involving the HPLMN in a totally different way than in the current specification (i.e. without any IMS NNI being deployed).
  • Coexistence of LBO and S8HR roaming architectures will have to be studied, since an operator may need to support both the LBO and S8HR VoLTE roaming architecture options for roaming with different operators, on the basis of bilateral agreements and depending on capability.
  • Other issues relate to the capability of the home-based S-CSCF and TAS (Telephony Application Server) to be made aware of the VPLMN identity for charging purposes and to enable the TAS to subsequently perform communication barring supplementary services. Also, where the roaming user calls a geo-local number (e.g. a short code or a premium number), the IMS entities in the HPLMN must do number resolution to correctly route the call.
From preliminary discussions held at Working Group level in SA2 (architecture) and SA3 (security) in April, it was felt useful to create a new 3GPP Technical Report to perform comprehensive technical analysis on the subject. Thus it is expected that the discussions will continue in the next months until the end of 2015 and will overheat Release 13 agenda due to their commercial and “political” nature. Stay tuned to monitor the progress of the subject or contact the authors for further information!
NTT Docomo also did some trials back in February and got some brilliant results:

In the trials, DOCOMO and KT achieved the world's first high-definition voice and video call with full end-to-end quality of service. Also, DOCOMO and Verizon achieved the world's first transoceanic high-definition VoLTE roaming calls. DOCOMO has existing commercial 3G and 4G roaming relations with Verizon Wireless and KT.
The calls were made on an IP eXchange (IPX) and network equipment to replicate commercial networks. With only two months of preparation, which also proved the technology's feasibility of speedy commercialization, the quality of VoLTE roaming calls using S8HR architecture over both short and long distances was proven to be better than that of existing 3G voice roaming services.


In fact, NTT Docomo has already said, based on the survey from GSMA's Network 2020 programme, that 80% of network operators want this to be supported by the standards and 46% of operators already have a plan to support it.


The architecture has the following technical characteristics:
(1) Bearers for IMS services are established on the S8 reference point, just as in LTE data roaming.
(2) All IMS nodes are located at Home Public Land Mobile Network (HPLMN), and all signaling and media traffic for the VoLTE roaming service go through HPLMN.
(3) IMS transactions are performed directly between the terminal and P-CSCF at HPLMN. Accordingly, Visited Public Land Mobile Network (VPLMN) and interconnect networks (IPX/GRX) are not service-aware at the IMS level. The services can only be differentiated by APN or QoS levels.

These three technical features make it possible to provide all IMS services by HPLMN only and to minimize functional addition to VPLMN. As a result, S8HR shortens the time-to-market for VoLTE roaming services.

Figure 2 shows the attach procedure for S8HR VoLTE roaming. From Steps 1 to 3, there is no significant difference from the LTE data roaming attach procedure. In Step 4, HSS sends an update location answer message to MME. In order for the MME to select the PGW in HPLMN (Step 5), the MME must set the information element VPLMN Dynamic Address “Allowed,” which is included in the subscribed data, to “Not Allowed.” In Step 6, the bearer for SIP signaling is created between SGW and PGW with QCI=5. MME sends an attach accept message to the terminal with an IMS Voice over PS Session Support Indication information element, which indicates that VoLTE is supported. The information element is set on the basis of the MME’s internal configuration specifying whether there is a VoLTE roaming agreement to use S8HR. If no agreement exists between two PLMNs, the information element will not be set.
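To make the procedure above a little more concrete, here is a minimal sketch in Python of the MME-side decisions it describes. This is purely illustrative and not any real MME implementation; the data structures, the PGW naming and the example PLMN codes are all hypothetical:

```python
from dataclasses import dataclass

QCI_IMS_SIGNALLING = 5  # QCI used for the default bearer carrying SIP signalling

@dataclass
class AttachAccept:
    selected_pgw: str
    default_bearer_qci: int
    ims_voice_over_ps_supported: bool

def s8hr_attach(hplmn: str, vplmn: str, s8hr_agreements: set) -> AttachAccept:
    # Step 5: with "VPLMN Dynamic Address Allowed" treated as Not Allowed, the MME
    # always selects a PGW in the home network, so the bearer is home routed over S8.
    selected_pgw = f"pgw.ims.mnc{hplmn[3:]}.mcc{hplmn[:3]}.example"  # hypothetical FQDN

    # Step 6: "IMS Voice over PS Session Supported" is indicated to the UE only if the
    # MME is configured with an S8HR roaming agreement between the two PLMNs.
    ims_voice = (hplmn, vplmn) in s8hr_agreements

    return AttachAccept(selected_pgw, QCI_IMS_SIGNALLING, ims_voice)

# Inbound roamer from HPLMN 44010 attached in VPLMN 31041, with an agreement in place:
print(s8hr_attach("44010", "31041", {("44010", "31041")}))
```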

The complete article from the NTT Docomo technical journal is embedded below:



Sunday 5 July 2015

A tale of two Smart Cities

Over the last few months I have heard quite a few talks about smart cities. Here are two that I thought were worth posting, plus a very good TEDx talk at the bottom.



I think we all agree that more and more people will move from rural to urban areas and the cities will not only grow in population but also in size. The infrastructure will have to grow to be able to cope with the influx of people and increased demand on services.



I guess in most developed nations we have the 1.0 Era Digital City, which is a long way from the 3.0 Era Smart City.



To be a fully fledged 3.0 Smart City, every aspect of our life may need to evolve into "Smart". Anyway, here is the complete presentation:





While IoT will be important, access, big data, applications, etc. will all have a role to play.



If you want to find out more about the Milton Keynes smart city, also see this video on YouTube. There are driverless pods and other autonomous cars which may be considered an initial step towards smart cities; see this interesting video here.

Finally, here is the TEDx talk about designing these smart cities for the future:


Sunday 28 June 2015

LTE-M a.k.a. Rel-13 Cellular IoT

Some months back I wrote about the LTE Category 0 devices here. While Rel-12 LTE Cat 0 devices are a first step in the right direction, they are not enough for small sensor-type devices where long battery life is extremely important. As can be seen in the picture above, this will represent a huge market in 2025.


To cater for this requirement of extremely long battery life, it is proposed that Rel-13 makes certain modifications for these low-throughput sensor-type devices. The main modification would be that the devices will work in a 1.4MHz bandwidth only, regardless of the bandwidth of the cell. The UE transmit power will be a maximum of 20dBm and the throughput would be further reduced to a maximum of 200kbps.
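To get a feel for why deep-sleep behaviour matters more than peak rate for these devices, here is a back-of-the-envelope battery-life calculation. Every number in it (battery capacity, current draws, reporting rate) is an assumption of mine purely for illustration, not a figure from the Rel-13 work:

```python
# Rough battery-life estimate for a low-throughput sensor that wakes up briefly to
# report and spends the rest of the time in deep sleep. All values are illustrative.

BATTERY_MAH               = 2000     # e.g. a pair of AA-class cells
SLEEP_CURRENT_MA          = 0.005    # deep-sleep current
ACTIVE_CURRENT_MA         = 100      # average current during a transmit/receive burst
REPORTS_PER_DAY           = 24       # one short report per hour
ACTIVE_SECONDS_PER_REPORT = 2

active_h_per_day = REPORTS_PER_DAY * ACTIVE_SECONDS_PER_REPORT / 3600
sleep_h_per_day  = 24 - active_h_per_day
mah_per_day = (active_h_per_day * ACTIVE_CURRENT_MA
               + sleep_h_per_day * SLEEP_CURRENT_MA)

print(f"~{BATTERY_MAH / mah_per_day / 365:.1f} years on a {BATTERY_MAH} mAh battery")
```

Under these assumptions the battery lasts several years, and the energy budget is dominated by the short active bursts rather than sleep, which is why reducing the device's bandwidth, transmit power and complexity matters so much for this class of device.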

The presentation, from the Cambridge Wireless Future of Wireless International Conference, is embedded below:



See also:

Sunday 21 June 2015

Broadband Access via Integrated Terrestrial & Satellite Systems


Last week I attended an event at the University of Surrey about providing high speed connectivity to un-served and under-served areas in the future. While there is no arguing that satellites are a great option for unserved areas, the underserved areas can really benefit from such initiatives.


The way this is being proposed is to have a specialised Intelligent User Gateway (IUG) that can connect to ADSL, mobile and satellite. The assumption is that in areas of poor connectivity, ADSL can provide 2Mbps and mobile could do something similar, up to 8Mbps. The satellites can easily do 20Mbps.

While satellite broadband has the advantage of high speeds, it often suffers from high latency. ADSL, on the other hand, has very low latency but may not be good enough for streaming kinds of applications. Mobile generally falls in between for latency and speed. Using Multipath TCP and some intelligent routing algorithms, decisions can be taken to optimise for latency and speed.

I did see some impressive demos in the lab and it did what it says on the tin. The real challenge would be the business models. While ADSL can offer unlimited internet, both mobile and satellite broadband will have caps. I was told that limits could be imposed so that once the mobile/satellite data allowance is used up, only ADSL would be used. Maybe a more complex algorithm could be implemented in future that also includes the cost and priority of the application/service being used.
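As a toy illustration of what such a policy could look like, here is a small sketch in Python. The bandwidth, latency and cap figures, and the thresholds, are all my own illustrative assumptions rather than anything from the BATS project:

```python
# Toy per-flow path selection across ADSL, mobile and satellite, based on how
# latency-sensitive the application is and how much capped allowance remains.

PATHS = {
    # name:      (downlink Mbit/s, round-trip ms, monthly cap in GB or None)
    "adsl":      (2,   30, None),
    "mobile":    (8,   60, 10),
    "satellite": (20, 600, 20),
}

def choose_path(latency_sensitive: bool, needs_mbps: float, used_gb: dict) -> str:
    candidates = []
    for name, (mbps, rtt_ms, cap_gb) in PATHS.items():
        if cap_gb is not None and used_gb.get(name, 0) >= cap_gb:
            continue                       # allowance exhausted: keep it for another day
        if latency_sensitive and rtt_ms > 100:
            continue                       # e.g. VoIP or gaming should avoid the satellite hop
        candidates.append((name, mbps))
    for name, mbps in candidates:          # prefer the uncapped path if it is fast enough
        if name == "adsl" and mbps >= needs_mbps:
            return name
    return max(candidates, key=lambda c: c[1])[0] if candidates else "adsl"

print(choose_path(latency_sensitive=True,  needs_mbps=1,  used_gb={}))                 # adsl
print(choose_path(latency_sensitive=False, needs_mbps=15, used_gb={}))                 # satellite
print(choose_path(latency_sensitive=False, needs_mbps=15, used_gb={"satellite": 20}))  # mobile
```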

An example would be that sometimes I want to watch some long videos on YouTube but I am happy to start buffering an hour in advance. It's not critical that I watch them right now. I would be more than happy to save my mobile/satellite broadband data allowance for some other day when I need to watch things more urgently. If the end of the month is coming and I have a lot of data allowance left, then maybe I don't mind using the quota, otherwise I will lose the allowance anyway. It's always challenging to put this intelligence into the routing decision algorithms though.

Anyway, the combined presentations are embedded below and you can download them from the BATS project page here:



Tuesday 16 June 2015

Have researchers moved on past 5G on to 6G Wireless?


As I am active on multiple social networks, including blogs, Twitter, Facebook, LinkedIn, etc., it's always tricky to share information from one on to another. Some time back I tweeted about the 6G research that seems to have started, according to an article in the FT.

While I had a few retweets and interactions, I realised that it's always challenging to search through tweets, so I decided to add this to a blog post, which is always easier to look up.

So the FT Article states that:

Even as 5G remains a distant prospect for most mobile users, some scientists have already begun to work on plans for 6G services in the future.

To an extent, terms such as 4G and 5G have become as much about marketing equipment as any single technology breakthrough, with incremental improvements to technical specifications often arbitrarily given names such as 3.5G or 4.5G.

But that has not stopped people from thinking about what 6G could look like — and in the UK at least, the prediction is for a “quantum” leap.

Britain has created a “national quantum strategy” to identify areas where advances in technology will have the greatest impact on daily lives in the future. The strategy was developed by the Quantum Technologies Strategic Advisory Board, a government funded agency, which oversees the £270m programme. 

One of the key goals will be the development of faster communications for mobile devices. The advisory board predicts that the market for quantum products and technology has the potential to become a £1bn industry, even if details of how mobile technology can use quantum theory — science at an atomic level — are thin on the ground.

So why did I suddenly think about 6G? Because I have had a few discussions where the research community feel that they should focus on technologies beyond 5G, something that would be a game changer and would change the way we do communications. To be honest, new ways of communicating have been found (like LED-Fi / Li-Fi) but they have not really been groundbreaking.

If you have any ideas or suggestions, add them in the comments.

Sunday 14 June 2015

Using 8T8R Antennas for TD-LTE


People often ask at various conferences if TD-LTE is a fad or something that will continue to exist alongside FDD networks. TDD networks were a bit tricky to implement in the past due to the necessity for the whole network to be time synchronised to make sure there is no interference. Also, if there was another TDD network in an adjacent band, it would have to be time synchronised with the first network too. In areas bordering another country that might have had its own TDD network in this band, that network would have to be time synchronised as well. This complexity meant that most operators were happy to live with FDD networks.

In 5G networks at higher frequencies, it would also make much more sense to use TDD in order to estimate the channel accurately. This is because the same channel is used in downlink and uplink, so the downlink channel can be estimated accurately based on the uplink channel conditions. Due to small transmit time intervals (TTIs), these channel estimates would be quite good. Another advantage is that a beam can be formed and directed exactly at the intended user while appearing as a null to other users.

This is where 8T8R, or 8 transmit and 8 receive antennas, at the base station can help. The more antennas, the better and narrower the beam they can create. This can help send more energy to users at the cell edge and hence provide better and more reliable coverage there.
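As a rough numerical illustration of why more transmit antennas help, here is a small Python sketch. It assumes an idealised i.i.d. Rayleigh channel and perfect channel knowledge at the base station (the reciprocity assumption discussed above), so it is a sketch of the principle rather than a model of any real deployment:

```python
import numpy as np

rng = np.random.default_rng(0)

def average_array_gain(n_tx: int, n_trials: int = 2000) -> float:
    """Average array gain of maximum-ratio transmission with n_tx antennas."""
    gains = []
    for _ in range(n_trials):
        # i.i.d. Rayleigh channel from the n_tx base-station antennas to one user
        h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
        w = np.conj(h) / np.linalg.norm(h)   # beamforming weights from the channel estimate
        gains.append(abs(h @ w) ** 2)        # received power vs a single unit-power antenna
    return float(np.mean(gains))

for n in (2, 4, 8):
    print(f"{n} Tx antennas: ~{10 * np.log10(average_array_gain(n)):.1f} dB average array gain")
```

Each doubling of the transmit antenna count adds roughly 3 dB of array gain towards the served user, which is the extra energy at the cell edge referred to above.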

SONWav Operator Solution

What do these antennas look like? 8T8R needs 8 antennas at the base station cell, and this is typically delivered using four X-polar columns about half a wavelength apart. I found the above picture on antenna specialist Quintel's page here, where the four-column example is shown on the right. At spectrum bands such as 2.3GHz, 2.6GHz and 3.5GHz, where TD-LTE networks are currently deployed, the antenna width is still practical. Quintel’s webpage also indicates how their technology allows 8T8R to be effectively emulated using only two X-polar columns, thus promising slimline antenna solutions at lower frequency bands. China Mobile and Huawei have claimed to be the first to deploy these four X-pol column 8T8R antennas. Sprint in the USA is another network that has been actively deploying these 8T8R antennas.

There are a couple of interesting tweets that show their kit below:

In fact Sprint has very ambitious plans. The following is from a report in Fierce Wireless:

Sprint's deployment of 8T8R (eight-branch transmit and eight-branch receive) radios in its 2.5 GHz TDD LTE spectrum is resulting in increased data throughput as well as coverage according to a new report from Signals Research. "Thanks to TM8 [transmission mode 8] and 8T8R, we observed meaningful increases in coverage and spectral efficiency, not to mention overall device throughput," Signals said in its executive summary of the report.

The firm said it extensively tested Sprint's network in the Chicago market using Band 41 (2.5 GHz) and Band 25 (1.9 GHz) in April using Accuver's drive test tools and two Galaxy Note Edge smartphones. Signals tested TM8 vs. non-TM8 performance, Band 41 and Band 25 coverage and performance as well as 8T8R receive vs. 2T2R coverage/performance and stand-alone carrier aggregation.

Sprint has been deploying 8T8R radios in its 2.5 GHz footprint, which the company has said will allow its cell sites to send multiple data streams, achieve better signal strength and increase data throughput and coverage without requiring more bandwidth.

The company also has said it will use carrier aggregation technology to combine TD-LTE and FDD-LTE transmission across all of its spectrum bands. In its fourth quarter 2014 earnings call with investors in February, Sprint CEO Marcelo Claure said implementing carrier aggregation across all Sprint spectrum bands means Sprint eventually will be able to deploy 1900 MHz FDD-LTE for uplink and 2.5 GHz TD-LTE for downlink, and ultimately improve the coverage of 2.5 GHz LTE to levels that its 1900 MHz spectrum currently achieves. Carrier aggregation, which is the most well-known and widely used technique of the LTE Advanced standard, bonds together disparate bands of spectrum to create wider channels and produce more capacity and faster speeds.

Alcatel-Lucent has a good article in their TECHzine, an extract from that below:

Field tests on base stations equipped with beamforming and 8T8R technologies confirm the sustainability of the solution. Operators can make the most of transmission (Tx) and receiving (Rx) diversity by adding in Tx and Rx paths at the eNodeB level, and beamforming delivers a direct impact on uplink and downlink performance at the cell edge.

By using 8 receiver paths instead of 2, cell range is increased by a factor of 1.5 – and this difference is emphasized by the fact that the number of sites needed is reduced by nearly 50 per cent. Furthermore, using the beamforming approach in transmission mode generates a specific beam per user which improves the quality of the signal received by the end-user’s device, or user equipment (UE). In fact, steering the radiated energy in a specific direction can reduce interference and improves the radio link, helping enable a better throughput. The orientation of the beam is decided by shifting the phases of the Tx paths based on signal feedback from the UE. This approach can deliver double the cell edge downlink throughput and can increase global average throughput by 65 per cent.

These types of deployments are made possible by using innovative radio heads and antenna solutions.  In traditional deployments, it would require the installation of multiple remote radio heads (RRH) and multiple antennas at the site to reach the same level of performance. The use of an 8T8R RRH and a smart antenna array, comprising 4 cross-polar antennas in a radome, means an 8T8R sector deployment can be done within the same footprint as traditional systems.
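To picture the phase-shifting described in that extract, here is a toy Python example of steering an 8-element array towards a chosen user direction. The element spacing, angle and array model are illustrative only and have nothing to do with Alcatel-Lucent's actual implementation:

```python
import numpy as np

N, d_over_lambda = 8, 0.5                           # 8 elements, half-wavelength spacing
theta0 = np.deg2rad(20)                             # steer the beam 20 degrees off boresight

phases = 2 * np.pi * d_over_lambda * np.arange(N) * np.sin(theta0)
weights = np.exp(-1j * phases) / np.sqrt(N)         # per-Tx-path phase shifts

angles = np.deg2rad(np.arange(-90, 91))             # candidate directions, 1 degree apart
steering = np.exp(1j * 2 * np.pi * d_over_lambda
                  * np.outer(np.arange(N), np.sin(angles)))
pattern_db = 20 * np.log10(np.abs(weights @ steering) + 1e-12)

print(f"beam peaks at {np.degrees(angles[np.argmax(pattern_db)]):.0f} degrees off boresight")
```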



Anyone interested in seeing pictures of different 8T8R antennas, like the one above, can see here. While this page shows Samsung's antennas, you can navigate to equipment from other vendors.

Finally, if you can provide any additional info or feel there is something incorrect, please feel free to let me know via comments below.

Sunday 7 June 2015

Nuggets from Ericsson Mobility Report


The Ericsson Mobility Report 2015 was released last week. It's interesting to see quite a few of these stats on devices, traffic, usage, etc. getting released around this time. All of these reports are full of useful information, and in the old days when I used to work as an analyst, I would spend hours digging into them to find gold. Anyway, some interesting things follow, with the report at the end.

The above chart shows that, as expected, data will keep growing while voice will get flatter and maybe decline if people start moving to VoIP.

Application volume shares, based on the data plan. This is interesting: if you are a heavy user you may be watching a lot of videos, and if you are a light user then you are watching just a few.

How about device sizes, does our behaviour change based on the screen size?

What about the 50 Billion connected devices, was it too much? Is the real figure more like 28 billion?

Anyway, the report is embedded below.



Wednesday 3 June 2015

'The Future Inter-connected Network' and Timing, Frequency & Phase requirements


I had the pleasure of giving a keynote at PhaseReady 2015 in London today. My presentation is embedded below along with some comments, followed by tweets, some of which I think are important to think about. Finally, I have embedded a video by EE and Light Reading which was quoted and may be important in the context of this event.


My main focus during this presentation was on how the networks have evolved since the 3G days with an (unconscious) emphasis on speeds. While the networks are evolving, they are also getting more complex. The future ecosystem will consist of many inter-connected (and in many cases inter-operable) networks that will work out the requirements in different situations and adapt to the necessary network (technology) accordingly.

Today's networks are like driving a manual car, where we have to change gears depending on the traffic, the speed required and fuel efficiency. Automatic cars are supposed to optimise this and achieve the best result in all the different cases. The future inter-connected networks should likewise achieve the best result based on the requirements in all the different scenarios.

While it is easy to say this in theory, practical networks will have many challenges to solve, both business and technical. The theme of the conference was timing, frequency and phase synchronisation. There are already challenges around these today with the advanced LTE-A features, and they are only going to get bigger.

The following are the tweets from the day:



Finally, here is the link to the video referred to in the last tweet. It's from last year but well worth watching.

Saturday 30 May 2015

'5G' talks from Johannesburg Summit 2015


The annual Johannesburg Summit took place from 10th to 12th May 2015. While it seems like there is a 5G-related event every week, most of the events focus on different themes, use cases, applications and possibilities.

While there were some quite futuristic grand visions, there were also a few technical presentations that should be a treat for the audience of this blog. I would especially recommend the presentations from Qualcomm and Samsung. Here is a video of all the presentations:


Some of the presentations from this summit, in PDF format are available here.

Saturday 23 May 2015

The path from 4.5G to 5G

At the Wi-Fi Global Congress last week, I heard an interesting talk from an ex-colleague who now works for Huawei. While there were a few interesting things, the one I want to highlight is 4.5G. Readers of this blog will remember that I introduced 4.5G back in June last year and followed it with another post in October, when everyone else started using that term and making it complicated.

According to this presentation, 3GPP is looking to create a new brand from Release-13 that will supersede LTE-Advanced (LTE-A). Some of you may remember that the vendor/operator community tried this in the past by introducing LTE-B, LTE-C, etc. for the upcoming releases, but they were slapped down by 3GPP. Huawei is calling this Release-13 4.5G, but it would be re-branded based on whatever 3GPP comes up with.


Another interesting point is the data rates achieved in the labs, probably higher than anyone else's: 10.32Gbps in sub-6GHz spectrum using a 200MHz bandwidth, and 115.20Gbps using a 9.6GHz bandwidth in above-6GHz spectrum.
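As a quick bit of arithmetic on those headline figures, the implied peak spectral efficiencies are easy to work out:

```python
# Implied spectral efficiency of the two Huawei lab demos quoted above
# (headline rate divided by occupied bandwidth).
for label, rate_bps, bw_hz in [("sub-6GHz demo (200 MHz)",   10.32e9, 200e6),
                               ("above-6GHz demo (9.6 GHz)", 115.20e9, 9.6e9)]:
    print(f"{label}: {rate_bps / bw_hz:.1f} bit/s/Hz")
```

That is roughly 51.6 bit/s/Hz for the sub-6GHz demo and 12 bit/s/Hz for the above-6GHz one, which suggests heavy use of MIMO and multi-user techniques rather than bandwidth alone. The complete presentation is embedded below: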



Another Huawei presentation that merits inclusion is the one from the last Cambridge Wireless Small Cells SIG event back in February, by Egon Schulz. The presentation is embedded below, but I want to highlight the different waveforms that are being looked at for 5G. In fact, if someone has a full list of the waveforms, please feel free to add it in the comments.


The above tweet, from a recent IEEE event in Bangalore, is another example of the research challenges in 5G, including the waveforms. The ones that I can see from the above are: FBMC, UFMC, GFDM, NOMA, SCMA, OFDM-opt and f-OFDM.
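Most of these candidates are filtered, windowed or non-orthogonal variations on the familiar multicarrier idea, so a plain CP-OFDM symbol is a useful baseline to keep in mind. Here is a minimal sketch of one; the FFT size, number of used subcarriers and cyclic prefix length are illustrative, not taken from any specification:

```python
import numpy as np

rng = np.random.default_rng(1)
N_FFT, N_USED, CP_LEN = 1024, 600, 72               # illustrative numerology

# Map random bits to QPSK symbols on the used subcarriers (DC left empty)
bits = rng.integers(0, 2, size=(N_USED, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

freq = np.zeros(N_FFT, dtype=complex)
freq[1:N_USED // 2 + 1] = qpsk[:N_USED // 2]        # positive-frequency subcarriers
freq[-N_USED // 2:]     = qpsk[N_USED // 2:]        # negative-frequency subcarriers

time = np.fft.ifft(freq) * np.sqrt(N_FFT)           # OFDM symbol in the time domain
symbol = np.concatenate([time[-CP_LEN:], time])     # prepend the cyclic prefix

print(f"{len(symbol)} samples per CP-OFDM symbol")
```

Candidates such as f-OFDM and UFMC essentially add sub-band filtering on top of this, FBMC replaces the rectangular pulse with a longer prototype filter, and NOMA/SCMA change how users share the resulting resources rather than the waveform itself.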

The presentation is as follows:




Saturday 16 May 2015

Smart Homes of the Future and Technologies


I saw the above picture recently on Twitter. While it's great to see how connected our future homes and even cities would be, it will be interesting to see which technologies are used for connecting all these devices.

Cambridge Wireless had a smart homes event last month, where there were some interesting presentations that I have detailed below.


The first of the technologies discussed is LoRa. As can be seen, it's billed as the ultimate long-range (10 mile) and low-power (10-year battery lifetime) technology. It uses spread spectrum, making it robust to channel noise.
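LoRa's spread spectrum is chirp spread spectrum: each symbol sweeps across the channel bandwidth, which is what lets receivers recover it from well below the noise floor. As a toy illustration, a baseband up-chirp can be generated as below; the parameters are LoRa-like but this is not the actual LoRa PHY:

```python
import numpy as np

BW = 125e3              # sweep bandwidth in Hz
SF = 7                  # spreading factor: symbol lasts 2**SF chips
T  = 2**SF / BW         # symbol duration
fs = 1e6                # sample rate in Hz

t = np.arange(0, T, 1 / fs)
inst_freq = -BW / 2 + (BW / T) * t                  # linear sweep from -BW/2 to +BW/2
phase = 2 * np.pi * np.cumsum(inst_freq) / fs       # integrate frequency to get phase
chirp = np.exp(1j * phase)                          # complex baseband up-chirp

print(f"{len(chirp)} samples, sweeping {BW / 1e3:.0f} kHz in {T * 1e3:.2f} ms")
```

Here is the LoRa presentation: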




The next technology is ZigBee 3.0. According to the ZigBee Alliance:

The new standard unifies ZigBee standards found in tens of millions of devices delivering benefits to consumers today. The ZigBee 3.0 standard enables communication and interoperability among devices for home automation, connected lighting, energy efficiency and other markets so more diverse, fully interoperable solutions can be delivered by product developers and service providers. All device types, commands, and functionality defined in current ZigBee PRO-based standards are available to developers in the new standard.

ZigBee 3.0 defines the widest range of device types including home automation, lighting, energy management, smart appliance, security, sensors, and health care monitoring products. It supports both easy-to-use DIY installations as well as professionally installed systems. Based on IEEE 802.15.4, which operates at 2.4 GHz (a frequency available for use around the world), ZigBee 3.0 uses ZigBee PRO networking to enable reliable communication in the smallest, lowest-power devices. Current ZigBee Certified products based on ZigBee Home Automation and ZigBee Light Link are interoperable with ZigBee 3.0. A complete list of standards that have been merged to create ZigBee 3.0 can be seen on the website at www.ZigBee.org.

“The ZigBee Alliance has always believed that true interoperability comes from standardization at all levels of the network, especially the application level which most closely touches the user,” said Tobin J. M. Richardson, President and CEO of the ZigBee Alliance. “Lessons learned by Alliance members when taking products to market around the world have allowed us to unify our application standards into a single standard. ZigBee 3.0 will allow product developers to take advantage of ZigBee’s unique features such as mesh networking and Green Power to deliver highly reliable, secure, low-power, low-cost solutions to any market.”



Finally, we have Bluetooth Smart mesh.

CSRmesh enables Bluetooth® low energy devices not only to receive and act upon messages, but also to repeat those messages to surrounding devices thus extending the range of Bluetooth Smart and turning it into a mesh network for the Internet of Things.



While the CW event was not able to discuss all possible technologies (and believe me, there are loads of them), there are other popular contenders. Cellular IoT (CIoT) is one of them. I have blogged about LTE Cat-0 here and 5G here.

A new IEEE Wi-Fi standard, 802.11ah, using the 900MHz band has been in the works and will address the need for connectivity for a large number of things over long distances. A typical 802.11ah access point could associate more than 8,000 devices within a range of 1 km, making it ideal for areas with a high concentration of things. The Wi-Fi Alliance is committed to getting this standard ratified soon. With this, Wi-Fi has the potential to become a ubiquitous standard for IoT. See also this article by Frank Rayal on this topic.

Finally, there is SIGFOX. According to their website:

SIGFOX uses a UNB (Ultra Narrow Band) based radio technology to connect devices to its global network. The use of UNB is key to providing a scalable, high-capacity network, with very low energy consumption, while maintaining a simple and easy to rollout star-based cell infrastructure.

The network operates in the globally available ISM bands (license-free frequency bands) and co-exists in these frequencies with other radio technologies, but without any risk of collisions or capacity problems. SIGFOX currently uses the most popular European ISM band on 868MHz (as defined by ETSI and CEPT) as well as the 902MHz in the USA (as defined by the FCC), depending on specific regional regulations.

Communication on SIGFOX is secured in many ways, including anti-replay, message scrambling, sequencing, etc. The most important aspect of transmission security is however that only the device vendors understand the actual data exchanged between the device and the IT systems. SIGFOX only acts as a transport channel, pushing the data towards the customer's IT system.

An important advantage provided by the use of the narrow band technology is the flexibility it offers in terms of antenna design. On the network infrastructure end it allows the use of small and simple antennas, but more importantly, it allows devices to use inexpensive and easily customizable antennas.
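One way to see why ultra narrow band helps with range and energy is to compare receiver noise floors, since thermal noise power scales with the receiver bandwidth. A small illustrative calculation (the bandwidths and noise figure are my assumptions, not SIGFOX specifications):

```python
import math

def noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float = 6.0) -> float:
    """Thermal noise floor: -174 dBm/Hz plus 10*log10(bandwidth) plus receiver noise figure."""
    return -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

for label, bw in [("UNB channel (100 Hz)", 100), ("Wideband channel (10 MHz)", 10e6)]:
    print(f"{label:26s}: noise floor ~{noise_floor_dbm(bw):.0f} dBm")
```

A receiver listening to a 100 Hz channel has a noise floor tens of dB lower than a wideband one, so the same transmit power reaches much further, or the same range can be achieved with far less power.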


Sigfox is also working on project Mustang, a three-year effort to build a hybrid satellite/terrestrial IoT (internet of things) network. According to Rethink Research:

The all-French group also contains aerospace firm Airbus, research institute CEA-Leti and engineering business Sysmeca. The idea is to use Sigfox as the terrestrial data link, with satellite backhaul and connections to planes and boats provided by a low-earth orbit (LEO) satellite constellation.
...
The satellite link could be added to either the end devices or the base station, so that if a device was unable to connect to the terrestrial Sigfox network, it could fall back to the satellite.

While the power requirements for this would be prohibitive for ultra-low power, battery-operated devices, for those with a wired power supply and critical availability requirements (such as smart meters, alarms, oil tankers and rigs) the redundancy would be an asset. These devices may transmit small amounts of data but when they do need to communicate, the signal must be assured.

The Sigfox base station could be fitted with a satellite uplink as a primary uplink as well as a redundancy measure in some scenarios where terrestrial network reach cannot be achieved. With a three-year lifecycle, Mustang’s participants are looking to create a seamless global network, and note that the planned dual-mode terrestrial/satellite terminal will enable switching between the two channels in response to resource availability.

The group says that the development of this terminal modem chipset is a priority, with later optimization of the communication protocols being the next step before an application demonstration using an airplane.

The project adds that the full potential of the IoT can only be achieved by offering affordable mobile communications at a global scale and reach. Key to this is adapting existing networks, according to the group, which explains why Sigfox has been chosen – given that the company stresses the affordability of its system.

Well, there are a lot of options available. We just have to wait and see which ones work in which scenarios.