Thursday 30 October 2014

Codecs and Quality across VoLTE and OTT Networks

Codecs play an important role in our smartphones. Not only are they essential for encoding and decoding the voice packets, they also add to the price of our smartphones.

A $400 smartphone can have as much as $120 in IPR fees. If you look at the picture above, it's $10.60 for the H.264 codec alone. So it's important that the new codecs that come as part of the next generation of mobile technology are free, open source or cost very little.


The new standards require a lot of codecs, some just for backward compatibility, and this can significantly increase costs. It's important to make sure the new codecs selected are royalty-free or freely licensed.

The focus of this post is a presentation by Amir Zmora from AudioCodes at the LTE Voice Summit. The presentation below may not be self-explanatory, but I have added a couple of links at the bottom of the post where he has shared his thoughts. It's worth a read.



A good explanation of the voice enhancement tools follows (slide 15):

Adaptive Jitter Buffer (AJB) – Almost all devices today (smartphones, IP phones, gateways, etc.) have built-in jitter buffers. Legacy networks (which were LAN-focused when designed) usually have older devices with less sophisticated jitter buffers. When designed, they didn't take into account traffic coming in from networks such as Wi-Fi, with its frequent retransmissions, and 3G, with its limited bandwidth, in which the jitter levels are higher than those in wireline networks. Jitter buffers that may have been planned for, say, dozens of msec may now have to deal with peaks of hundreds of msec. Generally, if the SBC has nothing to mediate (assume the codecs are the same and the Ptime is the same on both ends) it just forwards the packets. But the unexpected jitter coming from the wireless network, as described above, requires the AJB to take action. And even if the network is well designed to handle jitter, today's OTT applications on smartphones add yet another variable to the equation. There are hundreds of such devices out there, and the audio interfaces of these devices (especially those of the Android phones) create jitter that is passed into the network. For these situations, too, the AJB is necessary.

To overcome this issue, there is a need for a highly advanced Adaptive Jitter Buffer (AJB) built into the SBC that neutralizes the incoming jitter so that it is handled without problem on the other side. The AJB can handle high and variable jitter rates.

Additionally, the AJB needs to work in what are called tandem scenarios, where the incoming and outgoing codecs are the same. This scenario requires an efficient solution that minimizes the added delay. AudioCodes has built and patented solutions supporting this scenario.
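To make the idea a bit more concrete, here is a minimal Python sketch of how an adaptive jitter buffer might size itself from observed packet arrivals. It borrows the RFC 3550 style jitter estimator; the class, gain and safety factor are my own illustration, not AudioCodes' patented algorithm.

```python
# A minimal, illustrative adaptive jitter buffer sizing sketch.
# Not AudioCodes' patented algorithm; the safety factor and gain are assumptions.

class AdaptiveJitterBuffer:
    SAFETY_FACTOR = 3     # buffer depth = 3 x estimated jitter (assumed value)
    GAIN = 1 / 16         # smoothing gain, borrowed from the RFC 3550 jitter estimator

    def __init__(self):
        self.mean_transit = None  # smoothed one-way transit time (ms)
        self.jitter = 0.0         # smoothed transit-time variation (ms)

    def on_packet(self, rtp_timestamp_ms, arrival_time_ms):
        """Update the jitter estimate for each arriving voice packet."""
        transit = arrival_time_ms - rtp_timestamp_ms
        if self.mean_transit is None:
            self.mean_transit = transit
            return
        deviation = abs(transit - self.mean_transit)
        self.mean_transit += (transit - self.mean_transit) * self.GAIN
        self.jitter += (deviation - self.jitter) * self.GAIN

    def playout_delay_ms(self):
        """Buffer just deep enough to absorb the current jitter, to limit added delay."""
        return self.jitter * self.SAFETY_FACTOR
```

Packets arriving over a bursty Wi-Fi or 3G leg push the jitter estimate (and hence the buffer depth) up, while a clean wireline leg lets it shrink back, which is the 'adaptive' part.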

Transcoding – While the description above discussed the ability to bypass the need to perform transcoding in the Adaptive Jitter Buffer context, there may very well be a need for transcoding between the incoming and outgoing packet streams. Beyond being able to mediate between different codecs on the different networks on either end of the SBC, the SBC can transcode an incoming codec that is less resilient to packet loss (such as narrowband G.729 or wideband G.722) to a more resilient codec (such as Opus). By transcoding to a more resilient codec, the SBC can lower the effects of packet loss. Transcoding can also lower the bandwidth on the network. Additionally, the SBC can transcode from narrowband (8 kHz) to wideband (16 kHz) (and vice versa) as well as perform wideband transcoding, where both endpoints support wideband codecs but are not using the same ones. For example, a wireless network may be using the AMR wideband codec while the wireline network on the other side may be using Opus. Had it not been for the SBC, these two networks would have negotiated a common narrowband codec.
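The decision logic itself can be pictured with a toy sketch like the one below; the codec table, thresholds and function names are invented for illustration and are not how any particular SBC is configured.

```python
# Illustrative sketch of a transcoding decision; codec properties and the
# 2% loss threshold are assumptions, not a real SBC configuration.

CODECS = {
    # name: (sample_rate_hz, loss_resilient)
    "G.729":  (8000,  False),
    "G.722":  (16000, False),
    "AMR-WB": (16000, True),   # resilience flag here is an assumption
    "Opus":   (48000, True),
}

def choose_egress_codec(ingress_codec, egress_supported, network_loss_pct):
    """Pick the codec to transcode to on the egress leg, or None to pass through."""
    if ingress_codec in egress_supported and network_loss_pct < 2.0:
        return None  # same codec on both legs and a clean network: no transcoding
    # Prefer a loss-resilient, widest-band codec that the egress leg supports.
    candidates = sorted(
        egress_supported,
        key=lambda c: (CODECS[c][1], CODECS[c][0]),
        reverse=True,
    )
    best = candidates[0] if candidates else None
    return None if best == ingress_codec else best

# Example: wireless leg uses AMR-WB, wireline leg offers Opus and G.729.
print(choose_egress_codec("AMR-WB", ["Opus", "G.729"], network_loss_pct=5.0))  # -> Opus
```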

Flexible RTP Redundancy – The SBC can also use RTP redundancy, in which voice packets are sent several times to ensure they are received. Redundancy is used to balance networks which are characterized by high packet loss bursts. While reducing the effect of packet loss, redundancy increases the bandwidth (and delay). There are ways to get around this bandwidth issue that are supported by the SBC. One way is by sending only partial packet information (not fully redundant packets). The decoder on the receiving side will know how to handle the partial information. This process is called Forward Error Correction (FEC).
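A toy sketch of the trade-off: repeating previous payloads, fully or partially, buys loss resilience at the cost of bandwidth. The framing below is deliberately simplified and uses none of the real RTP/RED/FEC header formats.

```python
# Simplified sketch of full vs partial (FEC-style) redundancy.
# Only illustrates the bandwidth-vs-resilience trade-off described above.

def build_redundant_packet(payloads, seq, redundancy=1, partial_ratio=1.0):
    """Return the current payload plus (partial) copies of earlier payloads.

    redundancy    -- how many previous payloads to piggy-back (0 = none)
    partial_ratio -- fraction of each redundant payload to include;
                     values < 1.0 approximate FEC-style partial information
    """
    extras = []
    for back in range(1, redundancy + 1):
        if seq - back >= 0:
            prev = payloads[seq - back]
            extras.append(prev[: int(len(prev) * partial_ratio)])
    return {"seq": seq, "payload": payloads[seq], "redundant": extras}

frames = [b"voice-frame-%d" % i for i in range(5)]
full = build_redundant_packet(frames, seq=3, redundancy=1)                        # ~2x media bandwidth
partial = build_redundant_packet(frames, seq=3, redundancy=1, partial_ratio=0.3)  # ~1.3x media bandwidth
```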

Transrating – Transrating is the process of having more voice payload 'packed' into a single RTP packet by increasing the packet interval, thus changing the Packetization Time or Ptime. Ptime is the duration of voice carried in each packet, generally 20 msec. By combining the payloads of two or more packets into one, the transrating process reduces the overhead of the IP headers, lowering the bandwidth and reducing the stress on CPU resources; however, it increases delay. It can thus be used not only to mediate between two end devices using different Ptimes, but also as a means of balancing the network by reducing bandwidth and CPU pressure during traffic peaks.
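A quick back-of-the-envelope calculation shows why fewer, bigger packets save bandwidth. The numbers assume a 64 kbps codec and a 40-byte IP/UDP/RTP header; both are my assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the bandwidth saving from transrating.
# Assumes 64 kbps voice and a 40-byte IP/UDP/RTP header (illustrative values).

def rtp_bandwidth_kbps(codec_kbps=64, ptime_ms=20, header_bytes=40):
    payload_bytes = codec_kbps * 1000 / 8 * ptime_ms / 1000   # voice bytes per packet
    packets_per_s = 1000 / ptime_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_s / 1000

print(rtp_bandwidth_kbps(ptime_ms=20))  # ~80 kbps on the wire
print(rtp_bandwidth_kbps(ptime_ms=40))  # ~72 kbps: fewer headers, but +20 ms delay
```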

Quality-based Routing – Another tool used by the SBC is Quality-based routing. The SBC, which is monitoring all the calls on the network all the time, can decide (based on pre-defined thresholds and parameters) to reroute calls over different links that have better quality.
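Conceptually it boils down to something like the sketch below; the per-link metrics, MOS threshold and fallback rule are invented for the example.

```python
# Illustrative sketch of quality-based routing; metrics and thresholds are
# invented here, not taken from a real SBC.

LINKS = {
    "link_A": {"mos": 4.2, "packet_loss_pct": 0.5},
    "link_B": {"mos": 3.1, "packet_loss_pct": 4.0},
}

MOS_THRESHOLD = 3.5  # assumed re-routing threshold

def route_call(links=LINKS):
    """Pick the link with the best MOS, skipping those below threshold."""
    usable = {name: m for name, m in links.items() if m["mos"] >= MOS_THRESHOLD}
    if not usable:
        usable = links  # nothing meets the threshold: fall back to best effort
    return max(usable, key=lambda name: usable[name]["mos"])

print(route_call())  # -> link_A
```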

Further reading:

Thursday 23 October 2014

Detailed whitepaper on Carrier Aggregation by 4G Americas

4G Americas has published a detailed whitepaper on Carrier Aggregation (CA). It's a very good, detailed document for anyone wishing to study CA.


Two very important features that have come as part of the CA enhancements are support for multiple timing advance values, which came as part of Release-11, and TDD-FDD joint operation, which came as part of Release-12.

While it's good to see that CA of up to 3 carriers is now possible as part of Rel-12, and, as I mentioned in my last post, we need this to achieve 'real' 4G, we have to remember at the same time that CA makes the chipsets very complex and may affect the sensitivity of the RF receivers.
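To put some rough numbers on the benefit, here is a back-of-the-envelope sketch. The ~150 Mbps per 20 MHz carrier figure is the usual 2x2 MIMO, 64QAM rule of thumb, not a number taken from the whitepaper.

```python
# Rough illustration of why CA raises peak rates. 7.5 Mbps/MHz (~150 Mbps in
# 20 MHz) is a common rule of thumb; real rates depend on MIMO order,
# modulation and overhead.

def ca_peak_rate_mbps(carrier_bandwidths_mhz, mbps_per_mhz=7.5):
    return sum(bw * mbps_per_mhz for bw in carrier_bandwidths_mhz)

print(ca_peak_rate_mbps([20]))           # ~150 Mbps, single carrier
print(ca_peak_rate_mbps([20, 20, 20]))   # ~450 Mbps, 3-carrier CA (Rel-12 class)
```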

Anyway, here is the 4G Americas whitepaper.


LTE Carrier Aggregation Technology Development and Deployment Worldwide from Zahid Ghadialy

You can read more about the 4G Americas whitepaper in their press release here.

Sunday 19 October 2014

What is (pre-5G) 4.5G?

Before we look at what 4.5G is, let's look at what is not 4.5G. First and foremost, Carrier Aggregation is not 4.5G. It's the foundation for real 4G. I keep on showing this picture on Twitter:


I am sure some people must be really bored by this picture of mine that I keep showing. LTE, rightly referred to as 3.9G or pre-4G by the South Korean and Japanese operators, was the foundation of 'real' 4G, a.k.a. LTE-Advanced. So who has been referring to LTE-A as 4.5G (and even 5G)? Here you go:


So let's look at what 4.5G is.
Back in June, we published a whitepaper in which we referred to 4.5G as LTE and WiFi working together. When we say LTE, that includes LTE-A as well. The standards in Release-12 allow simultaneous use of LTE(-A) and WiFi, with selected streams on WiFi and others on cellular.
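The standards machinery behind this is beyond the scope of this post, but the flavour of per-stream steering can be illustrated with a simple policy sketch; the policy rules below are invented and are not the Release-12 mechanism.

```python
# Toy illustration of "selected streams on WiFi and others on cellular".
# The policy table is invented for the example, not a standardised mechanism.

POLICY = {
    "video_stream": "wifi",       # bulky and delay-tolerant: offload it
    "voip": "lte",                # latency/QoS sensitive: keep it on cellular
    "background_sync": "wifi",
}

def steer(flow_type, wifi_available=True, default="lte"):
    """Return the access ("wifi" or "lte") a flow should use under this toy policy."""
    target = POLICY.get(flow_type, default)
    return target if (target != "wifi" or wifi_available) else "lte"

print(steer("video_stream"))                        # -> wifi
print(steer("voip"))                                # -> lte
print(steer("video_stream", wifi_available=False))  # -> lte (fall back to cellular)
```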


Some people don't realise how much spectrum is available in the 5 GHz band; hopefully the picture above gives an idea. This is exactly what has tempted the cellular community to come up with LTE-U (a.k.a. LA-LTE, LAA).

At a recent event in London called the 5G Huddle, Alcatel-Lucent presented their views on what 4.5G would mean. If you look at the slide above, it is quite a detailed view of what this intermediate step before 5G would be. Some tweets related to this discussion from the 5G Huddle are below:


Finally, in a recent GSMA event, Huawei used the term 4.5G to set out their vision and also propose a time-frame as follows:



While in the Alcatel-Lucent slide I could visualise 4.5G as our vision of LTE(-A) + WiFi + some more stuff, I am finding it difficult to visualise all the changes being proposed by Huawei. How are we going to see a peak rate of 10 Gbps, for example?

I have to mention that some companies have told me that their vision of 5G is M2M and D2D, so Huawei is not very far from reality here.

We should keep in mind that 4G, 4.5G and 5G are terms we use to make end users aware of what new cellular technology could do for them. Most of these people understand simple terms like speed and latency. We should be careful what we tell them, as we do not want to make things confusing or complicated, or make false promises and fail to deliver on them.

xoxoxo Added on 2nd January 2015 oxoxox

Chinese vendor ZTE has said it plans to launch a ‘pre-5G’ testing base station in 2015, commercial use of which will be possible in 2016, following tests and adjustment. Here is what they think pre-5G means:


Tuesday 14 October 2014

'Real' Full Duplex (or No Division Duplex - NDD?)

We all know about the two types of transmission schemes, FDD and TDD. Normally, both the FDD and TDD schemes are known as full-duplex schemes. Some people will argue that TDD is actually half-duplex, but what TDD does is emulate full-duplex communication over a half-duplex communication link. There is also half-duplex FDD, which is a very interesting technology and is defined for LTE, but not used. See here for details.


One of the technologies being proposed for 5G is referred to as Full Duplex. Here, a device transmits and receives at the same time on the same frequency. Thanks to some very clever signal processing, the self-interference can be cancelled out. An interesting presentation from Kumu Networks is embedded below:



The biggest challenge is self-interference cancellation, because the transmitter and receiver are using the same spectrum and will cause interference to each other. There have been major advances in self-interference cancellation techniques, as can be seen in the InterDigital presentation embedded below:
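To give a flavour of the digital part of the problem, here is a toy sketch in which the receiver estimates the self-interference channel from its own known transmitted samples and subtracts the reconstructed echo. The channel model and numbers are invented; real systems also rely on analogue/RF cancellation stages, which this ignores.

```python
# Toy digital self-interference cancellation: estimate the SI channel by
# least squares from known transmit samples, then subtract the echo.

import numpy as np

rng = np.random.default_rng(0)
n, taps = 2000, 4

tx = rng.standard_normal(n)                     # known transmitted samples
si_channel = np.array([0.9, -0.3, 0.1, 0.05])   # unknown self-interference channel (invented)
wanted = 0.01 * rng.standard_normal(n)          # weak signal from the far end
rx = np.convolve(tx, si_channel)[:n] + wanted   # what the receiver actually sees

# Build a convolution matrix of the known tx samples and solve for the channel.
X = np.column_stack([np.concatenate([np.zeros(d), tx[: n - d]]) for d in range(taps)])
h_est, *_ = np.linalg.lstsq(X, rx, rcond=None)

residual = rx - X @ h_est                       # self-interference removed
print("cancellation (dB):",
      10 * np.log10(np.mean(rx**2) / np.mean(residual**2)))
```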



Saturday 11 October 2014

A quick update on Antennas

There were a couple of very interesting and useful presentations from the LTE World Summit 2014 that I have been meaning to embed in the blog for a while. The first is a market overview from Signals Research Group. The research is focused more on the US market, but it has some very interesting insights. The slideset is embedded below:



The other presentation is from CommScope on Base Station Antennas (BSA) for capacity improvement. I really liked the simplicity of the diagrams. Anyone interested in studying antennas in more depth is encouraged to check out my old post here. The complete slideset is below:



Thursday 2 October 2014

Envelope Tracking for improving PA efficiency of mobile devices

I am sure many people will have heard of ET (Envelope Tracking) by now. It's a technology that can help reduce the power consumed by our mobile devices. Lower power consumption means longer battery life, especially with all the new features coming in LTE-A devices.
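The basic idea is that the PA supply voltage follows the envelope of the transmitted signal instead of sitting at a fixed peak value, so less headroom is wasted when the signal is small. Here is a simplified numerical illustration; the signal model and the 'waste' metric are my own assumptions, not Nujira's figures.

```python
# Simplified illustration of why envelope tracking saves power: with a fixed
# supply the PA headroom is wasted whenever the envelope is low, while an ET
# supply follows the envelope. Signal model and metric are assumptions.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1e-3, 10_000)

# A high-PAPR, LTE-like multi-tone signal, normalised to a 1 V peak envelope.
signal = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
             for f in (1e5, 2.3e5, 3.7e5, 5.1e5))
envelope = np.abs(signal) / np.max(np.abs(signal))

fixed_supply = np.ones_like(envelope)            # supply always at peak voltage
et_supply = np.maximum(envelope + 0.1, 0.2)      # tracks envelope plus small headroom

wasted_fixed = np.mean(fixed_supply - envelope)  # crude proxy for dissipated power
wasted_et = np.mean(et_supply - envelope)
print(f"relative waste, ET vs fixed supply: {wasted_et / wasted_fixed:.2f}")
```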
As the slide says, there are already 12 phones launched with this technology, the most high-profile being the iPhone 6/6 Plus. Here is a brilliant presentation from Nujira on this topic:



People who are interested in testing this feature may want to check this Rohde & Schwarz presentation here.

Saturday 27 September 2014

Elevation Beamforming / Full-Dimension MIMO


Four major Release-13 projects have been approved now that Release-12 is coming to a conclusion. One of them is Full-Dimension MIMO. From the 3GPP website:

Leveraging the work on 3D channel modeling completed in Release 12, 3GPP RAN will now study the necessary changes to enable elevation beamforming and high-order MIMO systems. Beamforming and MIMO have been identified as key technologies to address the future capacity demand. But so far 3GPP specified support for these features mostly considers one-dimensional antenna arrays that exploit the azimuth dimension. So, to further improve LTE spectral efficiency it is quite natural to now study two-dimensional antenna arrays that can also exploit the vertical dimension.
Also, while the standard currently supports MIMO systems with up to 8 antenna ports, the new study will look into high-order MIMO systems with up to 64 antenna ports at the eNB, to become more relevant with the use of higher frequencies in the future.
Details of the Study Item can be found in RP-141644.
There was also an interesting post by Eiko Seidel in the 5G standards group:

The idea is to introduce carrier and UE specific tilt/beam forming with variable beam widths. Improved link budget and reduced intra- and inter-cell interference might translate into higher data rates or increased coverage at cell edge. This might go hand in hand with an extensive use of spatial multiplexing that might require enhancements to today’s MU-MIMO schemes. Furthermore in active antenna array systems (AAS) the power amplifiers become part of the antenna further improving the link budget due to the missing feeder loss. Besides a potentially simplified installation the use of many low power elements might also reduce the overall power consumption. 

At higher frequencies the antenna elements can be miniaturized and their number can be increased. In LTE this might be limited to 16, 32 or 64 elements, while for 5G with higher frequency bands this might allow for “massive MIMO”.

WG: Primary RAN1 (RP-141644) 
started 06/2014 (RAN#64), completion date 06/2015 (RAN#68)
work item might follow the study with target 12/2015 (RAN#70) 

Supporting companies
Samsung/NSN, all major vendors and operators 

Based on RAN1 Rel.12 Study Item on 3D channel model (TR36.873) 

Objectives 
Phase 1: antenna configurations and evaluation scenarios; Rel.12 performance evaluation with the 3D channel model

Phase 2: study and simulate FD-MIMO enhancements; identify and evaluate techniques and analyse specification impact; performance evaluation for 16, 32 and 64 antenna elements; enhancements for SU-/MU-MIMO (incl. higher-dimension MU-MIMO), keeping the maximum number of layers per UE unchanged at 8
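For a feel of what 'exploiting the vertical dimension' means in practice, here is a minimal sketch of steering weights for a two-dimensional (azimuth plus elevation) antenna array. The array size, spacing and angles are arbitrary examples, not values from the Study Item.

```python
# Minimal sketch of 2D (azimuth + elevation) beamforming behind FD-MIMO:
# conjugate steering-vector weights for an n_h x n_v planar array.

import numpy as np

def planar_array_weights(n_h=8, n_v=8, azimuth_deg=20.0, elevation_deg=10.0, d=0.5):
    """Steering weights for a uniform planar array with element spacing d (in wavelengths)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    m = np.arange(n_h)[:, None]   # horizontal element index
    n = np.arange(n_v)[None, :]   # vertical element index
    phase = 2 * np.pi * d * (m * np.sin(az) * np.cos(el) + n * np.sin(el))
    return np.exp(-1j * phase) / np.sqrt(n_h * n_v)

w = planar_array_weights()        # 8x8 = 64 elements, the upper bound studied
print(w.shape)                    # (8, 8) complex weights, one per element
```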


An old presentation from Samsung is embedded below that will provide more insight into this technology:



Related post:

Sunday 21 September 2014

NFV and 5G compatibility issues

There was an interesting discussion on Twitter that has been storified by Keith Dyer. Let's start by having a quick look at the C-RAN architecture that features in the discussion.


There are a couple of excellent C-RAN presentations for anyone interested: this one by EE (with 9K+ views) and this one from Orange (with 19K+ views).

Anyway, here is the story:


For anyone interested in exploring the discussion further, The Mobile Network has more detailed comments here.

There is also an interesting article worth reading:

Thursday 18 September 2014

Update on Public Safety and Mission Critical communications

It's been a while since I wrote about Public Safety and Mission Critical communications, so here is a quick summary.


Iain Sharp gave a good overview of what's happening in the standards at the LTE World Summit back in June. Embedded below is his complete presentation.



There is another, slightly older presentation that I also thought was worth adding here.

There is a lot of discussion centred around the use of commercial networks for mission critical communications, mainly due to cost. While this may make sense to an extent, there should be procedures put in place to give priority to public safety users in case of emergency.



We are planning to run a one-day training course on public safety in Jan 2015. If this is of interest to you, please get in touch with me for more details.

x-o-x-o-x-o-x-o-x-o-x-o-x
After the post someone brought these links to my attention so I am adding them below:

Tuesday 9 September 2014

LTE Device-to-device (D2D) Use Cases

Device-to-device is a popular topic. I wrote a post back in March on LTE-Radar (another name for it) which has already had 10K+ views. Another post, in Jan last year, has had over 13K views. At the LTE World Summit, Thomas Henze from Deutsche Telekom AG presented some use cases of 'proximity services via LTE device broadcast'.


While there are some interesting use cases in his presentation (embedded below), I am not sure that they will necessarily achieve success overnight. While it would be great to have a standardised solution for applications that rely on proximity services, the apps have already come up with their own solutions in the meantime.

Image: iTunes

The dating app Tinder, for example, finds a date near where you are. It relies on GPS, and I agree that some people would say GPS consumes more power, but it's already available today.



Another example is "Nearby Friends" from Facebook that allows to find your friends if they are nearby, perfect for a day when you have nothing better to do.

With an app, I can be sure that my location is being shared with only that one app. With a standardised solution, all my apps would have access to location info that I may not necessarily want to share. There are pros and cons; I am not sure which approach will win here.

Anyway, the complete presentation is embedded below:



For anyone interested in going a bit more in detail about D2D, please check this excellent article by Dr. Alastair Bryon, titled "Opportunities and threats from LTE Device-to-Device (D2D) communication"

Do let me know what you think about the use cases.