Pages

Join our LinkedIn group

Showing posts with label White Papers and Reports. Show all posts

Monday, 1 December 2014

Bringing Network Function Virtualization (NFV) to LTE

SDN and NFV have gained immense popularity recently. Not only are they considered important for reducing Capex and Opex, but they are also being touted as an important cog in the 4.5G/5G network. See here for instance.


I introduced NFV on the blog nearly a year back, here. ETSI had just published their first specs around then. When I talked about SDN/NFV back in May, these ETSI standards were evolving into significant reference documents. This is one reason 4G Americas recently published this whitepaper (embedded below), encouraging operators to start migrating to an NFV architecture to reap long-term benefits. The following is from the whitepaper:

The strategies and solutions explored in the 4G Americas report on NFV aim to address these issues and others by leveraging IT virtualization technology to consolidate many network equipment types onto industry standard high volume servers, networking and storage. NFV is about separating network functions from proprietary hardware and then consolidating and running those functions as virtualized applications on a commodity server. Broadly speaking, NFV will enable carriers to virtualize network functions and run them as software applications within their networks. NFV focuses on virtualizing network functions such as firewalls, Wide-Area Network (WAN) acceleration, network routers, border controllers (used in Voice over IP (VoIP) networks), Content Delivery Networks (CDNs) and other specialized network applications. NFV is applicable to a wide variety of networking functions in both fixed and mobile networks.
“NFV is making great progress throughout the world as operators work with their vendor partners to address the opportunities of increasing efficiency within their network infrastructure elements,” stated Chris Pearson, President of 4G Americas. “There is a great deal of collaborative innovation and cooperation between wireless carriers, IT vendors, networking companies and wireless infrastructure vendors making NFV for LTE possible.”
Global communication service providers, along with many leading vendors, are participating in the European Telecommunications Standards Institute’s (ETSI) Industry Specification Group for Network Functions Virtualization (NFV ISG) to address challenges such as:
  • An increasing variety of proprietary hardware appliances like routers, firewalls and switches
  • Space and power to accommodate these appliances
  • Capital investment challenges
  • Short lifespan
  • A long procure-design-integrate-deploy lifecycle
  • Increasing complexity and diversity of network traffic
  • Network capacity limitations
Three main benefits of NFV outlined in the 4G Americas paper include:
  • Improved capital efficiency: Provisioning capacity for all functions versus each individual function, providing more granular capacity, exploiting the larger economies of scale associated with Commercial Off-the-Shelf (COTS) hardware, centralizing Virtual Network Functions (VNFs) in data centers where latency requirements allow, and separately and dynamically scaling VNFs residing in the user (or data or forwarding) plane designed for execution in the cloud, control and user-plane functions as needed.
  • Operational efficiencies: Deploying VNFs as software using cloud management techniques which enables scalable automation at the click of an operator’s (or customer’s) mouse or in response to stimulus from network analytics. The ability to automate onboarding, provisioning and in-service activation of new virtualized network functions can yield significant savings. 
  • Service agility, innovation and differentiation: In deploying these new VNFs, time-to-market for new network services can be significantly reduced, increasing the operator’s ability to capture market share and develop market-differentiating services.
In particular, mobile operators can take advantage of NFV as new services are introduced. Evolved Packet Core (EPC), Voice over LTE (VoLTE), IP Multimedia System (IMS) and enhanced messaging services, among others, are examples of opportunities to use virtualized solutions. Some operators started deploying elements of NFV in 2013 with an expectation that many service areas could be mostly virtualized in the next decade.
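The "operational efficiencies" point above, scaling VNFs automatically in response to stimulus from network analytics, can be pictured as a trivial control loop. A minimal sketch, with hypothetical load thresholds (the whitepaper does not prescribe any):

```python
def scale_vnf(instances, load_per_instance, scale_out_at=0.8, scale_in_at=0.3):
    """Toy autoscaling decision for a virtualized network function (VNF):
    add an instance under heavy load, release one when lightly loaded.
    Thresholds are illustrative, not from the 4G Americas paper."""
    if load_per_instance > scale_out_at:
        return instances + 1
    if load_per_instance < scale_in_at and instances > 1:
        return instances - 1
    return instances

print(scale_vnf(3, 0.9))  # 4: scale out in response to analytics
print(scale_vnf(3, 0.2))  # 2: scale in to save resources
```

A real orchestrator would of course act on richer analytics than a single load figure, but the point stands: once the function is software on COTS hardware, capacity changes become a software decision.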

The whitepaper is embedded below:


Tuesday, 11 November 2014

New Spectrum Usage Paradigms for 5G

Sometime back I wrote a post that talked about Dynamic Spectrum Access (DSA) techniques for Small Cells and Wi-Fi to work together in a fair way. The Small Cells would be using the ISM bands, and Wi-Fi APs would be contending for the same spectrum. For those who may not know, this is commonly referred to as LTE-U, but the correct term being used in standards is LA-LTE; see here for details.

IEEE ComSoc has just published a whitepaper that details how spectrum should be handled in 5G to ensure efficient utilisation. The whitepaper covers the following:

Chapter 2 – Introduction: the traditional approach of repurposing spectrum and allocating it to Cellular Wireless systems is reaching its limits, at least below the 6 GHz threshold. For this reason, novel approaches are required, which are detailed in the remainder of this White Paper.

Chapter 3 - Spectrum Scarcity - an Alternate View provides a generic view on the spectrum scarcity issue and discusses key technologies which may help to alleviate the problem, including Dynamic Spectrum Management, Cognitive Radios, Cognitive Networks, Relaying, etc. 

Chapter 4 – mmWave Communications in 5G addresses a first key solution. While spectrum opportunities are running out below 6 GHz, an abundance of spectrum is available in mmWave bands and the related technology is becoming mature. This chapter addresses in particular the heterogeneous approach, in which legacy wireless systems are operated jointly with mmWave systems, allowing the advantages of both technologies to be combined.

Chapter 5 – Dynamic Spectrum Access and Cognitive Radio: A Current Snapshot gives a detailed overview of state-of-the-art dynamic spectrum sharing technology and related standards activities. The approach is complementary to the mmWave approach above; the idea focuses on identifying unused spectrum in time, space and frequency. This technology is expected to substantially improve the usage efficiency of spectrum, in particular below 6 GHz.

Chapter 6 – Licensed Shared Access (LSA) enables coordinated sharing of spectrum for a given time period, a given geographic area and a given spectrum band under a license agreement. In contrast to sporadic usage of spectrum on a secondary basis, the LSA approach will guarantee Quality-of-Service levels to both Incumbents and Spectrum Licensees. Also, a clear business model is available through a straightforward license transfer from relevant incumbents to licensees operating a Cellular Wireless network in the concerned frequency bands.

Chapter 7 – Radio Environment Map details a technology for gathering the relevant (radio) context information that feeds related decision-making engines in the Network Infrastructure and/or Mobile Equipment. Indeed, tools for acquiring context information are critical for next-generation Wireless Communication systems, since they are expected to be highly versatile and to constantly adapt.

Chapter 8 – D2DWRAN: A 5G Network Proposal based on IEEE 802.22 and TVWS discusses the efficient exploitation of TV White Space spectrum bands building on the available IEEE 802.22 standard. TV White Spaces are located in highly appealing spectrum bands below 1 GHz, with propagation characteristics that are perfectly suited to the needs of Wireless Communication systems.

Chapter 9 – Conclusion presents some final thoughts. 
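The Chapter 5 idea of identifying unused spectrum in time, space and frequency boils down to sensing and thresholding. A toy energy-detection sketch, with an assumed noise floor and detection threshold (not taken from the whitepaper):

```python
import random

def sense_channel(occupied, noise_floor_dbm=-100.0):
    """Simulated received power (dBm) on one channel: noise only, or an
    incumbent signal sitting well above the noise floor."""
    noise = noise_floor_dbm + random.uniform(0.0, 2.0)
    return noise + 30.0 if occupied else noise

def find_spectrum_holes(occupancy, threshold_dbm=-95.0):
    """Energy detection: channels measuring below the threshold are treated
    as free and available for secondary use."""
    return [ch for ch, occupied in enumerate(occupancy)
            if sense_channel(occupied) < threshold_dbm]

# Channels 1 and 3 carry incumbent signals; the rest are idle.
print(find_spectrum_holes([False, True, False, True, False]))  # [0, 2, 4]
```

Real cognitive radio systems combine such sensing with databases and cooperation between nodes, precisely because simple energy detection struggles with hidden nodes and fading.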

The paper is embedded as follows:



Thursday, 23 October 2014

Detailed whitepaper on Carrier Aggregation by 4G Americas

4G Americas has published a detailed whitepaper on Carrier Aggregation (CA). It's a very good, detailed document for anyone wishing to study CA.


Two very important features that have come as part of CA enhancements are the multiple timing advance values introduced in Release-11 and TDD-FDD joint operation, which came as part of Release-12.

While it's good to see that up to 3-carrier CA is now possible as part of Rel-12, and, as I mentioned in my last post, we need this to achieve the 'real' 4G, we have to remember at the same time that CA makes the chipsets very complex and may affect the sensitivity of the RF receivers.
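To get a feel for why 3-carrier CA matters for 'real' 4G speeds, here is a rough back-of-the-envelope sketch. The 7.5 Mbps/MHz figure approximates 2x2 MIMO with 64-QAM (75 Mbps in 10 MHz) and is a crude planning assumption, not a 3GPP value:

```python
def ca_peak_rate_mbps(carrier_bw_mhz, mbps_per_mhz=7.5):
    """Rough peak downlink rate over a set of aggregated component carriers.
    7.5 Mbps/MHz approximates 2x2 MIMO with 64-QAM (75 Mbps in 10 MHz);
    it is a crude planning figure, not a spec value."""
    return sum(bw * mbps_per_mhz for bw in carrier_bw_mhz)

print(ca_peak_rate_mbps([20]))          # 150.0: a single 20 MHz carrier
print(ca_peak_rate_mbps([20, 20, 20]))  # 450.0: the Rel-12 3-carrier case
```

The aggregated carriers need not be contiguous or even in the same band, which is exactly what makes the RF front-end and chipset design so complex.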

Anyway, here is the 4G Americas whitepaper.


LTE Carrier Aggregation Technology Development and Deployment Worldwide from Zahid Ghadialy

You can read more about the 4G Americas whitepaper in their press release here.

Saturday, 26 July 2014

Observed Time Difference Of Arrival (OTDOA) Positioning in LTE

It's been a while since I wrote anything on Positioning. The network architecture for the positioning entities can be seen in my old blog post here.
Qualcomm has recently released a whitepaper on OTDOA (Observed Time Difference Of Arrival) positioning. It's quite a detailed paper with lots of technical insights.

There are also signalling flows and an example of how reference signals are used for the OTDOA calculation. Have a look at the whitepaper, embedded below, for details.



Tuesday, 18 February 2014

The Rise and Rise of '4G' - Update on Release-11 & Release-12 features

A recent GSMA report suggests that China will be a significant player in the field of 4G, with up to 900 million 4G users by 2020. This is not surprising, as the largest operator, China Mobile, desperately wants to move its user base to 4G. For 3G it was stuck with TD-SCDMA, or the TDD LCR option, a 3G technology that is not as good as its FDD variant, commonly known as UMTS.

This trend of migrating to 4G is not unique to China. A recent report (embedded below) by 4G Americas predicts that by the end of 2018, HSPA/HSPA+ will still be the most popular technology, while LTE will be making an impact with 1.3 billion connected devices. The main reason for HSPA being so dominant is that HSPA devices are mature and available now. LTE devices, though available, are still slightly expensive. At the same time, operators are taking time to achieve seamless 4G coverage throughout the region. My guess would be that the number of devices that are 4G-ready will be much higher than 1.3 billion.

It is interesting to see that the number of 'Non-Smartphones' remains constant while their share goes down. It would be useful to break down the Smartphone numbers into 'Phablet' and 'non-Phablet' categories.

Anyway, the 4G Americas report from which the information above is extracted contains lots of interesting detail about Release-11 and Release-12 HSPA+ and LTE. The only problem I found is that it's too long for most people to go through completely.

The whitepaper contains the following information:

3GPP Rel-11 standards for HSPA+ and LTE-Advanced were frozen in December 2012 with the core network protocols stable in December 2012 and Radio Access Network (RAN) protocols stable in March 2013. Key features detailed in the paper for Rel-11 include:
HSPA+:
  • 8-carrier downlink operation (HSDPA)
  • Downlink (DL) 4-branch Multiple Input Multiple Output (MIMO) antennas
  • DL Multi-Flow Transmission
  • Uplink (UL) dual antenna beamforming (both closed and open loop transmit diversity)
  • UL MIMO with 64 Quadrature Amplitude Modulation (64-QAM)
  • Several CELL_FACH (Forward Access Channel) state enhancements (for smartphone type traffic) and non-contiguous HSDPA Carrier Aggregation (CA)
LTE-Advanced:
  • Carrier Aggregation (CA)
  • Multimedia Broadcast Multicast Services (MBMS) and Self Organizing Networks (SON)
  • Introduction to the Coordinated Multi-Point (CoMP) feature for enabling coordinated scheduling and/or beamforming
  • Enhanced Physical Control Channel (EPDCCH)
  • Further enhanced Inter-Cell Interference Coordination (FeICIC) for devices with interference cancellation
Finally, Rel-11 introduces several network and service related enhancements (most of which apply to both HSPA and LTE):
  • Machine Type Communications (MTC)
  • IP Multimedia Systems (IMS)
  • Wi-Fi integration
  • Home NodeB (HNB) and Home e-NodeB (HeNB)
3GPP started work on Rel-12 in December 2012 and an 18-month timeframe for completion was planned. The work continues into 2014 and areas that are still incomplete are carefully noted in the report.  Work will be ratified by June 2014 with the exception of RAN protocols which will be finalized by September 2014. Key features detailed in the paper for Rel-12 include:
HSPA+:
  • Universal Mobile Telecommunication System (UMTS) Heterogeneous Networks (HetNet)
  • Scalable UMTS Frequency Division Duplex (FDD) bandwidth
  • Enhanced Uplink (EUL) enhancements
  • Emergency warning for Universal Terrestrial Radio Access Network (UTRAN)
  • HNB mobility
  • HNB positioning for Universal Terrestrial Radio Access (UTRA)
  • Machine Type Communications (MTC)
  • Dedicated Channel (DCH) enhancements
LTE-Advanced:
  • Active Antenna Systems (AAS)
  • Downlink enhancements for MIMO antenna systems
  • Small cell and femtocell enhancements
  • Machine Type Communication (MTC)
  • Proximity Service (ProSe)
  • User Equipment (UE)
  • Self-Optimizing Networks (SON)
  • Heterogeneous Network (HetNet) mobility
  • Multimedia Broadcast/Multicast Services (MBMS)
  • Local Internet Protocol Access/Selected Internet Protocol Traffic Offload (LIPA/SIPTO)
  • Enhanced International Mobile Telecommunications Advanced (eIMTA) and Frequency Division Duplex-Time Division Duplex Carrier Aggregation (FDD-TDD CA)
Work in Rel-12 also included features for network and services enhancements for MTC, public safety and Wi-Fi integration, system capacity and stability, Web Real-Time Communication (WebRTC), further network energy savings, multimedia and Policy and Charging Control (PCC) framework.


Friday, 13 December 2013

Advancements in Congestion control technology for M2M


NTT Docomo recently published a new article (embedded below) on congestion control approaches for M2M. In their own words:

Since 3GPP Release 10 (Rel. 10) in 2010, there has been active study of technical specifications to develop M2M communications further, and NTT DOCOMO has been contributing proactively to creating these technical specifications. In this article, we describe two of the most significant functions standardized between 3GPP Rel. 10 and Rel. 11: the M2M Core network communications infrastructure, which enables M2M service operators to introduce solutions more easily, and congestion handling technologies, which improve reliability on networks accommodating a large number of terminals.

The complete article is as follows:



Other related posts:

Monday, 4 November 2013

Key challenges with automatic Wi-Fi / Cellular handover

Recently at a conference I mentioned that 3GPP is working on standards that will allow automatic and seamless handovers between Cellular and Wi-Fi. At the same time, operators may want a control whereby they can automatically switch on a user's Wi-Fi radio (if switched off) and offload to Wi-Fi whenever possible. This upset quite a few people, who argued about the problems this could cause and the issues that would need to be solved.

I have been meaning to list the possible issues with this scenario of automatically handing over between Wi-Fi and cellular; luckily, I found that they have been listed very well in the recent 4G Americas whitepaper. The whitepaper is embedded below, but here are the issues I had been wanting to discuss:

In particular, many of the challenges facing Wi-Fi/Cellular integration have to do with realizing a complete intelligent network selection solution that allows operators to steer traffic in a manner that maximizes user experience and addresses some of the challenges at the boundaries between RATs (2G, 3G, LTE and Wi-Fi).
Figure 1 (see above) illustrates four of the key challenges at the Wi-Fi/Cellular boundary.
1) Premature Wi-Fi Selection: As devices with Wi-Fi enabled move into Wi-Fi coverage, they reselect to Wi-Fi without comparative evaluation of existing cellular and incoming Wi-Fi capabilities. This can result in degradation of end user experience due to premature reselection to Wi-Fi. Real time throughput based traffic steering can be used to mitigate this.
2) Unhealthy choices: In a mixed wireless network of LTE, HSPA and Wi-Fi, reselection may occur to a strong Wi-Fi network, which is under heavy load. The resulting ‘unhealthy’ choice results in a degradation of end user experience as performance on the cell edge of a lightly loaded cellular network may be superior to performance close to a heavily loaded Wi-Fi AP. Real time load based traffic steering can be used to mitigate this.
3) Lower capabilities: In some cases, reselection to a strong Wi-Fi AP may result in reduced performance (e.g. if the Wi-Fi AP is served by lower bandwidth in the backhaul than the cellular base station presently serving the device). Evaluation of criteria beyond wireless capabilities prior to access selection can be used to mitigate this.
4) Ping-Pong: This is an example of reduced end user experience due to ping-ponging between Wi-Fi and cellular accesses. This could be a result of premature Wi-Fi selection and mobility in a cellular environment with signal strengths very similar in both access types. Hysteresis concepts used in access selection similar to cellular IRAT, applied between Wi-Fi and cellular accesses can be used to mitigate this.
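The hysteresis idea in point 4 can be sketched in a few lines: stay on the current access unless the other one is better by a clear margin. The 5 dB margin and the signal-strength-only comparison are simplifications; a full solution would also weigh load and backhaul capability, as points 2 and 3 note:

```python
def select_access(current, cellular_dbm, wifi_dbm, hysteresis_db=5.0):
    """Stay on the current access unless the other is stronger by a clear
    margin, mirroring cellular IRAT-style hysteresis. Comparing raw signal
    strength alone (and the 5 dB margin) is a simplification."""
    if current == "wifi":
        return "cellular" if cellular_dbm > wifi_dbm + hysteresis_db else "wifi"
    return "wifi" if wifi_dbm > cellular_dbm + hysteresis_db else "cellular"

# Near-equal signal strengths no longer cause ping-ponging:
print(select_access("cellular", cellular_dbm=-80, wifi_dbm=-78))  # cellular
print(select_access("cellular", cellular_dbm=-80, wifi_dbm=-70))  # wifi
```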
Here is the paper:



Tuesday, 15 October 2013

What is Network Function Virtualisation (NFV)?


Software Defined Networking (SDN) and Network Function Virtualization (NFV) are the two recent buzzwords taking the telecoms market by storm. Every network vendor now has some kind of strategy to use NFV and SDN to help operators save money. So what exactly is NFV? I found a good, simple video by Spirent that explains this well. Here it is:


To add a description to this, I would borrow an explanation and a very good example from Wendy Zajack, Director of Product Communications, Alcatel-Lucent, on the ALU blog:

Let’s take this virtualization concept to a network environment. For me, cloud means I can get my stuff wherever I am and on any device – meaning I can pull out my smart phone, my iPad, my computer – and show my mom the latest pictures of her grand kids. I am not limited to only having one type of photo album to put my photos in. I can also show her both photos and videos together, and am not just limited to showing her the kids in one format and on one device.
Today in a telecom network there is a lot of equipment that can only do one thing. These machines are focused on what they do and they do it really well – this is why telecom providers are considered so ‘trusted.’ Back in the days of landline phones, even when the power was out you could always make a call. These machines run alone with dedicated resources. They are made by various different vendors and speak various languages or ‘protocols’ to exchange information with each other when necessary. Some don’t even talk at all – they are just set up and then left to run. So, every day your operator is running a mini United Nations and corralling that to get you access to all of your stuff. But it is a United Nations with a fixed number of seats, with only a specific nation allowed to occupy a specific seat, and with the seat left unused if there is a no-show. That is a lot of underutilized equipment that is tough and expensive to manage. It also has a shelf life of 15 years… while your average store-bought computer is doubling in speed every 18 months.
Virtualizing the network means the ability to run a variety of applications (or functions) on a standard piece of computing equipment, rather than on dedicated, specialized processors and equipment, to drive lower costs (more value), more re-use of the equipment between applications (more sharing), and a greater ability to change what is using the equipment to meet changing user needs (more responsiveness). This has already started in enterprises as a way to control IT costs, improve performance and, of course, be way greener.
To give this a sports analogy – imagine if in American football instead of having specialists in all the different positions (QB, LB, RB, etc), you had a bunch of generalists who could play any position – you might only need a 22 or 33 man squad (2 or 3 players for every position) rather than the normal squad of  53.   The management of your team would be much simpler as ‘one player fits all’ positions.   It is easy to see how this would benefit a service provider – simplifying the procurement and management of the network elements (team) and giving them the ability to do more, with less.

Dimitris Mavrakis from Informa wrote an excellent summary of the IIR SDN and NFV conference on the Informa blog here. It's worth reading his article, but I want to highlight one section that shows how operators think deployment would be done:

The speaker from BT provided a good roadmap for implementing SDN and NFV:
  1. Start with a small part of the network, which may not be critical for the operation of the whole. Perhaps introduce incremental capacity upgrades or improvements in specific and isolated parts of the network.
  2. Integrate with existing OSS/BSS and other parts of the network.
  3. Plan a larger-scale rollout so that it fits with the longer-term network strategy.
Deutsche Telekom is now considered to be in the first phase of deployment, with a small trial at Hrvatski Telekom, its Croatian subsidiary, called Project Terrastream. BT, Telefonica, NTT Communications and other operators are at a similar stage, although DT is considered the first to deploy SDN and NFV for commercial network services beyond the data center.
Stage 2 in the roadmap is a far more complicated task. Integrating with existing components that may perform the same function but are not virtualized requires east-west APIs that are not clearly defined, especially when a network is multivendor. This is a very active point of discussion, but it remains to be seen whether Tier-1 vendors will be willing to openly integrate with their peers and even smaller, specialist vendors. OSS/BSS is also a major challenge, where multivendor networks are controlled by multiple systems and introducing a new service may require revising several parameters in many of these OSS/BSS consoles. This is another area that is not likely to change rapidly but rather in small, incremental steps.
The final stage is perhaps the biggest barrier due to the financial commitment and resources required. Long-term strategy may translate to five or even 10 years ahead – when networks are fully virtualized – and the economic environment may not allow such bold investments. Moreover, it is not clear if SDN and NFV guarantee new services and revenues outside the data center or operator cloud. If they do not, both technologies – and similar IT concepts – are likely to be deployed incrementally and replace equipment that reaches end-of-life. Cost savings in the network currently do not justify forklift upgrades or the replacement of adequately functional network components.
There is also a growing realization that bare-metal platforms (i.e., the proprietary hardware-based platforms that power today’s networks) are here to stay for several years. This hardware has been customized and adapted for use in telecom networks, allowing high performance for radio, core, transport, fixed and optical networks. Replacing these high-capacity components with virtualized ones is likely to affect performance significantly and operators are certainly not willing to take the risk of disrupting the operation of their network.
A major theme at the conference was that proprietary platforms (particularly ATCA) will be replaced by common off-the-shelf (COTS) hardware. ATCA is a hardware platform designed specifically for telecoms, but several vendors have adapted the platform to their own cause, creating fragmentation, incompatibility and vendor lock-in. Although ATCA is in theory telecoms-specific COTS, proprietary extensions have forced operators to turn to COTS, which is now driven by IT vendors, including Intel, HP, IBM, Dell and others.


ETSI has just published first specifications on NFV. Their press release here says:

ETSI has published the first five specifications on Network Functions Virtualisation (NFV). This is a major milestone towards the use of NFV to simplify the roll-out of new network services, reduce deployment and operational costs and encourage innovation.
These documents clearly identify an agreed framework and terminology for NFV which will help the industry to channel its efforts towards fully interoperable NFV solutions. This in turn will make it easier for network operators and NFV solutions providers to work together and will facilitate global economies of scale.
The IT and Network industries are collaborating in ETSI's Industry Specification Group for Network Functions Virtualisation (NFV ISG) to achieve a consistent approach and common architecture for the hardware and software infrastructure needed to support virtualised network functions. Early NFV deployments are already underway and are expected to accelerate during 2014-15. These new specifications have been produced in less than 10 months to satisfy the high industry demand – NFV ISG only began work in January 2013.
NFV ISG was initiated by the world's leading telecoms network operators. The work has attracted broad industry support and participation has risen rapidly to over 150 companies of all sizes from all over the world, including network operators, telecommunication equipment vendors, IT vendors and technology providers. Like all ETSI standards, these NFV specifications have been agreed by a consensus of all those involved.
The five published documents (which are publicly available via www.etsi.org/nfv) include four ETSI Group Specifications (GSs) designed to align understanding about NFV across the industry. They cover NFV use cases, requirements, the architectural framework, and terminology. The fifth GS defines a framework for co-ordinating and promoting public demonstrations of Proof of Concept (PoC) platforms illustrating key aspects of NFV. Its objective is to encourage the development of an open ecosystem by integrating components from different players.
Work is continuing in NFV ISG to develop further guidance to industry, and more detailed specifications are scheduled for 2014. In addition, to avoid the duplication of effort and to minimise fragmentation amongst multiple standards development organisations, NFV ISG is undertaking a gap analysis to identify what additional work needs to be done, and which bodies are best placed to do it.
The ETSI specifications are available at: http://www.etsi.org/technologies-clusters/technologies/nfv

The first document that shows various use cases is embedded below:


Tuesday, 8 October 2013

SON in LTE Release-11


Very timely of 4G Americas to release a whitepaper on SON, considering that the SON conference finished just last week. This whitepaper contains lots of interesting details and the status as of Rel-11, which is the latest complete release available. I will probably look at some features in detail later on as separate posts. The complete paper is embedded below and is available from the 4G Americas website here.


Thursday, 3 October 2013

Case study of SKT deployment using the C-RAN architecture


Recently I came across this whitepaper by iGR, in which they have done a case study on the SKT deployment using C-RAN. The main points from the whitepaper can be summarised as follows:

This approach created several advantages for SK Telecom – or for any operator that might implement a similar solution – including the:

  • Maximum re-use of existing fiber infrastructure to reduce the need for new fiber runs which ultimately reduced the time to market and capital costs.
  • Ability to quickly add more ONTs to the fiber rings so as to support additional RAN capacity when needed.
  • Support of multiple small cells on a single fiber strand. This is critical to reducing costs and having the flexibility to scale.
  • Reduction of operating expenses.
  • Increased reliability due to the use of fiber rings with redundancy.
  • Support for both licensed and unlicensed RAN solutions, including WiFi. Thus, the fronthaul architecture could support LTE and WiFi RANs on the same system.
As a result of its implementation, SK Telecom rolled out a new LTE network in 12 months rather than 24 and reduced operating expenses in the first year by approximately five percent. By 2014, SK Telecom expects an additional 50 percent OpEx savings due to the new architecture.
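For a feel of the numbers, here is one possible reading of the quoted savings; the paper does not state a baseline, and it is ambiguous whether the further 50 percent is relative to the first year or the original figure, so this is purely illustrative:

```python
# One possible reading of the quoted C-RAN savings, assuming an arbitrary
# baseline of 100 OpEx units per year (the iGR paper gives no baseline).
baseline = 100.0
year_one = baseline * (1 - 0.05)   # ~5% saving in the first year
year_two = year_one * (1 - 0.50)   # the further ~50% expected by 2014

print(year_one, year_two)  # 95.0 47.5
```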

Anyway, the paper is embedded below for your perusal and is available to download from the iGR website here.



Thursday, 26 September 2013

Multi-stream aggregation (MSA): Key technology for future networks


In our recent 5G presentation here, we outlined multi-technology carrier aggregation as one of the technologies for the future networks. Some of the discussions that I had on this topic later on highlighted the following:
  1. This is generally referred to as Multi-stream aggregation (MSA)
  2. We will see this much sooner than 5G, probably from LTE-A Rel-13 onwards 


Huawei have a few documents on this topic. One such document is embedded below, and another, more technical document is available on Slideshare here.



Friday, 13 September 2013

LTE for Utilities and Smart Grids

This has been an area of interest for the last couple of years. Discussions have centred around "Is LTE fit for IoT?", "Which technology for IoT?", "Is it economical to use LTE for M2M?", "Would small cells be useful for M2M?", etc.

Ericsson has recently published a whitepaper titled "LTE for utilities - supporting smart grids". One of the tables that caught my eye is as follows:


LTE would be ideally suited for some of the "Performance class" requirements where the transfer time requirement is less than 100ms. Again, it can always be debated whether WiFi will meet the requirements in many cases and should therefore be used instead of LTE, etc. I will let you form your own conclusions; if you are passionate and have an opinion, feel free to leave a comment.
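The "Performance class" point is essentially a latency-budget filter: a technology qualifies for a traffic class only if its typical round-trip time fits within the class's transfer-time requirement. A sketch with made-up smart-grid classes (the names and values are illustrative, not from the Ericsson table):

```python
# Hypothetical smart-grid traffic classes with transfer-time budgets in ms;
# the names and values are illustrative, not from the Ericsson table.
REQUIREMENTS = {
    "protection signalling": 20,
    "SCADA monitoring": 100,
    "meter reading": 2000,
}

def feasible_classes(typical_latency_ms, requirements=REQUIREMENTS):
    """Traffic classes whose transfer-time budget a technology can meet."""
    return [name for name, budget in requirements.items()
            if budget >= typical_latency_ms]

print(feasible_classes(50))   # ['SCADA monitoring', 'meter reading']
print(feasible_classes(150))  # ['meter reading']
```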

The whitepaper is embedded below:



Related posts:


Saturday, 31 August 2013

VoLTE Bearers

While going through the Anritsu whitepaper on VoLTE, I found this picture, which explains the concept of bearers in a VoLTE call well. From the whitepaper:

All networks and mobile devices are required to utilize a common access point name (APN) for VoLTE, namely, “IMS”. Unlike many legacy networks, LTE networks employ the “always-on” conception of packet connectivity: Devices have PDN connectivity virtually from the moment they perform their initial attach to the core network. During the initial attach procedure, some devices choose to name the access point through which they prefer to connect. However, mobile devices are not permitted to name the VoLTE APN during initial attach, i.e., to utilize the IMS as their main PDN, but rather to establish a connection with the IMS AP separately. Thus, VoLTE devices must support multiple simultaneous default EPS bearers.

Note that because the VoLTE APN is universal, mobile devices will always connect through the visited PLMN’s IMS PDN-GW. This architecture also implies the non-optionality of the P-CSCF:

As stated, VoLTE sessions employ two or three DRBs. This, in turn, implies the use of one default EPS bearer plus one or two dedicated EPS bearers. The default EPS bearer is always used for SIP signaling and exactly one dedicated EPS bearer is used for voice packets (regardless of the number of active voice media streams.) XCAP signaling may be transported on its own dedicated EPS bearer – for a total of three active EPS bearers – or it may be multiplexed with the SIP signaling on the default EPS bearer, in which case only two EPS bearers are utilized.

My understanding is that initially, when the UE is switched on, a default bearer with QCI 9 (see old posts on QoS/QCI here) is established, which is used for all the signalling. Later on, another default bearer with QCI 5 is established with the IMS CN. When a VoLTE call is being set up, a dedicated bearer with QCI 1 is set up for the voice call. As the article says, another dedicated bearer may be needed for XCAP signalling. If a video call on top of VoLTE is being used, then an additional dedicated bearer with QCI 2 will be set up. Note that the voice part will still be carried by the dedicated bearer with QCI 1.
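The bearer sequence above can be sketched in a few lines of Python. This is purely my own illustration (the data structure and helper are mine, not from the whitepaper); the QCI values follow 3GPP TS 23.203:

```python
# Illustrative sketch of EPS bearers in a VoLTE (+video) call.
# QCI values per 3GPP TS 23.203; the structure itself is my own illustration.

BEARERS = [
    # (bearer type, QCI, what it carries)
    ("default",   9, "internet APN - general traffic"),
    ("default",   5, "IMS APN - SIP signalling (and possibly XCAP)"),
    ("dedicated", 1, "voice media (all active voice streams)"),
    ("dedicated", 2, "video media (if a video call is added)"),
]

def bearers_for(state):
    """Return the bearers expected in a given call state."""
    if state == "idle":
        return BEARERS[:2]   # two default bearers: attach + IMS PDN
    if state == "voice":
        return BEARERS[:3]   # plus a dedicated QCI 1 bearer for voice
    if state == "video":
        return BEARERS       # plus a dedicated QCI 2 bearer for video
    raise ValueError(state)

for bearer in bearers_for("video"):
    print(bearer)
```

Running it for the "video" state lists all four bearers, matching the count described above: two default plus two dedicated.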

If you disagree or have more insight, please feel free to add a comment at the end of the post.

The whitepaper is embedded below and is available to download from slideshare.



Related posts:

Wednesday, 17 July 2013

Decision Tree of Transmission Modes (TM) for LTE


4G Americas has recently published a whitepaper titled "MIMO and Smart Antennas for Mobile Broadband Systems" (available here). The above picture and the following text are from that whitepaper:

Figure 3 above shows the taxonomy of antenna configurations supported in Release-10 of the LTE standard (as described in 3GPP Technical Specification TS 36.211, 36.300). The LTE standard supports 1, 2, 4 or 8 base station transmit antennas and 2, 4 or 8 receive antennas in the User Equipment (UE), designated as: 1x2, 1x4, 1x8, 2x2, 2x4, 2x8, 4x2, 4x4, 4x8, and 8x2, 8x4, and 8x8 MIMO, where the first digit is the number of antennas per sector in the transmitter and the second number is the number of antennas in the receiver. The cases where the base station transmits from a single antenna or a single dedicated beam are shown in the left of the figure. The most commonly used MIMO Transmission Mode (TM4) is in the lower right corner, Closed Loop Spatial Multiplexing (CLSM), when multiple streams can be transmitted in a channel with rank 2 or more.

Beyond the single antenna or beamforming array cases diagrammed above, the LTE standard supports Multiple Input Multiple Output (MIMO) antenna configurations as shown on the right of Figure 3. This includes Single User (SU-MIMO) protocols using either open loop or closed loop modes as well as transmit diversity and Multi-User MIMO (MU-MIMO). In the closed loop MIMO mode, the terminals provide channel feedback to the eNodeB with Channel Quality Information (CQI), Rank Indications (RI) and Precoder Matrix Indications (PMI). These mechanisms enable channel state information at the transmitter which improves the peak data rates, and is the most commonly used scheme in current deployments. However, this scheme provides the best performance only when the channel information is accurate and when there is a rich multi-path environment. Thus, closed loop MIMO is most appropriate in low mobility environments such as with fixed terminals or at pedestrian speeds.

In the case of high vehicular speeds, Open Loop MIMO may be used, but because the channel state information is not timely, the PMI is not considered reliable and is typically not used. In TDD networks, the channel is reciprocal and thus the DL channel can be more accurately known based on the uplink transmissions from the terminal (the forward link’s multipath channel signature is the same as the reverse link’s – both paths use the same frequency block). Thus, MIMO improves TDD networks under wider channel conditions than in FDD networks.

One may visualize spatial multiplexing MIMO operation as subtracting the strongest received stream from the total received signal so that the next strongest signal can be decoded and then the next strongest, somewhat like a multi-user detection scheme. However, to solve these simultaneous equations for multiple unknowns, the MIMO algorithms must have relatively large Signal to Interference plus Noise ratios (SINR), say 15 dB or better. With many users active in a base station’s coverage area, and multiple base stations contributing interference to adjacent cells, the SINR is often in the realm of a few dB. This is particularly true for frequency reuse 1 systems, where only users very close to the cell site experience SINRs high enough to benefit from spatial multiplexing SU-MIMO. Consequently, SU-MIMO works to serve the single user (or few users) very well, and is primarily used to increase the peak data rates rather than the median data rate in a network operating at full capacity.

Angle of Arrival (AoA) beamforming schemes form beams which work well when the base station is clearly above the clutter and when the angular spread of the arrival is small, corresponding to users that are well localized in the field of view of the sector; in rural areas, for example. To form a beam, one uses co-polarized antenna elements spaced rather closely together, typically lamda/2, while the spatial diversity required of MIMO requires either cross-polarized antenna columns or columns that are relatively far apart. Path diversity will couple more when the antennas columns are farther apart, often about 10 wavelengths (1.5m or 5’ at 2 GHz). That is why most 2G and 3G tower sites have two receive antennas located at far ends of the sector’s platform, as seen in the photo to the right. The signals to be transmitted are multiplied by complex-valued precoding weights from standardized codebooks to form the antenna patterns with their beam-like main lobes and their nulls that can be directed toward sources of interference. The beamforming can be created, for example, by the UE PMI feedback pointing out the preferred precoder (fixed beam) to use when operating in the closed loop MIMO mode TM4.
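The antenna-spacing figures quoted above are easy to check with a quick back-of-the-envelope calculation (mine, not the whitepaper's):

```python
# Wavelength and antenna-column spacing at a 2 GHz carrier.
c = 299_792_458.0     # speed of light, m/s
f = 2e9               # 2 GHz carrier frequency
lam = c / f           # wavelength ~ 0.15 m

beamforming_spacing = lam / 2    # co-polarized columns, ~7.5 cm apart
diversity_spacing = 10 * lam     # widely spaced columns, ~1.5 m apart

print(f"wavelength        = {lam:.3f} m")
print(f"lambda/2 spacing  = {beamforming_spacing:.3f} m")
print(f"10-lambda spacing = {diversity_spacing:.2f} m")
```

The 10-wavelength figure comes out at roughly 1.5 m, matching the "1.5m or 5 feet at 2 GHz" figure in the quoted text.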

For more details, see the whitepaper available here.
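As a rough illustration of why spatial multiplexing pays off only at high SINR, here is a Shannon-capacity comparison. This is a simplified sketch of my own (idealized equal-power streams, no correlation), not a calculation from the whitepaper:

```python
import math

def capacity_bps_per_hz(sinr_db, streams=1):
    """Idealized Shannon capacity with the SINR split equally across streams."""
    sinr = 10 ** (sinr_db / 10)
    # Each stream sees its share of the total SINR (crude, but illustrative).
    return streams * math.log2(1 + sinr / streams)

for sinr_db in (3, 15):
    c1 = capacity_bps_per_hz(sinr_db, streams=1)
    c2 = capacity_bps_per_hz(sinr_db, streams=2)
    print(f"SINR {sinr_db:>2} dB: 1 stream {c1:.2f}, 2 streams {c2:.2f} bps/Hz")
```

At 3 dB the second stream adds little, while at 15 dB the gain over a single stream is substantial, consistent with the ~15 dB rule of thumb for spatial multiplexing quoted above.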

Related posts:


Monday, 1 July 2013

Is it too early to talk '5G'?


While LTE/LTE-A (or 4G) is being rolled out, there is already talk about 5G. Last week at the LTE World Summit in Amsterdam, there was a whole track on what 5G should be, without much technical detail. A couple of months back, Samsung announced that they had achieved a 5G breakthrough. In my talk back in May, I suggested that 5G would be an evolution on the radio access side, while the core would evolve only a little. Anyway, it's too early to speculate what the access technology for 5G would be.

Ericsson has published a '5G' whitepaper where they talk about the vision and the why and what of 5G rather than going into any technical details. It is embedded below:

Tuesday, 28 May 2013

NEC on 'Radio Access Network' (RAN) Sharing

It's been a while since we looked at anything to do with network sharing. The last post, with an embedded presentation from Dr. Kim Larsen, has already crossed 11K views on slideshare. Over the last few years there has been a raft of announcements about various operators sharing their networks locally with rivals to reduce both their CAPEX and their OPEX. Even though I understand the reasons behind network sharing, I believe that the end consumers end up losing, as they may not have a means of differentiating between the different operators on a macro cell.

Certain operators, on the other hand, offer differentiators like residential femtocells that can enhance indoor coverage, or tie-ups with WiFi hotspot providers that give their customers WiFi access on the move. The following whitepaper from NEC is an interesting read for understanding how RAN sharing in LTE would work.



Wednesday, 24 April 2013

eMBMS Release-11 enhancements

Continuing on the eMBMS theme: in the presentation in the last post, there was an introduction to the eMBMS protocols and codecs, and a mention of the DASH protocol. This article from the IEEE Communications magazine provides insight into the workings of eMBMS and the potential it holds.


Friday, 12 April 2013

Myths and Challenges in Future Wireless Access



An interesting article from the recent IEEE ComSoc magazine. Table 1 on page 5 is an interesting comparison of how different players reach the magical '1000x' capacity increase. Even though Huawei shows 100x, which may be more realistic, the industry is sticking with the 1000x figure.

Qualcomm is touting a similar 1000x figure as I showed in a post earlier here.

Thursday, 31 January 2013