
Tuesday 25 October 2011

Donor eNB (DeNB) and Relay Node (RN)

Extracted from 3GPP 36.300:

The eNB hosts the following functions:
- Functions for Radio Resource Management: Radio Bearer Control, Radio Admission Control, Connection Mobility Control, Dynamic allocation of resources to UEs in both uplink and downlink (scheduling);
- IP header compression and encryption of user data stream;
- Selection of an MME at UE attachment when no routing to an MME can be determined from the information provided by the UE;
- Routing of User Plane data towards Serving Gateway;
- Scheduling and transmission of paging messages (originated from the MME);
- Scheduling and transmission of broadcast information (originated from the MME or O&M);
- Measurement and measurement reporting configuration for mobility and scheduling;
- Scheduling and transmission of PWS (which includes ETWS and CMAS) messages (originated from the MME);
- CSG handling;
- Transport level packet marking in the uplink.
The DeNB hosts the following functions in addition to the eNB functions:
- S1/X2 proxy functionality for supporting RNs;
- S11 termination and S-GW/P-GW functionality for supporting RNs.

E-UTRAN supports relaying by having a Relay Node (RN) wirelessly connect to an eNB serving the RN, called Donor eNB (DeNB), via a modified version of the E-UTRA radio interface, the modified version being called the Un interface. The RN supports the eNB functionality meaning it terminates the radio protocols of the E-UTRA radio interface, and the S1 and X2 interfaces. From a specification point of view, functionality defined for eNBs, e.g. RNL and TNL, also applies to RNs unless explicitly specified. RNs do not support NNSF. In addition to the eNB functionality, the RN also supports a subset of the UE functionality, e.g. physical layer, layer-2, RRC, and NAS functionality, in order to wirelessly connect to the DeNB.

The architecture for supporting RNs is shown in Figure 4.7.2-1. The RN terminates the S1, X2 and Un interfaces. The DeNB provides S1 and X2 proxy functionality between the RN and other network nodes (other eNBs, MMEs and S-GWs). The S1 and X2 proxy functionality includes passing UE-dedicated S1 and X2 signalling messages as well as GTP data packets between the S1 and X2 interfaces associated with the RN and the S1 and X2 interfaces associated with other network nodes. Due to the proxy functionality, the DeNB appears as an MME (for S1-MME), an eNB (for X2) and an S-GW (for S1-U) to the RN.
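To make the proxy idea a little more concrete, here is a minimal sketch of how a DeNB could relay a UE-dedicated S1-AP message from the RN towards the real MME while re-mapping the UE-association identifier. The class and field names are my own illustration, not anything defined in 36.300.

```python
# Illustrative sketch (not from 36.300): a DeNB's S1 proxy relaying a
# UE-dedicated S1-AP message between the RN-facing and MME-facing legs.
# All class/field names here are hypothetical.

class S1ProxyDeNB:
    def __init__(self):
        # Maps the eNB UE S1AP ID used on the RN leg to the one used on the MME leg
        self.ue_id_map = {}

    def forward_uplink(self, msg_from_rn):
        """Relay a UE-dedicated S1-AP message from the RN towards the real MME."""
        rn_leg_id = msg_from_rn["enb_ue_s1ap_id"]
        # Allocate (or reuse) an ID on the MME-facing S1 connection
        mme_leg_id = self.ue_id_map.setdefault(rn_leg_id, len(self.ue_id_map) + 1)
        relayed = dict(msg_from_rn, enb_ue_s1ap_id=mme_leg_id)
        return relayed  # the DeNB looks like an eNB to the MME and like an MME to the RN


proxy = S1ProxyDeNB()
print(proxy.forward_uplink({"proc": "InitialUEMessage", "enb_ue_s1ap_id": 7}))
```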

For more details see - 3GPP TS 36.300 : Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2 (Release 10)

Wednesday 7 September 2011

Enhanced Voice Service (EVS) Codec for LTE Rel-10

It's been a while since we talked about codecs.


The traditional (narrowband) AMR (Adaptive Multi-Rate) codec operates on narrowband 200-3400 Hz signals at variable bit rates in the range of 4.75 to 12.2 kbps. It provides toll quality speech starting at 7.4 kbps, with near-toll quality and better robustness at lower rates and better reproduction of non-speech sounds at higher rates. The AMR-WB (Wideband) codec provides improved speech quality due to a wider speech bandwidth of 50–7000 Hz compared to narrowband speech coders, which in general are optimized for POTS wireline quality of 300–3400 Hz. A couple of years back Orange was in the news because they were the first to launch phones that support HD Voice (AMR-WB).

Extended Adaptive Multi-Rate – Wideband (AMR-WB+) is an audio codec that extends AMR-WB. It adds support for stereo signals and higher sampling rates. Another main improvement is the use of transform coding (transform coded excitation - TCX) additionally to ACELP. This greatly improves the generic audio coding. Automatic switching between transform coding and ACELP provides both good speech and audio quality with moderate bit rates.

While AMR-WB operates at an internal sampling rate of 12.8 kHz, AMR-WB+ supports various internal sampling frequencies ranging from 12.8 kHz to 38.4 kHz. AMR-WB uses a 16 kHz sampling frequency with a resolution of 14 bits left justified in a 16-bit word. AMR-WB+ uses 16/24/32/48 kHz sampling frequencies with a resolution of 16 bits in a 16-bit word.
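As a quick reference, the sampling and bit-rate figures quoted above can be collected in one place. The snippet below is only a summary in code form; the 8 kHz sampling rate for narrowband AMR is standard but not stated in the text above.

```python
# Quick reference for the codec figures quoted in this post (illustrative only).
CODEC_PARAMS = {
    "AMR":     {"audio_bandwidth_hz": (200, 3400), "bitrates_kbps": (4.75, 12.2),
                "sampling_khz": 8},                      # 8 kHz is the standard NB rate
    "AMR-WB":  {"audio_bandwidth_hz": (50, 7000), "sampling_khz": 16,
                "internal_sampling_khz": 12.8, "resolution_bits": 14},
    "AMR-WB+": {"sampling_khz": (16, 24, 32, 48),
                "internal_sampling_khz": (12.8, 38.4), "resolution_bits": 16},
}

for name, params in CODEC_PARAMS.items():
    print(name, params)
```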


Introduction of LTE (Long Term Evolution) brings enhanced quality for 3GPP multimedia services. The high throughput and low latency of LTE enable higher quality media coding than what is possible in UMTS. LTE-specific codecs have not yet been defined but work on them is ongoing in 3GPP. The LTE codecs are expected to improve the basic signal quality, but also to offer new capabilities such as extended audio bandwidth, stereo and multi-channels for voice and higher temporal and spatial resolutions for video. Due to the wide range of functionalities in media coding, LTE gives more flexibility for service provision to cope with heterogeneous terminal capabilities and transmission over heterogeneous network conditions. By adjusting the bit-rate, the computational complexity, and the spatial and temporal resolution of audio and video, transport and rendering can be optimised throughout the media path hence guaranteeing the best possible quality of service.

A feasibility study on Enhanced Voice Service (EVS) for LTE has recently been finalised in 3GPP with the results given in Technical Report 22.813 ‘‘Study of Use Cases and Requirements for Enhanced Voice Codecs in the Evolved Packet System (EPS)”. EVS is intended to provide substantially enhanced voice quality for conversational use, i.e. telephony. Improved transmission efficiency and optimised behaviour in IP environments are further targets. EVS also has potential for quality enhancement for non-voice signals such as music. The EVS study, conducted jointly by 3GPP SA4 (Codec) and SA1 (Services) working groups, identifies recommendations for key characteristics of EVS (system and service requirements, and high level technical requirements on codecs).

The study further proposes the development and standardization of a new EVS codec for LTE to be started. The codec is targeted to be developed by March 2011, in time for 3GPP Release 10.

The figure above illustrates the concept of EVS. The EVS codec will not replace the existing 3GPP narrowband and wideband codecs AMR and AMR-WB but will provide a complementary high quality codec via the introduction of higher audio bandwidths, in particular super wideband (SWB: 50–14,000 Hz). It will also support narrowband (NB: 200–3400 Hz) and wideband (WB: 50–7000 Hz) and may support fullband audio (FB: 20–20,000 Hz).

More details are available in the following whitepapers by Nokia [PDF]:

Friday 2 September 2011

Multipoint HSDPA / HSPA

The following is from 3GPP TR 25.872 - Technical Specification Group Radio Access Network; HSDPA Multipoint Transmission:

HSPA based mobile internet offerings are becoming very popular and data usage is increasing rapidly. Consequently, HSPA has begun to be deployed on more than one transmit antenna or more than one carrier. As an example, the single cell downlink MIMO (MIMO-Physical layer) feature was introduced in Release 7. This feature allowed a NodeB to transmit two transport blocks to a single UE from the same cell on a pair of transmit antennas thus improving data rates at high geometries and providing a beamforming advantage to the UE in low geometry conditions. Subsequently, in Release-8 and Release-9, the dual cell HSDPA (DC-HSDPA) and dual band DC-HSDPA features were introduced. Both these features allow the NodeB to serve one or more users by simultaneous operation of HSDPA on two different carrier frequencies in two geographically overlapping cells, thus improving the user experience across the entire cell coverage area. In Release 10 these concepts were extended so that simultaneous transmissions to a single UE could occur from four cells (4C-HSDPA).

When a UE falls into the softer or soft handover coverage region of two cells on the same carrier frequency, it would be beneficial for the non-serving cell to be able to schedule packets to this UE, thereby improving this particular user's experience, especially when the non-serving cell is partially loaded. MultiPoint HSDPA allows two cells to transmit packets to the same UE, providing improved user experience and system load balancing. MultiPoint HSDPA can operate on one or two frequencies.


There is also an interesting Qualcomm whitepaper on a related topic that is available to view and download here. The following is from that whitepaper:

The simplest form of Multipoint HSPA, Single Frequency Dual Cell HSPA (SFDC-HSPA), can be seen as an extension to the existing DC-HSPA feature. While DC-HSPA allows scheduling of two independent transport blocks to the mobile device (UE) from one sector on two frequency carriers, SFDC-HSPA allows scheduling of two independent transport blocks to the UE from two different sectors on the same carrier. In other words, it allows a primary and a secondary serving cell to simultaneously send different data to the UE. Therefore, the major difference between SFDC-HSPA and DC-HSPA operation is that the secondary transport block is scheduled to the UE from a different sector on the same frequency as the primary transport block. The UE also needs to have receive diversity (type 3i) to suppress interference from the other cell, as it will receive data on the same frequency from multiple serving cells. Figure 1 illustrates the high-level concept of SFDC-HSPA.
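As a rough illustration of the idea, the sketch below schedules two independent transport blocks to one UE in the same TTI from two different sectors on the same carrier. The function, the CQI-to-transport-block mapping and all names are my own invention, not the whitepaper's.

```python
# Illustrative sketch (not from the Qualcomm whitepaper): SFDC-HSPA scheduling,
# i.e. two independent transport blocks for one UE in the same TTI, from two
# different sectors on the SAME carrier frequency. Names are hypothetical.

def schedule_sfdc(ue, primary_cell, secondary_cell, carrier_mhz=2100.0):
    """Return per-sector transport blocks scheduled to the UE in one TTI."""
    # The UE must use a type 3i (interference-aware) receiver, since both
    # transport blocks arrive on the same frequency from different cells.
    assert ue["receiver"] == "type3i"
    grants = []
    for cell, stream in ((primary_cell, "primary"), (secondary_cell, "secondary")):
        tb_size_bits = 1000 * cell["cqi"]  # crude stand-in for a CQI -> TBS lookup
        grants.append({"sector": cell["id"], "carrier_mhz": carrier_mhz,
                       "stream": stream, "tb_bits": tb_size_bits})
    return grants


ue = {"id": "UE1", "receiver": "type3i"}
print(schedule_sfdc(ue, {"id": "sectorA", "cqi": 15}, {"id": "sectorB", "cqi": 9}))
```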

In the case where the two sectors involved in Multipoint HSPA transmission belong to the same NodeB (Intra-NodeB mode), as illustrated in Figure 2, there is only one transmission queue maintained at the NodeB and the RNC. The queue management and RLC layer operation are essentially the same as for DC-HSPA.

In the case where the two sectors belong to different NodeBs (Inter-NodeB mode), as illustrated in Figure 2, there is a separate transmission queue at each NodeB. RLC layer enhancements are needed at the RNC, along with enhanced flow control on the Iub interface between RNC and NodeB, in order to support Multipoint HSPA operation across NodeBs. These enhancements are discussed in more detail in Section 4. In both modes, combined feedback information (CQI and HARQ-ACK/NACK) needs to be sent on the uplink for both data streams received from the serving cells. On the uplink, the UE sends the CQIs seen on all sectors using the legacy channel structure, with timing aligned to the primary serving cell.

When two carriers are available in the network, there is an additional degree of freedom in the frequency domain. Dual Frequency Dual Cell HSPA (DFDC-HSPA) allows exploiting both frequency and spatial domains by scheduling two independent transport blocks to the UE from two different sectors on two different frequency carriers. For a DC-HSPA capable UE, this is equivalent to having independent serving cells on the two frequency carriers. In Figure 3, UE1 is in DC-HSPA mode, whereas UE2 is in DFDC-HSPA mode.

Dual Frequency Four-Cell HSPA (DF4C-HSPA) can be seen as a natural extension of DFDC-HSPA, suitable for networks with UEs having four receiver chains. DF4C-HSPA allows use of the four receiver chains by scheduling four independent transport blocks to the UE from two different sectors on two different frequency carriers. DF4C-HSPA is illustrated in Figure 4.

Like SFDC-HSPA, DFDC-HSPA and DF4C-HSPA can also be intra-NodeB or inter-NodeB, resulting in an impact on transmission queue management, Iub flow control and the RLC layer.

Advantages of Multipoint transmission:
* Cell Edge Performance Improvement
* Load balancing across sectors and frequency carriers
* Leveraging RRU and distributed NodeB technology

Multipoint HSPA improves the performance of cell edge users and helps balance the load disparity across neighboring cells. It leverages advanced receiver technology already available in mobile devices compatible with Release 8 and beyond to achieve this. The system impact of Multipoint HSPA on the network side is primarily limited to software upgrades affecting the upper layers (RLC and RRC).


Wednesday 3 August 2011

A look at "Idle state Signalling Reduction" (ISR)

The following is from 3GPP TS 23.401, Annex J:

General description of the ISR concept

Idle state Signalling Reduction (or ISR) aims at reducing the frequency of Tracking Area Update (TAU, in E-UTRAN) and Routing Area Update (RAU, in UTRAN/GERAN) procedures caused by UEs reselecting between E-UTRAN and GERAN/UTRAN when these are operated together. In particular, the update signalling between UE and network is reduced, but network-internal signalling is reduced as well. To some extent the reduction of network-internal signalling is also available when ISR is not used or not activated by the network.

UMTS described already RAs containing GERAN and UTRAN cells, which also reduces update signalling between UE and network. The combination of GERAN and UTRAN into the same RAs implies however common scaling, dimensioning and configuration for GERAN and UTRAN (e.g. same RA coverage, same SGSN service area, no GERAN or UTRAN only access control, same physical node for GERAN and UTRAN). As an advantage it does not require special network interface functionality for the purpose of update signalling reduction.

ISR enables signalling reduction with separate SGSN and MME and also with independent TAs and RAs. Thereby the interdependency is drastically minimized compared with the GERAN/UTRAN RAs. This comes however with ISR specific node and interface functionality. SGSN and MME may be implemented together, which reduces some interface functions but results also in some dependencies.

ISR support is mandatory for E-UTRAN UEs that support GERAN and/or UTRAN and optional for the network. ISR requires special functionality in both the UE and the network (i.e. in the SGSN, MME and Serving GW) to activate ISR for a UE. For this activation, the MME/SGSN detects whether S-GW supports ISR based on the configuration and activates ISR only if the S-GW supports the ISR. The network can decide for ISR activation individually for each UE. Gn/Gp SGSNs do not support ISR functionality. No specific HSS functionality is required to support ISR.

NOTE. A Release 7 HSS needs additional functionality to support the 'dual registration' of MME and SGSN. Without such an upgrade, at least PS domain MT Location Services and MT Short Messages are liable to fail.

It is inherent functionality of the MM procedures to enable ISR activation only when the UE is able to register via E-UTRAN and via GERAN/UTRAN. For example, when there is no E-UTRAN coverage there will also be no ISR activation. Once ISR is activated it remains active until one of the criteria for deactivation in the UE occurs, or until the SGSN or MME no longer indicates ISR Activated during an update procedure, i.e. the ISR status of the UE has to be refreshed with every update.

When ISR is activated this means the UE is registered with both MME and SGSN. Both the SGSN and the MME have a control connection with the Serving GW. MME and SGSN are both registered at HSS. The UE stores MM parameters from SGSN (e.g. P-TMSI and RA) and from MME (e.g. GUTI and TA(s)) and the UE stores session management (bearer) contexts that are common for E-UTRAN and GERAN/UTRAN accesses. In idle state the UE can reselect between E-UTRAN and GERAN/UTRAN (within the registered RA and TAs) without any need to perform TAU or RAU procedures with the network. SGSN and MME store each other's address when ISR is activated.

When ISR is activated and downlink data arrive, the Serving GW initiates paging processes on both SGSN and MME. In response to paging, or for uplink data transfer, the UE performs normal Service Request procedures on the currently camped-on RAT without any preceding update signalling (there are however existing scenarios that may require a RAU procedure to be performed prior to the Service Request, even when ISR is activated, when GERAN/UTRAN RAs are used together, as specified in clause 6.13.1.3 of TS 23.060 [7]).

The UE and the network run independent periodic update timers for GERAN/UTRAN and for E-UTRAN. When the MME or SGSN does not receive periodic updates, the MME and SGSN may decide independently on implicit detach, which removes the session management (bearer) contexts from the CN node performing the implicit detach and also removes the related control connection from the Serving GW. Implicit detach by one CN node (either SGSN or MME) deactivates ISR in the network. ISR is deactivated in the UE when the UE cannot perform its periodic updates in time. When ISR is activated and a periodic updating timer expires, the UE starts a Deactivate ISR timer. When this timer expires and the UE was not able to perform the required update procedure, the UE deactivates ISR.
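The UE-side rule in that last paragraph is simple enough to express as a small state machine. The sketch below is my own illustration of that behaviour, not code from TS 23.401.

```python
# A minimal sketch (my own illustration, not from TS 23.401) of the UE-side ISR
# deactivation rule: if a periodic update timer expires while ISR is active and
# the UE cannot perform the update before the "Deactivate ISR" timer also
# expires, the UE deactivates ISR locally.

class UeIsrState:
    def __init__(self):
        self.isr_active = True
        self.deactivate_isr_timer_running = False

    def on_periodic_update_timer_expiry(self):
        if self.isr_active:
            self.deactivate_isr_timer_running = True  # start Deactivate ISR timer

    def on_update_performed(self):
        self.deactivate_isr_timer_running = False     # update done in time

    def on_deactivate_isr_timer_expiry(self):
        if self.deactivate_isr_timer_running:
            self.isr_active = False                   # could not update in time


ue = UeIsrState()
ue.on_periodic_update_timer_expiry()
ue.on_deactivate_isr_timer_expiry()
print("ISR active:", ue.isr_active)  # -> False
```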

Part of the ISR functionality is also available when ISR is not activated, because the MM contexts are stored in the UE, MME and SGSN even when ISR is not active. This results in some reduced network signalling, which is not available for Gn/Gp SGSNs. These SGSNs cannot handle MM and session management contexts separately. Therefore all contexts on Gn/Gp SGSNs are deleted when the UE changes to an MME. The MME can keep its contexts in all scenarios.

Note:
Gn = IP-based interface between the SGSN and other SGSNs and (internal) GGSNs. DNS also shares this interface. Uses the GTP protocol.
Gp = IP-based interface between the internal SGSN and external GGSNs. Between the SGSN and the external GGSN there is the border gateway (which is essentially a firewall). Also uses the GTP protocol.


"Temporary Identity used in Next update" (TIN)

The UE may have valid MM parameters both from MME and from SGSN. The "Temporary Identity used in Next update" (TIN) is a parameter of the UE's MM context, which identifies the UE identity to be indicated in the next RAU Request or TAU Request message. The TIN also identifies the status of ISR activation in the UE.

The TIN can take one of the three values, "P-TMSI", "GUTI" or "RAT-related TMSI". The UE sets the TIN when receiving an Attach Accept, a TAU Accept or RAU Accept message as specified in table 4.3.5.6-1.


"ISR Activated" indicated by the RAU/TAU Accept message but the UE not setting the TIN to "RAT-related TMSI" is a special situation. By maintaining the old TIN value the UE remembers to use the RAT TMSI indicated by the TIN when updating with the CN node of the other RAT.

Only if the TIN is set to "RAT-related TMSI" is ISR behaviour enabled for the UE, i.e. the UE can change between all registered areas and RATs without any update signalling and it listens for paging on the RAT it is camped on. If the TIN is set to "RAT-related TMSI", the UE's P-TMSI and RAI as well as its GUTI and TAI(s) remain registered with the network and valid in the UE.

When ISR is not active the TIN is always set to the temporary ID belonging to the currently used RAT. This guarantees that the most recent context data are always used, which means that during inter-RAT changes there is always a context transfer from the CN node serving the last used RAT. The UE identities, old GUTI IE and additional GUTI IE, indicated in the next TAU Request message, and old P-TMSI IE and additional P-TMSI/RAI IE, indicated in the next RAU Request message, depend on the setting of the TIN.

The UE also indicates the information elements "additional GUTI" or "additional P-TMSI" in the Attach Request, TAU Request or RAU Request. These information elements permit the MME/SGSN to find the already existing UE contexts in the new MME or SGSN when the "old GUTI" or "old P-TMSI" indicate values that are mapped from other identities.
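A minimal sketch of the TIN rules summarised above follows; this is my own illustration, table 4.3.5.6-1 of TS 23.401 is the normative rule set, and the special keep-old-TIN case mentioned earlier is not modelled.

```python
# Minimal sketch of the TIN behaviour described above (illustrative only;
# TS 23.401 table 4.3.5.6-1 is the normative source).

def tin_after_accept(accept_via_eutran, isr_activated):
    """TIN the UE keeps after an Attach/TAU/RAU Accept (simplified)."""
    if isr_activated:
        return "RAT-related TMSI"          # both GUTI and P-TMSI/RAI stay valid
    # Without ISR, the TIN follows the temporary ID of the RAT just used
    return "GUTI" if accept_via_eutran else "P-TMSI"

def isr_enabled(tin):
    # ISR behaviour applies only when the TIN is "RAT-related TMSI"
    return tin == "RAT-related TMSI"


tin = tin_after_accept(accept_via_eutran=True, isr_activated=False)
print(tin, isr_enabled(tin))               # GUTI False
tin = tin_after_accept(accept_via_eutran=False, isr_activated=True)
print(tin, isr_enabled(tin))               # RAT-related TMSI True
```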


ISR activation

The information flow in Figure below shows an example of ISR activation. For explanatory purposes the figure is simplified to show the MM parts only.

The process starts with an ordinary Attach procedure not requiring any special functionality for support of ISR. The Attach however deletes any existing old ISR state information stored in the UE. With the Attach request message, the UE sets its TIN to "GUTI". After attach with MME, the UE may perform any interactions via E-UTRAN without changing the ISR state. ISR remains deactivated. One or more bearer contexts are activated on MME, Serving GW and PDN GW, which is not shown in the figure.

The first time the UE reselects GERAN or UTRAN it initiates a Routing Area Update. This represents an occasion to activate ISR. The TIN indicates "GUTI", so the UE indicates a P-TMSI mapped from the GUTI in the RAU Request. The SGSN gets the contexts from the MME. When the MME sends the context to the SGSN, the MME includes the ISR supported indication only if the involved S-GW supports ISR. Since ISR is being activated, both CN nodes keep these contexts. The SGSN establishes a control relation with the Serving GW, which is active in parallel to the control connection between MME and Serving GW (not shown in figure). The RAU Accept indicates ISR activation to the UE. The UE keeps GUTI and P-TMSI as registered, which the UE memorises by setting the TIN to "RAT-related TMSI". The MME and the SGSN are registered in parallel with the HSS.

After ISR activation, the UE may reselect between E-UTRAN and UTRAN/GERAN without any need for updating the network as long as the UE does not move out of the RA/TA(s) registered with the network.

The network is not required to activate ISR during a RAU or TAU. The network may activate ISR at any RAU or TAU that involves a context transfer between an SGSN and an MME. The RAU procedure for this is shown in the figure above. ISR activation for a UE that is already attached to GERAN/UTRAN, with a TAU procedure from E-UTRAN, works in a very similar way.

Reference: 3GPP TS 23.401: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access

Friday 22 July 2011

Mobility Robustness Optimization to avoid Handover failures

The following is from 4G Americas Whitepaper on SON:


Mobility Robustness Optimization (MRO) encompasses the automated optimization of parameters affecting active mode and idle mode handovers to ensure good end-user quality and performance, while considering possible competing interactions with other SON features such as automatic neighbor relation and load balancing.

There is also some potential for interaction with Cell Outage Compensation and Energy Savings, as these could also potentially adjust the handover boundaries in a way that conflicts with MRO. While the goal of MRO is the same regardless of radio technology, namely the optimization of end-user performance and system capacity, the specific algorithms and parameters vary with technology.

The objective of MRO is to dynamically improve the network performance of HO (Handovers) in order to provide improved end-user experience as well as increased network capacity. This is done by automatically adapting cell parameters to adjust handover boundaries based on feedback of performance indicators. Typically, the objective is to eliminate Radio Link Failures and reduce unnecessary handovers. Automation of MRO minimizes human intervention in the network management and optimization tasks.

The scope of mobility robustness optimization as described here assumes a well-designed network with overlapping RF coverage of neighboring sites. The optimization of handover parameters by system operators typically involves either focused drive-testing, detailed system log collection and post-processing, or a combination of these manual and intensive tasks. Incorrect HO parameter settings can negatively affect user experience and waste network resources by causing HO ping-pongs, HO failures and Radio Link Failures (RLF). While HO failures that do not lead to RLFs are often recoverable and invisible to the user, RLFs caused by incorrect HO parameter settings have a combined impact on user experience and network resources. Therefore, the main objective of mobility robustness optimization should be the reduction of the number of HO-related radio link failures. Additionally, sub-optimal configuration of HO parameters may lead to degradation of service performance even if it does not result in RLFs. One example is the incorrect setting of HO hysteresis, which may result in ping-pongs or excessively delayed handovers to a target cell. Therefore, the secondary objective of MRO is the reduction of the inefficient use of network resources due to unnecessary or missed handovers.

Most problems associated with HO failures or sub-optimal system performance can ultimately be categorized as either too-early or too-late triggering of the handover, provided that the required fundamental network RF coverage exists. Thus, poor HO-related performance can generally be categorized by the following events:

* Intra-RAT late HO triggering
* Intra-RAT early HO triggering
* Intra-RAT HO to an incorrect cell
* Inter-RAT too late HO
* Inter-RAT unnecessary HO

Up to Release 9, a UE is required to send an RLF report only in the case of successful RRC re-establishment after a connection failure. Release 10 adds support for RLF reports to be sent even when the RRC re-establishment does not succeed. The UE is required to report additional information to assist the eNB in determining whether the problem is coverage related (no strong neighbors) or a handover problem (too early, too late or wrong cell). Furthermore, Release 10 allows for precise detection of too-early / wrong-cell HO.
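To make the too-early / too-late / wrong-cell distinction concrete, here is a rough sketch of how an eNB could classify a handover-related failure from an RLF report. The field names and decision order are my own illustration, not the whitepaper's or the normative 36.300 procedure.

```python
# Illustrative sketch (my own, not from the 4G Americas whitepaper) of classifying
# a handover-related radio link failure from a Release-10 style RLF report.
# Field names are hypothetical.

def classify_ho_failure(rlf_report):
    """Return 'too_late', 'too_early', 'wrong_cell' or 'coverage_hole'."""
    if not rlf_report["strong_neighbours"]:
        return "coverage_hole"                 # no strong neighbour -> coverage issue
    if rlf_report["failure_cell"] == rlf_report["source_cell"]:
        return "too_late"                      # RLF before the HO was even triggered
    if rlf_report["reestablishment_cell"] == rlf_report["source_cell"]:
        return "too_early"                     # UE bounced straight back to the source
    return "wrong_cell"                        # UE ended up in a third cell


report = {"strong_neighbours": ["cellB"], "failure_cell": "cellB",
          "source_cell": "cellA", "reestablishment_cell": "cellA"}
print(classify_ho_failure(report))             # -> too_early
```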

Monday 13 June 2011

Home eNode B (HeNB) Architecture options

I blogged last year about the different LTE Home eNodeB architecture options and their advantages and disadvantages. Then there was the Qualcomm white paper that listed these options as well. Now there is a white paper from the Femto Forum that discusses these architecture options, along with their advantages and disadvantages, in detail. The presentation is embedded below.

Wednesday 8 June 2011

3GPP LTE Security Aspects

Regular readers may have realised that Security is one of my favourite topics. Having worked on Security extensively in UMTS and now in LTE, I am always keen to have a complete understanding of the Security aspects of UMTS / LTE. Here is a presentation from a 3GPP workshop held in Bangalore in May 2011.
3GPP LTE Security Aspects
This and other Security related presentations are available on 3G4G website.

Monday 6 June 2011

Billing based on QoS and QoE

With spectrum coming at a price, operators are keen to make as much money as possible out of the data packages being provided to consumers. The operators want to stop users from using over-the-top (OTT) services like Skype, which cause them to lose potential revenue. They also want users to use the services offered by the operator instead, thereby maximising their revenue.

A valid argument put forward by the operators is that 90% of the bandwidth is used by just 10% of the users. This gives them the reason to look at the packets and restrict the rogue users.

As a result they are now turning to deep packet inspection (DPI) to make sure that users are not using the services they are restricted from using. Allot is one such company offering this service.

The following presentation is from the LTE World Summit:



They also have some interesting videos on the net, which are embedded below. They give a good idea of the services being offered to operators.



Finally, the terms QoS and QoE always cause confusion. Here is a simple explanation via Dan Warren on Twitter:

QoS = call gets established and I can hear what is being said, everything else is QoE

Friday 3 June 2011

Carrier Aggregation with a difference


Another one from the LTE World Summit. This is from a presentation by Ariela Zeira of Interdigital.

What is being proposed is that Carrier Aggregation can use both the licensed as well as unlicensed bands but the signalling should only happen in the licensed band to keep the operator in control.

Note that this is only proposed for Small Cells / Femtocells.

The only concern that I have with this approach is that it may cause interference with other devices using the same band (especially the ISM band). So Wi-Fi may not work while the LTE device is aggregating this ISM band, and the same goes for Bluetooth.

Comments welcome!

Friday 27 May 2011

Dual Radio Solution for Voice in LTE

I did mention in the Twitter conversations post from LTE World Summit 2011 that there are now certain analysts and players in the market who think that it should be possible to have two radios. Here is a slide from ZTE that shows that they are thinking in this direction as well.



Tri-SIM phones have been available for quite a while, but now there are Quad-SIM Shanzhai phones available in China. I am sure there is a market for these kinds of phones.

With battery life and mobile technology improving, these are no longer concerns when talking about a dual-radio possibility in devices. Another common argument is that there may be additional interference due to multiple radios simultaneously receiving/transmitting. I am sure these issues can be managed without much problem.

Another problem mentioned is that we may need multiple SIM cards, but the SIM card used is actually a UICC, which can host multiple SIM applications and IMSIs. The network may need some very minor modifications, but operators should be able to manage this with no problems. In the good old days we used to have mobiles with built-in fax, where the mobile number was different from the fax number. It was a similar kind of problem but was managed without trouble.

So there may still be time to keep LTE simple by standardising the dual-radio solution rather than having CSFB, VoLTE, SRVCC, VoLGA, etc.

Any thoughts?

Wednesday 4 May 2011

New Security Algorithms in Release-11


I did mention in my earlier blog post the new algorithm for 3GPP LTE-A Security. The good news is that this should hopefully be out in time for Release-11.

The following is from 3GPP documents:


The current 3GPP specifications for LTE/SAE security support a flexible algorithm negotiation mechanism. There can be sixteen algorithms at most to support LTE/SAE confidentiality and integrity protection. In the current phase, 3GPP defines two algorithms used in EPS security, i.e. SNOW 3G and AES. The remaining values have been reserved for future use, so it is technically feasible to support a new algorithm for LTE/SAE ciphering and integrity protection.
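The sixteen-algorithm limit comes from the 4-bit algorithm identifier. As a quick reference, here is the ciphering (EEA) identifier space from TS 33.401, with ZUC having later taken the 128-EEA3 slot; the integrity (EIA) set mirrors it.

```python
# The EPS security algorithm identifier is a 4-bit field, hence "sixteen
# algorithms at most". A sketch of the ciphering (EEA) identifiers; the
# integrity (EIA) set mirrors it.
EEA_ALGORITHMS = {
    0b0000: "EEA0 (null ciphering)",
    0b0001: "128-EEA1 (SNOW 3G based)",
    0b0010: "128-EEA2 (AES based)",
    0b0011: "128-EEA3 (ZUC based)",
    # 0b0100 .. 0b1111 reserved for future use
}

for ident, name in EEA_ALGORITHMS.items():
    print(f"{ident:04b}: {name}")
```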

Different nations have different policies for algorithm usage in communication systems. The currently defined EPS algorithms may not be usable in some nations because of strict policies that depend on national security laws, and operators must implement their networks in line with national communication policies. Introducing a new algorithm for EPS security will give operators more alternatives to choose from in order to comply with national requirements.


Picture: Zu Chongzhi
Picture Source: Wikipedia


Some work has been done to adapt LTE security to national requirements on cryptography for the LTE/SAE system, i.e. designing a new EPS security algorithm, named ZUC (after Zu Chongzhi, a famous Chinese scientist in history). Certainly the new algorithm should be fundamentally different from SNOW 3G and AES, so that an attack on one algorithm is very unlikely to translate into an attack on the others.

The objective of this work item is to standardise a new algorithm in EPS. This will include the following tasks:
- To develop new algorithms for confidentiality and integrity protection for E-UTRAN
- To enable operators to quickly start to support the new algorithm
- Not to introduce any obstacle for R8 roaming UE

The following issues should at least be handled in the WI:
- Agree the requirement specification with ETSI SAGE for development of the new algorithms
- Delivery of the algorithm specification, test data, and design and evaluation reports

The algorithm is provided for 3GPP usage on a royalty-free basis.

The algorithm shall undergo a sequential three-stage evaluation process involving first ETSI SAGE, then selected teams of cryptanalysts from academia and finally the general public.


The documents related to the EEA3 and EIA3 algorithms can be downloaded from here.

If you are new to LTE Security, the following can be used as starting point: http://www.3g4g.co.uk/Lte/LTE_Security_WP_0907_Agilent.pdf

Wednesday 30 March 2011

Quick Recap of MIMO in LTE and LTE-Advanced

I had earlier put up some MIMO presentations that were technically quite heavy, so this one is lighter on text and has more figures.

The following is from NTT Docomo Technical journal (with my edits):

MIMO: A signal transmission technology that uses multiple antennas at both the transmitter and receiver to perform spatial multiplexing and improve communication quality and spectral efficiency.

Spectral efficiency: The number of data bits that can be transmitted per unit time and unit frequency band.

In this blog we will first look at MIMO in LTE (Release 8/9) and then in LTE-Advanced (Release-10).

MIMO IN LTE

Downlink MIMO Technology

Single-User MIMO (SU-MIMO) was used for the downlink for LTE Rel. 8 to increase the peak data rate. The target data rates of over 100 Mbit/s were achieved by using a 20 MHz transmission bandwidth, 2 × 2 MIMO, and 64 Quadrature Amplitude Modulation (64QAM), and peak data rates of over 300 Mbit/s can be achieved using 4×4 SU-MIMO. The multi-antenna technology used for the downlink in LTE Rel. 8 is classified into the following three types.

1) Closed-loop SU-MIMO and Transmit Diversity: For closed-loop SU-MIMO transmission on the downlink, precoding is applied to the data carried on the Physical Downlink Shared Channel (PDSCH) in order to increase the received Signal to Interference plus Noise power Ratio (SINR). This is done by setting different transmit antenna weights for each transmission layer (stream) using channel information fed back from the UE. The ideal transmit antenna weights for precoding are generated from eigenvector(s) of the covariance matrix of the channel matrix H, given by H^H H, where the superscript H denotes the Hermitian transpose.

However, methods which directly feed back estimated channel state information or precoding weights without quantization are not practical in terms of the required control signaling overhead. Thus, LTE Rel. 8 uses codebook-based precoding, in which the best precoding weights among a set of predetermined precoding matrix candidates (a codebook) are selected to maximize the total throughput on all layers after precoding, and the index of this matrix (the Precoding Matrix Indicator (PMI)) is fed back to the base station (eNode B) (Figure 1).
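The codebook search itself is easy to illustrate. The sketch below is a toy version of that selection, assuming a hypothetical codebook and a simple per-layer capacity metric rather than the actual 36.211 codebooks and 36.213 CQI/PMI reporting procedure.

```python
# Illustrative sketch of codebook-based PMI selection as described above
# (not the normative 36.211/36.213 procedure): the UE picks the precoder from
# a fixed codebook that maximises a throughput-like metric and feeds back
# only its index (the PMI).
import numpy as np

def select_pmi(H, codebook, noise_power=1.0):
    """Return (pmi, best_metric) for channel matrix H (rx x tx)."""
    best_pmi, best_metric = None, -np.inf
    for pmi, W in enumerate(codebook):          # W: tx x layers precoder
        HW = H @ W
        # Sum of per-layer log2(1 + SINR) with a simple matched-filter SINR proxy
        metric = sum(np.log2(1.0 + np.linalg.norm(HW[:, layer])**2 / noise_power)
                     for layer in range(HW.shape[1]))
        if metric > best_metric:
            best_pmi, best_metric = pmi, metric
    return best_pmi, best_metric


# Toy 2x2 channel and a toy rank-1 codebook of four unit-norm vectors
H = np.array([[1.0, 0.3], [0.2, 0.9]])
codebook = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]]),
            np.array([[1.0], [1.0]]) / np.sqrt(2), np.array([[1.0], [-1.0]]) / np.sqrt(2)]
print(select_pmi(H, codebook))
```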


LTE Rel. 8 adopts frequency-selective precoding, in which precoding weights are selected independently for each sub-band of bandwidth from 360 kHz to 1.44 MHz, as well as wideband precoding, in which a single precoding weight is applied to the whole transmission band. The channel estimation used for demodulation and for selection of the precoding weight matrix at the UE is done using a cell-specific Reference Signal (RS) transmitted from each antenna. Accordingly, the specifications require the eNode B to notify the UE of the precoding weight information used for PDSCH transmission through the Physical Downlink Control Channel (PDCCH), and the UE to use this information for demodulation.

LTE Rel. 8 also adopts rank adaptation, which adaptively controls the number of transmission layers (the rank) according to channel conditions, such as the received SINR and fading correlation between antennas (Figure 2). Each UE feeds back a Channel Quality Indicator (CQI), a Rank Indicator (RI) specifying the optimal rank, and the PMI described earlier, and the eNode B adaptively controls the number of layers transmitted to each UE based on this information.

2) Open-loop SU-MIMO and Transmit Diversity: Precoding with closed-loop control is effective in low mobility environments, but control delay results in less accurate channel tracking in high mobility environments. The use of open-loop MIMO transmission for the PDSCH, without requiring feedback of channel information, is effective in such cases. Rank adaptation is used, as in the case of closed-loop MIMO, but rank-one transmission corresponds to open-loop transmit diversity. Specifically, Space-Frequency Block Code (SFBC) is used with two transmit antennas, and a combination of SFBC and Frequency Switched Transmit Diversity (FSTD) (hereinafter referred to as "SFBC+FSTD") is used with four transmit antennas. This is because, compared to other transmit diversity schemes such as Cyclic Delay Diversity (CDD), SFBC and SFBC+FSTD achieve higher diversity gain, irrespective of fading correlation between antennas, and achieve the lowest required received SINR. On the other hand, for PDSCH transmission with a rank of two or higher, fixed precoding is used regardless of channel variations. In this case, cyclic shift is performed before applying the precoding weights, which effectively switches precoding weights in the frequency domain, thereby averaging the received SINR over the layers.
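For readers unfamiliar with SFBC, the sketch below shows the basic Alamouti-style mapping across a pair of adjacent subcarriers for two transmit antennas. This is the textbook pattern used as an illustration, not the exact normative mapping in TS 36.211.

```python
# A simplified Alamouti-style SFBC mapping for two transmit antennas, as an
# illustration of the transmit diversity scheme mentioned above (textbook
# Alamouti pattern, not the exact normative 36.211 mapping).
import numpy as np

def sfbc_encode(symbols):
    """Map pairs of modulation symbols onto (antenna, subcarrier) pairs."""
    assert len(symbols) % 2 == 0
    ant0, ant1 = [], []
    for i in range(0, len(symbols), 2):
        s0, s1 = symbols[i], symbols[i + 1]
        # Adjacent subcarriers k and k+1:
        ant0 += [s0, s1]                       # antenna port 0
        ant1 += [-np.conj(s1), np.conj(s0)]    # antenna port 1
    return np.array(ant0), np.array(ant1)


qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(sfbc_encode(qpsk))
```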

3) Adaptive Beamforming: Adaptive beamforming uses antenna elements with a narrow antenna spacing of about half the carrier wavelength and it has been studied for use with base stations with the antennas mounted in a high location. In this case beamforming is performed by exploiting the UE Direction of Arrival (DoA) or the channel covariance matrix estimated from the uplink, and the resulting transmit weights are not selected from a codebook. In LTE Rel. 8, a UE-specific RS is defined for channel estimation in order to support adaptive beamforming. Unlike the cell-specific RS, the UE specific RS is weighted with the same weights as the data signals sent to each UE, and hence there is no need to notify the UE of the precoding weights applied at the eNode B for demodulation at the UE. However, its effectiveness is limited in LTE Rel. 8 because only one layer per cell is supported, and it is an optional UE feature for Frequency Division Duplex (FDD).

Uplink MIMO Technology

On the uplink in LTE Rel. 8, only one-layer transmission was adopted in order to simplify the transmitter circuit configuration and reduce power consumption on the UE. This was done because the LTE Rel. 8 target peak data rate of 50 Mbit/s or more could be achieved by using a 20 MHz transmission bandwidth and 64QAM and without using SU-MIMO. However, Multi-User MIMO (MU-MIMO) can be used to increase system capacity on the LTE Rel. 8 uplink, using multiple receiver antennas on the eNode B. Specifically, the specification requires orthogonalization of the demodulation RSs from multiple UEs by assigning different cyclic shifts of a Constant Amplitude Zero Auto-Correlation (CAZAC) sequence to the demodulation RSs, so that user signals can be reliably separated at the eNode B. Demodulation RSs are used for channel estimation for the user-signal separation process.
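The cyclic-shift orthogonality that makes this user separation work is easy to demonstrate numerically. The snippet below generates a toy Zadoff-Chu (CAZAC) sequence and shows that a cyclically shifted copy is uncorrelated with the original; the length and root are toy values, not the actual LTE demodulation RS configuration.

```python
# Illustrative sketch of why cyclic shifts of a CAZAC (Zadoff-Chu) sequence
# separate the demodulation RSs of different UEs: a shifted copy of the same
# base sequence is orthogonal to the original under cyclic correlation.
import numpy as np

def zadoff_chu(root, length):
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

base = zadoff_chu(root=5, length=31)            # toy prime length and root
ue1_rs = np.roll(base, 0)                       # cyclic shift 0
ue2_rs = np.roll(base, 7)                       # cyclic shift 7

# Cross-correlation is (near) zero, so the eNode B can separate the two UEs
print(abs(np.vdot(ue1_rs, ue1_rs)))             # ~31 (full energy)
print(abs(np.vdot(ue1_rs, ue2_rs)))             # ~0
```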


MIMO TECHNOLOGY IN LTE-ADVANCED

Downlink 8-Layer SU-MIMO Technology

The target peak spectral efficiency in LTE-Advanced is 30 bit/s/Hz. To achieve this, high-order SU-MIMO with more antennas is necessary. Accordingly, it was agreed to extend the number of layers of SU-MIMO transmission in the LTE-Advanced downlink to a maximum of 8 layers. The number of transmission layers is selected by rank adaptation. The most significant issue with the radio interface in supporting up to 8 layers is the RS structure used for CQI measurements and PDSCH demodulation.

1) Channel State Information (CSI)-RS: For CQI measurements with up to eight antennas, new CSI-RSs are specified in addition to the cell-specific RS defined in LTE Rel. 8 for up to four antennas. However, in order to maintain backward compatibility with LTE Rel. 8 in LTE-Advanced, LTE Rel. 8 UE must be supported in the same band as that used for LTE-Advanced. Therefore, in LTE-Advanced, interference to the PDSCH of LTE Rel. 8 UE caused by supporting CSI-RS must be minimized. To achieve this, the CSI-RS are multiplexed over a longer period compared to the cell-specific RS, once every several subframes (Figure 3). This is because the channel estimation accuracy required for CQI measurement is low compared to that for demodulation, and the required accuracy can be obtained as long as the CSI-RS is sent about once per feedback cycle. A further reason is that LTE-Advanced, which offers higher data-rate services, will be developed to complement LTE Rel. 8, and is expected to be adopted mainly in low-mobility environments.


2) UE-specific RS: To allow demodulation of eight-layer SU-MIMO, the UE-specific RS were extended for SU-MIMO transmission, using a hybrid of Code Division Multiplexing (CDM) and Frequency Division Multiplexing (FDM) (Figure 4). The UE-specific RS pattern for each rank (number of layers) is shown in Figure 5. The configuration of the UE-specific RS in LTE-Advanced has also been optimized differently from that of LTE Rel. 8, extending it for SU-MIMO as well as adaptive beamforming, for example by applying two-dimensional time-frequency orthogonal CDM to the multiplexing between transmission layers.


Downlink MU-MIMO Technology

In addition to the peak data rate, the system capacity and cell-edge user throughput must also be increased in LTE-Advanced compared to LTE Rel. 8. MU-MIMO is an important technology for satisfying these requirements. With MU-MIMO and CoMP transmission (described earlier), various sophisticated signal processing techniques are applied at the eNode B to reduce the interference between transmission layers, including adaptive beam transmission (zero-forcing, block diagonalization, etc.), adaptive transmission power control and simultaneous multi-cell transmission. When these sophisticated transmission techniques are applied, the eNode B multiplexes the UE-specific RS described above with the PDSCH, allowing the UE to demodulate the PDSCH without using information about transmission technology applied by the eNode B. This increases flexibility in applying sophisticated transmission techniques on the downlink. On the other hand, PMI/CQI/RI feedback extensions are needed to apply these sophisticated transmission techniques, and this is currently being discussed actively at the 3GPP.

Uplink SU-MIMO Technology

To reduce the difference in peak data rates achievable on the uplink and downlink for LTE Rel. 8, a high target peak spectral efficiency of 15 bit/s/Hz was specified for the LTE-Advanced uplink. To achieve this, support for SU-MIMO with up to four transmission antennas was agreed upon. In particular, the two-transmission-antenna SU-MIMO function is required to satisfy the peak spectral efficiency requirements of IMT-Advanced.

For the Physical Uplink Shared Channel (PUSCH), it was agreed to apply SU-MIMO with closed-loop control using multiple antennas on the UE, as well as codebook-based precoding and rank adaptation, as used on the downlink. The eNode B selects the precoding weight from a codebook to maximize achievable performance (e.g., received SINR or user throughput after precoding) based on the sounding RS transmitted by the UE, which is used for measuring the uplink channel quality. The eNode B notifies the UE of the selected precoding weight together with the resource allocation information, using the PDCCH. The precoding for rank one contributes to antenna gain, which is effective in increasing cell-edge user throughput. However, considering control-information overhead and increases in Peak-to-Average Power Ratio (PAPR), frequency-selective precoding is not very effective in increasing system throughput, so only wideband precoding has been adopted.

Also, for rank two or higher, when four transmission antennas are used, the codebook has been designed not to increase the PAPR. The demodulation RS, which is used for channel estimation, is weighted with the same precoding weight as is used for the user data signal transmission. Basically, orthogonalization is achieved by applying a different cyclic shift to each layer, but orthogonalization in the code domain using block spreading is also adopted together with this method.


Uplink Transmit Diversity Technology

Closed-loop transmit diversity is applied to PUSCH as described above for SU-MIMO. Application of transmit diversity to the Physical Uplink Control Channel (PUCCH) is also being studied. For sending retransmission request Acknowledgment (ACK) and Negative ACK (NAK) signals as well as scheduling request signals, application of Spatial Orthogonal-Resource Transmit Diversity (SORTD) using differing resource blocks per antenna or an orthogonalizing code sequence (cyclic shift, block spread sequence) has been agreed upon (Figure 6). However, with LTE-Advanced, the cell design must be done so that LTE Rel. 8 UE get the required quality at cell-edges, so applying transmit diversity to the control channels cannot contribute to increasing the coverage area, but only to reducing the transmission power required.

Tuesday 22 March 2011

3GPP Official 'MBMS support in E-UTRAN' - Mar 2011

Last month I blogged about the MBMS feature in Rel-9. The 3GPP official presentation on MBMS is now available. Embedded below:

Presentation can be downloaded from Slideshare.

This presentation was part of a joint one-hour session of 3GPP RAN and 3GPP CT on 16 March 2011, 11.00 a.m. – 12.00 p.m. More on this coming soon.

Monday 21 March 2011

A quick primer on Coordinated Multi-point (CoMP) Technology

From NTT Docomo Technical Journal:

CoMP is a technology which sends and receives signals from multiple sectors or cells to a given UE. By coordinating transmission among multiple cells, interference from other cells can be reduced and the power of the desired signal can be increased.

Coordinated Multi-point Transmission/Reception:

The implementation of intra-cell/inter-cell orthogonalization on the uplink and downlink in LTE Rel. 8 contributed to meeting the requirements of capacity and cell-edge user throughput. On the downlink, simultaneously connected UE are orthogonalized in the frequency domain. On the uplink, on the other hand, they are orthogonalized in the frequency domain as well as the code domain, using cyclic shift and block spreading. It is possible to apply fractional frequency reuse (a control method which assigns different frequency ranges to cell-edge UE) to control interference between cells semi-statically, but this is done based on randomization in LTE Rel. 8. Because of this, we are planning to study CoMP technology, which performs signal processing for coordinated transmission and reception by multiple cells to one or more UE, as a technology for Rel. 11 and later, in order to extend the intra-cell/inter-cell orthogonalization in LTE Rel. 8 to operate between cells.


Independent eNode B and Remote Base Station Configurations:

There are two ways to implement CoMP technology: autonomous distributed control based on an independent eNode B configuration, or centralized control based on Remote Radio Equipment (RRE) (Figure 7). With an independent eNode B configuration, signaling over wired transmission paths is used between eNode Bs to coordinate among cells. Signaling over wired transmission paths can be done with a regular cell configuration, but signaling delay and overhead become issues, and ways to increase signaling speed or perform high-speed signaling via the UE need study. With RRE configurations, multiple RREs are connected via optical fiber carrying a baseband signal between the cells and the central eNode B, which performs the baseband signal processing and control, so the radio resources of the cells can be controlled at the central eNode B. In other words, the signaling delay and overhead between eNode Bs, which are issues in independent eNode B configurations, are small in this case, and high-speed control of radio resources between cells is relatively easy. However, high capacity optical fiber is required, and as the number of RREs increases, the processing load on the central eNode B increases, so there are limits on how this can be applied. For these reasons, it is important to use both distributed control based on independent eNode B configurations and centralized control based on RRE configurations as appropriate, and both are being studied in preparation for LTE-Advanced.

Downlink Coordinated Multi-point Transmission:

Downlink coordinated multi-point transmission can be divided into two categories: Coordinated Scheduling/Coordinated Beamforming (CS/CB), and joint processing (Figure 8). With CS/CB, a given subframe is transmitted from one cell to a given UE, as shown in Fig. 8 (a), and coordinated beamforming and scheduling is done between cells to reduce the interference caused to other cells. On the other hand, for joint processing, as shown in Fig. 8 (b-1) and (b-2), joint transmission by multiple cells to a given UE, in which they transmit at the same time using the same time and frequency radio resources, and dynamic cell selection, in which cells can be selected at any time in consideration of interference, are being studied. For joint transmission, two methods are being studied: non-coherent transmission, which uses soft-combining reception of the OFDM signal; and coherent transmission, which does precoding between cells and uses in-phase combining at the receiver.

Uplink Multi-cell Reception:

With uplink multi-cell reception, the signal from a UE is received by multiple cells and combined. In contrast to the downlink, the UE does not need to be aware of whether multi-cell reception is occurring, so it should have little impact on the radio interface specifications.

Friday 18 March 2011

Roadmap to Operational Excellence for Next Generation Mobile Networks


This presentation is from:

FP7 SOCRATES Final Workshop on Self-Organisation in Mobile Networks February 22, 2011 - Karlsruhe, Germany

This and all other presentations from this workshop are available to download from here.

Monday 14 March 2011

LTE Physical Layer Measurements of RSRP and RSRQ

One of the things on my mind for a long time was to find out a bit more about RSRP and RSRQ.

The following is from an Agilent whitepaper:

The UE and the eNB are required to make physical layer measurements of the radio characteristics. The measurement definitions are specified in 3GPP TS 36.214. Measurements are reported to the higher layers and are used for a variety of purposes including intra- and inter-frequency handover, inter-radio access technology (inter-RAT) handover, timing measurements, and other purposes in support of RRM.

Reference signal receive power (RSRP):

RSRP is the most basic of the UE physical layer measurements and is the linear average (in watts) of the downlink reference signals (RS) across the channel bandwidth. Since the RS exist only for one symbol at a time, the measurement is made only on those resource elements (RE) that contain cell-specific RS. It is not mandated for the UE to measure every RS symbol on the relevant subcarriers. Instead, accuracy requirements have to be met. There are requirements for both absolute and relative RSRP. The absolute requirements range from ±6 to ±11 dB depending on the noise level and environmental conditions. Measuring the difference in RSRP between two cells on the same frequency (intra-frequency measurement) is a more accurate operation for which the requirements vary from ±2 to ±3 dB. The requirements widen again to ±6 dB when the cells are on different frequencies (inter-frequency measurement).

Knowledge of absolute RSRP provides the UE with essential information about the strength of cells from which path loss can be calculated and used in the algorithms for determining the optimum power settings for operating the network. Reference signal receive power is used both in idle and connected states. The relative RSRP is used as a parameter in multi-cell scenarios.

Reference signal receive quality (RSRQ):

Although RSRP is an important measure, on its own it gives no indication of signal quality. RSRQ provides this measure and is defined as the ratio of RSRP to the E-UTRA carrier received signal strength indicator (RSSI). The RSSI parameter represents the entire received power including the wanted power from the serving cell as well as all co-channel power and other sources of noise. Measuring RSRQ becomes particularly important near the cell edge when decisions need to be made, regardless of absolute RSRP, to perform a handover to the next cell. Reference signal receive quality is used only during connected states. Intra- and inter-frequency absolute RSRQ accuracy varies from ±2.5 to ±4 dB, which is similar to the inter-frequency relative RSRQ accuracy of ±3 to ±4 dB.

The following is from an R&S white paper:


The RSRP is comparable to the CPICH RSCP measurement in WCDMA. This measurement of the signal strength of an LTE cell helps to rank the different cells as input for handover and cell reselection decisions. The RSRP is the average of the power of all resource elements which carry cell-specific reference signals over the entire bandwidth. It can therefore only be measured in the OFDM symbols carrying reference symbols.

The RSRQ measurement provides additional information when RSRP is not sufficient to make a reliable handover or cell reselection decision. RSRQ is defined as N times RSRP divided by the Received Signal Strength Indicator (RSSI), where N is the number of resource blocks of the measurement bandwidth. RSSI is the total received wideband power including all interference and thermal noise. As RSRQ combines signal strength as well as interference level, this measurement value provides additional help for mobility decisions.

Assume that only reference signals are transmitted in a resource block, and that data and noise and interference are not considered. In this case RSRQ is equal to -3 dB. If reference signals and subcarriers carrying data are equally powered, the ratio corresponds to 1/12 or -10.79 dB. At this point it is now important to prove that the UE is capable of detecting and decoding the downlink signal under bad channel conditions, including a high noise floor and different propagation conditions that can be simulated by using different fading profiles.
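The two figures quoted above follow directly from the N x RSRP / RSSI definition, counting two RS resource elements per resource block in the measured OFDM symbol. Here is a small sketch that reproduces them; the per-RE powers are normalised toy values of my own.

```python
# A small check of the two numbers quoted above, using the definition
# RSRQ = N * RSRP / RSSI (N = number of resource blocks in the measurement
# bandwidth). Illustrative only; powers are normalised to 1 per RE.
import math

def rsrq_db(n_rb, rs_power_per_re, rssi_per_rb):
    rsrp = rs_power_per_re                    # average power of one RS RE
    rssi = n_rb * rssi_per_rb                 # total power over the measurement BW
    return 10 * math.log10(n_rb * rsrp / rssi)

N = 50  # e.g. a 10 MHz carrier; the result is independent of N

# Case 1: only the 2 RS REs per RB are transmitted in the measured symbol
print(rsrq_db(N, rs_power_per_re=1.0, rssi_per_rb=2.0))    # -> -3.0 dB

# Case 2: all 12 subcarriers of the RB carry equal power
print(rsrq_db(N, rs_power_per_re=1.0, rssi_per_rb=12.0))   # -> -10.79 dB
```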

I will be adding some conformance test logs at the 3G4G website for Measurement and Cell Selection/Re-selection that will give some more information about this.

In case you can provide a much simpler explanation or reference, please feel free to add it in the comments.

Wednesday 9 March 2011

ETWS detailed in LTE and UMTS

It's been a couple of years since the introductory post on the 3GPP Earthquake and Tsunami Warning Service (ETWS). The following is a more detailed post on ETWS from the NTT Docomo technical journal.

3GPP Release 8 accepted the standard technical specification for a warning message distribution platform such as Area Mail, which adopts pioneering technology for faster distribution, in order to fulfil the requirements for distributing emergency information (e.g. earthquakes and tsunamis) in LTE/EPC. The standard specifies the delivery of emergency information at two levels. The Primary Notification contains the minimum, most urgently required information such as "An earthquake occurred"; the Secondary Notification includes supplementary information not contained in the Primary Notification, such as seismic intensity, epicentre, and so on. This separation allows implementation of information distribution platforms that can achieve the theoretically fastest warning distribution.

The purpose of the ETWS is to broadcast emergency information, such as earthquake warnings provided by local or national governments, to many mobile terminals as quickly as possible by making use of the widespread nature of mobile communication networks.

The ETWS, in the same way as Area Mail, detects the initial slight tremor of an earthquake, the Primary Wave (P wave - The first tremor of an earthquake to arrive at a location), and sends a warning message that an earthquake is about to happen to the mobile terminals in the affected area. ETWS can deliver the first notification to mobile terminals in the shortest theoretical time possible in a mobile communication system (about four seconds after receiving the emergency information from the local or national government), which is specified as a requirement by 3GPP.

The biggest difference between Area Mail and the ETWS is the disaster notification method (Figure 1). Earthquake warnings in Area Mail have a fixed-length message configuration that notifies of an earthquake. ETWS, on the other hand, achieves distribution of the highest priority information in the shortest time by separating out the minimum information that is needed with the most urgency, such as “Earthquake about to happen,” for the fastest possible distribution as a Primary Notification; other supplementary information (seismic intensity, epicentre, etc.) is then distributed in a Secondary Notification. This distinction thus implements a flexible information distribution platform that prioritizes information distribution according to urgency.

The Primary Notification contains only simple patterned disaster information, such as “Earthquake.” When a mobile terminal receives a Primary Notification, it produces a pre-set alert sound and displays pre-determined text on the screen according to the message content to notify users of the danger. The types of disaster that a Primary Notification can inform about are specified as “Earthquake,” “Tsunami,” “Tsunami + Earthquake,” “Test” and “Other,” regardless of the type of radio access.
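As a rough illustration of the terminal behaviour just described, the sketch below (Python) maps a Warning Type to a pre-set alert; the numeric values follow the Warning Type coding in 3GPP TS 23.041, while the alert texts and function names are invented for the example:

from enum import Enum

class WarningType(Enum):
    EARTHQUAKE = 0
    TSUNAMI = 1
    EARTHQUAKE_AND_TSUNAMI = 2
    TEST = 3
    OTHER = 4

# Pre-determined text shown to the user; the Primary Notification itself
# carries no message body, only the patterned disaster type.
PRESET_ALERTS = {
    WarningType.EARTHQUAKE: "Earthquake! Strong shaking expected.",
    WarningType.TSUNAMI: "Tsunami warning. Move to higher ground.",
    WarningType.EARTHQUAKE_AND_TSUNAMI: "Earthquake and tsunami warning.",
    WarningType.TEST: "This is a test of the warning system.",
    WarningType.OTHER: "Emergency alert.",
}

def handle_primary_notification(warning_type):
    # Play a pre-set alert sound and display the pre-determined text.
    print(PRESET_ALERTS[warning_type])

handle_primary_notification(WarningType.EARTHQUAKE)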

The Secondary Notification contains the same kind of message as the existing Area Mail service, i.e. textual information distributed from the network to the mobile terminal to report the epicentre, seismic intensity and other such details. In addition to the text, the message also contains a Message Identifier and Serial Number that identify the type of disaster.

A major feature of the ETWS is compatibility with international roaming. Through standardization, mobile terminals that can receive ETWS can receive local emergency information when in other countries if the local network provides the ETWS service. These services are provided in a manner that is common to all types of radio access (3G, LTE, etc.).

Network Architecture

The ETWS platform is designed based on the Cell Broadcast Service (CBS). The ETWS network architecture is shown in Figure 2. Fig. 2 also shows the architecture of the 3G network to highlight the differences between LTE and 3G.

In the ETWS architecture for 3G, a Cell Broadcast Centre (CBC), which is the information distribution server, is directly connected to the 3G Radio Network Controller (RNC). The CBC is also connected to the Cell Broadcast Entity (CBE), which distributes information from the Meteorological Agency and other such sources.

In an LTE radio access network, however, the eNodeB (eNB) is directly connected to the core network, and the eNB does not have a centralized radio control function such as the one provided by the 3G RNC. Accordingly, if the same network configuration as used for 3G were adopted, the number of eNBs connected to the CBC would increase and add to the load on the CBC. To overcome this issue, ETWS for LTE adopts a hierarchical architecture in which the CBC is connected to a Mobility Management Entity (MME).

The MME, which acts as a concentrator node, is connected to a number of eNBs. This architecture reduces the load on the CBC and the processing time, thus preventing delays in distribution.

Message Distribution Area

In the 3G ETWS and Area Mail systems, the distribution area can be specified only in cell units, which creates the issue of a huge distribution-area database in the CBC. In LTE ETWS, however, the distribution area can be specified at three different granularities (Figure 3); a simple sketch after the list below illustrates the idea. This allows the operator to perform area planning according to the characteristics of the warning, e.g. notice of an earthquake of a certain magnitude needs to be distributed over an area of a certain size, thus allowing efficient and more flexible broadcast of the warning message.

1) Cell Level Distribution Area: The CBC designates the cell-level distribution areas by sending a list of cell IDs. The emergency information is broadcasted only to the designated cells. Although this area designation has the advantage of being able to pinpoint broadcast distribution to particular areas, it necessitates a large processing load in the network node (CBC, MME and eNB) especially when the list is long.

2) TA Level Distribution Area: In this case, the distribution area is designated as a list of Tracking Area Identities (TAIs). TAI is an identifier of a Tracking Area (TA), which is an LTE mobility management area. The warning message broadcast goes out to all of the cells in the TAIs. This area designation has the advantage of less processing load when the warning message has to be broadcast to relatively wide areas.

3) EA Level Distribution Area: The Emergency Area (EA) can be freely defined by the operator. An EA ID can be assigned to each cell, and the warning message can be broadcast to the relevant EA only. The EA can be larger than a cell and is independent of the TA, which is the unit of mobility management. The EA thus allows the distribution area to be optimized flexibly for the affected area according to the type of disaster.
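To make the three granularities concrete, here is a rough Python sketch of how a node serving a set of cells might resolve a warning-area designation into the cells in which it actually has to broadcast; the data structures and field names are hypothetical:

from dataclasses import dataclass, field

@dataclass
class ServedCell:
    cell_id: str
    tai: str                                   # Tracking Area Identity of the cell
    ea_ids: set = field(default_factory=set)   # operator-defined Emergency Areas

def cells_to_broadcast(served, cell_list=None, tai_list=None, ea_list=None):
    """Return the served cells matching a cell-, TA- or EA-level designation;
    with no area information the warning goes out on all served cells."""
    if cell_list:
        return [c for c in served if c.cell_id in cell_list]
    if tai_list:
        return [c for c in served if c.tai in tai_list]
    if ea_list:
        return [c for c in served if c.ea_ids & set(ea_list)]
    return list(served)

served = [ServedCell("cell-1", "tai-A", {"ea-7"}), ServedCell("cell-2", "tai-B")]
print(cells_to_broadcast(served, ea_list=["ea-7"]))   # only cell-1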


Message Distribution

The method of distributing emergency information to LTE radio networks is shown in Figure 4. When the CBC receives a request for emergency information distribution from CBE, it creates the text to be sent to the terminals and specifies the distribution area from the information in the request message (Fig. 4 (1) (2)).

Next, the CBC sends a Write-Replace Warning Request message to the MME of the specified area. This message contains information such as the disaster type, warning message text, message distribution area, Primary Notification information, etc. (Fig. 4 (3)). When the MME receives this message, it sends a response message to the CBC to confirm that the message was correctly received. The CBC then notifies the CBE that the distribution request was received and that processing has begun (Fig. 4 (4) (5)). At the same time, the MME checks the distribution area information in the received message (Fig. 4 (6)) and, if a TAI list is included, sends the Write-Replace Warning Request message only to the eNBs that belong to the TAIs in the list (Fig. 4 (7)). If the TAI list is not included, the message is sent to all of the eNBs to which the MME is connected.
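A minimal sketch of this MME fan-out step (Fig. 4 (6) (7)) might look like the following, assuming a hypothetical mapping from eNB identifiers to the TAIs each eNB serves:

def fan_out_warning_request(request, connected_enbs):
    """Forward the Write-Replace Warning Request only to the eNBs serving the
    listed TAIs, or to all connected eNBs when no TAI list is present."""
    tai_list = request.get("tai_list")         # None if no TAI list was included
    for enb_id, served_tais in connected_enbs.items():
        if tai_list is None or served_tais & set(tai_list):
            send_to_enb(enb_id, request)       # placeholder for the S1-AP transfer

def send_to_enb(enb_id, request):
    print("Write-Replace Warning Request ->", enb_id)

fan_out_warning_request({"tai_list": ["tai-A"]},
                        {"enb-1": {"tai-A"}, "enb-2": {"tai-B"}})   # only enb-1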

When the eNB receives the Write-Replace Warning Request message from the MME, it determines the message distribution area based on the information included in the message (Fig. 4 (8)) and starts the broadcast (Fig. 4 (9) (10)). The following describes how the eNB processes each of the specified information elements; a simplified sketch follows the list.

1) Disaster Type Information (Message Identifier/Serial Number): If a warning message broadcast is already on-going, this information is used by the eNB to decide whether to discard the newly received message or to overwrite the on-going broadcast with it. Specifically, if the received request message carries the same disaster type information as the message currently being broadcast, the received request is discarded. If it is different, the received request overwrites the on-going broadcast and the new warning message is broadcast immediately.

2) Message Distribution Area (Warning Area List): When a list of cells has been specified as the distribution area, the eNB scans the list for cells that it serves and starts the warning message broadcast in those cells. If the message distribution area is a list of TAIs, the eNB scans the list for TAIs that it serves and starts the broadcast in the cells included in those TAIs. In the same way, if the distribution area is specified as an EA (or list of EAs), the eNB scans the EA ID list for EA IDs that it serves and starts the broadcast in the cells included in those EAs.

If the received Write-Replace Warning Request message does not contain distribution area information, the eNB broadcasts the warning message to all of the cells it serves.

3) Primary Notification Information: If the Primary Notification information is present, it is mapped to the radio channel defined for the broadcast of the Primary Notification.

4) Message Text: The eNB checks whether message text is present and thus whether a Secondary Notification needs to be broadcast. If message text exists, it is mapped to the radio channel defined for the broadcast of the Secondary Notification. The Secondary Notification is broadcast according to the transmission interval and number of transmissions specified by the CBC. Upon completion of the broadcast, the eNB returns the result to the MME (Fig. 4 (11)).
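Putting the four items together, a rough sketch of the eNB-side handling of a Write-Replace Warning Request could look as follows; the class and method names are hypothetical, and the area-matching step shown in the earlier sketch is omitted here for brevity:

from dataclasses import dataclass
from typing import Optional

@dataclass
class WriteReplaceWarningRequest:
    message_identifier: int                        # disaster type
    serial_number: int
    primary_notification: Optional[bytes] = None   # Warning Type, etc.
    warning_message_text: Optional[str] = None     # Secondary Notification text

class EnbWarningHandler:
    def __init__(self):
        self.ongoing = None

    def handle(self, req):
        # 1) Same disaster type information as the on-going broadcast -> discard.
        if self.ongoing and (self.ongoing.message_identifier,
                             self.ongoing.serial_number) == (req.message_identifier,
                                                             req.serial_number):
            return "discarded"
        # Different -> overwrite the on-going broadcast and start immediately.
        self.ongoing = req
        # 3) Primary Notification -> radio channel defined for it (SIB10 in LTE).
        if req.primary_notification is not None:
            self.schedule_primary(req.primary_notification)
        # 4) Message text present -> Secondary Notification (SIB11 in LTE),
        #    repeated with the interval and count requested by the CBC.
        if req.warning_message_text:
            self.schedule_secondary(req.warning_message_text)
        return "broadcast started"

    def schedule_primary(self, primary):
        print("broadcasting Primary Notification")

    def schedule_secondary(self, text):
        print("broadcasting Secondary Notification:", text)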


Radio Function Specifications

Overview : In the previous Area Mail service, only mobile terminals in the standby state (RRC_IDLE) could receive emergency information, but with ETWS emergency information can also be received by mobile terminals in the connected state (RRC_CONNECTED), so the information can be delivered to a broader range of users. In LTE, when delivering emergency information to mobile terminals, the eNB sets a bit in the paging message to indicate that emergency information is about to be sent (the ETWS indication) and sends the emergency information itself in the system information broadcast. In 3G, on the other hand, the emergency information is sent through the paging message and CBS messages.

Message Distribution method for LTE: When the eNB begins transmission of the emergency information, a paging message in which the ETWS indication is set is sent to the mobile terminal. ETWS-compatible terminals, whether in standby or connected, try to receive a paging message at least once per default paging cycle, whose value is specified by the system information broadcast and can be set to 320 ms, 640 ms, 1.28 s or 2.56 s according to the 3GPP specifications. If a paging message that contains an ETWS indication is received, the terminal begins receiving the system information broadcast that contains the emergency information. The paging message that has the ETWS indication set is sent out repeatedly at every paging opportunity, thus increasing the reception probability at the mobile terminal.

The ETWS message itself is sent as system information broadcast. Specifically, the Primary Notification is sent as the Warning Type in System Information Block Type 10 (SIB10) and the Secondary Notification is sent as the Warning Message in SIB11. By repeatedly sending SIB10 and SIB11 (at an interval that can be set to 80 ms, 160 ms, 320 ms, 640 ms, 1.28 s, 2.56 s or 5.12 s according to the 3GPP specifications), the probability of the information being received by the mobile terminals in the area is increased. In addition, the SIB10 and SIB11 scheduling information is included in SIB1, which is sent at 80 ms intervals, so mobile terminals that receive the ETWS indication try to receive SIB10 and SIB11 after first having received SIB1. By checking the disaster type information (Message Identifier and Serial Number) contained in SIB10 and SIB11, the mobile terminal can avoid receiving multiple messages that contain the same emergency information.
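The UE-side flow described above might be sketched as follows; the paging/SIB reader callables and message field names are hypothetical, and only the de-duplication by Message Identifier and Serial Number is modelled:

def monitor_etws(receive_paging, read_sib1, read_sib10, read_sib11,
                 paging_cycle_s=1.28, paging_occasions=100):
    """Listen for the ETWS indication and fetch SIB10/SIB11 when it is set.
    paging_cycle_s is the default paging cycle (0.32, 0.64, 1.28 or 2.56 s)."""
    seen = set()                               # (Message Identifier, Serial Number)
    for _ in range(paging_occasions):
        paging = receive_paging(timeout=paging_cycle_s)
        if not paging or not paging.get("etws_indication"):
            continue
        sched = read_sib1()                    # SIB1 (every 80 ms) schedules SIB10/11
        for msg in (read_sib10(sched), read_sib11(sched)):
            if msg is None:
                continue
            key = (msg["message_identifier"], msg["serial_number"])
            if key in seen:
                continue                       # same emergency already processed
            seen.add(key)
            print("ETWS message received:", msg)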

3G Message Distribution Method: For faster information delivery and an increased range of target users in 3G as well, the CBS message distribution control used in Area Mail was enhanced. An overview of the 3G radio system is shown in Figure 5.

In the Area Mail system, a Common Traffic Channel (CTCH) logical channel is set up in the radio link, and emergency information distribution is implemented by sending CBS messages over that channel. To inform the mobile terminals that the CTCH logical channel has been set up, the RNC orders the base station (BTS) to set the CTCH Indicator information element in the system information broadcast to TRUE, and transmits the paging message indicating a change in the system information broadcast to the mobile terminals. When the mobile terminal receives the CTCH Indicator, it begins monitoring the CTCH logical channel and can receive CBS messages.

In ETWS, by including the Warning Type in the paging message that indicates a change in the system information broadcast, the mobile terminal can execute the pop-up display and alert-sound processing (Primary Notification) corresponding to the Warning Type in parallel with starting reception of the CBS messages. This enhancement allows users whose terminals are in the connected state (RRC_CONNECTED) to also receive emergency information, which was not possible in the previous system. Including the disaster type information (Message Identifier and Serial Number) in this paging message also makes it possible to prevent the mobile terminal from receiving multiple messages containing the same emergency information.
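For comparison, the enhanced 3G terminal behaviour could be sketched roughly like this; the function names and message fields are again illustrative only:

def on_paging_sib_change(paging, seen):
    """Paging indicating a system-information change, possibly carrying ETWS data."""
    if "warning_type" in paging:
        key = (paging.get("message_identifier"), paging.get("serial_number"))
        if key not in seen:
            seen.add(key)
            show_alert(paging["warning_type"])  # pop-up + alert sound (Primary)
    start_ctch_monitoring()                     # CBS messages carry the Secondary

def show_alert(warning_type):
    print("ETWS alert:", warning_type)

def start_ctch_monitoring():
    print("monitoring CTCH for CBS messages")

on_paging_sib_change({"warning_type": "Earthquake",
                      "message_identifier": 1, "serial_number": 1}, set())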

More detailed information (Secondary Notification) is provided in CBS messages in the same way as in the conventional Area Mail system, thus achieving an architecture that is common to ETWS users and Area Mail users.