Sunday 10 November 2013

SIPTO Evolution


A couple of years back I did a post on SIPTO (Selected IP Traffic Offload) and the related technologies coming as part of Rel-10. I also put up a comparison of SIPTO, LIPA and IFOM here. Having left the topic for a couple of years, I found that there have been some enhancements to the architecture beyond the basic one described here.

I have embedded the NEC paper below for anyone wanting to investigate further the different options shown in the picture above. I think that even though the operator may offload certain types of traffic locally, they would still consider that data as part of the bundle and would like to charge for it. At the same time there would be a requirement on the operator for lawful interception, so I am not sure how this will be managed for the different architectures. Anyway, feel free to leave comments if you have any additional info.



Wednesday 6 November 2013

The Relentless Rise of Mobile Technology


Mobile has been rising and rising. A couple of weeks back I read: 'Mobile is considered the first and most important screen by nearly half of the 18- to 34-year-old demographic, according to research commissioned by Weve.'


The finding placed mobile ahead of laptops or PCs (chosen by 30.6 per cent) and way ahead of TV (12.4 per cent) as the first and most important screen in the lives of people between the ages of 18 and 34. 
Just 5.8 per cent of those surveyed in the age group chose a tablet as their "first screen".
The research also found that 45 per cent of 18- to 34-year-olds consider their mobile their first choice of device when interacting with online content, placing the platform just ahead of laptops and PCs, which scored 43 per cent. 
Among the wider 18 to 55 age group surveyed, a PC or laptop was seen as the "first screen" with 39.8 per cent naming either computer as their most important screen, while smartphones came second on 28 per cent. 
TV was in third place with 27 per cent of people naming it as their most important screen. Five per cent of the total group said they considered a tablet their "first screen". 
Only a quarter of the 18 to 55 age group said mobile would be their first choice platform if they wanted to access the internet, while nearly two thirds preferred to use a PC or laptop.
Tomi Ahonen has long referred to Mobile as the 7th Mass Media.

So when I saw the picture above (and there are more like it) in Benedict Evans' slide deck (embedded below), it reinforced my belief that mobile will take over the world sooner or later. Anyway, the slides are interesting to go through.



Monday 4 November 2013

Key challenges with automatic Wi-Fi / Cellular handover

Recently, at a conference, I mentioned that 3GPP is working on standards that will allow automatic and seamless handovers between cellular and Wi-Fi. At the same time, operators may want a degree of control whereby they can automatically switch on a user's Wi-Fi radio (if it is switched off) and offload to Wi-Fi whenever possible. This upset quite a few people, who pointed out the problems it could cause and the issues that still need to be solved.

I have been meaning to list the possible issues with this scenario of automatically handing over between Wi-Fi and cellular; luckily, they are listed very well in the recent 4G Americas whitepaper. The whitepaper is embedded below, but here are the issues I wanted to highlight:

In particular, many of the challenges facing Wi-Fi/Cellular integration have to do with realizing a complete intelligent network selection solution that allows operators to steer traffic in a manner that maximizes user experience and addresses some of the challenges at the boundaries between RATs (2G, 3G, LTE and Wi-Fi).
Figure 1 (see above) illustrates four of the key challenges at the Wi-Fi/Cellular boundary.
1) Premature Wi-Fi Selection: As devices with Wi-Fi enabled move into Wi-Fi coverage, they reselect to Wi-Fi without comparative evaluation of existing cellular and incoming Wi-Fi capabilities. This can result in degradation of end user experience due to premature reselection to Wi-Fi. Real time throughput based traffic steering can be used to mitigate this.
2) Unhealthy choices: In a mixed wireless network of LTE, HSPA and Wi-Fi, reselection may occur to a strong Wi-Fi network, which is under heavy load. The resulting ‘unhealthy’ choice results in a degradation of end user experience as performance on the cell edge of a lightly loaded cellular network may be superior to performance close to a heavily loaded Wi-Fi AP. Real time load based traffic steering can be used to mitigate this.
3) Lower capabilities: In some cases, reselection to a strong Wi-Fi AP may result in reduced performance (e.g. if the Wi-Fi AP is served by lower bandwidth in the backhaul than the cellular base station presently serving the device). Evaluation of criteria beyond wireless capabilities prior to access selection can be used to mitigate this.
4) Ping-Pong: This is an example of reduced end user experience due to ping-ponging between Wi-Fi and cellular accesses. This could be a result of premature Wi-Fi selection and mobility in a cellular environment with signal strengths very similar in both access types. Hysteresis concepts used in access selection similar to cellular IRAT, applied between Wi-Fi and cellular accesses can be used to mitigate this.
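
To make the mitigations above a little more concrete, here is a minimal Python sketch of how a policy engine might combine signal strength, load, throughput and hysteresis checks before steering a device to Wi-Fi. The thresholds and inputs are my own assumptions, not taken from the whitepaper:

MIN_WIFI_RSSI_DBM = -75     # ignore very weak APs (challenge 1: premature Wi-Fi selection)
MAX_AP_LOAD = 0.7           # skip heavily loaded APs (challenge 2: unhealthy choices)
TPUT_MARGIN = 1.2           # Wi-Fi must beat cellular by 20% (challenges 3 and 4: backhaul, ping-pong)
MIN_DWELL_S = 30            # stay on the current access for a while (challenge 4: ping-pong)

def should_steer_to_wifi(wifi_rssi_dbm, ap_load, wifi_tput_mbps, cell_tput_mbps, seconds_since_switch):
    """Return True if steering the device from cellular to Wi-Fi looks beneficial."""
    if seconds_since_switch < MIN_DWELL_S:
        return False
    if wifi_rssi_dbm < MIN_WIFI_RSSI_DBM:
        return False
    if ap_load > MAX_AP_LOAD:
        return False
    return wifi_tput_mbps > cell_tput_mbps * TPUT_MARGIN

# A strong but congested AP is rejected even though its signal is good:
print(should_steer_to_wifi(-60, 0.9, 40.0, 15.0, seconds_since_switch=120))   # False
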
Here is the paper:



Tuesday 29 October 2013

ANDSF: Evolution and Roaming with Hotspot 2.0


Access Network Discovery and Selection Function (ANDSF) is still evolving, and with the introduction of Hotspot 2.0 (HS 2.0) there is a good possibility of providing seamless roaming from cellular to Wi-Fi, Wi-Fi to Wi-Fi and Wi-Fi to cellular.
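
To give a rough idea of what an ANDSF policy boils down to in practice, here is a small Python sketch that picks the highest-priority valid rule and returns its prioritised access list. The field names are simplified from the ANDSF management object and all the values are invented:

from datetime import datetime

# Simplified, invented example rules loosely modelled on ANDSF inter-system mobility policies.
policies = [
    {"priority": 1, "valid_hours": range(8, 20), "location": "home_city",
     "prioritised_access": ["WLAN:HomeHotspot", "3GPP:LTE", "3GPP:UMTS"]},
    {"priority": 2, "valid_hours": range(0, 24), "location": None,    # catch-all rule, valid anywhere
     "prioritised_access": ["3GPP:LTE", "WLAN:PartnerHotspot", "3GPP:UMTS"]},
]

def select_access(current_location, now=None):
    """Return the preferred access list from the highest-priority rule that is currently valid."""
    now = now or datetime.now()
    valid = [p for p in policies
             if now.hour in p["valid_hours"] and p["location"] in (None, current_location)]
    if not valid:
        return []
    best = min(valid, key=lambda p: p["priority"])    # in ANDSF a lower value means higher priority
    return best["prioritised_access"]

print(select_access("home_city"))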


There is a good paper (not very recent) by Alcatel-Lucent and BT that explains these roaming scenarios and other ANDSF policy related information very well. It is embedded below:




Sunday 27 October 2013

TDD-FDD Joint CA


From a recent NTT Docomo presentation (embedded below): whereas 3GPP has so far only worked on FDD-only or TDD-only carrier aggregation scenarios, this proposal combines FDD as the PCell and TDD as the SCell. Inter-technology carrier aggregation is another possible option. Anyway, the complete presentation is further below.
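
As a simple way to picture the proposal, here is a small Python sketch of an aggregation configuration with an FDD primary cell and a TDD secondary cell; the band choices and bandwidths are arbitrary examples, not taken from the presentation:

# Arbitrary example configuration, not taken from the Docomo presentation.
pcell = {"duplex": "FDD", "band": 3,  "dl_bandwidth_mhz": 20}   # primary cell: carries control signalling
scell = {"duplex": "TDD", "band": 41, "dl_bandwidth_mhz": 20}   # secondary cell: adds downlink capacity

def aggregated_dl_bandwidth(cells):
    """Total downlink bandwidth available to the UE across the aggregated carriers."""
    return sum(c["dl_bandwidth_mhz"] for c in cells)

print(aggregated_dl_bandwidth([pcell, scell]))   # 40 MHz across one FDD and one TDD carrier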


LTE-Advanced Enhancements and Future Radio Access Toward 2020 and Beyond from Zahid Ghadialy

Updated on 29/10/2013

3GPP has already started working on this work item. See RP-131399 for details.

Tuesday 22 October 2013

Korea Telecom ‘Route Decision System’ for midnight buses

An interesting presentation from Korea Telecom at LTE Asia 2013 about how they use big data to decide the midnight bus routes. Here are two pictures which are self-explanatory:


We will soon start seeing more operators making use of the data collected from users, and this could also be a nice little earner for them.

Tuesday 15 October 2013

What is Network Function Virtualisation (NFV)?


Software Defined Networking (SDN) and Network Function Virtualization (NFV) are the two recent buzzwords taking the telecoms market by storm. Every network vendor now has some kind of strategy for using NFV and SDN to help operators save money. So what exactly is NFV? I found a good, simple video by Spirent that explains it well. Here it is:


To add a description to this, I will borrow an explanation and a very good example from Wendy Zajack, Director of Product Communications at Alcatel-Lucent, on the ALU blog:

Let’s take this virtualization concept to a network environment. For me, cloud means I can get my stuff wherever I am and on any device – meaning I can pull out my smartphone, my iPad or my computer and show my mom the latest pictures of her grandkids. I am not limited to only one type of photo album to put my photos in. I can also show her both photos and videos together, and am not limited to showing her the kids in one format and on one device.
Today, a telecom network contains a lot of equipment that can only do one thing. These machines are focused on what they do and they do it really well – this is why telecom providers are considered so ‘trusted.’ Back in the days of landline phones, even when the power was out you could always make a call. These machines run alone with dedicated resources. They are made by various vendors and speak various languages or ‘protocols’ to exchange information with each other when necessary. Some don’t even talk at all – they are just set up and then left to run. So, every day your operator is running a mini United Nations and corralling it to get you access to all of your stuff. But it is a United Nations with a fixed number of seats, with only a specific nation allowed to occupy a specific seat, and with the seat left unused if there is a no-show. That is a lot of underutilized equipment that is tough and expensive to manage. It also has a shelf life of 15 years… while your average store-bought computer doubles in speed every 18 months.
Virtualizing the network means the ability to run a variety of applications (or functions) on a standard piece of computing equipment, rather than on dedicated, specialized processors and equipment, to drive lower costs (more value), more re-use of the equipment between applications (more sharing), and a greater ability to change what is using the equipment to meet changing user needs (more responsiveness). This has already started in enterprises as a way to control IT costs and improve performance, and of course it is far greener.
To give this a sports analogy: imagine if in American football, instead of having specialists in all the different positions (QB, LB, RB, etc.), you had a bunch of generalists who could play any position – you might only need a 22- or 33-man squad (2 or 3 players for every position) rather than the normal squad of 53. The management of your team would be much simpler as ‘one player fits all’ positions. It is easy to see how this would benefit a service provider – simplifying the procurement and management of the network elements (the team) and giving them the ability to do more with less.

Dimitris Mavrakis from Informa wrote an excellent summary of the IIR SDN and NFV conference on the Informa blog here. It's worth reading his article in full, but I want to highlight one section that shows how operators think deployment will be done:

The speaker from BT provided a good roadmap for implementing SDN and NFV:
  1. Start with a small part of the network, which may not be critical for the operation of the whole. Perhaps introduce incremental capacity upgrades or improvements in specific and isolated parts of the network.
  2. Integrate with existing OSS/BSS and other parts of the network.
  3. Plan a larger-scale rollout so that it fits with the longer-term network strategy.
Deutsche Telekom is now considered to be deploying in the first phase, with a small trial called Project TeraStream at Hrvatski Telekom, its Croatian subsidiary. BT, Telefonica, NTT Communications and other operators are at a similar stage, although DT is considered the first to deploy SDN and NFV for commercial network services beyond the data center.
Stage 2 in the roadmap is a far more complicated task. Integrating with existing components that may perform the same function but are not virtualized requires east-west APIs that are not clearly defined, especially when a network is multivendor. This is a very active point of discussion, but it remains to be seen whether Tier-1 vendors will be willing to openly integrate with their peers and even smaller, specialist vendors. OSS/BSS is also a major challenge, where multivendor networks are controlled by multiple systems and introducing a new service may require revising several parameters in many of these OSS/BSS consoles. This is another area that is not likely to change rapidly but rather in small, incremental steps.
The final stage is perhaps the biggest barrier due to the financial commitment and resources required. Long-term strategy may translate to five or even 10 years ahead – when networks are fully virtualized – and the economic environment may not allow such bold investments. Moreover, it is not clear if SDN and NFV guarantee new services and revenues outside the data center or operator cloud. If they do not, both technologies – and similar IT concepts – are likely to be deployed incrementally and replace equipment that reaches end-of-life. Cost savings in the network currently do not justify forklift upgrades or the replacement of adequately functional network components.
There is also a growing realization that bare-metal platforms (i.e., the proprietary hardware-based platforms that power today’s networks) are here to stay for several years. This hardware has been customized and adapted for use in telecom networks, allowing high performance for radio, core, transport, fixed and optical networks. Replacing these high-capacity components with virtualized ones is likely to affect performance significantly and operators are certainly not willing to take the risk of disrupting the operation of their network.
A major theme at the conference was that proprietary platforms (particularly ATCA) will be replaced by common off-the-shelf (COTS) hardware. ATCA is a hardware platform designed specifically for telecoms, but several vendors have adapted the platform to their own cause, creating fragmentation, incompatibility and vendor lock-in. Although ATCA is in theory telecoms-specific COTS, proprietary extensions have forced operators to turn to COTS, which is now driven by IT vendors, including Intel, HP, IBM, Dell and others.


ETSI has just published the first specifications on NFV. Their press release here says:

ETSI has published the first five specifications on Network Functions Virtualisation (NFV). This is a major milestone towards the use of NFV to simplify the roll-out of new network services, reduce deployment and operational costs and encourage innovation.
These documents clearly identify an agreed framework and terminology for NFV which will help the industry to channel its efforts towards fully interoperable NFV solutions. This in turn will make it easier for network operators and NFV solutions providers to work together and will facilitate global economies of scale.
The IT and Network industries are collaborating in ETSI's Industry Specification Group for Network Functions Virtualisation (NFV ISG) to achieve a consistent approach and common architecture for the hardware and software infrastructure needed to support virtualised network functions. Early NFV deployments are already underway and are expected to accelerate during 2014-15. These new specifications have been produced in less than 10 months to satisfy the high industry demand – NFV ISG only began work in January 2013.
NFV ISG was initiated by the world's leading telecoms network operators. The work has attracted broad industry support and participation has risen rapidly to over 150 companies of all sizes from all over the world, including network operators, telecommunication equipment vendors, IT vendors and technology providers. Like all ETSI standards, these NFV specifications have been agreed by a consensus of all those involved.
The five published documents (which are publicly available via www.etsi.org/nfv) include four ETSI Group Specifications (GSs) designed to align understanding about NFV across the industry. They cover NFV use cases, requirements, the architectural framework, and terminology. The fifth GS defines a framework for co-ordinating and promoting public demonstrations of Proof of Concept (PoC) platforms illustrating key aspects of NFV. Its objective is to encourage the development of an open ecosystem by integrating components from different players.
Work is continuing in NFV ISG to develop further guidance to industry, and more detailed specifications are scheduled for 2014. In addition, to avoid the duplication of effort and to minimise fragmentation amongst multiple standards development organisations, NFV ISG is undertaking a gap analysis to identify what additional work needs to be done, and which bodies are best placed to do it.
The ETSI specifications are available at: http://www.etsi.org/technologies-clusters/technologies/nfv

The first document that shows various use cases is embedded below:


Sunday 13 October 2013

Handset Antenna Design


I came across this presentation on handset antenna design from a recent Cambridge Wireless event here. It's interesting to see how antenna technology has evolved and is still evolving. Another recent whitepaper from 4G Americas on meeting the 1000x challenge (here) showed how the different wavelengths affect antenna design.


Maybe it's better to move to higher frequencies from the handset design point of view, since the shorter wavelengths allow smaller antennas. Anyway, the Cambridge Wireless presentation is embedded below:


Friday 11 October 2013

3GPP Rel-12 SON Status


Considering how popular the Release-11 SON post has been, here is the Rel-12 status that was presented at the SON Conference in October 2013. The complete presentation is embedded below:



You may also be interested in reading a comprehensive report prepared by David Chambers here.

Tuesday 8 October 2013

SON in LTE Release-11


Very timely of 4G Americas to release a whitepaper on SON, considering that the SON conference finished only last week. The whitepaper contains lots of interesting details and the status from Rel-11, which is the latest complete release available. I will probably look at some of the features in detail later in separate posts. The complete paper is embedded below and is available from the 4G Americas website here.


Sunday 6 October 2013

China Mobile: A peek at 5G


I was hoping to draw a line under 5G for the time being, after a prolonged discussion on my earlier post here and after clarifying MSA here. Then this CMCC lecture was brought to my attention, and I thought it was well worth listening to, so I have embedded the video and slides below. Let me know what you think in the comments.





Thursday 3 October 2013

Case study of SKT deployment using the C-RAN architecture


Recently I came across this whitepaper by iGR, in which they have done a case study of the SKT deployment using C-RAN. The main points can be summarised from the whitepaper as follows:

This approach created several advantages for SK Telecom – or for any operator that might implement a similar solution – including the:

  • Maximum re-use of existing fiber infrastructure to reduce the need for new fiber runs which ultimately reduced the time to market and capital costs.
  • Ability to quickly add more ONTs to the fiber rings so as to support additional RAN capacity when needed.
  • Support of multiple small cells on a single fiber strand. This is critical to reducing costs and having the flexibility to scale.
  • Reduction of operating expenses.
  • Increased reliability due to the use of fiber rings with redundancy.
  • Support for both licensed and unlicensed RAN solutions, including WiFi. Thus, the fronthaul architecture could support LTE and WiFi RANs on the same system.
As a result of its implementation, SK Telecom rolled out a new LTE network in 12 months rather than 24 and reduced operating expenses in the first year by approximately five percent. By 2014, SK Telecom expects an additional 50 percent OpEx savings due to the new architecture.

Anyway, the paper is embedded below for your perusal and is available to download from the iGR website here.



Sunday 29 September 2013

Telecom APIs: The why and what

It felt like, with the focus on LTE/4G, small cells and everything else in the mobile industry, APIs had faded into the background... or so it seemed. Telco APIs are alive and kicking and there is a renewed focus on them.

This is from an AT&T press release not so long ago:

AT&T*, already the leading carrier deploying network Application Programming Interfaces (APIs) to developers,1 today announced it has launched an enterprise-focused API program that allows enterprise customers, wholesale collaborators and solution providers to innovate using AT&T network APIs.
Led by industry thought leader Laura Merling, VP of Ecosystem Development and Platform Solutions, AT&T is pursuing a telecommunications industry API opportunity expected to grow to $157 billion in global revenues by 2018.2
APIs are software interfaces that provide access to data and core functions within AT&T’s network. By opening up its APIs to customers, AT&T believes it can help them meet three key challenges: do more without spending more; harness technology to gain competitive advantage; and support their ability to create and deploy applications that can be used on almost any device around the world.
...
Some examples of how enterprises can use AT&T APIs include:
  • Content formatting: Using APIs, video content from a company’s video library stored in the cloud can be easily optimized in near real-time for users to watch on almost any device and network.
  • Communications services: To bring more efficiency and productivity to business operations, businesses can use APIs to automate voice and video calls, integrating speech and video services into applications.
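
To make this a little more concrete, here is a minimal sketch of what embedding a network capability into a business process can look like. The endpoint, resource names and response field are hypothetical and purely illustrative; this is not AT&T's actual API:

import requests

# Hypothetical carrier API endpoint and fields - for illustration only, not AT&T's actual API.
API_BASE = "https://api.example-carrier.com/v1"
ACCESS_TOKEN = "replace-with-oauth-token"

def send_appointment_reminder(msisdn, appointment_time):
    """Send an SMS reminder from a business workflow via a (hypothetical) carrier messaging API."""
    resp = requests.post(
        f"{API_BASE}/sms/outbound",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"to": msisdn, "body": f"Reminder: your appointment is at {appointment_time}."},
    )
    resp.raise_for_status()
    return resp.json().get("message_id")   # hypothetical response field
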
Sometime back, Martin Geddes (MG) posted his discussion on this topic with Alan Quayle (AQ) here:

I interviewed Alan earlier this week, and here is our joint “state of the telecom API nation” report.

MG: My early telecom API project crashed and burned, and past industry initiatives like ParlayX never took off. What has changed since the early 2000s that is triggering new and rapid growth?

AQ: Both the technology and the market have evolved. Large new developer communities have been created by Apple and Google, delivering value through those ecosystems. The need for such ecosystems and partnerships in telecoms is now driven by business demand, not technology supply, and thus is no longer seen as unusual or controversial.

Ten years ago there were developers, but the developer platforms were not as sophisticated. The technology was complex to consume, so you had to be a hardcore developer to use what was on offer. Today we have a mass developer market of people with Web development skills, and an Independent Software Vendor (ISV) market able to consume telecoms capabilities using their existing skills base.

The whole ICT industry – including ancillary services like consulting and equipment – is around a $5tn annual market. Yet it notably lacks a large-scale profitable developer ecosystem for networked service delivery. Why has it failed? Historically there have been too many silos, and too much friction to engage with them. What we are now seeing are companies like Apidaze, Bandwidth, OpenCloud, Plivo, Telestax, Tropo, and Twilio eliminating both of these. Lots of money is being spent on marketing to developers, creating a new business opportunity that telcos and broader ecosystem can take advantage of.

Notably this ecosystem is about more than just APIs. There's also the whole free and open source software arena too. Tools like FreeSWITCH, OpenCloud, Mobicents and WebRTC are becoming core to service innovation. Platforms like Tropo’s Ameche open up new opportunities for value-added voice services. We will be looking at the whole development stack at the Summit in Bangkok.

Who are the key consumers of telecoms APIs and what for?

Telecoms APIs are generally used by enterprises that are embedding communications into their core processes. The term “Communications Enabled Business Processes” was used in the past, but the name never took off, even if the concept did. As such, there is a quiet enterprise communications revolution going on. (See my recent article for more information.)

Lots of businesses are doing cool stuff, often to sell to other enterprises. These projects and platforms may not get much press individually, but collectively they add up to a significant market.

For example, Turkcell are a leader in this area of enterprise API delivery. However, they don’t talk about APIs, because it’s about the end user and the value from a better customer experience. They focus on promoting their enterprise services, all of which are (crucially) backed by sales team with technical support. Example services include FreeURL, where customers surf on your pages for free; customer device model and mobile number to support efficient and effective interaction regardless of end user device type; a “find the nearest store” capability to drive sales; and click to call services to capture leads.

That these telecoms services use APIs is about as interesting as them using electricity. The business value and innovation is in the enhanced customer experiences they enable.

Who makes money from producing telecoms APIs and how?

Everyone can! Telcos, intermediaries who work with the developers, enterprises and systems integrators. To make progress, however, telcos have to accept they can't do everything for themselves. For instance, you have to know what developers want – and that means Web scripting, not REST APIs. We will for the foreseeable future need middlemen who translate the value of telecoms APIs into a consumable form.

The greatest value is in customer interaction APIs. The need to communicate with suppliers and customers is fundamental to the human condition, we have been doing it for millennia, and will not stop any time soon. There are long-established markets like bulk SMS and automated calling, and these are ripe for new growth with new capabilities to interact and transact with customers.

What are the most promising areas for future growth?

The growth is around value-added services, notably around the current voice cash cow. It’s time for telcos to remember their heritage: you're the phone company. The distracting “digital lifestyle” stuff only makes money for the content companies. There are too many adjacent businesses being built where the telco doesn't have enough competence, and are competing against low-end competition (e.g. cheap webcams vs managed CCTV or home monitoring services).

Lots of consultants are selling future billion-dollar markets that don't exist. Telcos need to stick to the basic nuts and bolts of communications services, and do them better.

What are the key challenges facing this space?

The key challenge is that this game requires an ecosystem, and telcos are islands. That doesn't mean they should copy Apple and Android, but instead they need to focus on segments where they have credible value and an advantage. A $5tn industry should be able to do this.

What it requires is a whole offering, including sales, business development and support. API-enablement is just a piece of technology, and this cannot be led from a network or IT function; it’s a line of business. The improvement and value to the customers has to come first, and getting the mindset right is hard. We have proof points that you can make money, thanks to companies like Telestax, Tropo and Twilio, if you build a whole supply chain.


Finally, Alan Quayle has posted his independent review of Telecom APIs, which is embedded below:



Do you have an opinion on Telecom APIs? Feel free to add it in the comments.

Thursday 26 September 2013

Multi-stream aggregation (MSA): Key technology for future networks


In our recent 5G presentation here, we outlined multi-technology carrier aggregation as one of the technologies for future networks. Some of the discussions I had on this topic afterwards highlighted the following:
  1. This is generally referred to as Multi-stream aggregation (MSA)
  2. We will see this much sooner than 5G, probably from LTE-A Rel-13 onwards 


Huawei has a few documents on this topic. One such document is embedded below, and another, more technical, document is available on SlideShare here.



Monday 23 September 2013

Push to talk (PTT) via eMBMS


I was talking about push to share back in 2007 here. Now, a recent presentation from ALU (embedded below) suggests eMBMS as a solution for PTT-like services in the public safety case. I am not sure if or when we will see this, but I hope it is sooner rather than later. Feel free to add your comments:



Monday 16 September 2013

#5G: Your Questions Answered

This is our view on what 5G is. Please feel free to add your comments here, or if you want a much wider audience to discuss them, please add them to the Cisco Communities here.


Friday 13 September 2013

LTE for Utilities and Smart Grids

This has been an area of interest for the last couple of years. Discussions have centred around questions like "Is LTE fit for IoT?", "Which technology should be used for IoT?", "Is it economical to use LTE for M2M?", "Would small cells be useful for M2M?", and so on.

Ericsson has recently published a whitepaper titled "LTE for utilities - supporting smart grids". One of the tables that caught my eye is as follows:


LTE would be ideally suited to some of the "Performance class" requirements, where the transfer time requirement is less than 100 ms. Again, it can always be debated whether Wi-Fi would meet the requirements in many cases and should therefore be used instead of LTE, and so on. I will let you form your own conclusions, and if you are passionate about this and have an opinion, feel free to leave a comment.

The whitepaper is embedded below:





Monday 9 September 2013

LTE TDD - universal solution for unpaired spectrum?



TDD deployments are gathering pace. An earlier GSA report I posted here highlighted the many devices that are TD-LTE ready.
The main thing being emphasised is that, from the standards point of view, not much additional effort is required for a TDD device compared to an FDD device. Of course, in practice the physical layer is different, and that can be a challenge in itself.

Qualcomm has published a presentation on this topic, which is embedded below and available to download from here.



Thursday 5 September 2013

Throughput Comparison for different wireless technologies

I have merged various slides from the recent 4G Americas presentation to get a complete picture of data throughput speeds for the various technologies.

Saturday 31 August 2013

VoLTE Bearers

While going through an Anritsu whitepaper on VoLTE, I found this picture, which explains the concept of bearers in a VoLTE call well. From the whitepaper:

All networks and mobile devices are required to utilize a common access point name (APN) for VoLTE, namely, “IMS”. Unlike many legacy networks, LTE networks employ the “always-on” conception of packet connectivity: Devices have PDN connectivity virtually from the moment they perform their initial attach to the core network. During the initial attach procedure, some devices choose to name the access point through which they prefer to connect. However, mobile devices are not permitted to name the VoLTE APN during initial attach, i.e., to utilize the IMS as their main PDN, but rather must establish a connection with the IMS APN separately. Thus, VoLTE devices must support multiple simultaneous default EPS bearers.

Note that because the VoLTE APN is universal, mobile devices will always connect through the visited PLMN’s IMS PDN-GW. This architecture also implies the non-optionality of the P-CSCF:

As stated, VoLTE sessions employ two or three DRBs. This, in turn, implies the use of one default EPS bearer plus one or two dedicated EPS bearers. The default EPS bearer is always used for SIP signaling and exactly one dedicated EPS bearer is used for voice packets (regardless of the number of active voice media streams.) XCAP signaling may be transported on its own dedicated EPS bearer – for a total of three active EPS bearers – or it may be multiplexed with the SIP signaling on the default EPS bearer, in which case only two EPS bearers are utilized.

My understanding is that initially, when the UE is switched on, a default bearer with QCI 9 (see old posts on QoS/QCI here) is established and used for general data traffic. Later, another default bearer with QCI 5 is established towards the IMS core network and is used for SIP signalling. When a VoLTE call is set up, a dedicated bearer with QCI 1 is established for the voice. As the article says, another dedicated bearer may be needed for XCAP signalling. If a video call on top of VoLTE is used, an additional dedicated bearer with QCI 2 will be set up. Note that the voice will still be carried by the dedicated bearer with QCI 1.
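
As a rough way to visualise this, here is a small Python sketch of the bearers I would expect to be active during a VoLTE call with video. The QCI mapping follows my understanding above; the structure itself is illustrative, not taken from the Anritsu whitepaper:

# Illustrative snapshot of the EPS bearers during a VoLTE call, following the QCI mapping
# described above; the structure is my own, not from the whitepaper.
bearers = [
    {"type": "default",   "apn": "internet", "qci": 9, "carries": "general data"},
    {"type": "default",   "apn": "IMS",      "qci": 5, "carries": "SIP signalling (and possibly XCAP)"},
    {"type": "dedicated", "apn": "IMS",      "qci": 1, "carries": "voice media (GBR)"},
    {"type": "dedicated", "apn": "IMS",      "qci": 2, "carries": "video media (GBR), video calls only"},
]

def bearers_for_call(video=False):
    """Return the bearers expected to be active for a voice-only or voice+video VoLTE call."""
    return [b for b in bearers if video or b["qci"] != 2]

for b in bearers_for_call(video=True):
    print(f"QCI {b['qci']} {b['type']:<9} on APN {b['apn']:<8} -> {b['carries']}")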

If you disagree or have more insight, please feel free to add a comment at the end of the post.

The whitepaper is embedded below and is available to download from slideshare.




Thursday 29 August 2013

New Mobile related terms added in Oxford dictionary

The Oxford dictionary has just added some new words. Here is a summary of the ones related to mobiles.

BYOD, n.: abbreviation of 'bring your own device': the practice of allowing the employees of an organisation to use their own computers, smartphones, or other devices for work purposes. Wikipedia also calls it bring your own technology (BYOT), bring your own phone (BYOP), and bring your own PC (BYOPC).

digital detox, n.: a period of time during which a person refrains from using electronic devices such as smartphones or computers, regarded as an opportunity to reduce stress or focus on social interaction in the physical world.

Another term, "nomophobia", which has unfortunately not yet entered the dictionary, refers to the fear of being out of mobile phone contact; it is an abbreviation of "no-mobile-phone phobia". According to a recent survey, some 54% of Brits have experienced it. If someone is affected by nomophobia, it is time they undergo a 'digital detox' to sort their life out.

emoji, n.: a small digital image or icon used to express an idea or emotion in electronic communication.


Everyone using OTT applications will know these well. They are very useful for communicating emotions. I think the inability to use emoji is one of the drawbacks of SMS. On the other hand, OTT apps could make money by providing extended emoji sets for a premium, but I haven't seen anyone do this yet.

FOMO, n.: fear of missing out: anxiety that an exciting or interesting event may currently be happening elsewhere, often aroused by posts seen on a social media website

'FOMO' is big, and I personally know people who suffer from it. In the good old days this was simply known as jealousy: one would be jealous that someone else was going on more holidays, had a bigger house or car, and so on. In this connected world, where we get Facebook updates and notifications on our phones and tablets, the digital term is FOMO. A slide from Mary Meeker's presentation that I posted here shows that a typical user checks their phone 150 times every day, and social media is not far from the top of that list.

internet of things, n.: a proposed development of the Internet in which everyday objects have network connectivity, allowing them to send and receive data.

This 'Internet of Things' or 'IoT' has been covered in the blog more than enough times.

phablet, n.: a smartphone having a screen which is intermediate in size between that of a typical smartphone and a tablet computer.


Earlier this year I put up a post here that talked all about feature phones, smartphones, phablets, etc. Other terms like tabphone and phonetab didn't make it.

selfie, n. (informal): a photograph that one has taken of oneself, typically one taken with a smartphone or webcam and uploaded to a social media website.


Here is a selfie of me using my phone today to end this post :-)

Sunday 25 August 2013

Centralized SON


I was going through the SKT presentation that I blogged about here and came across the slide above. SKT is clearly promoting the benefits of their C-SON (centralized SON) here.


The old 4G Americas whitepaper (here) explained the differences between the three approaches: centralized (C-SON), distributed (D-SON) and hybrid (H-SON). An extract from that paper follows:

In a centralized architecture, SON algorithms for one or more use cases reside on the Element Management System (EMS) or a separate SON server that manages the eNBs. The outputs of the SON algorithms, namely the values of specific parameters, are then passed to the eNBs either on a periodic basis or when needed. A centralized approach allows for a more manageable implementation of the SON algorithms. It allows interactions between SON algorithms across use cases to be considered before modifying SON parameters. However, updates to the use case parameters are delayed, since KPIs and UE measurement information must be forwarded to a centralized location for processing. Filtered and condensed information is passed from the eNB to the centralized SON server to preserve the scalability of the solution in terms of the volume of information transported. Less information is available at the SON server compared to what would be available at the eNB. Higher latency, due to the time taken to collect UE information, restricts the applicability of a purely centralized SON architecture to those algorithms that can tolerate a slower response time. Furthermore, since the centralized SON server presents a single point of failure, an outage in the centralized server or backhaul could result in stale and outdated parameters being used at the eNB, because SON parameters at the eNB are likely to be updated less frequently than is possible in a distributed solution.

In a distributed approach, SON algorithms reside within the eNBs, allowing autonomous decision making at the eNBs based on UE measurements received at the eNB and additional information from other eNBs received via the X2 interface. A distributed architecture allows for ease of deployment in multi-vendor networks and optimization on faster time scales. Optimization could be done differently for different times of the day. However, due to the inability to ensure standard and identical implementation of algorithms in a multi-vendor network, careful monitoring of KPIs is needed to minimize potential network instabilities and ensure overall optimal operation.

In practical deployments, these architecture alternatives are not mutually exclusive and could coexist for different purposes, as is realized in a hybrid SON approach. In a hybrid approach, part of a given SON optimization algorithm is executed in the NMS while another part of the same SON algorithm is executed in the eNB. For example, the initial parameter values could be set by a centralized server, and updates and refinements to those parameters in response to the actual UE measurements could be done at the eNBs. Each implementation has its own advantages and disadvantages. The choice of centralized, distributed or hybrid architecture needs to be decided on a use-case by use-case basis, depending on the information availability, processing and speed of response requirements of that use case. In the case of a hybrid or centralized solution, a practical deployment would require a specific partnership between the infrastructure vendor, the operator and possibly a third-party tool company. Operators can choose the most suitable approach depending upon their current infrastructure deployment.
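
As a toy illustration of the hybrid split described in the extract, here is a short Python sketch in which a central server sets the initial value of a handover offset and each eNB then refines it locally from its own measurements. The parameter names, thresholds and update rule are all invented, not from the whitepaper:

# Toy hybrid-SON illustration: parameter names, thresholds and the update rule are invented.

def central_initial_cio(planning_data):
    """Centralised step: choose an initial cell individual offset (CIO) from network-wide planning data."""
    return planning_data.get("default_cio_db", 0.0)

class ENodeB:
    def __init__(self, name, cio_db):
        self.name = name
        self.cio_db = cio_db            # handover offset towards a neighbour cell, in dB

    def local_refinement(self, too_early_ho_rate, too_late_ho_rate, step_db=0.5):
        """Distributed step: nudge the offset based on locally observed handover failures."""
        if too_early_ho_rate > too_late_ho_rate:
            self.cio_db -= step_db      # handovers triggered too eagerly: make the neighbour less attractive
        elif too_late_ho_rate > too_early_ho_rate:
            self.cio_db += step_db      # handovers triggered too late: make the neighbour more attractive

enb = ENodeB("eNB-001", central_initial_cio({"default_cio_db": 2.0}))
enb.local_refinement(too_early_ho_rate=0.05, too_late_ho_rate=0.12)
print(enb.name, enb.cio_db)             # eNB-001 2.5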

Finally, Celcite's CMO recently gave an interview on this topic to ThinkSmallCell here. An extract is below:

SON software tunes and optimises mobile network performance by setting configuration parameters in cellsites (both large and small), such as the maximum RF power levels, neighbour lists and frequency allocation. In some cases, even the antenna tilt angles are updated to adjust the coverage of individual cells.

Centralised SON (C-SON) software co-ordinates all the small and macrocells, across multiple radio technologies and multiple vendors in a geographic region - autonomously updating parameters via closed loop algorithms. Changes can be as frequent as every 15 minutes– this is partly limited by the bottlenecks of how rapidly measurement data is reported by RAN equipment and also the capacity to handle large numbers of parameter changes. Different RAN vendor equipment is driven from the same SON software. A variety of data feeds from the live network are continuously monitored and used to update system performance, allowing it to adapt automatically to changes throughout the day including outages, population movement and changes in services being used.

Distributed SON (D-SON) software is autonomous within each small cell (or macrocell) determining for itself the RF power level, neighbour lists etc. based on signals it can detect itself (RF sniffing) or by communicating directly with other small cells.

LTE has many SON features designed in from the outset, with the X2 interface specifically used to co-ordinate between the small cell and macrocell layers, whereas 3G lacks SON standards and requires proprietary solutions.
C-SON software is available from a relatively small number of mostly independent software vendors, while D-SON is built-in to each small cell or macro node provided by the vendor. Both C-SON and D-SON will be needed if network operators are to roll out substantial numbers of small cells quickly and efficiently, especially when more tightly integrated into the network with residential femtocells.

Celcite is one of the handful of C-SON software solution vendors. Founded some 10 years ago, it has grown organically by 35% annually to 450 employees. With major customers in both North and South America, the company is expanding from 3G UMTS SON technology and is actively running trials with LTE C-SON.

Quite a few companies are claiming to be in the SON space, but Celcite would argue that there are perhaps only half a dozen with the capabilities for credible C-SON solutions today. Few companies can point to live deployments. As with most software systems, 90% of the issues arise when something goes wrong and it's those "corner cases" which take time to learn about and deal with from real-world deployment experience.

A major concern is termed "Runaway SON", where the system goes out of control and causes a tremendous negative impact on the network. It is important to understand when to trigger a SON command and when not to. This ability to orchestrate and issue configuration commands is critical for a safe, secure and effective solution.

Let me know your opinions via comments below.