Showing posts with label QoS.

Sunday, 30 June 2013

Multi-RAT mobile backhaul for Het-Nets

I recently got another opportunity to hear from Andy Sutton, Principal Network Architect, Network Strategy, EE. His earlier presentation from our Cambridge Wireless event is here. There were many interesting bits in this presentation; some of the ones that stood out for me are as follows:

It is interesting to see above that the LTE traffic in the backhaul is separated by QCI (QoS Class Identifier - see here), as opposed to the 2G/3G traffic.
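
For readers wondering how QCI-based separation is typically carried across the backhaul, a common approach is to map each bearer's QCI to a DSCP marking that the transport routers can act on. The sketch below is purely illustrative: the QCI-to-DSCP values are assumptions on my part, not EE's actual configuration.

```python
# Illustrative only: map LTE bearer QCIs to DSCP code points so that backhaul
# routers can differentiate the traffic. Real mappings are operator-specific.

QCI_TO_DSCP = {
    1: "EF",    # conversational voice (GBR) -> Expedited Forwarding
    5: "AF41",  # IMS signalling
    8: "AF11",  # TCP-based / buffered streaming
    9: "BE",    # default bearer, best effort
}

def dscp_for_bearer(qci: int) -> str:
    """Return the DSCP marking to apply to a bearer's GTP-U packets."""
    return QCI_TO_DSCP.get(qci, "BE")  # QCIs not listed fall back to best effort

if __name__ == "__main__":
    for qci in (1, 5, 9, 7):
        print(f"QCI {qci} -> DSCP {dscp_for_bearer(qci)}")
```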




This is EE's implementation. As you may notice, 2G and 4G use SRAN (Single RAN) while 3G is separate. As I have mentioned a few times, I think 3G networks will probably be switched off before the 2G networks, mainly because there are a lot more 2G M2M devices that need to send very little data and cannot afford the higher energy consumption of 3G, so this architecture may be well suited.


Finally, a practical network implementation that looks different from the textbook picture and the often-touted 'flat' architecture. Andy did mention that they see a ping latency of 30-50ms in the LTE network, as opposed to around 100ms in the UMTS network.


Mark Gilmour was able to prove this point practically.

Here is the complete presentation:



Wednesday, 15 August 2012

QoS Strategies for IMS & VoLTE

Another brilliant presentation from the LTE World Summit earlier this year:




Monday, 6 June 2011

Billing based on QoS and QoE

With spectrum coming at a price, operators are keen to make as much money as possible out of the data packages they provide to consumers. Operators want to stop users using over-the-top (OTT) services like Skype, through which they lose potential revenue, and to steer users towards the services offered by the operator, thereby maximising their own revenue.

A valid argument put forward by operators is that 90% of the bandwidth is used by just 10% of the users. This gives them a reason to look into the packets and restrict the rogue users.

As a result, they are now turning to deep packet inspection (DPI) to make sure that users are not using the services they are restricted from using. Allot is one such company offering this capability.

The following presentation is from the LTE World Summit:
They also have some interesting videos on the net, embedded below, which give a good idea of the services being offered to operators.

Smart Pipes
Finally, the terms QoS and QoE always cause confusion. Here is a simple explanation via Dan Warren on Twitter:

QoS = call gets established and I can hear what is being said, everything else is QoE

Friday, 29 April 2011

Service Layer Optimization element to Improve Utilisation of Network Capacity


The following is an extract from the 4G Americas whitepaper, "Optimizing the Mobile Application Ecosystem":


Applications have diverse requirements on the mobile network in terms of throughput, relative use of uplink vs. downlink, latency and variability of usage over time. While the underlying IP-based Layer 3 infrastructure attempts to meet the needs of all the applications, significant network capacity is lost to inefficient use of the available resources. This inefficiency stems primarily from the non-deterministic nature of the aggregate requirements placed on the network by the numerous applications and their traffic flows that are live at any time.

This reduction in network utilization can be mitigated by incorporating application awareness into network traffic management through use of Application or Service Layer optimization technologies. A Service Layer optimization solution would incorporate awareness of:

1) device capabilities such as screen size and resolution;
2) user characteristics such as billing rates and user location;
3) network capabilities such as historic and instantaneous performance; and
4) application characteristics such as the use of specific video codecs and protocols by an application such as Video on Demand (VOD) to ensure better management of network resources.

Examples of Service Layer optimization technologies include:
* Real-time transcoding of video traffic to avoid downlink network congestion and ensure better Quality of Experience (QoE) through avoidance of buffering
* Shaping of self-adapting traffic such as Adaptive Streaming traffic through packet delay to avoid downlink network congestion (see the sketch after this list)
* Shaping of error-compensating flows such as video conferencing through use of packet drops to avoid uplink network congestion
* Shaping of large flows such as file uploads on the uplink through packet delays to conserve responsiveness of interactive applications such as web browsing
* Explicit caching of frequently accessed content such as video files on in-network CDNs to minimize traffic to the backbone
* Implicit caching of frequently accessed content such as images in web content on in-network caches to improve web page retrieval speeds
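
As a rough illustration of the packet-delay shaping idea in the list above, here is a minimal token-bucket sketch that works out how long to hold each packet once a configured rate is exceeded. It is a toy model of the concept, not any vendor's implementation, and the rate and burst values are arbitrary.

```python
import time

class TokenBucketShaper:
    """Minimal token-bucket shaper: packets are delayed (not dropped) when the
    configured rate is exceeded, which is roughly how self-adapting flows such
    as adaptive streaming can be slowed down during congestion."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0           # bytes per second
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def delay_for(self, packet_len: int) -> float:
        """Return how long (seconds) to hold a packet of packet_len bytes."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return 0.0                       # enough credit: send immediately
        deficit = packet_len - self.tokens
        delay = deficit / self.rate
        # account for the packet as if it is released after the wait
        self.tokens = 0.0
        self.last = now + delay
        return delay

if __name__ == "__main__":
    shaper = TokenBucketShaper(rate_bps=2_000_000, burst_bytes=15_000)
    for size in (1500, 1500, 1500, 15000, 1500):
        print(f"{size}-byte packet delayed {shaper.delay_for(size) * 1000:.1f} ms")
```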

Service Layer optimization technologies may be incorporated in the data path in many locations:
1) the origin server;
2) the UE device;
3) as a cloud-hosted offering through which devices and/or applications and/or networks route traffic; or
4) as a network element embedded in a service provider’s network.

Further, in a service provider’s network the optimization function may be deployed in either the core network and/or edge aggregation locations. When Service Layer optimization entities in the network are deployed at both core and edge locations, they may operate in conjunction with each other to form a hierarchy with adequate level of processing to match the traffic volume and topology. Such a hierarchy of network entities is especially effective in the case of caching.

The 3GPP standard network architecture defines a number of elements such as QoS levels that are understood and implemented in the network infrastructure. However, much of this network capability is not known or packaged for use in the Service Layer by application developers. One approach to resolving this discrepancy may be to publish standard Service Layer APIs that enable application developers to request network resources with specific capabilities and also to get real-time feedback on the capabilities of network resources that are in use by the applications. Such APIs may be exposed by the network to the cloud or may be exposed to application clients resident on mobile devices through device application platforms and SDKs. The network APIs being defined by the Wholesale Application Community are an example of the recognition of the need for such Service Layer visibility into network capabilities. Future versions of the WAC standards will likely incorporate and expose network Quality of Service (QoS) capabilities.
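
To make the idea of a Service Layer API more concrete, the sketch below shows what a request for network QoS from an application might look like. The endpoint, parameters and response format are entirely hypothetical; they are not the WAC APIs or any 3GPP-defined interface.

```python
# Hypothetical sketch of a Service Layer QoS request. The URL, field names and
# response shape are invented for illustration only.

import json
import urllib.request

def request_qos(api_base: str, token: str, flow: dict) -> dict:
    """Ask the network to grant a QoS profile for an application flow."""
    body = json.dumps({
        "flowDescription": flow,            # e.g. 5-tuple of the media flow
        "requestedBandwidthKbps": 512,      # what the application would like
        "latencyClass": "conversational",   # hint to the policy function
    }).encode()
    req = urllib.request.Request(
        f"{api_base}/qos-sessions",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)              # grant details plus feedback URL

# Example (would only work against a real endpoint):
# grant = request_qos("https://api.operator.example/servicelayer/v1", "app-token",
#                     {"srcIp": "10.0.0.5", "dstIp": "203.0.113.7", "dstPort": 5004})
```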



Pic Source: Aria Networks


Why does Optimization matter? A good answer to this question is provided in a Telecoms.com article, as follows:

For many people, says Constantine Polychronopoulos, founder and chief technology officer of mobile internet infrastructure specialist Bytemobile, the definition of optimisation as it relates to mobile networks is too narrow, restricted to compressing data or to the tweaking of the radio access network in a bid to improve throughput. While these are key elements of optimisation, he says, the term ought to be interpreted far more broadly. “The best way for us to think of optimisation,” he says, “is as a set of synergistic technologies that come together to address everything that has to do with improving network and spectrum utilisation and user experience. If you stretch the argument, it includes pretty much everything that matters. This holistic, end-to-end approach to optimisation is the hallmark of Bytemobile’s solutions. Point products tend to be costly and difficult or impossible to evolve and maintain.”

And optimisation matters, he says, because the boom in mobile data traffic experienced in some of the world’s most advanced mobile markets represents a serious threat to carrier performance and customer satisfaction. US operator and pioneer iPhone partner AT&T is a case in point, Polychronopoulos says.

“If you look at what’s been said by Ralph de la Vega (president and CEO of AT&T Mobility) and John Donovan (the firm’s CTO), they have seen a 5,000 per cent increase in data traffic over the past two years. The data points from other operators are similar,” he continues. “They see an exponential growth of data traffic with the introduction of smartphones, in particular the iPhone.”

Operators may have received what they’d been wishing for but the scale of the uptake has taken them by surprise, Polychronopoulos says. The type of usage consumers are exhibiting can be problematic as well. Bytemobile is seeing a great deal of video-based usage, which can often be a greater drain on network resource than web browsing. Given the increasing popularity of embedding video content within web pages, the problem is becoming exacerbated.

Dr. Polychronopoulos is keen to point out that there are optimisation opportunities across different layers of the OSI stack—Bytemobile offers solutions that will have an impact on layers three (the IP layer) through seven (the application layer). But he stresses that some of the most effective returns from optimisation technologies come from addressing the application layer, where the bulk of the data is to be found.

“An IP packet can be up to 1,500 bytes long,” he says. “So at layer three, while you can balance packet by packet, there is only so much you can do to optimise 1,500 bytes. At the top layer, the application can be multiple megabytes or gigabytes if you’re watching video. And when you’re dealing with those file sizes in the application layer, there is a whole lot more you can do to reduce the amount of data or apply innovative delivery algorithms to make the content more efficient,” he says.

By optimising content such as video, Polychronopoulos says, significant gains can be made in spectral and backhaul network utilisation. A range of options are open to operators, he says, with some techniques focused on optimising the transport protocol, and others designed to reduce the size of the content.

“With video, we can resize the frame, we can reduce the number of frames, we can reduce the resolution of the frame or apply a combination of the above in a way that does not affect the video quality but greatly improves network efficiencies,” he says. “So if you go to a site like YouTube and browse a video, you might download something like 100MB of data. But if you were to go through a platform like ours, you may download only 50MB when the network is congested and still experience not only the same video quality, but also fluid video playback without constant re-buffering stalls.”

It is possible, he explains, to run these solutions in a dynamic way such that data reduction engages only when the network is congested. If a user seeks to access high-volume data like video during the network’s quiet time, the reduction technologies are not applied. But when things are busier, they kick in automatically and gradually. This could have an application in tiered pricing strategies. Operators are looking at such options in a bid to better balance the cost of provisioning mobile data services with the limited revenue stream that they currently generate because of the flat rate tariffs that were used to stimulate the market in the first place. Being able to dynamically alter data reduction and therefore speed of delivery depending on network load could be a useful tool to operators looking to charge premium prices for higher quality of service, Polychronopoulos says.
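
A minimal sketch of the dynamic behaviour described above might look like the following: data reduction only engages when the cell is congested, and it engages gradually. The load thresholds and reduction steps are invented for illustration and are not Bytemobile's values.

```python
# Toy model of congestion-triggered video optimisation: pick a delivery
# bitrate for a flow based on the current cell load (0.0 = idle, 1.0 = full).
# Thresholds and reduction factors are illustrative assumptions only.

def target_bitrate_kbps(source_kbps: int, cell_load: float) -> int:
    """Return the bitrate at which to deliver a video flow."""
    if cell_load < 0.6:
        return source_kbps                   # quiet network: pass through untouched
    if cell_load < 0.8:
        return int(source_kbps * 0.75)       # mild congestion: trim by 25%
    return int(source_kbps * 0.5)            # heavy congestion: halve the rate

if __name__ == "__main__":
    for load in (0.3, 0.7, 0.9):
        print(f"load={load:.1f} -> deliver at {target_bitrate_kbps(2000, load)} kbps")
```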

If it is possible to reduce video traffic in such a way that data loads are halved but the end user experience does not suffer proportionally, the question arises as to why operators would not simply reduce everything, whether the network was busy or not. Polychronopoulos argues that in quiet times there are no savings to be made by reducing the size of content being transported.

“The operator has already provisioned the network one way or another,” he says, “so there is a certain amount of bandwidth and a certain amount of backhaul capacity. When the network is not congested, the transport cost is already sunk. When it becomes congested, though, you get dropped calls and buffering and stalled videos and the user experience suffers. That’s where optimisation shines. Alternatively, media optimisation can be factored in during top-level network provisioning when the savings in CAPEX can be extremely compelling.”

While LTE is held up by some within the industry as the panacea to growing demand for more mobile broadband service, Polychronopoulos is unconvinced. If anything, he says, the arrival of the fourth generation will serve only to exacerbate the situation.

“LTE is going to make this problem far more pronounced, for a number of reasons,” he says. “As soon as you offer improved wireless broadband, you open the door to new applications and services. People are always able to come up with new ways of inundating any resource, including bandwidth. We’re going to see more data-driven applications on mobile than we see on the typical desktop, because the mobile device is always with you.” And while LTE promises greater spectral efficiency than its 3G forebears, Polychronopoulos says, the fact that spectrum remains a finite resource will prove ever more problematic as services evolve.

“We’re reaching the limits of spectral efficiency,” he says. “Shannon’s Law defines the limit as six bits per Hertz, and while we may be moving to higher-bandwidth wireless broadband, spectrum remains finite. To offer 160Mbps, you have to allocate twice the amount of spectrum than in 3G, and it’s a very scarce and very expensive resource.”
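
For reference, the spectral-efficiency ceiling alluded to here comes from the Shannon capacity formula, and the achievable bits-per-Hertz figure depends on the signal-to-noise ratio rather than being a fixed constant:

```latex
C = B \log_2\left(1 + \frac{S}{N}\right)
\quad\Longrightarrow\quad
\frac{C}{B} = \log_2\left(1 + \mathrm{SNR}\right)\ \text{bits/s/Hz}
```

Around 6 bits/s/Hz per antenna stream corresponds to an SNR of roughly 18 dB, which is why higher peak rates in practice rely on wider carriers and MIMO as much as on better modulation.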

Operators have been wrong to focus exclusively on standards-based solutions to network optimisation issues, Polychronopoulos says. In restricting themselves to 3GPP-based solutions, he argues that they have missed what he describes as “the internet component of wireless data.” Internet powerhouses like Google, Yahoo and Microsoft (which he dubs ‘the GYM consortium’) have established a model that he says is a great threat to the mobile operator community in that it establishes a direct consumer relationship and disregards the “pipe” (wireless broadband connection) used to maintain that relationship.

“The operators have to accelerate the way they define their models around wireless data so that they’re not only faster than the GYM consortium in terms of enabling popular applications, but smarter and more efficient as well,” he says. Dr. Polychronopoulos then makes a popular case for the carriers’ success: “The operators have information about the subscriber that no other entity in the internet environment can have; for example, they know everything the subscriber has done over the lifetime of their subscription and the location of each event. They don’t have to let this data outside of their networks, so they are very well positioned to win the race for the mobile internet.”


Thursday, 10 February 2011

QoS Control based on Subscriber Spending Limits (QOS_SSL)

Quality of Service (QoS) is very important in LTE/LTE-A, and operators are making extra efforts to maintain QoS in the next generation of networks. In some cases they are resorting to Deep Packet Inspection (DPI) based control of packets, and in others to throttling of data for bandwidth hogs.

The following is from a recent 4G Americas report I blogged about here:

This work item aims to provide a mechanism to allow a mobile operator to have a much finer granularity of control of the subscriber’s usage of the network resources by linking the subscriber’s data session QoS with a spending limit. This gives the operator the ability to deny a subscriber access to particular services if the subscriber has reached his/her allocated spending limit within a certain time period. It would be useful if, in addition, the bandwidth of a subscriber’s data session could be modified when this spending level is reached. This could be done depending on, for example, the type of service being used by the subscriber, the subscriber’s spending limit and amount already spent and operator’s charging models. This allows the operator to have an additional means of shaping the subscriber’s traffic in order to avoid subscribers monopolizing the network resource at any given time. Since support for roaming scenarios is needed, the possibility to provide support for roaming subscribers without having dedicated support in the visited network is needed.

Upon triggers based on the operator’s charging models, the subscriber could be given the opportunity to purchase additional credit that increases the spending limit.

The objective of this study is to provide use cases/service requirements and specs that allow:
* Modification of QoS based on subscriber’s spending limits (see the sketch below)
* Enforcing of spending limits for roaming subscribers without having dedicated support in the visited network
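
A minimal sketch of the kind of policy decision this work item describes is shown below, assuming a simple home-routed enforcement model; the thresholds, QCI choices and field names are my own illustration, not anything defined in TS 22.115.

```python
# Illustrative policy check: when a subscriber reaches a spending limit, the
# session QoS is downgraded rather than the service being cut off outright.
# Values and the "offer top-up" step are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Subscriber:
    spent: float      # amount spent in the current period
    limit: float      # operator-configured spending limit
    roaming: bool

def qos_decision(sub: Subscriber) -> dict:
    """Return the bearer QoS to apply, based on spend versus limit."""
    if sub.spent < sub.limit:
        return {"qci": 8, "mbr_kbps": 10_000, "action": "normal"}
    # Limit reached: throttle instead of blocking, and offer a top-up.
    return {"qci": 9, "mbr_kbps": 256, "action": "offer_top_up",
            # decided in the home network, so roaming subscribers need no
            # dedicated support in the visited network
            "enforced_in": "home_plmn" if sub.roaming else "serving_plmn"}

if __name__ == "__main__":
    print(qos_decision(Subscriber(spent=12.0, limit=10.0, roaming=True)))
```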

For further details see: 3GPP TS 22.115 Service aspects; Charging and billing (Release 11)

Monday, 25 October 2010

NGMN Top 10 Operational Efficiency Recommendations

Setting up and running networks is a complex task that requires many activities, including planning, configuration, optimization, dimensioning, tuning, testing, recovery from failures, failure mitigation, healing and maintenance. These activities are critical to successful network operation, and today they are extremely labour-intensive and hence costly, prone to errors, and can result in customer dissatisfaction. This project focuses on ensuring that the operators’ recommendations are incorporated into the specifications of 3GPP O&M (and similar groups in other standardisation bodies) so that this critical task moves towards full automation.

The overall objective is to provide operators with the capability to purchase, deploy, operate and maintain a network consisting of Base Stations (BTS) and Access Gateways (AGw) from multiple vendors. The NGMN Operational Efficiency (OPE) project has taken on the task of elaborating solutions and recommendations for improving operational efficiency in NGMN networks, and has produced recommendations on standards and implementations. The OPE project also strongly influenced the creation of a Top 10 document reflecting the main recommendations in the operational area. The document embedded below binds these two strongly linked sources into one common NGMN recommendation document.


Thursday, 12 August 2010

Whitepaper: Traffic Management Techniques for Mobile Broadband Networks


The report, Traffic Management Techniques for Mobile Broadband Networks: Living in an Orthogonal World, focuses on 3GPP networks and concerns itself specifically with traffic management, including the handling of traffic flows on 3GPP networks, in contrast with other network management techniques that operators may deploy (such as offloading, compression, network optimization and other important mechanisms).

Mobile broadband networks are confronted by a number of challenges. In particular, the physical layer in mobile networks is subject to a unique confluence of unpredictable and unrelated, or “orthogonal,” influences. Moreover, mobile broadband networks have some important differences from their fixed brothers and sisters, which lead to different traffic management requirements. Among the most significant differences for purposes of traffic management is the need for more granular visibility into circumstances on the ground. Optimally, traffic management for mobile broadband networks requires visibility into what is occurring (by device or application) at the cell-site level, and in a timeframe that enables, as far as feasible, near-real-time reactions to resolve issues.

With the consumer in mind, an End-to-End (E2E) view of mobile service is critical for traffic management. For example, a consumer using a mobile phone to look up movie listings and purchase tickets considers the E2E service to be the ability to see what movie is playing and execute a transaction to purchase tickets. 3GPP has endeavored to standardize increasingly robust traffic management (Quality of Service, or QoS) techniques for mobile broadband networks with a consumer’s E2E view of QoS. It must be considered, however, that mobile operators typically do not have full control over E2E provisioning of services that depend on mobile broadband Internet access.

Global standards organizations like 3GPP play an important role in the development of traffic management through provisions for addressing QoS, particularly regarding interworking with non-3GPP access mechanisms. These are important new innovations, and the 3G Americas white paper notes that the efforts of standards development organizations should be intensified.

In addition, the configuration of end-user devices and content and applications not provisioned by the network operator not only impacts the experience of the particular user, but potentially other users in a particular cell as well. Efforts to drive further QoS innovations should be mindful of potentially adverse impacts from these sources and support and foster interoperability of third party applications with existing network platforms.

More innovations are needed throughout the mobile broadband ecosystem, in particular by application developers, in order to realize E2E quality of service. Furthermore, transparency in network management practices is important in fostering innovation, but requires a careful balancing to ensure consumer comprehension while safeguarding network reliability. Organizations with technical expertise such as 3G Americas are prepared to help to illuminate and progress the development of these new technologies.

“3G Americas stands ready to assist interested parties in the ongoing development and understanding of traffic management techniques,” said Chris Pearson, President of 3G Americas. “We are mindful that in this hemisphere and elsewhere, the industry has accepted an increasingly active role in addressing questions about service levels and innovation on mobile broadband networks.”

The white paper, Traffic Management Techniques for Mobile Broadband Networks: Living in an Orthogonal World, was written collaboratively by members of 3G Americas and is available for free download on the 3G Americas website at www.3gamericas.org.

Friday, 23 July 2010

Shunning mobiles in favour of Landlines


I guess it's time to clean the cobwebs off the landline. I was reading David Chambers' analysis of Homezone tariffs and it reminded me of the time when I would get a big bundle of voice minutes to call using my mobile from home. In those days the voice quality seemed better, the signal strength indicator was high and there were hardly any dropped calls.

Nowadays, the signal strength seems to have got worse whether I am in the office or at home, the voice on calls keeps breaking up, and there are too many dropped calls.

To give you an idea of what's going wrong: my phone, kept stationary on the table, shows 4 bars of 3G/HSPA signal, which suddenly drops to 1 bar after 2-3 minutes before the phone hands over to what it reports as GPRS and then EDGE. If the phone says EDGE, my calls drop within 2 minutes. If it says GPRS, I worry that a handover to 3G will drop my call. If it says 3G, then unless there are at least 3 bars the voice breaks up.

Last week I used my landline phone for the first time in maybe a year, and it reminded me how good the voice quality is. In theory the voice quality on a mobile phone should be as good as on a landline, but in practice that may not be true. Of course, wideband AMR can offer much better HD voice, but I need reliable voice more than HD voice.

So for the time being I am going to stick with the landline as far as possible, for the sake of reliable and clear communication, and wait for the mobiles/networks to catch up.

Tuesday, 25 May 2010

Quality of Service (QoS) and Deep Packet Inspection (DPI)

One of the things I mentioned in my presentation at the LTE World Summit was that differentiation of services based on Quality of Service is required to be able to charge users more.
This QoS can be varied based on deep inspection of the packets, which can tell the operator what service a particular packet belongs to. Operators can thus give higher priority to the services and applications they recommend, and also block certain services deemed illegal or unproductive (like file sharing or P2P).
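
As a toy illustration of the idea, the snippet below classifies a flow from easily visible metadata and maps it to a scheduling priority. Real DPI products rely on signature and behavioural analysis that goes far beyond this; the rules, ports and class names here are invented for illustration.

```python
# Crude flow classification from destination port / TLS server name, mapped to
# a priority class. Illustrative only; not how a commercial DPI engine works.

from typing import Optional

PRIORITY = {"operator_voip": 1, "web": 3, "p2p": 5}   # lower value = higher priority

def classify(dst_port: int, server_name: Optional[str]) -> str:
    """Guess the service class of a flow from simple metadata."""
    if dst_port in (5060, 5061):                      # SIP signalling
        return "operator_voip"
    if server_name and server_name.endswith(".bittorrent.example"):
        return "p2p"
    return "web"                                      # default bucket

def schedule_class(dst_port: int, server_name: Optional[str]) -> int:
    return PRIORITY[classify(dst_port, server_name)]

if __name__ == "__main__":
    print(schedule_class(5060, None))                            # operator VoIP -> 1
    print(schedule_class(443, "tracker.bittorrent.example"))     # P2P -> 5
```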

Continuous Computing claims to be one of the market leaders in producing DPI systems. You can read an article by Mike Coward, CTO and co-founder of Continuous Computing, here.

There is also a very interesting paper on QoS control in the 3GPP EPS, which is freely available here.

Please feel free to comment or suggest how you see DPI being used in the future.

Thursday, 18 June 2009

LTE QCI and End-to-end bearer QoS in EPC



Gary Leonard, Director of Mobile Solutions, IP Division, Alcatel-Lucent, in a presentation at the LTE World Summit:

Wednesday, 27 May 2009

Service Specific Access Control (SSAC) in 3GPP Release 9


In an emergency situation, like an earthquake or a tsunami, degradation of quality of service may be experienced. Degradation in service availability and performance can be accepted in such situations, but mechanisms are desirable to minimize the degradation and maximize the efficiency of the remaining resources.

When the Domain Specific Access Control (DSAC) mechanism was introduced for UMTS, the original motivation was to enable PS service continuation during congestion in CS nodes in the case of a major disaster like an earthquake or a tsunami.

In fact, the use case for DSAC in real UMTS deployments has been to apply access control separately to different types of services, such as voice and other packet-switched services.

For example, people’s instinctive behaviour in emergency situations is to make a voice call, and this is not likely to change. Hence, a mechanism is needed to restrict voice calls and other services separately.

As EPS is a PS-domain-only system, DSAC does not apply.

The SSAC Technical Report (see the reference below) identifies specific features that are useful when the network is subject to decreased capacity and functionality. Considering the characteristics of voice and non-voice calls in EPS, the requirement for SSAC could be to restrict voice calls and non-voice calls separately.
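
To give a feel for what per-service restriction can look like in practice, here is a small sketch of a probabilistic barring check with separate parameters for voice and non-voice services, broadly in the spirit of the barring-factor/barring-time approach later specified for SSAC. The parameter values are illustrative only.

```python
import random

# Per-service barring check sketch: the network broadcasts a barring factor
# and barring time separately for voice and for other services, and the UE
# draws a random number to decide whether it may attempt access.
# The values below are illustrative assumptions, not a broadcast configuration.

BARRING = {
    "mmtel_voice": {"factor": 0.70, "time_s": 4},   # let ~70% of voice attempts through
    "non_voice":   {"factor": 0.30, "time_s": 16},  # restrict other services more heavily
}

def access_allowed(service: str) -> tuple[bool, float]:
    """Return (allowed, back-off seconds) for an access attempt."""
    cfg = BARRING[service]
    if random.random() < cfg["factor"]:
        return True, 0.0
    # Barred: wait a randomised time derived from the barring time.
    backoff = (0.7 + 0.6 * random.random()) * cfg["time_s"]
    return False, backoff

if __name__ == "__main__":
    for svc in ("mmtel_voice", "non_voice"):
        print(svc, access_allowed(svc))
```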

For a normal paid service there are QoS requirements, and the provider can choose to shut down the service if those requirements cannot be met. In an emergency situation the most important thing is to keep communication channels uninterrupted, so the provider should preferably allow a degraded, best-effort service in preference to shutting the service down. During an emergency there should also be a possibility for the service provider to grant services and give extended credit to subscribers whose accounts are running empty. Under some circumstances (e.g. the terrorist attacks in London on 7 July 2005), overload access control may be invoked, giving access only to the authorities or a predefined set of users. It is up to national authorities to define and implement such schemes.

Reference: 3GPP TR 22.986 - Study on Service Specific Access Control