
Friday 3 June 2011

Carrier Aggregation with a difference


Another one from the LTE World Summit. This is from a presentation by Ariela Zeira of Interdigital.

What is being proposed is that Carrier Aggregation can use both licensed and unlicensed bands, but the signalling should only happen in the licensed band to keep the operator in control.

Note that this is only proposed for Small Cells / Femtocells.

The only concern I have with this approach is that it may cause interference with other devices using the same band (especially the ISM band). So WiFi may not work while the LTE device is aggregating the ISM band, and the same goes for Bluetooth.

Comments welcome!

Friday 29 April 2011

Service Layer Optimization element to Improve Utilisation of Network Capacity


The following is an extract from 4G Americas whitepaper, "Optimizing the Mobile Application Ecosystem":


Applications have diverse requirements on the mobile network in terms of throughput, relative use of uplink vs. downlink, latency and variability of usage over time. While the underlying IP-based Layer 3 infrastructure attempts to meet the needs of all the applications, significant network capacity is lost to inefficient use of the available resources. This inefficiency stems primarily from the non-deterministic nature of the aggregate requirements placed on the network by the numerous applications and traffic flows that are live at any time.

This reduction in network utilization can be mitigated by incorporating application awareness into network traffic management through use of Application or Service Layer optimization technologies. A Service Layer optimization solution would incorporate awareness of:

1) device capabilities such as screen size and resolution;
2) user characteristics such as billing rates and user location;
  3) network capabilities such as historic and instantaneous performance; and
4) application characteristics such as the use of specific video codecs and protocols by an application such as Video on Demand (VOD) to ensure better management of network resources.

Examples of Service Layer optimization technologies include:
* Real-time transcoding of video traffic to avoid downlink network congestion and ensure better Quality of Experience (QoE) through avoidance of buffering
* Shaping of self-adapting traffic such as Adaptive Streaming traffic through packet delay to avoid downlink network congestion
* Shaping of error-compensating flows such as video conferencing through use of packet drops to avoid uplink network congestion
* Shaping of large flows such as file uploads on the uplink through packet delays to conserve responsiveness of interactive applications such as web browsing
* Explicit caching of frequently accessed content such as video files on in-network CDNs to minimize traffic to the backbone
* Implicit caching of frequently accessed content such as images in web content on in-network caches to improve web page retrieval speeds
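
As a rough illustration of how a Service Layer element might pick between these techniques, the sketch below maps a flow description and the current cell load to one of the actions listed above. The flow types, the load threshold and the action names are illustrative assumptions, not taken from the whitepaper.

```python
# Illustrative sketch only: a toy policy mapping (flow type, congestion state)
# to one of the Service Layer optimization actions listed above.
# Flow types, the 80% load threshold and action names are assumptions.

from dataclasses import dataclass


@dataclass
class Flow:
    flow_type: str   # e.g. "vod", "adaptive_streaming", "video_conf", "upload", "web"
    direction: str   # "downlink" or "uplink"


def choose_action(flow: Flow, cell_load: float) -> str:
    """Return an optimization action for a flow given the current cell load (0.0-1.0)."""
    congested = cell_load > 0.8                  # illustrative congestion threshold
    if not congested:
        return "pass-through"                    # no optimization on a quiet network
    if flow.flow_type == "vod" and flow.direction == "downlink":
        return "real-time transcoding"           # cut downlink volume, avoid buffering
    if flow.flow_type == "adaptive_streaming":
        return "shape via packet delay"          # let self-adapting traffic adapt down
    if flow.flow_type == "video_conf" and flow.direction == "uplink":
        return "shape via packet drops"          # error-compensating flows tolerate drops
    if flow.flow_type == "upload":
        return "shape via packet delay"          # keep interactive browsing responsive
    return "pass-through"


print(choose_action(Flow("vod", "downlink"), cell_load=0.9))    # real-time transcoding
print(choose_action(Flow("upload", "uplink"), cell_load=0.5))   # pass-through
```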

Service Layer optimization technologies may be incorporated in the data path in many locations:
1) the origin server;
2) the UE device;
3) as a cloud-hosted offering through which devices and/or applications and/or networks route traffic; or
4) as a network element embedded in a service provider’s network.

Further, in a service provider’s network the optimization function may be deployed at core network and/or edge aggregation locations. When Service Layer optimization entities in the network are deployed at both core and edge locations, they may operate in conjunction with each other to form a hierarchy with an adequate level of processing to match the traffic volume and topology. Such a hierarchy of network entities is especially effective in the case of caching.
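
A minimal sketch of that edge/core caching hierarchy is shown below: a request is served from the edge cache if possible, falls back to the core cache, and only then reaches the origin server. The class and function names are invented for illustration.

```python
# Illustrative sketch of a two-level (edge + core) cache hierarchy.
# Names and the placeholder "origin fetch" are assumptions for this example.

class Cache:
    def __init__(self, name: str):
        self.name = name
        self.store = {}

    def get(self, url: str):
        return self.store.get(url)

    def put(self, url: str, content: bytes):
        self.store[url] = content


def fetch(url: str, edge: Cache, core: Cache) -> str:
    """Return where the content was served from, populating caches on the way back."""
    if edge.get(url) is not None:
        return "edge"
    content = core.get(url)
    if content is not None:
        served_from = "core"
    else:
        content = b"...bytes fetched from the origin server..."  # placeholder fetch
        core.put(url, content)
        served_from = "origin"
    edge.put(url, content)        # the edge serves the next request for this content
    return served_from


edge, core = Cache("edge"), Cache("core")
print(fetch("http://example.com/popular-video.mp4", edge, core))   # origin
print(fetch("http://example.com/popular-video.mp4", edge, core))   # edge
```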

The 3GPP standard network architecture defines a number of elements such as QoS levels that are understood and implemented in the network infrastructure. However, much of this network capability is not known or packaged for use in the Service Layer by application developers. One approach to resolving this discrepancy may be to publish standard Service Layer APIs that enable application developers to request network resources with specific capabilities and also to get real-time feedback on the capabilities of network resources that are in use by the applications. Such APIs may be exposed by the network to the cloud or may be exposed to application clients resident on mobile devices through device application platforms and SDKs. The network APIs being defined by the Wholesale Application Community are an example of the recognition of the need for such Service Layer visibility into network capabilities. Future versions of the WAC standards will likely incorporate and expose network Quality of Service (QoS) capabilities.
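
To make the idea of such an API more concrete, here is a purely hypothetical sketch of what requesting network resources and reading back real-time capability feedback could look like to an application developer. None of the class names, methods or values below correspond to the WAC APIs or any 3GPP-defined interface.

```python
# Hypothetical sketch only: a Service Layer API through which an application
# requests network resources and gets feedback on what the network delivers.
# All names and values here are invented for illustration.

class NetworkSession:
    def __init__(self, app_id: str):
        self.app_id = app_id
        self.granted = None

    def request_resources(self, bandwidth_kbps: int, max_latency_ms: int) -> bool:
        """Ask the network for a bearer with specific capabilities."""
        # A real implementation would be answered by the operator's policy
        # function; this sketch simply grants whatever is requested.
        self.granted = {"bandwidth_kbps": bandwidth_kbps,
                        "max_latency_ms": max_latency_ms}
        return True

    def capability_feedback(self) -> dict:
        """Real-time feedback on the resources actually being delivered."""
        return {"throughput_kbps": 850, "latency_ms": 60}   # placeholder measurements


session = NetworkSession("video-on-demand-client")
if session.request_resources(bandwidth_kbps=1000, max_latency_ms=100):
    feedback = session.capability_feedback()
    if feedback["throughput_kbps"] < session.granted["bandwidth_kbps"]:
        print("Network below requested rate: step down to a lower video bitrate")
```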



Pic Source: Aria Networks


Why does optimization matter? A good answer to this question is provided in a Telecoms.com article, as follows:

For many people, says Constantine Polychronopoulos, founder and chief technology officer of mobile internet infrastructure specialist Bytemobile, the definition of optimisation as it relates to mobile networks is too narrow; restricted to compressing data or to the tweaking of the radio access network in a bid to improve throughput. While these are key elements of optimisation, he says, the term ought to be interpreted far more broadly. “The best way for us to think of optimisation,” he says, “is as a set of synergistic technologies that come together to address everything that has to do with improving network and spectrum utilisation and user experience. If you stretch the argument, it includes pretty much everything that matters. This holistic, end-to-end approach to optimisation is the hallmark of Bytemobile’s solutions. Point products tend to be costly and difficult or impossible to evolve and maintain.”

And optimisation matters, he says, because the boom in mobile data traffic experienced in some of the world’s most advanced mobile markets represents a serious threat to carrier performance and customer satisfaction. US operator and pioneer iPhone partner AT&T is a case in point, Polychronopoulos says.

“If you look at what’s been said by Ralph de la Vega (president and CEO of AT&T Mobility) and John Donovan (the firm’s CTO), they have seen a 5,000 per cent increase in data traffic over the past two years. The data points from other operators are similar,” he continues. “They see an exponential growth of data traffic with the introduction of smartphones, in particular the iPhone.”

Operators may have received what they’d been wishing for but the scale of the uptake has taken them by surprise, Polychronopoulos says. The type of usage consumers are exhibiting can be problematic as well. Bytemobile is seeing a great deal of video-based usage, which can often be a greater drain on network resources than web browsing. Given the increasing popularity of embedding video content within web pages, the problem is only being exacerbated.

Dr. Polychronopoulos is keen to point out that there are optimisation opportunities across different layers of the OSI stack—Bytemobile offers solutions that will have an impact on layers three (the IP layer) through seven (the application layer). But he stresses that some of the most effective returns from optimisation technologies come from addressing the application layer, where the bulk of the data is to be found.

“An IP packet can be up to 1,500 bytes long,” he says. “So at layer three, while you can balance packet by packet, there is only so much you can do to optimise 1,500 bytes. At the top layer, the application can be multiple megabytes or gigabytes if you’re watching video. And when you’re dealing with those file sizes in the application layer, there is a whole lot more you can do to reduce the amount of data or apply innovative delivery algorithms to make the content more efficient,” he says.

By optimising content such as video, Polychronopoulos says, significant gains can be made in spectral and backhaul network utilisation. A range of options are open to operators, he says, with some techniques focused on optimising the transport protocol, and others designed to reduce the size of the content.

“With video, we can resize the frame, we can reduce the number of frames, we can reduce the resolution of the frame or apply a combination of the above in a way that does not affect the video quality but greatly improves network efficiencies,” he says. “So if you go to a site like YouTube and browse a video, you might download something like 100MB of data. But if you were to go through a platform like ours, you may download only 50MB when the network is congested and still experience not only the same video quality, but also fluid video playback without constant re-buffering stalls.”

It is possible, he explains, to run these solutions in a dynamic way such that data reduction engages only when the network is congested. If a user seeks to access high-volume data like video during the network’s quiet time, the reduction technologies are not applied. But when things are busier, they kick in automatically and gradually. This could have an application in tiered pricing strategies. Operators are looking at such options in a bid to better balance the cost of provisioning mobile data services with the limited revenue stream that they currently generate because of the flat rate tariffs that were used to stimulate the market in the first place. Being able to dynamically alter data reduction and therefore speed of delivery depending on network load could be a useful tool to operators looking to charge premium prices for higher quality of service, Polychronopoulos says.
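
The "dynamic and gradual" behaviour described above could be pictured with a simple load-to-reduction mapping like the one below. The thresholds and the 50% floor are assumptions for illustration, not Bytemobile parameters.

```python
# Illustrative sketch: video data reduction that engages only under congestion
# and ramps up gradually with load. Thresholds and factors are assumptions.

def delivered_fraction(cell_load: float) -> float:
    """Fraction of the original video bitrate to deliver for a given load (0.0-1.0)."""
    if cell_load < 0.7:
        return 1.0                                   # quiet network: no reduction
    if cell_load >= 0.95:
        return 0.5                                   # heavy congestion: roughly halve the data
    return 1.0 - 0.5 * (cell_load - 0.7) / 0.25      # ramp linearly in between


for load in (0.5, 0.8, 0.95):
    print(f"load={load:.2f} -> deliver {delivered_fraction(load):.0%} of the original bitrate")
```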

If it is possible to reduce video traffic in such a way that data loads are halved but the end user experience does not suffer proportionally, the question arises as to why operators would not simply reduce everything, whether the network was busy or not. Polychronopoulos argues that in quiet times there are no savings to be made by reducing the size of content being transported.

“The operator has already provisioned the network one way or another,” he says, “so there is a certain amount of bandwidth and a certain amount of backhaul capacity. When the network is not congested, the transport cost is already sunk. When it becomes congested, though, you get dropped calls and buffering and stalled videos and the user experience suffers. That’s where optimisation shines. Alternatively, media optimisation can be factored in during top-level network provisioning when the savings in CAPEX can be extremely compelling.”

While LTE is held up by some within the industry as the panacea to growing demand for more mobile broadband service, Polychronopoulos is unconvinced. If anything, he says, the arrival of the fourth generation will serve only to exacerbate the situation.

“LTE is going to make this problem far more pronounced, for a number of reasons,” he says. “As soon as you offer improved wireless broadband, you open the door to new applications and services. People are always able to come up with new ways of inundating any resource, including bandwidth. We’re going to see more data-driven applications on mobile than we see on the typical desktop, because the mobile device is always with you.” And while LTE promises greater spectral efficiency than its 3G forebears, Polychronopoulos says, the fact that spectrum remains a finite resource will prove ever more problematic as services evolve.

“We’re reaching the limits of spectral efficiency,” he says. “Shannon’s Law defines the limit as six bits per Hertz, and while we may be moving to higher-bandwidth wireless broadband, spectrum remains finite. To offer 160Mbps, you have to allocate twice the amount of spectrum than in 3G, and it’s a very scarce and very expensive resource.”
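
For reference, the spectral-efficiency figure quoted above can be read against the Shannon capacity formula; six bits per second per Hertz is not a hard constant but corresponds to a particular signal-to-noise ratio:

```latex
C = B \log_2(1 + \mathrm{SNR})
\quad\Rightarrow\quad
\frac{C}{B} = 6~\text{bit/s/Hz} \;\Leftrightarrow\; \mathrm{SNR} = 2^{6} - 1 = 63 \approx 18~\text{dB}
```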

Operators have been wrong to focus exclusively on standards-based solutions to network optimisation issues, Polychronopoulos says. In restricting themselves to 3GPP-based solutions, he argues that they have missed what he describes as “the internet component of wireless data.” Internet powerhouses like Google, Yahoo and Microsoft (which he dubs ‘the GYM consortium’) have established a model that he says is a great threat to the mobile operator community in that it establishes a direct consumer relationship and disregards the “pipe” (wireless broadband connection) used to maintain that relationship.

“The operators have to accelerate the way they define their models around wireless data so that they’re not only faster than the GYM consortium in terms of enabling popular applications, but smarter and more efficient as well,” he says. Dr. Polychronopoulos then makes a popular case for the carriers’ success: “The operators have information about the subscriber that no other entity in the internet environment can have; for example, they know everything the subscriber has done over the lifetime of their subscription and the location of each event. They don’t have to let this data outside of their networks, so they are very well positioned to win the race for the mobile internet.”


Tuesday 25 January 2011

MAPCON - Multi Access PDN Connectivity

On Monday, I read Bernard Herscovich, CEO of BelAir Networks, saying the following in RCR Wireless:

Wi-Fi is obviously a way to offload data to alleviate congestion, but it also contributes to overall network profitability by delivering data at a lower cost per megabit than traditional macrocells. ABI Research estimates that carrier Wi-Fi can deliver data at 5% of the cost of adding cellular capacity. Perhaps the most important driver, though, is the fact that, properly designed and architected, a carrier Wi-Fi network will deliver a consistently great user experience. The implications of that on attracting and retaining subscribers are obvious.

We've also seen cable operators taking advantage of their broadband HFC infrastructure to mount Wi-Fi APs throughout their coverage areas, offering free Wi-Fi as a sticky service to attract and retain home broadband subscribers.

At the GSMA Mobile Asia Congress, back in mid-November, 2010, KDDI's president and chairman explained that while they would be migrating to LTE, which would double their network capacity, data demand in Japan was forecast to increase by 15 times over the next five years. So LTE alone, he admitted, would not be enough. A few weeks before that, European operators, including Deutsche Telekom and Telefonica, were making similar statements at the Broadband World Forum in Paris.

It is clear that LTE alone will not be sufficient to meet ongoing mobile data demand. Technical innovation has resulted in huge capacity gains, but we're now at a point where additional bandwidth is more of a by-product of incremental spectrum. And, we all realize the finite nature of that resource. So, based on this new spectrum, LTE macrocells could deliver a 2 – 4X capacity increase. Meanwhile, ABI estimates that data capacity requirements are increasing 150% per year.

So, it's pretty clear that carriers are going to need more than just an LTE swap out to keep delivering a great user experience. They need to, as many already realize, augment their licensed spectrum with Wi-Fi. KT, the second largest mobile carrier in South Korea, claims to be offloading 67% of their mobile data traffic onto Wi-Fi. There may also be additional unlicensed spectrum made available, at least in the U.S. and the U.K., through the release of so-called white space spectrum, freed up through the switch from analog to digital TV.

It is obvious from the technology point of view that multiple PDN connections need to be supported when the UE is using LTE for part of its data connection and Wi-Fi for the other part. In fact, these two (or more) connections should be under the control of the same EPC core, which can help support seamless mobility once the user moves out of the WiFi hotspot.

One of the items in 3GPP Release-10 deals with support for multiple Packet Data Network (PDN) connections for a device over different accesses. In Release-9, a UE routes its PDN connections via a single access at a time. In Release-10, support for simultaneous connectivity over one 3GPP access and up to one non-3GPP access has been added.

FMC100044 specifies the following requirements:

  • The Evolved Packet System supports the following scenarios: a single Operator offering both fixed and mobile access; different Operators collaborating to deliver services across both networks.
  • The Evolved Packet System shall support the access of services from mobile network through fixed access network via interworking.
  • The Evolved Packet System shall be able to support functions for connectivity, subscriber authentication, accounting, Policy Control and quality of service for interworking between the fixed broadband access and Evolved Packet Core.
  • The Evolved Packet System shall optimize QoS and Policy management meaning that it shall offer minimal signalling overhead, while interworking between the fixed broadband access and Evolved Packet Core.
  • The Evolved Packet System shall be able to provide an equivalent experience to users consuming services via different accesses.

The Rel-10 work item extends Rel-9 EPC to allow a UE equipped with multiple network interfaces to establish multiple PDN connections to different APNs via different access systems. The enhancements enable:

  • Establishment of PDN connections to different APNs over multiple accesses. A UE opens a new PDN connection on an access that was previously unused or on one of the accesses it is already simultaneously connected to.
  • Selective transfer of PDN connections between accesses. Upon inter-system handover a UE transfers only a subset of the active PDN connections from the source to the target access, with the restriction that multiple PDN connections to the same APN shall be kept in one access.
  • Transfer of all PDN connections out of a certain access system. A UE that is simultaneously connected to multiple access systems moves all the active PDN connections from the source to target access, e.g. in case the UE goes out of the coverage of the source access.
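
A minimal model of the selective-transfer rule above, assuming invented data structures (nothing here is from the 3GPP specifications beyond the "one access per APN" restriction):

```python
# Illustrative model: a UE with PDN connections spread over one 3GPP and one
# non-3GPP access, respecting the Rel-10 restriction that all connections to
# the same APN stay on a single access. Structures and names are assumptions.

class MapconUe:
    def __init__(self):
        self.apn_access = {}   # APN -> access currently carrying that APN's connection(s)

    def open_pdn_connection(self, apn: str, access: str):
        current = self.apn_access.get(apn)
        if current is not None and current != access:
            raise ValueError(f"APN {apn} is already served via {current}; "
                             "it cannot be split across accesses")
        self.apn_access[apn] = access

    def transfer_apns(self, apns, target_access: str):
        """Selective transfer: move only the listed APNs to the target access."""
        for apn in apns:
            if apn in self.apn_access:
                self.apn_access[apn] = target_access


ue = MapconUe()
ue.open_pdn_connection("internet", "wifi")   # bulk traffic over the non-3GPP access
ue.open_pdn_connection("ims", "lte")         # operator services stay on the 3GPP access
ue.transfer_apns(["internet"], "lte")        # e.g. the UE leaves WiFi coverage
print(ue.apn_access)                         # {'internet': 'lte', 'ims': 'lte'}
```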

This work also provides mechanisms enabling operator control of the routing of active PDN connections across the available accesses.

The scope of the work is restricted to scenarios where the UE is simultaneously connected to one 3GPP access and one, and only one, non-3GPP access. The non-3GPP access can be either trusted or untrusted.

The design of the required extensions to Rel-9 EPC is based on TR 23.861 Annex A, which provides an overview of the changes that are expected in TS 23.401 and TS 23.402 for the UE to simultaneously connect to different PDNs via different access systems.

See Also:

3GPP TR 23.861: Multi access PDN connectivity and IP flow mobility

3GPP TS 22.278: Service requirements for the Evolved Packet System (EPS)

Old Blog post on Multiple PDN Connectivity

Tuesday 14 December 2010

What are Heterogeneous Networks (HetNets)?

HetNets are hot. I hear about them in various contexts. It's difficult to find out exactly what they are and how they will work, though. There is a HetNets special issue of IEEE Communications Magazine coming out next year, but that's far away.

I found an interesting summary on HetNets in Motorola Ezine that is reproduced below:


“The bigger the cell site, the less capacity per person you have,” said Peter Jarich, research director with market intelligence firm Current Analysis. “If you shrink coverage to a couple of blocks, you are having that capacity shared with a fewer number of people, resulting in higher capacity and faster data speeds.”

This is a topic the international standards body, the Third Generation Partnership Project (3GPP), has been focusing on to make small cells part of the overall cellular network architecture.

“What we’re seeing is a natural progression of how the industry is going to be addressing some of these capacity concerns,” said Joe Pedziwiatr, network systems architect with Motorola. “There is a need to address the next step of capacity and coverage by introducing and embracing the concepts of small cells and even looking at further advances such as better use of the spectrum itself.”

As such, discussion regarding this small-cell concept has emerged into what is called heterogeneous networks, or Het-Net, for short. The idea is to have a macro wireless network cooperating with intelligent pico cells deployed by operators to work together within the macro network and significantly improve coverage and augment overall network capacity. Small cells can also be leveraged to improve coverage and deliver capacity inside buildings. Indoor coverage has long been the bane of mobile operators. Some mobile operators are already leveraging this concept, augmenting their cellular service offering with WiFi access to their subscriber base in order to address the in-building coverage and capacity challenges facing today’s cellular solutions.

Pedziwiatr said this Het-Net structure goes far beyond what is envisioned for femtocells or standard pico cells for that matter. Introducing a pico cell into the macro network will address but just one aspect of network congestion, namely air interface congestion. The backhaul transport network may become the next bottleneck. Finally, if all this traffic hits the core network, the congestion will just have shifted from the edge to the core.

“This requires a system focus across all aspects of planning and engineering,” Pedziwiatr said. “We’re trying to say it goes beyond that of a femto. If someone shows up at an operator and presents a pico cell, that is just one percent of what would be needed to provide true capacity relief for the macro network.”

Femtocells, otherwise known as miniature take-home base stations, are obtained by end users and plugged into a home or office broadband connection to boost network signals inside buildings. A handful of 3G operators worldwide are selling femtocells as a network coverage play. For the LTE market, the Femtocell Forum is working to convince operators of the value of a femtocell when it comes to better signal penetration inside buildings and delivering high-bandwidth services without loading the mobile network. This is possible, because the backhaul traffic runs over the fixed line connection. However, this femtocell proposition largely relies on end user uptake of them—not necessarily where operators need them, unless they install femtocells themselves or give end users incentives to acquire them.

As with any new concept, there are challenges to overcome before Het-Nets can become reality. Het-Nets must come to market with a total cost of ownership that is competitive for an operator to realize the benefit of providing better capacity, higher data speeds and, most of all, a better end-user experience, said Chevli.

“The level of total cost of ownership has to be reduced. That is where the challenge is for vendors to ensure that any new solution revalidates every existing tenet of cellular topology and evolve it to the new paradigm being proposed,” Chevli said. “You can’t increase the number of end nodes by 25X and expect to operate or manage this new network with legacy O&M paradigms and a legacy backhaul approach.”

One of the issues is dealing with interference and Het-Net network traffic policies. “How do you manage all of these small cell networks within the macrocell network?” asked Jarich. “Right now if you have a bunch of femtocells inside a house, there is this concept that the walls stop the macrocell signals from getting in and out. You get a separation between the two. Go outdoors with small cells underlying bigger cells and you get a lot more interference and hand-off issues because devices will switch back and forth based on where the stronger signal is.”

Pedziwiatr said for a Het-Net to work, it would require a change in node management, whereby an operator isn’t burdened with managing big clusters of small cells on an individual basis. “We see elements of SON (self organizing networks), self discovery and auto optimization that will have to be key ingredients in these networks. Otherwise operators can’t manage them and the business case will be a lot less attractive,” he said.

Fortunately, the industry has already been working with and implementing concepts of SON in LTE network solutions. In the femtocell arena also, vendors have been incorporating some elements and concepts of SON so that installing them is a plug-and-play action that automatically configures the device and avoids interference. But even then, Het-Nets will require further SON enhancement to deal with new use cases, such as overlay (macro deployment) to underlay (pico deployments) mobility optimization.

When it comes to LTE, SON features are built into the standard, and are designed to offer the dual benefit of reducing operating costs while optimizing performance. SON will do this by automating many of the manual processes used when deploying a network, to reduce costs and help operators commercialize their new services faster. SON will also automate many routine, repetitive tasks involved in day-to-day network operations and management, such as neighbor list discovery and management.
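
As a rough sketch of the kind of routine task being automated, a simplified neighbour-list update driven by UE measurement reports might look like this (illustrative only, not the 3GPP-specified ANR procedure):

```python
# Simplified sketch of automatic neighbour list management, one of the routine
# tasks SON automates. Illustrative only; not the 3GPP ANR procedure.

class Cell:
    def __init__(self, cell_id: str):
        self.cell_id = cell_id
        self.neighbours = set()

    def handle_measurement_report(self, reported_cell_id: str):
        """A UE reports a cell; add it as a neighbour if it is not yet known."""
        if reported_cell_id != self.cell_id and reported_cell_id not in self.neighbours:
            self.neighbours.add(reported_cell_id)
            print(f"{self.cell_id}: added neighbour {reported_cell_id}")


macro = Cell("macro-1")
macro.handle_measurement_report("pico-17")   # newly deployed small cell discovered
macro.handle_measurement_report("pico-17")   # duplicate report: no change
```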

Other key sticking points are deployment and backhaul costs. If operators are to deploy many small cells in a given area, deploying them and backhauling their traffic should not become monumental tasks.

Chevli and Pedziwiatr envision Het-Nets being deployed initially in hot zone areas – where data traffic is the highest – using street-level plug-and-play nodes that can be easily installed by people with little technical know-how.

“Today, macro site selection, engineering, propagation analysis, rollout and optimization are long and expensive processes, which must change so that installers keep inventories of these units in their trucks, making rollout simple installations and power-ups,” said Pedziwiatr. “These will be maintained at a minimum with quick optimization.”

The notion of backhauling traffic coming from a large cluster of Het-Net nodes could also stymie Het-Nets altogether. Chevli said that in order to keep costs down, Het-Net backhaul needs to be a mix of cost-effective wireless or wired backhaul technology to aggregate traffic from what likely will be nodes sitting on lamp posts, walls, in-building and other similar structures. The goal then is to find a backhaul point of presence to aggregate the traffic and then put that traffic on an open transport network in the area.

Backhaul cost reductions may also be a matter of finding ways to reduce the amount of backhaul forwarded to the core network, Pedziwiatr said. These types of solutions are already being developed in the 3G world to cope with the massive data traffic that is beginning to crush networks. For traffic such as Internet traffic, which doesn’t need to travel through an operator’s core network, offloading that traffic as close to the source as possible would further drive down the cost of operation through the reduction of backhaul and capacity needs of the core network.

In the end, with operators incorporating smaller cells as an underlay to their macro network layer, rather than relying on data offloading techniques such as femtocells and WiFi that largely depend on the actions of subscribers and are impacted by surrounding cells operating in the same unlicensed frequency, Het-Nets in licensed spectrum will soon become the keystone in attacking the ever-present congestion issue that plagues big cities and is only likely to get worse over time.

Image Source: Dr. Daichi Imamura, Panasonic presentation.

Thursday 25 November 2010

LIPA, SIPTO and IFOM Comparison

Enhancing macro radio access network capacity by offloading mobile video traffic will be essential for the mobile communications industry to reduce its unit costs in line with customer expectations. Two primary paths to achieve this are the use of femtocells and WiFi offloading. Large-scale deployment of femtocells for coverage enhancement has had limited success so far. Using them for capacity enhancement is a new proposition for mobile operators. They need to assess the necessity of using them as well as decide how to deploy them selectively for their heavy users.

Three alternative architectures being standardized by 3GPP have various advantages and shortcomings. They are quite distinct in terms of their dependencies and feasibility. The following table summarises the comparison among these three approaches to traffic offloading.


Looking at the relative strengths of the existing traffic offload proposals, it is difficult to pick an outright winner. The SIPTO macro-network option is the most straightforward and the most likely to be implemented quickly. However, it doesn't solve the fundamental capacity crunch in the radio access network, so its value is limited to being an optimization of the packet core/transport network. Other tangible benefits would be a reduction in latency, increasing effective throughput for customers, as well as easier capacity planning, since transport facilities no longer need to be dimensioned for a large number of radio access network elements.

LIPA provides the limited benefit of allowing access to local premises networks without having to traverse the mobile operator core. Considering that it depends on femtocell deployment, this benefit looks rather small and has no impact on macro radio network capacity. If LIPA is extended to access to the Internet and intranets, the additional offload benefit would be on the mobile operator core network, similar to the SIPTO macro-network proposal. Femtocells do address the macro radio network capacity crunch; however, the pace of femtocell deployments so far doesn't show significant momentum. LIPA's market success will be limited until femtocell cost-of-ownership issues are resolved and mobile operators decide why (coverage or capacity) to deploy femtocells.

IFOM is based upon a newer generation of Mobile IP, which has been around as a mobile VPN technology for more than 10 years. Unfortunately, the success of Mobile IP so far has been limited to enterprise applications; it hasn't become a true consumer-grade technology. The introduction of LTE may change this, since many operators spearheading LTE deployments are planning to use IPv6 in handsets and adopt a dual-stack approach of having both IPv4 and IPv6 capability. Since many WiFi access networks will stay IPv4, DSMIPv6 will be the best tunneling mechanism to hide IPv6 from the access network. Having dual-stack capability will allow native access to both legacy IPv4 content and native IPv6 content from major companies such as Google, Facebook, Yahoo, etc. without the hindrance of Network Address Translation (NAT). Considering the popularity of smartphones such as the iPhone, Blackberry and various Android phones, they will be the proving ground for the feasibility of DSMIPv6.

Source of the above content: Whitepaper - Analysis of Traffic Offload : WiFi to Rescue


Wednesday 24 November 2010

IP Flow Mobility and Seamless Offload (IFOM)

Unlike LIPA or SIPTO, which depend on upstream network nodes to optimize the routing of different types of traffic, IFOM relies on the handset to achieve this functionality. It explicitly calls for simultaneous connections to both the macro network (e.g. LTE or UMTS) and WiFi. Therefore IFOM, unlike LIPA and SIPTO, is truly a Release 10-onward technology and is not applicable to pre-Release 10 user terminals. IFOM is specified in 3GPP TS 23.261 [1]. The following diagram shows the interconnectivity model for an IFOM-capable UE.


IFOM uses an Internet Engineering Task Force (IETF) Request For Comments (RFC), Dual Stack Mobile IPv6 (DSMIPv6) (RFC-5555) [2].

Since IFOM is based on DSMIPv6, it is independent of the macro network flavor. It can be used for a green-field LTE deployment as well as a legacy GPRS packet core.

Earlier on we looked at the mobile network industry's attempts at integration between the packet core and WLAN networks. A common characteristic of those efforts was the limitation of the UE to using one radio interface at a time. Therefore, in earlier interworking scenarios the UE was forced to select one radio network and move all of its traffic to the alternative radio. Today many smartphones and data cards with connection managers already have this capability, i.e. when the UE detects the presence of an alternative access network such as a home WiFi AP, it terminates the radio bearers on the macro network and initiates a WiFi connection. Since integration between the WiFi access network and the packet core is not commonly implemented, the user typically loses her active data session and has to re-establish another one.

Similarly, access to some operator-provided services may not be possible over WiFi. Considering this limitation, both iPhone iOS and Android enable smartphones to have simultaneous radio access, but limit this functionality to sending MMS over the macro network while connected to WiFi.

IFOM provides simultaneous attachment to two alternative access networks. This allows fine-grained IP flow mobility between access networks. Using IFOM, it is possible to select particular flows per UE and bind them to one of two different tunnels between the UE and the DSMIPv6 Home Agent (HA), which can be implemented within a P-GW or GGSN. DSMIPv6 requires a dual-stack (IPv4 and IPv6) capable UE and is independent of the access network, which can be IPv4 or IPv6.
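
A minimal sketch of the per-flow binding idea follows. The flow descriptor and the way it is bound to an access are invented for illustration; the actual procedures are defined in TS 23.261 and RFC 5555.

```python
# Illustrative sketch: binding individual IP flows to one of two accesses
# (and hence one of two DSMIPv6 tunnels towards the Home Agent).
# Flow descriptors and rules here are assumptions, not the specified procedures.

from typing import NamedTuple


class FlowId(NamedTuple):
    protocol: str    # e.g. "tcp" or "udp"
    dst_port: int


class IfomUe:
    def __init__(self):
        self.bindings = {}   # each flow is bound to exactly one access at a time

    def bind_flow(self, flow: FlowId, access: str):
        self.bindings[flow] = access    # in practice signalled via a DSMIPv6 binding update

    def move_flow(self, flow: FlowId, access: str):
        """Move a single flow between accesses without disturbing the others."""
        if flow in self.bindings:
            self.bindings[flow] = access


ue = IfomUe()
ue.bind_flow(FlowId("tcp", 80), "wifi")     # bulk web/video traffic over WiFi
ue.bind_flow(FlowId("udp", 5060), "3gpp")   # operator signalling stays on the macro network
ue.move_flow(FlowId("tcp", 80), "3gpp")     # WiFi degrades: move only the web flow
print(ue.bindings)
```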

[1] 3GPP TS 23.261: IP flow mobility and seamless Wireless Local Area Network (WLAN) offload; Stage 2

[2] RFC-5555: Mobile IPv6 Support for Dual Stack Hosts and Routers

[3] 3GPP TS 23.327: Mobility between 3GPP-Wireless Local Area Network (WLAN) interworking and 3GPP systems

Content Source: Analysis of Traffic Offload : WiFi to Rescue

Friday 10 September 2010

Selected IP Traffic Offload (SIPTO)

The industry is developing a new standard called Selected IP Traffic Offload (SIPTO). SIPTO allows internet traffic to flow from the femtocell directly to the internet, bypassing the operator’s core network, as shown in Figure 8 below.
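
As a rough illustration of this routing split, the sketch below decides per APN whether traffic breaks out locally or goes through the core; the APN names and the selection rule are assumptions, not taken from the specification.

```python
# Illustrative sketch of the SIPTO routing decision: selected traffic (e.g. general
# internet) breaks out locally near the femtocell, while operator services continue
# through the core. APN names and the rule are assumptions.

OFFLOADABLE_APNS = {"internet"}

def route(apn: str) -> str:
    """Decide whether a PDN connection is offloaded locally or carried via the core."""
    if apn in OFFLOADABLE_APNS:
        return "local breakout towards the internet"
    return "operator core network"

print(route("internet"))   # local breakout towards the internet
print(route("ims"))        # operator core network
```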


More information on LIPA and SIPTO can be obtained from:
1. 3GPP TR 23.829: Local IP Access and Selected IP Traffic Offload (http://www.3gpp1.eu/ftp/Specs/archive/23_series/23.829/)

Thursday 9 September 2010

Local IP Access (LIPA) for Femtocells

I blogged earlier about data offload for femtocells. This traffic offload can be done via a feature called Local IP Access (LIPA). If you have LIPA support in your Home NodeB (HNB) or Home eNodeB (HeNB), then once you have camped on your femtocell you can access your local network as well as the operator's IP network.

This means that you can print directly from your mobile to the local printer or access other PCs on your LAN. Note that I am also counting access via a dongle as mobile access, though in practice I don't see much point in people using dongles when they are in their Home Zone. Every laptop/notebook/netbook is now WiFi enabled, so this scenario doesn't offer much benefit for dongle access.

I am sure there are quite a few unresolved issues with regards to the Security of the data, the IP address allocation, QoS, etc.

Continuous Computing has a white paper on LIPA that can be obtained by registering here. Anyway, enough information is available even without getting the PDF.

There is also a small presentation here that gives a bit of an idea of LIPA.
As usual, any comments, insights and references are welcome.