Showing posts with label Cloud Computing. Show all posts

Tuesday, May 31, 2022

Transitioning from Cloud-native to Edge-Native Infrastructure

We looked at what we mean by cloud-native in an earlier post here, and more recently at edge-native infrastructure here. While the cloud-versus-edge debate has been running for a while, in a new presentation (embedded below), Gorkem Yigit, Principal Analyst at Analysys Mason, argues that new, distributed IT/OT applications will drive the shift from cloud-native to edge-native infrastructure.

The talk by Gorkem on '5G and edge network clouds: industry progress and the shape of the new market' from Layer123 World Congress 2021 is as follows:

A blog post by ADVA gives a nice short summary of the image at the top, which was also presented at an earlier webinar. The following is an extract from that blog post: 

The diagram compares hyperscale (“cloud-native infrastructure”) on the left with hyper-localized (“edge-native infrastructure”) on the right.

  • Computing: The traditional hyperscale cloud is built on centralized and pooled resources. This approach enables unlimited scalability. In contrast, compute at the edge has limited scalability, and may require additional equipment to grow applications. But the initial cost at the edge is correspondingly low, and grows linearly with demand. That compares favorably to the initial cost for a hyperscale data center, which may be tens of millions of dollars.
  • Location sensitivity and latency: Users of the hyperscale data center assume their workloads can run anywhere, and latency is not a major consideration. In contrast, hyper-localized applications are tied to a particular location. This might be due to new laws and regulations on data sovereignty that require that information doesn’t leave the premises or country. Or it could be due to latency restrictions as with 5G infrastructure. In either case, shipping data to a remote hyperscale data center is not acceptable.
  • Hardware: Modern hyperscale data centers are filled with row after row of server racks – all identical. That ensures good prices from bulk purchases, as well as minimal inventory requirements for replacements. The hyper-localized model is more complicated. Each location must be right-sized, and supply-chain considerations come into play for international deployments. There also may be a menagerie of devices to manage.
  • Connectivity: Efficient use of hyperscale data centers depends on reliable and high-bandwidth connectivity. That is not available for some applications. Or they may be required to operate when connectivity is lost. An interesting example of this case is data processing in space, where connectivity is slow and intermittent.
  • Cloud stack: Hyperscale and hyper-localized deployments can host VMs and containers. In addition, hyper-localized edge clouds can host serverless applications, which are ideal for small workloads.
  • Security: Hyperscale data centers use a traditional perimeter-based security model. Once you are in, you are in. Hyper-localized deployments can provide a zero-trust model. Each site is secured as with a hyperscale model, but each application can also be secured based on specific users and credentials.
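The cost trade-off in the first bullet can be sketched numerically. The figures below are entirely hypothetical, chosen only to show the shape of the two curves: edge wins at small scale, hyperscale wins once demand justifies the upfront build.

```python
# Illustrative cost model: hyperscale vs hyper-localized (edge) deployment.
# All figures are made up, purely to show the shape of the trade-off.

def hyperscale_cost(sites: int) -> float:
    """Large upfront data-centre build, then a low marginal cost per site served."""
    upfront = 10_000_000   # hypothetical: "tens of millions" for a hyperscale facility
    per_site = 1_000       # marginal cost of serving one more location
    return upfront + per_site * sites

def edge_cost(sites: int) -> float:
    """No big upfront build; cost grows roughly linearly with each edge site."""
    per_site = 20_000      # hypothetical cost of one right-sized edge node
    return per_site * sites

# Edge is cheaper at low volumes; hyperscale overtakes it at scale.
for n in (10, 100, 1000):
    cheaper = "edge" if edge_cost(n) < hyperscale_cost(n) else "hyperscale"
    print(n, cheaper)
```

The crossover point depends entirely on the assumed figures, but the linear-vs-upfront structure is what the diagram is getting at.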

You don’t have to choose upfront

So, which do you pick? Hyperscale or hyper-localized?

The good news is that you can use both as needed, if you make some good design choices.

  • Cloud-native: You should design for cloud-native portability. That means using technologies such as containers and a microservices architecture.
  • Cloud provider supported edge clouds: Hyperscale cloud providers are now supporting local deployments. These tools enable users to move workloads to different sites based on the criteria discussed above. Examples include IBM Cloud Satellite, AWS Outposts, Google Anthos, Azure Stack and Azure Arc.
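What "designing for portability" means in practice can be sketched with a minimal stateless service. This is a hypothetical example, not from the talk: configuration comes from the environment and no state lives in the process, so the same container image can run in a hyperscale region or on an edge site.

```python
# Minimal sketch of a portable, stateless microservice (hypothetical example).
# Config is injected via environment variables, so the same image runs anywhere.
import json
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REGION = os.environ.get("REGION", "edge-site-42")  # set by the platform at deploy time

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A stateless health/identity endpoint: nothing here depends on the host.
        body = json.dumps({"status": "ok", "region": REGION}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and exercise the service once, in-process.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
reply = json.load(urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/"))
server.shutdown()
```

The same pattern is what lets the tools listed above move a workload between a central region and a local site without code changes.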

You can also learn more about this topic in the Analysys Mason webinar, “From cloud-native to edge-native computing: defining the cloud platform for new use cases”. The slides can be downloaded from there after registration.

Related Posts

Tuesday, August 24, 2021

3GPP's 5G-Advanced Technology Evolution from a Network Perspective Whitepaper


China Mobile, along with several other organizations including China Unicom, China Telecom, CAICT, Huawei, Nokia, Ericsson, etc., has produced a white paper on the technology evolutions we will see as part of 5G-Advanced. This comes not long after the 3GPP 5G-Advanced Workshop, which I blogged about here.

The abstract of the whitepaper says:

The commercialization of 5G networks is accelerating globally. From the perspective of industry development drivers, 5G communications are considered the key to personal consumption experience upgrades and digital industrial transformation. Major economies around the world require 5G to be an essential part of long-term industrial development. 5G will enter thousands of industries in terms of business, and technically, 5G needs to integrate DOICT (DT - Data Technology, OT - Operational Technology, IT - Information Technology and CT - Communication Technology) and other technologies further. Therefore, this white paper proposes that continuous research on the follow-up evolution of 5G networks—5G-Advanced is required, and full consideration of architecture evolution and function enhancement is needed.

This white paper first analyzes the network evolution architecture of 5G-Advanced and expounds on the technical development direction of 5G-Advanced from the three characteristics of Artificial Intelligence, Convergence, and Enablement. Artificial Intelligence represents network AI, including full use of machine learning, digital twins, recognition and intention network, which can enhance the capabilities of network's intelligent operation and maintenance. Convergence includes 5G and industry network convergence, home network convergence and space-air-ground network convergence, in order to realize the integration development. Enablement provides for the enhancement of 5G interactive communication and deterministic communication capabilities. It enhances existing technologies such as network slicing and positioning to better help the digital transformation of the industry.

The paper can be downloaded from China Mobile's website here or from Huawei's website here. A video of the paper launch is embedded below:

Nokia's Antti Toskala wrote a blog piece providing the first real glimpse of 5G-Advanced, here.

Related Posts

Saturday, October 10, 2020

What is Cloud Native and How is it Transforming the Networks?


Cloud native is talked about so often that it is assumed everyone knows what it means. Before going any further, here is a short introductory tutorial and video by my Parallel Wireless colleague, Amit Ghadge.  

If instead you prefer a more detailed cloud native tutorial, here is another one from Award Solutions.

Back in June, Johanna Newman, Principal Cloud Transformation Manager, Group Technology Strategy & Architecture at Vodafone, spoke at Cloud Native World about Vodafone's cloud-native journey.


Roz Roseboro, a former Heavy Reading analyst who covered the telecom market for nearly 20 years and is currently a Consulting Analyst at Light Reading, wrote a fantastic summary of that talk here. The talk is embedded below, with selected extracts from the Light Reading article as follows:

While vendors were able to deliver some cloud-native applications, there were still problems ensuring interoperability at the application level. This means new integrations were required, and that sent opex skyrocketing.

I was heartened to see that Newman acknowledged that there is a difference between "cloud-ready" and "cloud-native." In the early days, many assumed that if a function was virtualized and could be managed using OpenStack, that the journey was over.

However, it soon became clear that disaggregating those functions into containerized microservices would be critical for CSPs to deploy functions rapidly and automate management and achieve the scalability, flexibility and, most importantly, agility that the cloud promised. Newman said as much, remarking that the jump from virtualized to cloud-native was too big a jump for hardware and software vendors to make.

The process of re-architecting VNFs to containerize them and make them cloud-native is non-trivial, and traditional VNF suppliers have not done so at the pace CSPs would like to see. I reference here my standard chicken and egg analogy: Suppliers will not go through the cost and effort to re-architect their software if there are no networks upon which to deploy them. Likewise, CSPs will not go through the cost and effort to deploy new cloud networks if there is no software ready to run on them. Of course, some newer entrants like Rakuten have been able to be cloud-native out of the gate, demonstrating that the promise can be realized, in the right circumstances.

Newman also discussed the integration challenges – which are not unique to telecom, of course, but loom even larger in their complex, multivendor environments. During my time as a cloud infrastructure analyst, in survey after survey, when asked about the most significant barrier to faster adoption of cloud-native architectures, CSPs consistently ranked integration at the top.

Newman spent a little time discussing the work of the Common NFVi Telco Taskforce (CNTT), which is charged with developing a handful of reference architectures that suppliers can design to. This should help mitigate many of these integration challenges, not to mention VNF/CNF life cycle management (LCM) and ongoing operations.

Vodafone requires that all new software be cloud-native – calling it the "Cloud Native Golden Rule." This does not come as a surprise, as many CSPs have similar strategies. What did come as a bit of a surprise was the notion that software-as-a-service (SaaS) is seen as a viable alternative for consuming telco functions. While the vendor with the SaaS offering may not itself be cloud-native (for example, it could still have hardware dependencies), from Vodafone's point of view it ends up performing as such, given the lower operational and maintenance costs and flexibility of a SaaS consumption model.

If you have some other fantastic links, videos, resources on this topic, feel free to add in the comments.

Related Posts:

Saturday, November 21, 2015

'Mobile Edge Computing' (MEC) or 'Fog Computing' (fogging) and 5G & IoT


Picture Source: Cisco

The clouds are up in the sky, whereas the fog lies low, on the ground. This is how fog computing is described in contrast to the cloud. Fog sits at the edge (hence "edge computing") to reduce latency and perform an initial level of processing, thereby reducing the amount of information that needs to be exchanged with the cloud.

The same paradigm is used in 5G to refer to edge computing, which is required when we are targeting 1 ms latency in certain cases.
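The idea of doing an initial level of processing at the edge can be sketched as follows. The sensor readings, threshold and summary fields are all hypothetical, for illustration only:

```python
# Sketch of the fog/edge idea: pre-process readings locally and forward only a
# compact summary to the cloud, instead of every raw sample.
# Data and threshold are hypothetical.

raw_samples = [21.0, 21.1, 20.9, 21.0, 35.2, 21.1, 21.0, 20.8]  # e.g. temperatures

def edge_summarise(samples, alarm_threshold=30.0):
    """Runs on the edge node: aggregate locally, forward only stats + anomalies."""
    anomalies = [s for s in samples if s > alarm_threshold]
    return {
        "count": len(samples),
        "mean": sum(samples) / len(samples),
        "anomalies": anomalies,  # only these need low-latency handling
    }

summary = edge_summarise(raw_samples)
# Eight raw readings are reduced to a three-field summary before anything
# crosses the wide-area link to the cloud.
print(summary)
```

Anomalies can be acted on locally within the tight latency budget, while the cloud only ever sees the aggregate.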

As this whitepaper from Ovum & Eblink explains:

Mobile Edge Computing (MEC): Where new processing capabilities are introduced in the base station for new applications, with a new split of functions and a new interface between the baseband unit (BBU) and the remote radio unit (RRU).
...
Mobile Edge Computing (MEC) is an ETSI initiative, where processing and storage capabilities are placed at the base station in order to create new application and service opportunities. This new initiative is called “fog computing” where computing, storage, and network capabilities are deployed nearer to the end user.

MEC contrasts with the centralization principles discussed above for C-RAN and Cloud RAN. Nevertheless, MEC deployments may be built upon existing C-RAN or Cloud RAN infrastructure and take advantage of the backhaul/fronthaul links that have been converted from legacy to these new centralized architectures.

MEC is a long-term initiative and may be deployed during or after 5G if it gains support in the 5G standardization process. Although it is in contrast to existing centralization efforts, Ovum expects that MEC could follow after Cloud RAN is deployed in large scale in advanced markets. Some operators may also skip Cloud RAN and migrate from C-RAN to MEC directly, but MEC is also likely to require the structural enhancements that C-RAN and Cloud RAN will introduce into the mobile network.

The biggest challenge facing MEC in the current state of the market is its very high costs and questionable new service/revenue opportunities. Moreover, several operators are looking to invest in C-RAN and Cloud RAN in the near future, which may require significant investment to maintain a healthy network and traffic growth. In a way, MEC is counter to the centralization principle of Centralized/Cloud RAN and Ovum expects it will only come into play when localized applications are perceived as revenue opportunities.

And similarly this Interdigital presentation explains:

Extends cloud computing and services to the edge of the network and into devices. Similar to cloud, fog provides network, compute, storage (caching) and services to end users. The distinguishing feature of Fog reduces latency & improves QoS resulting in a superior user experience

Here is a small summary of the patents filed on IoT and fog computing.




Sunday, January 9, 2011

Dilbert Humour: Cloud Computing

Source: Dilbert

If you like these then please click 'Very Useful' or 'More like this' so that I know people find these useful.

For similar things follow the label: Mobile Humour.